.TH GSL 3 "GNU Scientific Library" "GSL Team" \" -*- nroff -*-
.SH NAME
gsl - GNU Scientific Library
.SH SYNOPSIS
#include <gsl/...>
.SH DESCRIPTION
The GNU Scientific Library (GSL) is a collection of routines for
numerical computing. The routines are written from scratch by the GSL
team in C, and present a modern Applications Programming Interface
(API) for C programmers, allowing wrappers to be written for very high
level languages.
.PP
The library covers the following areas,
.PP
.nf
Complex Numbers
Roots of Polynomials
Special Functions
Vectors and Matrices
Permutations
Combinations
Sorting
BLAS Support
Linear Algebra
Eigensystems
Fast Fourier Transforms
Quadrature
Random Numbers
Quasi-Random Sequences
Random Distributions
Statistics
Histograms
N-Tuples
Monte Carlo Integration
Simulated Annealing
Differential Equations
Interpolation
Numerical Differentiation
Chebyshev Approximations
Series Acceleration
Discrete Hankel Transforms
Root-Finding
Minimization
Least-Squares Fitting
Physical Constants
IEEE Floating-Point
.fi
.PP
For more information please consult the GSL Reference Manual, which is
available as an info file. You can read it online using the shell
command
.B info gsl-ref
(if the library is installed).
.PP
Please report any bugs to
.BR bug-gsl@gnu.org .
.\" Man page contributed by Dirk Eddelbuettel
.\" and released under the GNU General Public License
.TH GSL-RANDIST 1 "" GNU
.SH NAME
gsl-randist - generate random samples from various distributions
.SH SYNOPSIS
.B gsl-randist seed n DIST param1 param2 [..]
.SH DESCRIPTION
.B gsl-randist
is a demonstration program for the GNU Scientific Library.
It generates n random samples from the distribution DIST using the distribution
parameters param1, param2, ...
.SH EXAMPLE
Here is an example.  We generate 10000 random samples from a Cauchy
distribution with a width of 30 and histogram them over the range -100 to
100, using 200 bins.
.PP
.nf
gsl-randist 0 10000 cauchy 30 | gsl-histogram -100 100 200 > histogram.dat
.fi
.PP
A plot of the resulting histogram will show the familiar shape of the
Cauchy distribution, with fluctuations caused by the finite sample
size.
.PP
.nf
awk '{print $1, $3 ; print $2, $3}' histogram.dat | graph -T X
.fi
.SH SEE ALSO
.BR gsl(3) ,
.BR gsl-histogram(1) .
.SH AUTHOR
.B gsl-randist
was written by James Theiler and Brian Gough.
Copyright 1996-2000; for copying conditions see the GNU General
Public Licence.
This manual page was added by Dirk Eddelbuettel,
the Debian GNU/Linux maintainer for
.BR GSL .
.\" Man page contributed by Dirk Eddelbuettel
.\" and released under the GNU General Public License
.TH GSL-HISTOGRAM 1 "" GNU
.SH NAME
gsl-histogram - compute histogram of data on stdin
.SH SYNOPSIS
.B gsl-histogram xmin xmax [n]
.SH DESCRIPTION
.B gsl-histogram
is a demonstration program for the GNU Scientific Library.
It takes two or three arguments, specifying the lower and upper bounds of
the histogram and the number of bins.  It then reads numbers from `stdin',
one line at a time, and adds them to the histogram.  When there is no
more data to read it prints out the accumulated histogram using
gsl_histogram_fprintf.  If n is unspecified then bins of integer width
are used.
.SH EXAMPLE
Here is an example.  We generate 10000 random samples from a Cauchy
distribution with a width of 30 and histogram them over the range -100 to
100, using 200 bins.
.PP
.nf
gsl-randist 0 10000 cauchy 30 | gsl-histogram -100 100 200 > histogram.dat
.fi
.PP
A plot of the resulting histogram will show the familiar shape of the
Cauchy distribution, with fluctuations caused by the finite sample
size.
.PP
.nf
awk '{print $1, $3 ; print $2, $3}' histogram.dat | graph -T X
.fi
.SH SEE ALSO
.BR gsl(3) ,
.BR gsl-randist(1) .
.SH AUTHOR
.B gsl-histogram
was written by Brian Gough.
Copyright 1996-2000; for copying conditions see the GNU General
Public Licence.
This manual page was added by Dirk Eddelbuettel,
the Debian GNU/Linux maintainer for
.BR GSL .
.TH GSL 1 "22 May 2001"
.SH NAME
gsl-config - script to get version number and compiler flags of the installed GSL library
.SH SYNOPSIS
.B gsl-config
[\-\-prefix] [\-\-version] [\-\-libs] [\-\-libs\-without\-cblas] [\-\-cflags]
.SH DESCRIPTION
.PP
\fIgsl-config\fP is a tool used to determine the compiler and linker
flags that should be used to compile and link programs that use
\fIGSL\fP.  It is also used internally by the .m4 macros for GNU
autoconf that are included with \fIGSL\fP.
.
.SH OPTIONS
\fIgsl-config\fP accepts the following options:
.TP 8
.B \-\-version
Print the currently installed version of \fIGSL\fP on the standard output.
.TP 8
.B \-\-libs
Print the linker flags that are necessary to link a \fIGSL\fP program, with cblas.
.TP 8
.B \-\-libs\-without\-cblas
Print the linker flags that are necessary to link a \fIGSL\fP program, without cblas.
.TP 8
.B \-\-cflags
Print the compiler flags that are necessary to compile a \fIGSL\fP program.
.TP 8
.B \-\-prefix
Show the GSL installation prefix.
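.PP
For example, a typical compile-and-link sequence using the flags
reported by \fIgsl-config\fP (assuming a source file example.c; the
exact flags printed vary per installation) is:

```shell
# Ask gsl-config for the header search path, then compile:
gcc $(gsl-config --cflags) -c example.c

# Ask gsl-config for the libraries (GSL plus its CBLAS), then link:
gcc example.o $(gsl-config --libs) -o example
```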
.SH SEE ALSO
.BR gtk-config (1),
.BR gnome-config (1)
.SH COPYRIGHT
Copyright \(co 2001 Christopher R. Gabriel
.PP
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation.
This is gsl-ref.info, produced by makeinfo version 4.13 from
gsl-ref.texi.
INFO-DIR-SECTION Software libraries
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref). GNU Scientific Library - Reference
END-INFO-DIR-ENTRY
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2005, 2006, 2007, 2008, 2009, 2010, 2011 The GSL Team.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License" and "Free Software
Needs Free Documentation", the Front-Cover text being "A GNU Manual",
and with the Back-Cover Text being (a) (see below). A copy of the
license is included in the section entitled "GNU Free Documentation
License".
(a) The Back-Cover Text is: "You have the freedom to copy and modify
this GNU Manual."
File: gsl-ref.info, Node: Concept Index, Prev: Type Index, Up: Top
Concept Index
*************
[index]
* Menu:
* $, shell prompt: Conventions used in this manual.
(line 6)
* 2D histograms: Two dimensional histograms.
(line 6)
* 2D random direction vector: Spherical Vector Distributions.
(line 14)
* 3-j symbols: Coupling Coefficients.
(line 6)
* 3D random direction vector: Spherical Vector Distributions.
(line 33)
* 6-j symbols: Coupling Coefficients.
(line 6)
* 9-j symbols: Coupling Coefficients.
(line 6)
* acceleration of series: Series Acceleration. (line 6)
* acosh: Elementary Functions.
(line 33)
* Adams method: Stepping Functions. (line 129)
* Adaptive step-size control, differential equations: Adaptive Step-size Control.
(line 6)
* Ai(x): Airy Functions and Derivatives.
(line 6)
* Airy functions: Airy Functions and Derivatives.
(line 6)
* Akima splines: Interpolation Types. (line 36)
* aliasing of arrays: Aliasing of arrays. (line 6)
* alternative optimized functions: Alternative optimized functions.
(line 6)
* AMAX, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 62)
* Angular Mathieu Functions: Angular Mathieu Functions.
(line 6)
* angular reduction: Restriction Functions.
(line 6)
* ANSI C, use of: Using the library. (line 6)
* Appell symbol, see Pochhammer symbol: Pochhammer Symbol. (line 9)
* approximate comparison of floating point numbers: Approximate Comparison of Floating Point Numbers.
(line 13)
* arctangent integral: Arctangent Integral. (line 6)
* argument of complex number: Properties of complex numbers.
(line 7)
* arithmetic exceptions: Setting up your IEEE environment.
(line 6)
* asinh: Elementary Functions.
(line 37)
* astronomical constants: Astronomy and Astrophysics.
(line 6)
* ASUM, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 47)
* atanh: Elementary Functions.
(line 41)
* atomic physics, constants: Atomic and Nuclear Physics.
(line 6)
* autoconf, using with GSL: Autoconf Macros. (line 6)
* AXPY, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 96)
* B-spline wavelets: DWT Initialization. (line 31)
* Bader and Deuflhard, Bulirsch-Stoer method.: Stepping Functions.
(line 124)
* balancing matrices: Balancing. (line 6)
* Basic Linear Algebra Subroutines (BLAS) <1>: GSL CBLAS Library.
(line 6)
* Basic Linear Algebra Subroutines (BLAS): BLAS Support. (line 6)
* basis splines, B-splines: Basis Splines. (line 6)
* basis splines, derivatives: Evaluation of B-spline basis function derivatives.
(line 6)
* basis splines, evaluation: Evaluation of B-spline basis functions.
(line 6)
* basis splines, examples: Example programs for B-splines.
(line 6)
* basis splines, Greville abscissae: Obtaining Greville abscissae for B-spline basis functions.
(line 6)
* basis splines, initializing: Initializing the B-splines solver.
(line 6)
* basis splines, overview: Overview of B-splines.
(line 6)
* BDF method: Stepping Functions. (line 137)
* Bernoulli trial, random variates: The Bernoulli Distribution.
(line 8)
* Bessel functions: Bessel Functions. (line 6)
* Bessel Functions, Fractional Order: Regular Bessel Function - Fractional Order.
(line 6)
* best-fit parameters, covariance: Computing the covariance matrix of best fit parameters.
(line 6)
* Beta distribution: The Beta Distribution.
(line 8)
* Beta function: Beta Functions. (line 9)
* Beta function, incomplete normalized: Incomplete Beta Function.
(line 9)
* BFGS algorithm, minimization: Multimin Algorithms with Derivatives.
(line 37)
* Bi(x): Airy Functions and Derivatives.
(line 6)
* bias, IEEE format: Representation of floating point numbers.
(line 6)
* bidiagonalization of real matrices: Bidiagonalization. (line 6)
* binning data: Histograms. (line 6)
* Binomial random variates: The Binomial Distribution.
(line 9)
* biorthogonal wavelets: DWT Initialization. (line 31)
* bisection algorithm for finding roots: Root Bracketing Algorithms.
(line 17)
* Bivariate Gaussian distribution: The Bivariate Gaussian Distribution.
(line 9)
* BLAS: BLAS Support. (line 6)
* BLAS, Low-level C interface: GSL CBLAS Library. (line 6)
* blocks: Vectors and Matrices.
(line 6)
* bounds checking, extension to GCC: Accessing vector elements.
(line 6)
* breakpoints: Using gdb. (line 6)
* Brent's method for finding minima: Minimization Algorithms.
(line 32)
* Brent's method for finding roots: Root Bracketing Algorithms.
(line 50)
* Broyden algorithm for multidimensional roots: Algorithms without Derivatives.
(line 45)
* BSD random number generator: Unix random number generators.
(line 18)
* bug-gsl mailing list: Reporting Bugs. (line 6)
* bugs, how to report: Reporting Bugs. (line 6)
* Bulirsch-Stoer method: Stepping Functions. (line 124)
* C extensions, compatible use of: Using the library. (line 6)
* C++, compatibility: Compatibility with C++.
(line 6)
* C99, inline keyword: Inline functions. (line 6)
* Carlson forms of Elliptic integrals: Definition of Carlson Forms.
(line 6)
* Cash-Karp, Runge-Kutta method: Stepping Functions. (line 100)
* Cauchy distribution: The Cauchy Distribution.
(line 7)
* Cauchy principal value, by numerical quadrature: QAWC adaptive integration for Cauchy principal values.
(line 6)
* CBLAS: BLAS Support. (line 6)
* CBLAS, Low-level interface: GSL CBLAS Library. (line 6)
* CDFs, cumulative distribution functions: Random Number Distributions.
(line 6)
* ce(q,x), Mathieu function: Angular Mathieu Functions.
(line 6)
* Chebyshev series: Chebyshev Approximations.
(line 6)
* checking combination for validity: Combination properties.
(line 18)
* checking multiset for validity: Multiset properties. (line 17)
* checking permutation for validity: Permutation properties.
(line 14)
* Chi(x): Hyperbolic Integrals.
(line 6)
* Chi-squared distribution: The Chi-squared Distribution.
(line 15)
* Cholesky decomposition: Cholesky Decomposition.
(line 6)
* Ci(x): Trigonometric Integrals.
(line 6)
* Clausen functions: Clausen Functions. (line 6)
* Clenshaw-Curtis quadrature: Integrands with weight functions.
(line 6)
* CMRG, combined multiple recursive random number generator: Random number generator algorithms.
(line 97)
* code reuse in applications: Code Reuse. (line 6)
* combinations: Combinations. (line 6)
* combinatorial factor C(m,n): Factorials. (line 42)
* combinatorial optimization: Simulated Annealing. (line 6)
* comparison functions, definition: Sorting objects. (line 16)
* compatibility: Using the library. (line 6)
* compiling programs, include paths: Compiling and Linking.
(line 6)
* compiling programs, library paths: Linking programs with the library.
(line 6)
* complementary incomplete Gamma function: Incomplete Gamma Functions.
(line 23)
* complete Fermi-Dirac integrals: Complete Fermi-Dirac Integrals.
(line 6)
* complex arithmetic: Complex arithmetic operators.
(line 6)
* complex cosine function, special functions: Trigonometric Functions for Complex Arguments.
(line 13)
* Complex Gamma function: Gamma Functions. (line 56)
* complex hermitian matrix, eigensystem: Complex Hermitian Matrices.
(line 9)
* complex log sine function, special functions: Trigonometric Functions for Complex Arguments.
(line 18)
* complex numbers: Complex Numbers. (line 6)
* complex sinc function, special functions: Circular Trigonometric Functions.
(line 22)
* complex sine function, special functions: Trigonometric Functions for Complex Arguments.
(line 8)
* confluent hypergeometric function: Laguerre Functions. (line 6)
* confluent hypergeometric functions: Hypergeometric Functions.
(line 6)
* conical functions: Legendre Functions and Spherical Harmonics.
(line 6)
* Conjugate gradient algorithm, minimization: Multimin Algorithms with Derivatives.
(line 12)
* conjugate of complex number: Complex arithmetic operators.
(line 55)
* constant matrix: Initializing matrix elements.
(line 6)
* constants, fundamental: Fundamental Constants.
(line 6)
* constants, mathematical--defined as macros: Mathematical Constants.
(line 6)
* constants, physical: Physical Constants. (line 6)
* constants, prefixes: Prefixes. (line 6)
* contacting the GSL developers: Further Information. (line 6)
* conventions, used in manual: Conventions used in this manual.
(line 6)
* convergence, accelerating a series: Series Acceleration. (line 6)
* conversion of units: Physical Constants. (line 6)
* cooling schedule: Simulated Annealing algorithm.
(line 23)
* COPY, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 85)
* correlation, of two datasets: Correlation. (line 6)
* cosine function, special functions: Circular Trigonometric Functions.
(line 12)
* cosine of complex number: Complex Trigonometric Functions.
(line 11)
* cost function: Simulated Annealing. (line 6)
* Coulomb wave functions: Coulomb Functions. (line 6)
* coupling coefficients: Coupling Coefficients.
(line 6)
* covariance matrix, from linear regression: Linear regression.
(line 9)
* covariance matrix, linear fits: Fitting Overview. (line 21)
* covariance matrix, nonlinear fits: Computing the covariance matrix of best fit parameters.
(line 6)
* covariance, of two datasets: Covariance. (line 6)
* cquad, doubly-adaptive integration: CQUAD doubly-adaptive integration.
(line 6)
* CRAY random number generator, RANF: Other random number generators.
(line 23)
* cubic equation, solving: Cubic Equations. (line 6)
* cubic splines: Interpolation Types. (line 20)
* cumulative distribution functions (CDFs): Random Number Distributions.
(line 6)
* Cylindrical Bessel Functions: Regular Cylindrical Bessel Functions.
(line 6)
* Daubechies wavelets: DWT Initialization. (line 20)
* Dawson function: Dawson Function. (line 6)
* DAXPY, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 96)
* debugging numerical programs: Using gdb. (line 6)
* Debye functions: Debye Functions. (line 6)
* denormalized form, IEEE format: Representation of floating point numbers.
(line 14)
* deprecated functions: Deprecated Functions.
(line 6)
* derivatives, calculating numerically: Numerical Differentiation.
(line 6)
* determinant of a matrix, by LU decomposition: LU Decomposition.
(line 83)
* Deuflhard and Bader, Bulirsch-Stoer method.: Stepping Functions.
(line 124)
* DFTs, see FFT: Fast Fourier Transforms.
(line 6)
* diagonal, of a matrix: Creating row and column views.
(line 62)
* differential equations, initial value problems: Ordinary Differential Equations.
(line 6)
* differentiation of functions, numeric: Numerical Differentiation.
(line 6)
* digamma function: Psi (Digamma) Function.
(line 6)
* dilogarithm: Dilogarithm. (line 6)
* direction vector, random 2D: Spherical Vector Distributions.
(line 14)
* direction vector, random 3D: Spherical Vector Distributions.
(line 33)
* direction vector, random N-dimensional: Spherical Vector Distributions.
(line 42)
* Dirichlet distribution: The Dirichlet Distribution.
(line 8)
* discontinuities, in ODE systems: Evolution. (line 77)
* Discrete Fourier Transforms, see FFT: Fast Fourier Transforms.
(line 6)
* discrete Hankel transforms: Discrete Hankel Transforms.
(line 6)
* Discrete Newton algorithm for multidimensional roots: Algorithms without Derivatives.
(line 26)
* Discrete random numbers: General Discrete Distributions.
(line 52)
* Discrete random numbers, preprocessing: General Discrete Distributions.
(line 52)
* divided differences, polynomials: Divided Difference Representation of Polynomials.
(line 6)
* division by zero, IEEE exceptions: Setting up your IEEE environment.
(line 6)
* dollar sign $, shell prompt: Conventions used in this manual.
(line 11)
* DOT, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 8)
* double factorial: Factorials. (line 22)
* double precision, IEEE format: Representation of floating point numbers.
(line 40)
* downloading GSL: Obtaining GSL. (line 6)
* DWT initialization: DWT Initialization. (line 6)
* DWT, mathematical definition: DWT Definitions. (line 6)
* DWT, one dimensional: DWT in one dimension.
(line 6)
* DWT, see wavelet transforms: Wavelet Transforms. (line 6)
* DWT, two dimensional: DWT in two dimension.
(line 6)
* e, defined as a macro: Mathematical Constants.
(line 10)
* E1(x), E2(x), Ei(x): Exponential Integral.
(line 6)
* eigenvalues and eigenvectors: Eigensystems. (line 6)
* elementary functions: Mathematical Functions.
(line 6)
* elementary operations: Elementary Operations.
(line 6)
* elliptic functions (Jacobi): Elliptic Functions (Jacobi).
(line 6)
* elliptic integrals: Elliptic Integrals. (line 6)
* energy function: Simulated Annealing. (line 6)
* energy, units of: Thermal Energy and Power.
(line 6)
* erf(x): Error Functions. (line 6)
* erfc(x): Error Functions. (line 6)
* Erlang distribution: The Gamma Distribution.
(line 16)
* error codes: Error Codes. (line 13)
* error codes, reserved: Error Codes. (line 6)
* error function: Error Functions. (line 6)
* Error handlers: Error Handlers. (line 6)
* error handling: Error Handling. (line 6)
* error handling macros: Using GSL error reporting in your own functions.
(line 6)
* Errors: Error Handling. (line 6)
* estimated standard deviation: Statistics. (line 6)
* estimated variance: Statistics. (line 6)
* Eta Function: Eta Function. (line 6)
* euclidean distance function, hypot: Elementary Functions.
(line 23)
* Euler's constant, defined as a macro: Mathematical Constants.
(line 58)
* evaluation of polynomials: Polynomial Evaluation.
(line 6)
* evaluation of polynomials, in divided difference form: Divided Difference Representation of Polynomials.
(line 6)
* examples, conventions used in: Conventions used in this manual.
(line 6)
* exceptions, C++: Compatibility with C++.
(line 6)
* exceptions, floating point: Handling floating point exceptions.
(line 6)
* exceptions, IEEE arithmetic: Setting up your IEEE environment.
(line 6)
* exchanging permutation elements: Accessing permutation elements.
(line 18)
* exp: Exponential Functions.
(line 6)
* expm1: Elementary Functions.
(line 18)
* exponent, IEEE format: Representation of floating point numbers.
(line 6)
* Exponential distribution: The Exponential Distribution.
(line 8)
* exponential function: Exponential Functions.
(line 6)
* exponential integrals: Exponential Integrals.
(line 6)
* Exponential power distribution: The Exponential Power Distribution.
(line 8)
* exponential, difference from 1 computed accurately: Elementary Functions.
(line 18)
* exponentiation of complex number: Elementary Complex Functions.
(line 16)
* extern inline: Inline functions. (line 6)
* F-distribution: The F-distribution. (line 16)
* factorial: Factorials. (line 6)
* factorization of matrices: Linear Algebra. (line 6)
* false position algorithm for finding roots: Root Bracketing Algorithms.
(line 33)
* Fast Fourier Transforms, see FFT: Fast Fourier Transforms.
(line 6)
* Fehlberg method, differential equations: Stepping Functions.
(line 96)
* Fermi-Dirac function: Fermi-Dirac Function.
(line 6)
* FFT: Fast Fourier Transforms.
(line 6)
* FFT mathematical definition: Mathematical Definitions.
(line 6)
* FFT of complex data, mixed-radix algorithm: Mixed-radix FFT routines for complex data.
(line 6)
* FFT of complex data, radix-2 algorithm: Radix-2 FFT routines for complex data.
(line 6)
* FFT of real data: Overview of real data FFTs.
(line 6)
* FFT of real data, mixed-radix algorithm: Mixed-radix FFT routines for real data.
(line 6)
* FFT of real data, radix-2 algorithm: Radix-2 FFT routines for real data.
(line 6)
* FFT, complex data: Overview of complex data FFTs.
(line 6)
* finding minima: One dimensional Minimization.
(line 6)
* finding roots: One dimensional Root-Finding.
(line 6)
* finding zeros: One dimensional Root-Finding.
(line 6)
* fits, multi-parameter linear: Multi-parameter fitting.
(line 6)
* fitting: Least-Squares Fitting.
(line 6)
* fitting, using Chebyshev polynomials: Chebyshev Approximations.
(line 6)
* Fj(x), Fermi-Dirac integral: Complete Fermi-Dirac Integrals.
(line 6)
* Fj(x,b), incomplete Fermi-Dirac integral: Incomplete Fermi-Dirac Integrals.
(line 6)
* flat distribution: The Flat (Uniform) Distribution.
(line 8)
* Fletcher-Reeves conjugate gradient algorithm, minimization: Multimin Algorithms with Derivatives.
(line 12)
* floating point exceptions: Handling floating point exceptions.
(line 6)
* floating point numbers, approximate comparison: Approximate Comparison of Floating Point Numbers.
(line 13)
* floating point registers: Examining floating point registers.
(line 6)
* force and energy, units of: Force and Energy. (line 6)
* Fortran range checking, equivalent in gcc: Accessing vector elements.
(line 6)
* Four-tap Generalized Feedback Shift Register: Random number generator algorithms.
(line 172)
* Fourier integrals, numerical: QAWF adaptive integration for Fourier integrals.
(line 6)
* Fourier Transforms, see FFT: Fast Fourier Transforms.
(line 6)
* Fractional Order Bessel Functions: Regular Bessel Function - Fractional Order.
(line 6)
* free documentation: Free Software Needs Free Documentation.
(line 6)
* free software, explanation of: GSL is Free Software.
(line 6)
* frexp: Elementary Functions.
(line 49)
* functions, numerical differentiation: Numerical Differentiation.
(line 6)
* fundamental constants: Fundamental Constants.
(line 6)
* Gamma distribution: The Gamma Distribution.
(line 9)
* gamma functions: Gamma Functions. (line 6)
* Gauss-Kronrod quadrature: Integrands without weight functions.
(line 6)
* Gaussian distribution: The Gaussian Distribution.
(line 7)
* Gaussian distribution, bivariate: The Bivariate Gaussian Distribution.
(line 9)
* Gaussian Tail distribution: The Gaussian Tail Distribution.
(line 8)
* gcc extensions, range-checking: Accessing vector elements.
(line 6)
* gcc warning options: GCC warning options for numerical programs.
(line 6)
* gdb: Using gdb. (line 6)
* Gegenbauer functions: Gegenbauer Functions.
(line 6)
* GEMM, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 22)
* GEMV, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 20)
* general polynomial equations, solving: General Polynomial Equations.
(line 6)
* generalized eigensystems: Real Generalized Nonsymmetric Eigensystems.
(line 6)
* generalized hermitian definite eigensystems: Complex Generalized Hermitian-Definite Eigensystems.
(line 6)
* generalized symmetric eigensystems: Real Generalized Symmetric-Definite Eigensystems.
(line 6)
* Geometric random variates <1>: The Hypergeometric Distribution.
(line 8)
* Geometric random variates: The Geometric Distribution.
(line 8)
* GER, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 104)
* GERC, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 113)
* GERU, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 104)
* Givens Rotation, BLAS: Level 1 GSL BLAS Interface.
(line 116)
* Givens Rotation, Modified, BLAS: Level 1 GSL BLAS Interface.
(line 135)
* GNU General Public License: Introduction. (line 6)
* golden section algorithm for finding minima: Minimization Algorithms.
(line 14)
* GSL_C99_INLINE: Inline functions. (line 6)
* GSL_RNG_SEED: Random number generator initialization.
(line 17)
* gsl_sf_result: The gsl_sf_result struct.
(line 6)
* gsl_sf_result_e10: The gsl_sf_result struct.
(line 6)
* Gumbel distribution (Type 1): The Type-1 Gumbel Distribution.
(line 8)
* Gumbel distribution (Type 2): The Type-2 Gumbel Distribution.
(line 8)
* Haar wavelets: DWT Initialization. (line 26)
* Hankel transforms, discrete: Discrete Hankel Transforms.
(line 6)
* HAVE_INLINE: Inline functions. (line 6)
* hazard function, normal distribution: Probability functions.
(line 19)
* HBOOK: Ntuple References and Further Reading.
(line 6)
* header files, including: Compiling and Linking.
(line 6)
* heapsort: Sorting. (line 6)
* HEMM, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 56)
* HEMV, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 85)
* HER, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 131)
* HER2, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 158)
* HER2K, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 181)
* HERK, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 140)
* hermitian matrix, complex, eigensystem: Complex Hermitian Matrices.
(line 9)
* Hessenberg decomposition: Hessenberg Decomposition of Real Matrices.
(line 6)
* Hessenberg triangular decomposition: Hessenberg-Triangular Decomposition of Real Matrices.
(line 6)
* histogram statistics: Histogram Statistics.
(line 6)
* histogram, from ntuple: Histogramming ntuple values.
(line 35)
* histograms: Histograms. (line 6)
* histograms, random sampling from: The histogram probability distribution struct.
(line 6)
* Householder linear solver: Householder solver for linear systems.
(line 6)
* Householder matrix: Householder Transformations.
(line 6)
* Householder transformation: Householder Transformations.
(line 6)
* Hurwitz Zeta Function: Hurwitz Zeta Function.
(line 6)
* HYBRID algorithm, unscaled without derivatives: Algorithms without Derivatives.
(line 22)
* HYBRID algorithms for nonlinear systems: Algorithms using Derivatives.
(line 13)
* HYBRIDJ algorithm: Algorithms using Derivatives.
(line 67)
* HYBRIDS algorithm, scaled without derivatives: Algorithms without Derivatives.
(line 14)
* HYBRIDSJ algorithm: Algorithms using Derivatives.
(line 14)
* hydrogen atom: Coulomb Functions. (line 6)
* hyperbolic cosine, inverse: Elementary Functions.
(line 33)
* hyperbolic functions, complex numbers: Complex Hyperbolic Functions.
(line 6)
* hyperbolic integrals: Hyperbolic Integrals.
(line 6)
* hyperbolic sine, inverse: Elementary Functions.
(line 37)
* hyperbolic space: Legendre Functions and Spherical Harmonics.
(line 6)
* hyperbolic tangent, inverse: Elementary Functions.
(line 41)
* hypergeometric functions: Hypergeometric Functions.
(line 6)
* hypergeometric random variates: The Hypergeometric Distribution.
(line 6)
* hypot: Elementary Functions.
(line 23)
* hypot function, special functions: Circular Trigonometric Functions.
(line 17)
* i(x), Bessel Functions: Regular Modified Spherical Bessel Functions.
(line 6)
* I(x), Bessel Functions: Regular Modified Cylindrical Bessel Functions.
(line 6)
* identity matrix: Initializing matrix elements.
(line 6)
* identity permutation: Permutation allocation.
(line 20)
* IEEE exceptions: Setting up your IEEE environment.
(line 6)
* IEEE floating point: IEEE floating-point arithmetic.
(line 6)
* IEEE format for floating point numbers: Representation of floating point numbers.
(line 6)
* IEEE infinity, defined as a macro: Infinities and Not-a-number.
(line 6)
* IEEE NaN, defined as a macro: Infinities and Not-a-number.
(line 6)
* illumination, units of: Light and Illumination.
(line 6)
* imperial units: Imperial Units. (line 6)
* Implicit Euler method: Stepping Functions. (line 106)
* Implicit Runge-Kutta method: Stepping Functions. (line 112)
* importance sampling, VEGAS: VEGAS. (line 6)
* including GSL header files: Compiling and Linking.
(line 6)
* incomplete Beta function, normalized: Incomplete Beta Function.
(line 9)
* incomplete Fermi-Dirac integral: Incomplete Fermi-Dirac Integrals.
(line 6)
* incomplete Gamma function: Incomplete Gamma Functions.
(line 16)
* indirect sorting: Sorting objects. (line 57)
* indirect sorting, of vector elements: Sorting vectors. (line 31)
* infinity, defined as a macro: Infinities and Not-a-number.
(line 6)
* infinity, IEEE format: Representation of floating point numbers.
(line 27)
* info-gsl mailing list: Obtaining GSL. (line 6)
* initial value problems, differential equations: Ordinary Differential Equations.
(line 6)
* initializing matrices: Initializing matrix elements.
(line 6)
* initializing vectors: Initializing vector elements.
(line 6)
* inline functions: Inline functions. (line 6)
* integer powers: Power Function. (line 6)
* integrals, exponential: Exponential Integrals.
(line 6)
* integration, numerical (quadrature): Numerical Integration.
(line 6)
* interpolation: Interpolation. (line 6)
* interpolation, using Chebyshev polynomials: Chebyshev Approximations.
(line 6)
* inverse complex trigonometric functions: Inverse Complex Trigonometric Functions.
(line 6)
* inverse cumulative distribution functions: Random Number Distributions.
(line 6)
* inverse hyperbolic cosine: Elementary Functions.
(line 33)
* inverse hyperbolic functions, complex numbers: Inverse Complex Hyperbolic Functions.
(line 6)
* inverse hyperbolic sine: Elementary Functions.
(line 37)
* inverse hyperbolic tangent: Elementary Functions.
(line 41)
* inverse of a matrix, by LU decomposition: LU Decomposition. (line 69)
* inverting a permutation: Permutation functions.
(line 11)
* Irregular Cylindrical Bessel Functions: Irregular Cylindrical Bessel Functions.
(line 6)
* Irregular Modified Bessel Functions, Fractional Order: Irregular Modified Bessel Functions - Fractional Order.
(line 6)
* Irregular Modified Cylindrical Bessel Functions: Irregular Modified Cylindrical Bessel Functions.
(line 6)
* Irregular Modified Spherical Bessel Functions: Irregular Modified Spherical Bessel Functions.
(line 6)
* Irregular Spherical Bessel Functions: Irregular Spherical Bessel Functions.
(line 6)
* iterating through combinations: Combination functions.
(line 7)
* iterating through multisets: Multiset functions. (line 7)
* iterating through permutations: Permutation functions.
(line 15)
* iterative refinement of solutions in linear systems: LU Decomposition.
(line 57)
* j(x), Bessel Functions: Regular Spherical Bessel Functions.
(line 6)
* J(x), Bessel Functions: Regular Cylindrical Bessel Functions.
(line 6)
* Jacobi elliptic functions: Elliptic Functions (Jacobi).
(line 6)
* Jacobi orthogonalization: Singular Value Decomposition.
(line 58)
* Jacobian matrix, fitting: Overview of Nonlinear Least-Squares Fitting.
(line 34)
* Jacobian matrix, ODEs: Defining the ODE System.
(line 34)
* Jacobian matrix, root finding: Overview of Multidimensional Root Finding.
(line 42)
* k(x), Bessel Functions: Irregular Modified Spherical Bessel Functions.
(line 6)
* K(x), Bessel Functions: Irregular Modified Cylindrical Bessel Functions.
(line 6)
* knots, basis splines: Constructing the knots vector.
(line 6)
* kurtosis: Higher moments (skewness and kurtosis).
(line 6)
* Laguerre functions: Laguerre Functions. (line 6)
* Lambert function: Lambert W Functions. (line 6)
* Landau distribution: The Landau Distribution.
(line 8)
* LAPACK: Eigenvalue and Eigenvector References.
(line 18)
* Laplace distribution: The Laplace Distribution.
(line 7)
* LD_LIBRARY_PATH: Shared Libraries. (line 6)
* ldexp: Elementary Functions.
(line 45)
* leading dimension, matrices: Matrices. (line 6)
* least squares fit: Least-Squares Fitting.
(line 6)
* least squares fitting, nonlinear: Nonlinear Least-Squares Fitting.
(line 7)
* least squares, covariance of best-fit parameters: Computing the covariance matrix of best fit parameters.
(line 6)
* Legendre forms of elliptic integrals: Definition of Legendre Forms.
(line 6)
* Legendre functions: Legendre Functions and Spherical Harmonics.
(line 6)
* Legendre polynomials: Legendre Functions and Spherical Harmonics.
(line 6)
* length, computed accurately using hypot: Elementary Functions.
(line 23)
* Levenberg-Marquardt algorithms: Minimization Algorithms using Derivatives.
(line 12)
* Levin u-transform: Series Acceleration. (line 6)
* Levy distribution: The Levy alpha-Stable Distributions.
(line 9)
* Levy distribution, skew: The Levy skew alpha-Stable Distribution.
(line 9)
* libraries, linking with: Linking programs with the library.
(line 6)
* libraries, shared: Shared Libraries. (line 6)
* license of GSL: Introduction. (line 6)
* light, units of: Light and Illumination.
(line 6)
* linear algebra: Linear Algebra. (line 6)
* linear algebra, BLAS: BLAS Support. (line 6)
* linear interpolation: Interpolation Types. (line 9)
* linear regression: Linear regression. (line 6)
* linear systems, refinement of solutions: LU Decomposition. (line 57)
* linear systems, solution of: LU Decomposition. (line 39)
* linking with GSL libraries: Linking programs with the library.
(line 6)
* LMDER algorithm: Minimization Algorithms using Derivatives.
(line 13)
* log1p: Elementary Functions.
(line 13)
* logarithm and related functions: Logarithm and Related Functions.
(line 6)
* logarithm of Beta function: Beta Functions. (line 16)
* logarithm of combinatorial factor C(m,n): Factorials. (line 48)
* logarithm of complex number: Elementary Complex Functions.
(line 29)
* logarithm of cosh function, special functions: Hyperbolic Trigonometric Functions.
(line 12)
* logarithm of double factorial: Factorials. (line 36)
* logarithm of factorial: Factorials. (line 29)
* logarithm of Gamma function: Gamma Functions. (line 24)
* logarithm of Pochhammer symbol: Pochhammer Symbol. (line 18)
* logarithm of sinh function, special functions: Hyperbolic Trigonometric Functions.
(line 8)
* logarithm of the determinant of a matrix: LU Decomposition. (line 91)
* logarithm, computed accurately near 1: Elementary Functions.
(line 13)
* Logarithmic random variates: The Logarithmic Distribution.
(line 8)
* Logistic distribution: The Logistic Distribution.
(line 7)
* Lognormal distribution: The Lognormal Distribution.
(line 8)
* long double: Long double. (line 6)
* low discrepancy sequences: Quasi-Random Sequences.
(line 6)
* Low-level CBLAS: GSL CBLAS Library. (line 6)
* LU decomposition: LU Decomposition. (line 6)
* macros for mathematical constants: Mathematical Constants.
(line 6)
* magnitude of complex number: Properties of complex numbers.
(line 11)
* mailing list archives: Further Information. (line 6)
* mailing list for GSL announcements: Obtaining GSL. (line 6)
* mailing list, bug-gsl: Reporting Bugs. (line 6)
* mantissa, IEEE format: Representation of floating point numbers.
(line 6)
* mass, units of: Mass and Weight. (line 6)
* mathematical constants, defined as macros: Mathematical Constants.
(line 6)
* mathematical functions, elementary: Mathematical Functions.
(line 6)
* Mathieu Function Characteristic Values: Mathieu Function Characteristic Values.
(line 6)
* Mathieu functions: Mathieu Functions. (line 6)
* matrices <1>: Matrices. (line 6)
* matrices: Vectors and Matrices.
(line 6)
* matrices, initializing: Initializing matrix elements.
(line 6)
* matrices, range-checking: Accessing matrix elements.
(line 6)
* matrix determinant: LU Decomposition. (line 83)
* matrix diagonal: Creating row and column views.
(line 62)
* matrix factorization: Linear Algebra. (line 6)
* matrix inverse: LU Decomposition. (line 69)
* matrix square root, Cholesky decomposition: Cholesky Decomposition.
(line 6)
* matrix subdiagonal: Creating row and column views.
(line 74)
* matrix superdiagonal: Creating row and column views.
(line 86)
* matrix, constant: Initializing matrix elements.
(line 6)
* matrix, identity: Initializing matrix elements.
(line 6)
* matrix, operations: BLAS Support. (line 6)
* matrix, zero: Initializing matrix elements.
(line 6)
* max: Statistics. (line 6)
* maximal phase, Daubechies wavelets: DWT Initialization. (line 20)
* maximization, see minimization: One dimensional Minimization.
(line 6)
* maximum of two numbers: Maximum and Minimum functions.
(line 11)
* maximum value, from histogram: Histogram Statistics.
(line 6)
* mean: Statistics. (line 6)
* mean value, from histogram: Histogram Statistics.
(line 24)
* Mills' ratio, inverse: Probability functions.
(line 19)
* min: Statistics. (line 6)
* minimization, BFGS algorithm: Multimin Algorithms with Derivatives.
(line 37)
* minimization, caveats: Minimization Caveats.
(line 6)
* minimization, conjugate gradient algorithm: Multimin Algorithms with Derivatives.
(line 12)
* minimization, multidimensional: Multidimensional Minimization.
(line 6)
* minimization, one-dimensional: One dimensional Minimization.
(line 6)
* minimization, overview: Minimization Overview.
(line 6)
* minimization, Polak-Ribiere algorithm: Multimin Algorithms with Derivatives.
(line 29)
* minimization, providing a function to minimize: Providing the function to minimize.
(line 6)
* minimization, simplex algorithm: Multimin Algorithms without Derivatives.
(line 11)
* minimization, steepest descent algorithm: Multimin Algorithms with Derivatives.
(line 57)
* minimization, stopping parameters: Minimization Stopping Parameters.
(line 6)
* minimum finding, Brent's method: Minimization Algorithms.
(line 32)
* minimum finding, golden section algorithm: Minimization Algorithms.
(line 14)
* minimum of two numbers: Maximum and Minimum functions.
(line 15)
* minimum value, from histogram: Histogram Statistics.
(line 6)
* MINPACK, minimization algorithms <1>: Minimization Algorithms using Derivatives.
(line 13)
* MINPACK, minimization algorithms: Algorithms using Derivatives.
(line 14)
* MISCFUN: Special Functions References and Further Reading.
(line 11)
* MISER monte carlo integration: MISER. (line 6)
* Mixed-radix FFT, complex data: Mixed-radix FFT routines for complex data.
(line 6)
* Mixed-radix FFT, real data: Mixed-radix FFT routines for real data.
(line 6)
* Modified Bessel Functions, Fractional Order: Regular Modified Bessel Functions - Fractional Order.
(line 6)
* Modified Clenshaw-Curtis quadrature: Integrands with weight functions.
(line 6)
* Modified Cylindrical Bessel Functions: Regular Modified Cylindrical Bessel Functions.
(line 6)
* Modified Givens Rotation, BLAS: Level 1 GSL BLAS Interface.
(line 135)
* Modified Newton's method for nonlinear systems: Algorithms using Derivatives.
(line 92)
* Modified Spherical Bessel Functions: Regular Modified Spherical Bessel Functions.
(line 6)
* Monte Carlo integration: Monte Carlo Integration.
(line 6)
* MRG, multiple recursive random number generator: Random number generator algorithms.
(line 119)
* MT19937 random number generator: Random number generator algorithms.
(line 20)
* multi-parameter regression: Multi-parameter fitting.
(line 6)
* multidimensional integration: Monte Carlo Integration.
(line 6)
* multidimensional root finding, Broyden algorithm: Algorithms without Derivatives.
(line 45)
* multidimensional root finding, overview: Overview of Multidimensional Root Finding.
(line 6)
* multidimensional root finding, providing a function to solve: Providing the multidimensional system of equations to solve.
(line 6)
* Multimin, caveats: Multimin Caveats. (line 6)
* Multinomial distribution: The Multinomial Distribution.
(line 8)
* multiplication: Elementary Operations.
(line 6)
* multisets: Multisets. (line 6)
* multistep methods, ODEs: Stepping Functions. (line 129)
* N-dimensional random direction vector: Spherical Vector Distributions.
(line 42)
* NaN, defined as a macro: Infinities and Not-a-number.
(line 6)
* nautical units: Speed and Nautical Units.
(line 6)
* Negative Binomial distribution, random variates: The Negative Binomial Distribution.
(line 8)
* Nelder-Mead simplex algorithm for minimization: Multimin Algorithms without Derivatives.
(line 11)
* Newton algorithm, discrete: Algorithms without Derivatives.
(line 26)
* Newton algorithm, globally convergent: Algorithms using Derivatives.
(line 92)
* Newton's method for finding roots: Root Finding Algorithms using Derivatives.
(line 16)
* Newton's method for systems of nonlinear equations: Algorithms using Derivatives.
(line 73)
* Niederreiter sequence: Quasi-Random Sequences.
(line 6)
* NIST Statistical Reference Datasets: Fitting References and Further Reading.
(line 16)
* non-normalized incomplete Gamma function: Incomplete Gamma Functions.
(line 9)
* nonlinear equation, solutions of: One dimensional Root-Finding.
(line 6)
* nonlinear fitting, stopping parameters: Search Stopping Parameters for Minimization Algorithms.
(line 6)
* nonlinear functions, minimization: One dimensional Minimization.
(line 6)
* nonlinear least squares fitting: Nonlinear Least-Squares Fitting.
(line 7)
* nonlinear least squares fitting, overview: Overview of Nonlinear Least-Squares Fitting.
(line 6)
* nonlinear systems of equations, solution of: Multidimensional Root-Finding.
(line 6)
* nonsymmetric matrix, real, eigensystem: Real Nonsymmetric Matrices.
(line 6)
* Nordsieck form: Stepping Functions. (line 129)
* normalized form, IEEE format: Representation of floating point numbers.
(line 14)
* normalized incomplete Beta function: Incomplete Beta Function.
(line 9)
* Not-a-number, defined as a macro: Infinities and Not-a-number.
(line 6)
* NRM2, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 36)
* ntuples: N-tuples. (line 6)
* nuclear physics, constants: Atomic and Nuclear Physics.
(line 6)
* numerical constants, defined as macros: Mathematical Constants.
(line 6)
* numerical derivatives: Numerical Differentiation.
(line 6)
* numerical integration (quadrature): Numerical Integration.
(line 6)
* obtaining GSL: Obtaining GSL. (line 6)
* ODEs, initial value problems: Ordinary Differential Equations.
(line 6)
* optimization, combinatorial: Simulated Annealing. (line 6)
* optimization, see minimization: One dimensional Minimization.
(line 6)
* optimized functions, alternatives: Alternative optimized functions.
(line 6)
* ordering, matrix elements: Matrices. (line 6)
* ordinary differential equations, initial value problem: Ordinary Differential Equations.
(line 6)
* oscillatory functions, numerical integration of: QAWO adaptive integration for oscillatory functions.
(line 6)
* overflow, IEEE exceptions: Setting up your IEEE environment.
(line 6)
* Pareto distribution: The Pareto Distribution.
(line 8)
* PAW: Ntuple References and Further Reading.
(line 6)
* permutations: Permutations. (line 6)
* physical constants: Physical Constants. (line 6)
* physical dimension, matrices: Matrices. (line 6)
* pi, defined as a macro: Mathematical Constants.
(line 28)
* plain Monte Carlo: PLAIN Monte Carlo. (line 6)
* Pochhammer symbol: Pochhammer Symbol. (line 9)
* Poisson random numbers: The Poisson Distribution.
(line 8)
* Polak-Ribiere algorithm, minimization: Multimin Algorithms with Derivatives.
(line 29)
* polar form of complex numbers: Representation of complex numbers.
(line 6)
* polar to rectangular conversion: Conversion Functions.
(line 6)
* polygamma functions: Psi (Digamma) Function.
(line 6)
* polynomial evaluation: Polynomial Evaluation.
(line 6)
* polynomial interpolation: Interpolation Types. (line 13)
* polynomials, roots of: Polynomials. (line 6)
* power function: Power Function. (line 6)
* power of complex number: Elementary Complex Functions.
(line 16)
* power, units of: Thermal Energy and Power.
(line 6)
* precision, IEEE arithmetic: Setting up your IEEE environment.
(line 6)
* predictor-corrector method, ODEs: Stepping Functions. (line 129)
* prefixes: Prefixes. (line 6)
* pressure, units of: Pressure. (line 6)
* Prince-Dormand, Runge-Kutta method: Stepping Functions. (line 103)
* printers units: Printers Units. (line 6)
* probability distribution, from histogram: The histogram probability distribution struct.
(line 6)
* probability distributions, from histograms: Resampling from histograms.
(line 6)
* projection of ntuples: Histogramming ntuple values.
(line 35)
* psi function: Psi (Digamma) Function.
(line 6)
* QAG quadrature algorithm: QAG adaptive integration.
(line 6)
* QAGI quadrature algorithm: QAGI adaptive integration on infinite intervals.
(line 6)
* QAGP quadrature algorithm: QAGP adaptive integration with known singular points.
(line 6)
* QAGS quadrature algorithm: QAGS adaptive integration with singularities.
(line 6)
* QAWC quadrature algorithm: QAWC adaptive integration for Cauchy principal values.
(line 6)
* QAWF quadrature algorithm: QAWF adaptive integration for Fourier integrals.
(line 6)
* QAWO quadrature algorithm: QAWO adaptive integration for oscillatory functions.
(line 6)
* QAWS quadrature algorithm: QAWS adaptive integration for singular functions.
(line 6)
* QNG quadrature algorithm: QNG non-adaptive Gauss-Kronrod integration.
(line 6)
* QR decomposition: QR Decomposition. (line 6)
* QR decomposition with column pivoting: QR Decomposition with Column Pivoting.
(line 6)
* QUADPACK: Numerical Integration.
(line 6)
* quadratic equation, solving: Quadratic Equations. (line 6)
* quadrature: Numerical Integration.
(line 6)
* quantile functions: Random Number Distributions.
(line 6)
* quasi-random sequences: Quasi-Random Sequences.
(line 6)
* R250 shift-register random number generator: Other random number generators.
(line 60)
* Racah coefficients: Coupling Coefficients.
(line 6)
* Radial Mathieu Functions: Radial Mathieu Functions.
(line 6)
* radioactivity, units of: Radioactivity. (line 6)
* Radix-2 FFT for real data: Radix-2 FFT routines for real data.
(line 6)
* Radix-2 FFT, complex data: Radix-2 FFT routines for complex data.
(line 6)
* rand, BSD random number generator: Unix random number generators.
(line 17)
* rand48 random number generator: Unix random number generators.
(line 58)
* random number distributions: Random Number Distributions.
(line 6)
* random number generators: Random Number Generation.
(line 6)
* random sampling from histograms: The histogram probability distribution struct.
(line 6)
* RANDU random number generator: Other random number generators.
(line 105)
* RANF random number generator: Other random number generators.
(line 23)
* range: Statistics. (line 6)
* range-checking for matrices: Accessing matrix elements.
(line 6)
* range-checking for vectors: Accessing vector elements.
(line 6)
* RANLUX random number generator: Random number generator algorithms.
(line 71)
* RANLXD random number generator: Random number generator algorithms.
(line 65)
* RANLXS random number generator: Random number generator algorithms.
(line 47)
* RANMAR random number generator: Other random number generators.
(line 54)
* Rayleigh distribution: The Rayleigh Distribution.
(line 7)
* Rayleigh Tail distribution: The Rayleigh Tail Distribution.
(line 8)
* real nonsymmetric matrix, eigensystem: Real Nonsymmetric Matrices.
(line 6)
* real symmetric matrix, eigensystem: Real Symmetric Matrices.
(line 6)
* Reciprocal Gamma function: Gamma Functions. (line 51)
* rectangular to polar conversion: Conversion Functions.
(line 6)
* recursive stratified sampling, MISER: MISER. (line 6)
* reduction of angular variables: Restriction Functions.
(line 6)
* refinement of solutions in linear systems: LU Decomposition.
(line 57)
* regression, least squares: Least-Squares Fitting.
(line 6)
* Regular Bessel Functions, Fractional Order: Regular Bessel Function - Fractional Order.
(line 6)
* Regular Bessel Functions, Zeros of: Zeros of Regular Bessel Functions.
(line 6)
* Regular Cylindrical Bessel Functions: Regular Cylindrical Bessel Functions.
(line 6)
* Regular Modified Bessel Functions, Fractional Order: Regular Modified Bessel Functions - Fractional Order.
(line 6)
* Regular Modified Cylindrical Bessel Functions: Regular Modified Cylindrical Bessel Functions.
(line 6)
* Regular Modified Spherical Bessel Functions: Regular Modified Spherical Bessel Functions.
(line 6)
* Regular Spherical Bessel Functions: Regular Spherical Bessel Functions.
(line 6)
* Regulated Gamma function: Gamma Functions. (line 42)
* relative Pochhammer symbol: Pochhammer Symbol. (line 31)
* reporting bugs in GSL: Reporting Bugs. (line 6)
* representations of complex numbers: Representation of complex numbers.
(line 6)
* resampling from histograms: Resampling from histograms.
(line 6)
* residual, in nonlinear systems of equations <1>: Search Stopping Parameters for Minimization Algorithms.
(line 30)
* residual, in nonlinear systems of equations: Search Stopping Parameters for the multidimensional solver.
(line 31)
* reversing a permutation: Permutation functions.
(line 7)
* Riemann Zeta Function: Riemann Zeta Function.
(line 6)
* RK2, Runge-Kutta method: Stepping Functions. (line 88)
* RK4, Runge-Kutta method: Stepping Functions. (line 91)
* RKF45, Runge-Kutta-Fehlberg method: Stepping Functions. (line 96)
* root finding: One dimensional Root-Finding.
(line 6)
* root finding, bisection algorithm: Root Bracketing Algorithms.
(line 17)
* root finding, Brent's method: Root Bracketing Algorithms.
(line 50)
* root finding, caveats: Root Finding Caveats.
(line 6)
* root finding, false position algorithm: Root Bracketing Algorithms.
(line 33)
* root finding, initial guess: Search Bounds and Guesses.
(line 6)
* root finding, Newton's method: Root Finding Algorithms using Derivatives.
(line 16)
* root finding, overview: Root Finding Overview.
(line 6)
* root finding, providing a function to solve: Providing the function to solve.
(line 6)
* root finding, search bounds: Search Bounds and Guesses.
(line 6)
* root finding, secant method: Root Finding Algorithms using Derivatives.
(line 30)
* root finding, Steffenson's method: Root Finding Algorithms using Derivatives.
(line 61)
* root finding, stopping parameters <1>: Search Stopping Parameters for the multidimensional solver.
(line 6)
* root finding, stopping parameters: Search Stopping Parameters.
(line 6)
* roots: One dimensional Root-Finding.
(line 6)
* ROTG, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 116)
* rounding mode: Setting up your IEEE environment.
(line 6)
* Runge-Kutta Cash-Karp method: Stepping Functions. (line 100)
* Runge-Kutta methods, ordinary differential equations: Stepping Functions.
(line 88)
* Runge-Kutta Prince-Dormand method: Stepping Functions. (line 103)
* safe comparison of floating point numbers: Approximate Comparison of Floating Point Numbers.
(line 13)
* safeguarded step-length algorithm: Minimization Algorithms.
(line 48)
* sampling from histograms <1>: The histogram probability distribution struct.
(line 6)
* sampling from histograms: Resampling from histograms.
(line 6)
* SAXPY, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 96)
* SCAL, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 109)
* schedule, cooling: Simulated Annealing algorithm.
(line 23)
* se(q,x), Mathieu function: Angular Mathieu Functions.
(line 6)
* secant method for finding roots: Root Finding Algorithms using Derivatives.
(line 30)
* selection function, ntuples: Histogramming ntuple values.
(line 13)
* series, acceleration: Series Acceleration. (line 6)
* shared libraries: Shared Libraries. (line 6)
* shell prompt: Conventions used in this manual.
(line 6)
* Shi(x): Hyperbolic Integrals.
(line 6)
* shift-register random number generator: Other random number generators.
(line 60)
* Si(x): Trigonometric Integrals.
(line 6)
* sign bit, IEEE format: Representation of floating point numbers.
(line 6)
* sign of the determinant of a matrix: LU Decomposition. (line 99)
* simplex algorithm, minimization: Multimin Algorithms without Derivatives.
(line 11)
* simulated annealing: Simulated Annealing. (line 6)
* sin, of complex number: Complex Trigonometric Functions.
(line 7)
* sine function, special functions: Circular Trigonometric Functions.
(line 8)
* single precision, IEEE format: Representation of floating point numbers.
(line 31)
* singular functions, numerical integration of: QAWS adaptive integration for singular functions.
(line 6)
* singular points, specifying positions in quadrature: QAGP adaptive integration with known singular points.
(line 6)
* singular value decomposition: Singular Value Decomposition.
(line 6)
* Skew Levy distribution: The Levy skew alpha-Stable Distribution.
(line 9)
* skewness: Higher moments (skewness and kurtosis).
(line 6)
* slope, see numerical derivative: Numerical Differentiation.
(line 6)
* Sobol sequence: Quasi-Random Sequences.
(line 6)
* solution of linear system by Householder transformations: Householder solver for linear systems.
(line 6)
* solution of linear systems, Ax=b: Linear Algebra. (line 6)
* solving a nonlinear equation: One dimensional Root-Finding.
(line 6)
* solving nonlinear systems of equations: Multidimensional Root-Finding.
(line 6)
* sorting: Sorting. (line 6)
* sorting eigenvalues and eigenvectors: Sorting Eigenvalues and Eigenvectors.
(line 6)
* sorting vector elements: Sorting vectors. (line 23)
* source code, reuse in applications: Code Reuse. (line 6)
* special functions: Special Functions. (line 6)
* Spherical Bessel Functions: Regular Spherical Bessel Functions.
(line 6)
* spherical harmonics: Legendre Functions and Spherical Harmonics.
(line 6)
* spherical random variates, 2D: Spherical Vector Distributions.
(line 14)
* spherical random variates, 3D: Spherical Vector Distributions.
(line 33)
* spherical random variates, N-dimensional: Spherical Vector Distributions.
(line 42)
* spline: Interpolation. (line 6)
* splines, basis: Basis Splines. (line 6)
* square root of a matrix, Cholesky decomposition: Cholesky Decomposition.
(line 6)
* square root of complex number: Elementary Complex Functions.
(line 7)
* standard deviation: Statistics. (line 6)
* standard deviation, from histogram: Histogram Statistics.
(line 30)
* standards conformance, ANSI C: Using the library. (line 6)
* Statistical Reference Datasets (StRD): Fitting References and Further Reading.
(line 16)
* statistics: Statistics. (line 6)
* statistics, from histogram: Histogram Statistics.
(line 6)
* steepest descent algorithm, minimization: Multimin Algorithms with Derivatives.
(line 57)
* Steffenson's method for finding roots: Root Finding Algorithms using Derivatives.
(line 61)
* stratified sampling in Monte Carlo integration: Monte Carlo Integration.
(line 6)
* stride, of vector index: Vectors. (line 6)
* Student t-distribution: The t-distribution. (line 15)
* subdiagonal, of a matrix: Creating row and column views.
(line 74)
* summation, acceleration: Series Acceleration. (line 6)
* superdiagonal, matrix: Creating row and column views.
(line 86)
* SVD: Singular Value Decomposition.
(line 6)
* SWAP, Level-1 BLAS: Level 1 GSL BLAS Interface.
(line 76)
* swapping permutation elements: Accessing permutation elements.
(line 18)
* SYMM, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 41)
* symmetric matrix, real, eigensystem: Real Symmetric Matrices.
(line 6)
* SYMV, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 71)
* synchrotron functions: Synchrotron Functions.
(line 6)
* SYR, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 120)
* SYR2, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 144)
* SYR2K, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 164)
* SYRK, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 126)
* systems of equations, nonlinear: Multidimensional Root-Finding.
(line 6)
* t-distribution: The t-distribution. (line 15)
* t-test: Statistics. (line 6)
* tangent of complex number: Complex Trigonometric Functions.
(line 15)
* Tausworthe random number generator: Random number generator algorithms.
(line 137)
* Taylor coefficients, computation of: Factorials. (line 54)
* testing combination for validity: Combination properties.
(line 18)
* testing multiset for validity: Multiset properties. (line 17)
* testing permutation for validity: Permutation properties.
(line 14)
* thermal energy, units of: Thermal Energy and Power.
(line 6)
* time units: Measurement of Time. (line 6)
* trailing dimension, matrices: Matrices. (line 6)
* transformation, Householder: Householder Transformations.
(line 6)
* transforms, Hankel: Discrete Hankel Transforms.
(line 6)
* transforms, wavelet: Wavelet Transforms. (line 6)
* transport functions: Transport Functions. (line 6)
* traveling salesman problem: Traveling Salesman Problem.
(line 6)
* tridiagonal decomposition <1>: Tridiagonal Decomposition of Hermitian Matrices.
(line 6)
* tridiagonal decomposition: Tridiagonal Decomposition of Real Symmetric Matrices.
(line 6)
* tridiagonal systems: Tridiagonal Systems. (line 6)
* trigonometric functions: Trigonometric Functions.
(line 6)
* trigonometric functions of complex numbers: Complex Trigonometric Functions.
(line 6)
* trigonometric integrals: Trigonometric Integrals.
(line 6)
* TRMM, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 78)
* TRMV, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 36)
* TRSM, Level-3 BLAS: Level 3 GSL BLAS Interface.
(line 102)
* TRSV, Level-2 BLAS: Level 2 GSL BLAS Interface.
(line 57)
* TSP: Traveling Salesman Problem.
(line 6)
* TT800 random number generator: Other random number generators.
(line 75)
* two dimensional Gaussian distribution: The Bivariate Gaussian Distribution.
(line 9)
* two dimensional histograms: Two dimensional histograms.
(line 6)
* two-sided exponential distribution: The Laplace Distribution.
(line 7)
* Type 1 Gumbel distribution, random variates: The Type-1 Gumbel Distribution.
(line 8)
* Type 2 Gumbel distribution: The Type-2 Gumbel Distribution.
(line 8)
* u-transform for series: Series Acceleration. (line 6)
* underflow, IEEE exceptions: Setting up your IEEE environment.
(line 6)
* uniform distribution: The Flat (Uniform) Distribution.
(line 8)
* units, conversion of: Physical Constants. (line 6)
* units, imperial: Imperial Units. (line 6)
* Unix random number generators, rand: Unix random number generators.
(line 17)
* Unix random number generators, rand48: Unix random number generators.
(line 17)
* unnormalized incomplete Gamma function: Incomplete Gamma Functions.
(line 9)
* unweighted linear fits: Least-Squares Fitting.
(line 6)
* usage, compiling application programs: Using the library. (line 6)
* value function, ntuples: Histogramming ntuple values.
(line 24)
* Van der Pol oscillator, example: ODE Example programs.
(line 6)
* variance: Statistics. (line 6)
* variance, from histogram: Histogram Statistics.
(line 30)
* variance-covariance matrix, linear fits: Fitting Overview. (line 47)
* VAX random number generator: Other random number generators.
(line 87)
* vector, operations: BLAS Support. (line 6)
* vector, sorting elements of: Sorting vectors. (line 23)
* vectors <1>: Vectors. (line 6)
* vectors: Vectors and Matrices.
(line 6)
* vectors, initializing: Initializing vector elements.
(line 6)
* vectors, range-checking: Accessing vector elements.
(line 6)
* VEGAS Monte Carlo integration: VEGAS. (line 6)
* viscosity, units of: Viscosity. (line 6)
* volume units: Volume Area and Length.
(line 6)
* W function: Lambert W Functions. (line 6)
* warning options: GCC warning options for numerical programs.
(line 6)
* warranty (none): No Warranty. (line 6)
* wavelet transforms: Wavelet Transforms. (line 6)
* website, developer information: Further Information. (line 6)
* Weibull distribution: The Weibull Distribution.
(line 8)
* weight, units of: Mass and Weight. (line 6)
* weighted linear fits: Least-Squares Fitting.
(line 6)
* Wigner coefficients: Coupling Coefficients.
(line 6)
* y(x), Bessel Functions: Irregular Spherical Bessel Functions.
(line 6)
* Y(x), Bessel Functions: Irregular Cylindrical Bessel Functions.
(line 6)
* zero finding: One dimensional Root-Finding.
(line 6)
* zero matrix: Initializing matrix elements.
(line 6)
* zero, IEEE format: Representation of floating point numbers.
(line 27)
* Zeros of Regular Bessel Functions: Zeros of Regular Bessel Functions.
(line 6)
* Zeta functions: Zeta Functions. (line 6)
* Ziggurat method: The Gaussian Distribution.
(line 29)
This is gsl-ref.info, produced by makeinfo version 4.13 from
gsl-ref.texi.
INFO-DIR-SECTION Software libraries
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref). GNU Scientific Library - Reference
END-INFO-DIR-ENTRY
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2005, 2006, 2007, 2008, 2009, 2010, 2011 The GSL Team.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License" and "Free Software
Needs Free Documentation", the Front-Cover text being "A GNU Manual",
and with the Back-Cover Text being (a) (see below). A copy of the
license is included in the section entitled "GNU Free Documentation
License".
(a) The Back-Cover Text is: "You have the freedom to copy and modify
this GNU Manual."
File: gsl-ref.info, Node: Function Index, Next: Variable Index, Prev: GNU Free Documentation License, Up: Top
Function Index
**************
[index]
* Menu:
* cblas_caxpy: Level 1 CBLAS Functions.
(line 92)
* cblas_ccopy: Level 1 CBLAS Functions.
(line 89)
* cblas_cdotc_sub: Level 1 CBLAS Functions.
(line 23)
* cblas_cdotu_sub: Level 1 CBLAS Functions.
(line 20)
* cblas_cgbmv: Level 2 CBLAS Functions.
(line 99)
* cblas_cgemm: Level 3 CBLAS Functions.
(line 81)
* cblas_cgemv: Level 2 CBLAS Functions.
(line 93)
* cblas_cgerc: Level 2 CBLAS Functions.
(line 269)
* cblas_cgeru: Level 2 CBLAS Functions.
(line 265)
* cblas_chbmv: Level 2 CBLAS Functions.
(line 256)
* cblas_chemm: Level 3 CBLAS Functions.
(line 151)
* cblas_chemv: Level 2 CBLAS Functions.
(line 251)
* cblas_cher: Level 2 CBLAS Functions.
(line 273)
* cblas_cher2: Level 2 CBLAS Functions.
(line 282)
* cblas_cher2k: Level 3 CBLAS Functions.
(line 162)
* cblas_cherk: Level 3 CBLAS Functions.
(line 156)
* cblas_chpmv: Level 2 CBLAS Functions.
(line 261)
* cblas_chpr: Level 2 CBLAS Functions.
(line 277)
* cblas_chpr2: Level 2 CBLAS Functions.
(line 287)
* cblas_cscal: Level 1 CBLAS Functions.
(line 134)
* cblas_csscal: Level 1 CBLAS Functions.
(line 140)
* cblas_cswap: Level 1 CBLAS Functions.
(line 86)
* cblas_csymm: Level 3 CBLAS Functions.
(line 87)
* cblas_csyr2k: Level 3 CBLAS Functions.
(line 98)
* cblas_csyrk: Level 3 CBLAS Functions.
(line 92)
* cblas_ctbmv: Level 2 CBLAS Functions.
(line 109)
* cblas_ctbsv: Level 2 CBLAS Functions.
(line 124)
* cblas_ctpmv: Level 2 CBLAS Functions.
(line 114)
* cblas_ctpsv: Level 2 CBLAS Functions.
(line 129)
* cblas_ctrmm: Level 3 CBLAS Functions.
(line 104)
* cblas_ctrmv: Level 2 CBLAS Functions.
(line 104)
* cblas_ctrsm: Level 3 CBLAS Functions.
(line 110)
* cblas_ctrsv: Level 2 CBLAS Functions.
(line 119)
* cblas_dasum: Level 1 CBLAS Functions.
(line 41)
* cblas_daxpy: Level 1 CBLAS Functions.
(line 83)
* cblas_dcopy: Level 1 CBLAS Functions.
(line 80)
* cblas_ddot: Level 1 CBLAS Functions.
(line 17)
* cblas_dgbmv: Level 2 CBLAS Functions.
(line 58)
* cblas_dgemm: Level 3 CBLAS Functions.
(line 46)
* cblas_dgemv: Level 2 CBLAS Functions.
(line 52)
* cblas_dger: Level 2 CBLAS Functions.
(line 228)
* cblas_dnrm2: Level 1 CBLAS Functions.
(line 38)
* cblas_drot: Level 1 CBLAS Functions.
(line 122)
* cblas_drotg: Level 1 CBLAS Functions.
(line 116)
* cblas_drotm: Level 1 CBLAS Functions.
(line 125)
* cblas_drotmg: Level 1 CBLAS Functions.
(line 119)
* cblas_dsbmv: Level 2 CBLAS Functions.
(line 218)
* cblas_dscal: Level 1 CBLAS Functions.
(line 131)
* cblas_dsdot: Level 1 CBLAS Functions.
(line 11)
* cblas_dspmv: Level 2 CBLAS Functions.
(line 223)
* cblas_dspr: Level 2 CBLAS Functions.
(line 236)
* cblas_dspr2: Level 2 CBLAS Functions.
(line 246)
* cblas_dswap: Level 1 CBLAS Functions.
(line 77)
* cblas_dsymm: Level 3 CBLAS Functions.
(line 52)
* cblas_dsymv: Level 2 CBLAS Functions.
(line 213)
* cblas_dsyr: Level 2 CBLAS Functions.
(line 232)
* cblas_dsyr2: Level 2 CBLAS Functions.
(line 241)
* cblas_dsyr2k: Level 3 CBLAS Functions.
(line 63)
* cblas_dsyrk: Level 3 CBLAS Functions.
(line 57)
* cblas_dtbmv: Level 2 CBLAS Functions.
(line 68)
* cblas_dtbsv: Level 2 CBLAS Functions.
(line 83)
* cblas_dtpmv: Level 2 CBLAS Functions.
(line 73)
* cblas_dtpsv: Level 2 CBLAS Functions.
(line 88)
* cblas_dtrmm: Level 3 CBLAS Functions.
(line 69)
* cblas_dtrmv: Level 2 CBLAS Functions.
(line 63)
* cblas_dtrsm: Level 3 CBLAS Functions.
(line 75)
* cblas_dtrsv: Level 2 CBLAS Functions.
(line 78)
* cblas_dzasum: Level 1 CBLAS Functions.
(line 53)
* cblas_dznrm2: Level 1 CBLAS Functions.
(line 50)
* cblas_icamax: Level 1 CBLAS Functions.
(line 62)
* cblas_idamax: Level 1 CBLAS Functions.
(line 59)
* cblas_isamax: Level 1 CBLAS Functions.
(line 56)
* cblas_izamax: Level 1 CBLAS Functions.
(line 65)
* cblas_sasum: Level 1 CBLAS Functions.
(line 35)
* cblas_saxpy: Level 1 CBLAS Functions.
(line 74)
* cblas_scasum: Level 1 CBLAS Functions.
(line 47)
* cblas_scnrm2: Level 1 CBLAS Functions.
(line 44)
* cblas_scopy: Level 1 CBLAS Functions.
(line 71)
* cblas_sdot: Level 1 CBLAS Functions.
(line 14)
* cblas_sdsdot: Level 1 CBLAS Functions.
(line 8)
* cblas_sgbmv: Level 2 CBLAS Functions.
(line 16)
* cblas_sgemm: Level 3 CBLAS Functions.
(line 11)
* cblas_sgemv: Level 2 CBLAS Functions.
(line 10)
* cblas_sger: Level 2 CBLAS Functions.
(line 190)
* cblas_snrm2: Level 1 CBLAS Functions.
(line 32)
* cblas_srot: Level 1 CBLAS Functions.
(line 110)
* cblas_srotg: Level 1 CBLAS Functions.
(line 104)
* cblas_srotm: Level 1 CBLAS Functions.
(line 113)
* cblas_srotmg: Level 1 CBLAS Functions.
(line 107)
* cblas_ssbmv: Level 2 CBLAS Functions.
(line 180)
* cblas_sscal: Level 1 CBLAS Functions.
(line 128)
* cblas_sspmv: Level 2 CBLAS Functions.
(line 185)
* cblas_sspr: Level 2 CBLAS Functions.
(line 198)
* cblas_sspr2: Level 2 CBLAS Functions.
(line 208)
* cblas_sswap: Level 1 CBLAS Functions.
(line 68)
* cblas_ssymm: Level 3 CBLAS Functions.
(line 17)
* cblas_ssymv: Level 2 CBLAS Functions.
(line 175)
* cblas_ssyr: Level 2 CBLAS Functions.
(line 194)
* cblas_ssyr2: Level 2 CBLAS Functions.
(line 203)
* cblas_ssyr2k: Level 3 CBLAS Functions.
(line 28)
* cblas_ssyrk: Level 3 CBLAS Functions.
(line 22)
* cblas_stbmv: Level 2 CBLAS Functions.
(line 26)
* cblas_stbsv: Level 2 CBLAS Functions.
(line 41)
* cblas_stpmv: Level 2 CBLAS Functions.
(line 31)
* cblas_stpsv: Level 2 CBLAS Functions.
(line 46)
* cblas_strmm: Level 3 CBLAS Functions.
(line 34)
* cblas_strmv: Level 2 CBLAS Functions.
(line 21)
* cblas_strsm: Level 3 CBLAS Functions.
(line 40)
* cblas_strsv: Level 2 CBLAS Functions.
(line 36)
* cblas_xerbla: Level 3 CBLAS Functions.
(line 182)
* cblas_zaxpy: Level 1 CBLAS Functions.
(line 101)
* cblas_zcopy: Level 1 CBLAS Functions.
(line 98)
* cblas_zdotc_sub: Level 1 CBLAS Functions.
(line 29)
* cblas_zdotu_sub: Level 1 CBLAS Functions.
(line 26)
* cblas_zdscal: Level 1 CBLAS Functions.
(line 143)
* cblas_zgbmv: Level 2 CBLAS Functions.
(line 140)
* cblas_zgemm: Level 3 CBLAS Functions.
(line 116)
* cblas_zgemv: Level 2 CBLAS Functions.
(line 134)
* cblas_zgerc: Level 2 CBLAS Functions.
(line 310)
* cblas_zgeru: Level 2 CBLAS Functions.
(line 306)
* cblas_zhbmv: Level 2 CBLAS Functions.
(line 297)
* cblas_zhemm: Level 3 CBLAS Functions.
(line 168)
* cblas_zhemv: Level 2 CBLAS Functions.
(line 292)
* cblas_zher: Level 2 CBLAS Functions.
(line 314)
* cblas_zher2: Level 2 CBLAS Functions.
(line 323)
* cblas_zher2k: Level 3 CBLAS Functions.
(line 179)
* cblas_zherk: Level 3 CBLAS Functions.
(line 173)
* cblas_zhpmv: Level 2 CBLAS Functions.
(line 302)
* cblas_zhpr: Level 2 CBLAS Functions.
(line 318)
* cblas_zhpr2: Level 2 CBLAS Functions.
(line 328)
* cblas_zscal: Level 1 CBLAS Functions.
(line 137)
* cblas_zswap: Level 1 CBLAS Functions.
(line 95)
* cblas_zsymm: Level 3 CBLAS Functions.
(line 122)
* cblas_zsyr2k: Level 3 CBLAS Functions.
(line 133)
* cblas_zsyrk: Level 3 CBLAS Functions.
(line 127)
* cblas_ztbmv: Level 2 CBLAS Functions.
(line 150)
* cblas_ztbsv: Level 2 CBLAS Functions.
(line 165)
* cblas_ztpmv: Level 2 CBLAS Functions.
(line 155)
* cblas_ztpsv: Level 2 CBLAS Functions.
(line 170)
* cblas_ztrmm: Level 3 CBLAS Functions.
(line 139)
* cblas_ztrmv: Level 2 CBLAS Functions.
(line 145)
* cblas_ztrsm: Level 3 CBLAS Functions.
(line 145)
* cblas_ztrsv: Level 2 CBLAS Functions.
(line 160)
* gsl_acosh: Elementary Functions.
(line 33)
* gsl_asinh: Elementary Functions.
(line 37)
* gsl_atanh: Elementary Functions.
(line 41)
* gsl_blas_caxpy: Level 1 GSL BLAS Interface.
(line 94)
* gsl_blas_ccopy: Level 1 GSL BLAS Interface.
(line 83)
* gsl_blas_cdotc: Level 1 GSL BLAS Interface.
(line 29)
* gsl_blas_cdotu: Level 1 GSL BLAS Interface.
(line 22)
* gsl_blas_cgemm: Level 3 GSL BLAS Interface.
(line 18)
* gsl_blas_cgemv: Level 2 GSL BLAS Interface.
(line 16)
* gsl_blas_cgerc: Level 2 GSL BLAS Interface.
(line 110)
* gsl_blas_cgeru: Level 2 GSL BLAS Interface.
(line 101)
* gsl_blas_chemm: Level 3 GSL BLAS Interface.
(line 52)
* gsl_blas_chemv: Level 2 GSL BLAS Interface.
(line 82)
* gsl_blas_cher: Level 2 GSL BLAS Interface.
(line 129)
* gsl_blas_cher2: Level 2 GSL BLAS Interface.
(line 155)
* gsl_blas_cher2k: Level 3 GSL BLAS Interface.
(line 177)
* gsl_blas_cherk: Level 3 GSL BLAS Interface.
(line 137)
* gsl_blas_cscal: Level 1 GSL BLAS Interface.
(line 103)
* gsl_blas_csscal: Level 1 GSL BLAS Interface.
(line 107)
* gsl_blas_cswap: Level 1 GSL BLAS Interface.
(line 74)
* gsl_blas_csymm: Level 3 GSL BLAS Interface.
(line 37)
* gsl_blas_csyr2k: Level 3 GSL BLAS Interface.
(line 160)
* gsl_blas_csyrk: Level 3 GSL BLAS Interface.
(line 123)
* gsl_blas_ctrmm: Level 3 GSL BLAS Interface.
(line 74)
* gsl_blas_ctrmv: Level 2 GSL BLAS Interface.
(line 33)
* gsl_blas_ctrsm: Level 3 GSL BLAS Interface.
(line 98)
* gsl_blas_ctrsv: Level 2 GSL BLAS Interface.
(line 54)
* gsl_blas_dasum: Level 1 GSL BLAS Interface.
(line 47)
* gsl_blas_daxpy: Level 1 GSL BLAS Interface.
(line 92)
* gsl_blas_dcopy: Level 1 GSL BLAS Interface.
(line 81)
* gsl_blas_ddot: Level 1 GSL BLAS Interface.
(line 17)
* gsl_blas_dgemm: Level 3 GSL BLAS Interface.
(line 13)
* gsl_blas_dgemv: Level 2 GSL BLAS Interface.
(line 12)
* gsl_blas_dger: Level 2 GSL BLAS Interface.
(line 98)
* gsl_blas_dnrm2: Level 1 GSL BLAS Interface.
(line 36)
* gsl_blas_drot: Level 1 GSL BLAS Interface.
(line 128)
* gsl_blas_drotg: Level 1 GSL BLAS Interface.
(line 116)
* gsl_blas_drotm: Level 1 GSL BLAS Interface.
(line 143)
* gsl_blas_drotmg: Level 1 GSL BLAS Interface.
(line 135)
* gsl_blas_dscal: Level 1 GSL BLAS Interface.
(line 101)
* gsl_blas_dsdot: Level 1 GSL BLAS Interface.
(line 15)
* gsl_blas_dswap: Level 1 GSL BLAS Interface.
(line 72)
* gsl_blas_dsymm: Level 3 GSL BLAS Interface.
(line 33)
* gsl_blas_dsymv: Level 2 GSL BLAS Interface.
(line 71)
* gsl_blas_dsyr: Level 2 GSL BLAS Interface.
(line 120)
* gsl_blas_dsyr2: Level 2 GSL BLAS Interface.
(line 144)
* gsl_blas_dsyr2k: Level 3 GSL BLAS Interface.
(line 155)
* gsl_blas_dsyrk: Level 3 GSL BLAS Interface.
(line 119)
* gsl_blas_dtrmm: Level 3 GSL BLAS Interface.
(line 70)
* gsl_blas_dtrmv: Level 2 GSL BLAS Interface.
(line 30)
* gsl_blas_dtrsm: Level 3 GSL BLAS Interface.
(line 94)
* gsl_blas_dtrsv: Level 2 GSL BLAS Interface.
(line 51)
* gsl_blas_dzasum: Level 1 GSL BLAS Interface.
(line 52)
* gsl_blas_dznrm2: Level 1 GSL BLAS Interface.
(line 41)
* gsl_blas_icamax: Level 1 GSL BLAS Interface.
(line 60)
* gsl_blas_idamax: Level 1 GSL BLAS Interface.
(line 58)
* gsl_blas_isamax: Level 1 GSL BLAS Interface.
(line 57)
* gsl_blas_izamax: Level 1 GSL BLAS Interface.
(line 62)
* gsl_blas_sasum: Level 1 GSL BLAS Interface.
(line 46)
* gsl_blas_saxpy: Level 1 GSL BLAS Interface.
(line 90)
* gsl_blas_scasum: Level 1 GSL BLAS Interface.
(line 51)
* gsl_blas_scnrm2: Level 1 GSL BLAS Interface.
(line 40)
* gsl_blas_scopy: Level 1 GSL BLAS Interface.
(line 80)
* gsl_blas_sdot: Level 1 GSL BLAS Interface.
(line 13)
* gsl_blas_sdsdot: Level 1 GSL BLAS Interface.
(line 8)
* gsl_blas_sgemm: Level 3 GSL BLAS Interface.
(line 10)
* gsl_blas_sgemv: Level 2 GSL BLAS Interface.
(line 9)
* gsl_blas_sger: Level 2 GSL BLAS Interface.
(line 96)
* gsl_blas_snrm2: Level 1 GSL BLAS Interface.
(line 35)
* gsl_blas_srot: Level 1 GSL BLAS Interface.
(line 126)
* gsl_blas_srotg: Level 1 GSL BLAS Interface.
(line 114)
* gsl_blas_srotm: Level 1 GSL BLAS Interface.
(line 141)
* gsl_blas_srotmg: Level 1 GSL BLAS Interface.
(line 133)
* gsl_blas_sscal: Level 1 GSL BLAS Interface.
(line 100)
* gsl_blas_sswap: Level 1 GSL BLAS Interface.
(line 71)
* gsl_blas_ssymm: Level 3 GSL BLAS Interface.
(line 30)
* gsl_blas_ssymv: Level 2 GSL BLAS Interface.
(line 68)
* gsl_blas_ssyr: Level 2 GSL BLAS Interface.
(line 118)
* gsl_blas_ssyr2: Level 2 GSL BLAS Interface.
(line 142)
* gsl_blas_ssyr2k: Level 3 GSL BLAS Interface.
(line 152)
* gsl_blas_ssyrk: Level 3 GSL BLAS Interface.
(line 116)
* gsl_blas_strmm: Level 3 GSL BLAS Interface.
(line 67)
* gsl_blas_strmv: Level 2 GSL BLAS Interface.
(line 27)
* gsl_blas_strsm: Level 3 GSL BLAS Interface.
(line 91)
* gsl_blas_strsv: Level 2 GSL BLAS Interface.
(line 48)
* gsl_blas_zaxpy: Level 1 GSL BLAS Interface.
(line 96)
* gsl_blas_zcopy: Level 1 GSL BLAS Interface.
(line 85)
* gsl_blas_zdotc: Level 1 GSL BLAS Interface.
(line 31)
* gsl_blas_zdotu: Level 1 GSL BLAS Interface.
(line 24)
* gsl_blas_zdscal: Level 1 GSL BLAS Interface.
(line 109)
* gsl_blas_zgemm: Level 3 GSL BLAS Interface.
(line 22)
* gsl_blas_zgemv: Level 2 GSL BLAS Interface.
(line 20)
* gsl_blas_zgerc: Level 2 GSL BLAS Interface.
(line 113)
* gsl_blas_zgeru: Level 2 GSL BLAS Interface.
(line 104)
* gsl_blas_zhemm: Level 3 GSL BLAS Interface.
(line 56)
* gsl_blas_zhemv: Level 2 GSL BLAS Interface.
(line 85)
* gsl_blas_zher: Level 2 GSL BLAS Interface.
(line 131)
* gsl_blas_zher2: Level 2 GSL BLAS Interface.
(line 158)
* gsl_blas_zher2k: Level 3 GSL BLAS Interface.
(line 181)
* gsl_blas_zherk: Level 3 GSL BLAS Interface.
(line 140)
* gsl_blas_zscal: Level 1 GSL BLAS Interface.
(line 105)
* gsl_blas_zswap: Level 1 GSL BLAS Interface.
(line 76)
* gsl_blas_zsymm: Level 3 GSL BLAS Interface.
(line 41)
* gsl_blas_zsyr2k: Level 3 GSL BLAS Interface.
(line 164)
* gsl_blas_zsyrk: Level 3 GSL BLAS Interface.
(line 126)
* gsl_blas_ztrmm: Level 3 GSL BLAS Interface.
(line 78)
* gsl_blas_ztrmv: Level 2 GSL BLAS Interface.
(line 36)
* gsl_blas_ztrsm: Level 3 GSL BLAS Interface.
(line 102)
* gsl_blas_ztrsv: Level 2 GSL BLAS Interface.
(line 57)
* gsl_block_alloc: Block allocation. (line 15)
* gsl_block_calloc: Block allocation. (line 25)
* gsl_block_fprintf: Reading and writing blocks.
(line 27)
* gsl_block_fread: Reading and writing blocks.
(line 17)
* gsl_block_free: Block allocation. (line 29)
* gsl_block_fscanf: Reading and writing blocks.
(line 34)
* gsl_block_fwrite: Reading and writing blocks.
(line 10)
* gsl_bspline_alloc: Initializing the B-splines solver.
(line 12)
* gsl_bspline_deriv_alloc: Initializing the B-splines solver.
(line 22)
* gsl_bspline_deriv_eval: Evaluation of B-spline basis function derivatives.
(line 9)
* gsl_bspline_deriv_eval_nonzero: Evaluation of B-spline basis function derivatives.
(line 24)
* gsl_bspline_deriv_free: Initializing the B-splines solver.
(line 28)
* gsl_bspline_eval: Evaluation of B-spline basis functions.
(line 8)
* gsl_bspline_eval_nonzero: Evaluation of B-spline basis functions.
(line 19)
* gsl_bspline_free: Initializing the B-splines solver.
(line 18)
* gsl_bspline_greville_abscissa: Obtaining Greville abscissae for B-spline basis functions.
(line 19)
* gsl_bspline_knots: Constructing the knots vector.
(line 8)
* gsl_bspline_knots_uniform: Constructing the knots vector.
(line 13)
* gsl_bspline_ncoeffs: Evaluation of B-spline basis functions.
(line 29)
* gsl_cdf_beta_P: The Beta Distribution.
(line 22)
* gsl_cdf_beta_Pinv: The Beta Distribution.
(line 24)
* gsl_cdf_beta_Q: The Beta Distribution.
(line 23)
* gsl_cdf_beta_Qinv: The Beta Distribution.
(line 25)
* gsl_cdf_binomial_P: The Binomial Distribution.
(line 27)
* gsl_cdf_binomial_Q: The Binomial Distribution.
(line 29)
* gsl_cdf_cauchy_P: The Cauchy Distribution.
(line 23)
* gsl_cdf_cauchy_Pinv: The Cauchy Distribution.
(line 25)
* gsl_cdf_cauchy_Q: The Cauchy Distribution.
(line 24)
* gsl_cdf_cauchy_Qinv: The Cauchy Distribution.
(line 26)
* gsl_cdf_chisq_P: The Chi-squared Distribution.
(line 30)
* gsl_cdf_chisq_Pinv: The Chi-squared Distribution.
(line 32)
* gsl_cdf_chisq_Q: The Chi-squared Distribution.
(line 31)
* gsl_cdf_chisq_Qinv: The Chi-squared Distribution.
(line 33)
* gsl_cdf_exponential_P: The Exponential Distribution.
(line 22)
* gsl_cdf_exponential_Pinv: The Exponential Distribution.
(line 24)
* gsl_cdf_exponential_Q: The Exponential Distribution.
(line 23)
* gsl_cdf_exponential_Qinv: The Exponential Distribution.
(line 25)
* gsl_cdf_exppow_P: The Exponential Power Distribution.
(line 25)
* gsl_cdf_exppow_Q: The Exponential Power Distribution.
(line 26)
* gsl_cdf_fdist_P: The F-distribution. (line 35)
* gsl_cdf_fdist_Pinv: The F-distribution. (line 38)
* gsl_cdf_fdist_Q: The F-distribution. (line 36)
* gsl_cdf_fdist_Qinv: The F-distribution. (line 40)
* gsl_cdf_flat_P: The Flat (Uniform) Distribution.
(line 21)
* gsl_cdf_flat_Pinv: The Flat (Uniform) Distribution.
(line 23)
* gsl_cdf_flat_Q: The Flat (Uniform) Distribution.
(line 22)
* gsl_cdf_flat_Qinv: The Flat (Uniform) Distribution.
(line 24)
* gsl_cdf_gamma_P: The Gamma Distribution.
(line 35)
* gsl_cdf_gamma_Pinv: The Gamma Distribution.
(line 37)
* gsl_cdf_gamma_Q: The Gamma Distribution.
(line 36)
* gsl_cdf_gamma_Qinv: The Gamma Distribution.
(line 38)
* gsl_cdf_gaussian_P: The Gaussian Distribution.
(line 42)
* gsl_cdf_gaussian_Pinv: The Gaussian Distribution.
(line 44)
* gsl_cdf_gaussian_Q: The Gaussian Distribution.
(line 43)
* gsl_cdf_gaussian_Qinv: The Gaussian Distribution.
(line 45)
* gsl_cdf_geometric_P: The Geometric Distribution.
(line 26)
* gsl_cdf_geometric_Q: The Geometric Distribution.
(line 27)
* gsl_cdf_gumbel1_P: The Type-1 Gumbel Distribution.
(line 22)
* gsl_cdf_gumbel1_Pinv: The Type-1 Gumbel Distribution.
(line 24)
* gsl_cdf_gumbel1_Q: The Type-1 Gumbel Distribution.
(line 23)
* gsl_cdf_gumbel1_Qinv: The Type-1 Gumbel Distribution.
(line 25)
* gsl_cdf_gumbel2_P: The Type-2 Gumbel Distribution.
(line 22)
* gsl_cdf_gumbel2_Pinv: The Type-2 Gumbel Distribution.
(line 24)
* gsl_cdf_gumbel2_Q: The Type-2 Gumbel Distribution.
(line 23)
* gsl_cdf_gumbel2_Qinv: The Type-2 Gumbel Distribution.
(line 25)
* gsl_cdf_hypergeometric_P: The Hypergeometric Distribution.
(line 31)
* gsl_cdf_hypergeometric_Q: The Hypergeometric Distribution.
(line 33)
* gsl_cdf_laplace_P: The Laplace Distribution.
(line 20)
* gsl_cdf_laplace_Pinv: The Laplace Distribution.
(line 22)
* gsl_cdf_laplace_Q: The Laplace Distribution.
(line 21)
* gsl_cdf_laplace_Qinv: The Laplace Distribution.
(line 23)
* gsl_cdf_logistic_P: The Logistic Distribution.
(line 21)
* gsl_cdf_logistic_Pinv: The Logistic Distribution.
(line 23)
* gsl_cdf_logistic_Q: The Logistic Distribution.
(line 22)
* gsl_cdf_logistic_Qinv: The Logistic Distribution.
(line 24)
* gsl_cdf_lognormal_P: The Lognormal Distribution.
(line 24)
* gsl_cdf_lognormal_Pinv: The Lognormal Distribution.
(line 28)
* gsl_cdf_lognormal_Q: The Lognormal Distribution.
(line 26)
* gsl_cdf_lognormal_Qinv: The Lognormal Distribution.
(line 30)
* gsl_cdf_negative_binomial_P: The Negative Binomial Distribution.
(line 26)
* gsl_cdf_negative_binomial_Q: The Negative Binomial Distribution.
(line 28)
* gsl_cdf_pareto_P: The Pareto Distribution.
(line 22)
* gsl_cdf_pareto_Pinv: The Pareto Distribution.
(line 24)
* gsl_cdf_pareto_Q: The Pareto Distribution.
(line 23)
* gsl_cdf_pareto_Qinv: The Pareto Distribution.
(line 25)
* gsl_cdf_pascal_P: The Pascal Distribution.
(line 25)
* gsl_cdf_pascal_Q: The Pascal Distribution.
(line 27)
* gsl_cdf_poisson_P: The Poisson Distribution.
(line 22)
* gsl_cdf_poisson_Q: The Poisson Distribution.
(line 23)
* gsl_cdf_rayleigh_P: The Rayleigh Distribution.
(line 21)
* gsl_cdf_rayleigh_Pinv: The Rayleigh Distribution.
(line 23)
* gsl_cdf_rayleigh_Q: The Rayleigh Distribution.
(line 22)
* gsl_cdf_rayleigh_Qinv: The Rayleigh Distribution.
(line 24)
* gsl_cdf_tdist_P: The t-distribution. (line 30)
* gsl_cdf_tdist_Pinv: The t-distribution. (line 32)
* gsl_cdf_tdist_Q: The t-distribution. (line 31)
* gsl_cdf_tdist_Qinv: The t-distribution. (line 33)
* gsl_cdf_ugaussian_P: The Gaussian Distribution.
(line 50)
* gsl_cdf_ugaussian_Pinv: The Gaussian Distribution.
(line 52)
* gsl_cdf_ugaussian_Q: The Gaussian Distribution.
(line 51)
* gsl_cdf_ugaussian_Qinv: The Gaussian Distribution.
(line 53)
* gsl_cdf_weibull_P: The Weibull Distribution.
(line 22)
* gsl_cdf_weibull_Pinv: The Weibull Distribution.
(line 24)
* gsl_cdf_weibull_Q: The Weibull Distribution.
(line 23)
* gsl_cdf_weibull_Qinv: The Weibull Distribution.
(line 25)
* gsl_cheb_alloc: Creation and Calculation of Chebyshev Series.
(line 7)
* gsl_cheb_calc_deriv: Derivatives and Integrals.
(line 14)
* gsl_cheb_calc_integ: Derivatives and Integrals.
(line 21)
* gsl_cheb_coeffs: Auxiliary Functions for Chebyshev Series.
(line 14)
* gsl_cheb_eval: Chebyshev Series Evaluation.
(line 8)
* gsl_cheb_eval_err: Chebyshev Series Evaluation.
(line 12)
* gsl_cheb_eval_n: Chebyshev Series Evaluation.
(line 19)
* gsl_cheb_eval_n_err: Chebyshev Series Evaluation.
(line 25)
* gsl_cheb_free: Creation and Calculation of Chebyshev Series.
(line 11)
* gsl_cheb_init: Creation and Calculation of Chebyshev Series.
(line 15)
* gsl_cheb_order: Auxiliary Functions for Chebyshev Series.
(line 10)
* gsl_cheb_size: Auxiliary Functions for Chebyshev Series.
(line 13)
* gsl_combination_alloc: Combination allocation.
(line 8)
* gsl_combination_calloc: Combination allocation.
(line 17)
* gsl_combination_data: Combination properties.
(line 14)
* gsl_combination_fprintf: Reading and writing combinations.
(line 29)
* gsl_combination_fread: Reading and writing combinations.
(line 19)
* gsl_combination_free: Combination allocation.
(line 31)
* gsl_combination_fscanf: Reading and writing combinations.
(line 38)
* gsl_combination_fwrite: Reading and writing combinations.
(line 11)
* gsl_combination_get: Accessing combination elements.
(line 11)
* gsl_combination_init_first: Combination allocation.
(line 23)
* gsl_combination_init_last: Combination allocation.
(line 27)
* gsl_combination_k: Combination properties.
(line 10)
* gsl_combination_memcpy: Combination allocation.
(line 35)
* gsl_combination_n: Combination properties.
(line 7)
* gsl_combination_next: Combination functions.
(line 7)
* gsl_combination_prev: Combination functions.
(line 15)
* gsl_combination_valid: Combination properties.
(line 18)
* gsl_complex_abs: Properties of complex numbers.
(line 11)
* gsl_complex_abs2: Properties of complex numbers.
(line 14)
* gsl_complex_add: Complex arithmetic operators.
(line 7)
* gsl_complex_add_imag: Complex arithmetic operators.
(line 39)
* gsl_complex_add_real: Complex arithmetic operators.
(line 23)
* gsl_complex_arccos: Inverse Complex Trigonometric Functions.
(line 20)
* gsl_complex_arccos_real: Inverse Complex Trigonometric Functions.
(line 25)
* gsl_complex_arccosh: Inverse Complex Hyperbolic Functions.
(line 12)
* gsl_complex_arccosh_real: Inverse Complex Hyperbolic Functions.
(line 19)
* gsl_complex_arccot: Inverse Complex Trigonometric Functions.
(line 53)
* gsl_complex_arccoth: Inverse Complex Hyperbolic Functions.
(line 40)
* gsl_complex_arccsc: Inverse Complex Trigonometric Functions.
(line 45)
* gsl_complex_arccsc_real: Inverse Complex Trigonometric Functions.
(line 49)
* gsl_complex_arccsch: Inverse Complex Hyperbolic Functions.
(line 36)
* gsl_complex_arcsec: Inverse Complex Trigonometric Functions.
(line 37)
* gsl_complex_arcsec_real: Inverse Complex Trigonometric Functions.
(line 41)
* gsl_complex_arcsech: Inverse Complex Hyperbolic Functions.
(line 32)
* gsl_complex_arcsin: Inverse Complex Trigonometric Functions.
(line 7)
* gsl_complex_arcsin_real: Inverse Complex Trigonometric Functions.
(line 12)
* gsl_complex_arcsinh: Inverse Complex Hyperbolic Functions.
(line 7)
* gsl_complex_arctan: Inverse Complex Trigonometric Functions.
(line 32)
* gsl_complex_arctanh: Inverse Complex Hyperbolic Functions.
(line 23)
* gsl_complex_arctanh_real: Inverse Complex Hyperbolic Functions.
(line 28)
* gsl_complex_arg: Properties of complex numbers.
(line 7)
* gsl_complex_conjugate: Complex arithmetic operators.
(line 55)
* gsl_complex_cos: Complex Trigonometric Functions.
(line 11)
* gsl_complex_cosh: Complex Hyperbolic Functions.
(line 11)
* gsl_complex_cot: Complex Trigonometric Functions.
(line 27)
* gsl_complex_coth: Complex Hyperbolic Functions.
(line 27)
* gsl_complex_csc: Complex Trigonometric Functions.
(line 23)
* gsl_complex_csch: Complex Hyperbolic Functions.
(line 23)
* gsl_complex_div: Complex arithmetic operators.
(line 19)
* gsl_complex_div_imag: Complex arithmetic operators.
(line 51)
* gsl_complex_div_real: Complex arithmetic operators.
(line 35)
* gsl_complex_exp: Elementary Complex Functions.
(line 25)
* gsl_complex_inverse: Complex arithmetic operators.
(line 59)
* gsl_complex_log: Elementary Complex Functions.
(line 29)
* gsl_complex_log10: Elementary Complex Functions.
(line 34)
* gsl_complex_log_b: Elementary Complex Functions.
(line 39)
* gsl_complex_logabs: Properties of complex numbers.
(line 18)
* gsl_complex_mul: Complex arithmetic operators.
(line 15)
* gsl_complex_mul_imag: Complex arithmetic operators.
(line 47)
* gsl_complex_mul_real: Complex arithmetic operators.
(line 31)
* gsl_complex_negative: Complex arithmetic operators.
(line 63)
* gsl_complex_polar: Representation of complex numbers.
(line 30)
* gsl_complex_poly_complex_eval: Polynomial Evaluation.
(line 23)
* gsl_complex_pow: Elementary Complex Functions.
(line 16)
* gsl_complex_pow_real: Elementary Complex Functions.
(line 21)
* gsl_complex_rect: Representation of complex numbers.
(line 25)
* gsl_complex_sec: Complex Trigonometric Functions.
(line 19)
* gsl_complex_sech: Complex Hyperbolic Functions.
(line 19)
* gsl_complex_sin: Complex Trigonometric Functions.
(line 7)
* gsl_complex_sinh: Complex Hyperbolic Functions.
(line 7)
* gsl_complex_sqrt: Elementary Complex Functions.
(line 7)
* gsl_complex_sqrt_real: Elementary Complex Functions.
(line 12)
* gsl_complex_sub: Complex arithmetic operators.
(line 11)
* gsl_complex_sub_imag: Complex arithmetic operators.
(line 43)
* gsl_complex_sub_real: Complex arithmetic operators.
(line 27)
* gsl_complex_tan: Complex Trigonometric Functions.
(line 15)
* gsl_complex_tanh: Complex Hyperbolic Functions.
(line 15)
* gsl_deriv_backward: Numerical Differentiation functions.
(line 42)
* gsl_deriv_central: Numerical Differentiation functions.
(line 8)
* gsl_deriv_forward: Numerical Differentiation functions.
(line 24)
* gsl_dht_alloc: Discrete Hankel Transform Functions.
(line 7)
* gsl_dht_apply: Discrete Hankel Transform Functions.
(line 24)
* gsl_dht_free: Discrete Hankel Transform Functions.
(line 20)
* gsl_dht_init: Discrete Hankel Transform Functions.
(line 11)
* gsl_dht_k_sample: Discrete Hankel Transform Functions.
(line 37)
* gsl_dht_new: Discrete Hankel Transform Functions.
(line 16)
* gsl_dht_x_sample: Discrete Hankel Transform Functions.
(line 32)
* gsl_eigen_gen: Real Generalized Nonsymmetric Eigensystems.
(line 75)
* gsl_eigen_gen_alloc: Real Generalized Nonsymmetric Eigensystems.
(line 43)
* gsl_eigen_gen_free: Real Generalized Nonsymmetric Eigensystems.
(line 48)
* gsl_eigen_gen_params: Real Generalized Nonsymmetric Eigensystems.
(line 52)
* gsl_eigen_gen_QZ: Real Generalized Nonsymmetric Eigensystems.
(line 92)
* gsl_eigen_genherm: Complex Generalized Hermitian-Definite Eigensystems.
(line 30)
* gsl_eigen_genherm_alloc: Complex Generalized Hermitian-Definite Eigensystems.
(line 19)
* gsl_eigen_genherm_free: Complex Generalized Hermitian-Definite Eigensystems.
(line 25)
* gsl_eigen_genhermv: Complex Generalized Hermitian-Definite Eigensystems.
(line 48)
* gsl_eigen_genhermv_alloc: Complex Generalized Hermitian-Definite Eigensystems.
(line 37)
* gsl_eigen_genhermv_free: Complex Generalized Hermitian-Definite Eigensystems.
(line 43)
* gsl_eigen_genhermv_sort: Sorting Eigenvalues and Eigenvectors.
(line 51)
* gsl_eigen_gensymm: Real Generalized Symmetric-Definite Eigensystems.
(line 34)
* gsl_eigen_gensymm_alloc: Real Generalized Symmetric-Definite Eigensystems.
(line 24)
* gsl_eigen_gensymm_free: Real Generalized Symmetric-Definite Eigensystems.
(line 30)
* gsl_eigen_gensymmv: Real Generalized Symmetric-Definite Eigensystems.
(line 52)
* gsl_eigen_gensymmv_alloc: Real Generalized Symmetric-Definite Eigensystems.
(line 41)
* gsl_eigen_gensymmv_free: Real Generalized Symmetric-Definite Eigensystems.
(line 47)
* gsl_eigen_gensymmv_sort: Sorting Eigenvalues and Eigenvectors.
(line 44)
* gsl_eigen_genv: Real Generalized Nonsymmetric Eigensystems.
(line 108)
* gsl_eigen_genv_alloc: Real Generalized Nonsymmetric Eigensystems.
(line 98)
* gsl_eigen_genv_free: Real Generalized Nonsymmetric Eigensystems.
(line 103)
* gsl_eigen_genv_QZ: Real Generalized Nonsymmetric Eigensystems.
(line 124)
* gsl_eigen_genv_sort: Sorting Eigenvalues and Eigenvectors.
(line 59)
* gsl_eigen_herm: Complex Hermitian Matrices.
(line 20)
* gsl_eigen_herm_alloc: Complex Hermitian Matrices.
(line 11)
* gsl_eigen_herm_free: Complex Hermitian Matrices.
(line 16)
* gsl_eigen_hermv: Complex Hermitian Matrices.
(line 40)
* gsl_eigen_hermv_alloc: Complex Hermitian Matrices.
(line 30)
* gsl_eigen_hermv_free: Complex Hermitian Matrices.
(line 35)
* gsl_eigen_hermv_sort: Sorting Eigenvalues and Eigenvectors.
(line 28)
* gsl_eigen_nonsymm: Real Nonsymmetric Matrices.
(line 62)
* gsl_eigen_nonsymm_alloc: Real Nonsymmetric Matrices.
(line 19)
* gsl_eigen_nonsymm_free: Real Nonsymmetric Matrices.
(line 25)
* gsl_eigen_nonsymm_params: Real Nonsymmetric Matrices.
(line 29)
* gsl_eigen_nonsymm_Z: Real Nonsymmetric Matrices.
(line 75)
* gsl_eigen_nonsymmv: Real Nonsymmetric Matrices.
(line 100)
* gsl_eigen_nonsymmv_alloc: Real Nonsymmetric Matrices.
(line 80)
* gsl_eigen_nonsymmv_free: Real Nonsymmetric Matrices.
(line 86)
* gsl_eigen_nonsymmv_params: Real Nonsymmetric Matrices.
(line 90)
* gsl_eigen_nonsymmv_sort: Sorting Eigenvalues and Eigenvectors.
(line 35)
* gsl_eigen_nonsymmv_Z: Real Nonsymmetric Matrices.
(line 114)
* gsl_eigen_symm: Real Symmetric Matrices.
(line 23)
* gsl_eigen_symm_alloc: Real Symmetric Matrices.
(line 14)
* gsl_eigen_symm_free: Real Symmetric Matrices.
(line 19)
* gsl_eigen_symmv: Real Symmetric Matrices.
(line 41)
* gsl_eigen_symmv_alloc: Real Symmetric Matrices.
(line 32)
* gsl_eigen_symmv_free: Real Symmetric Matrices.
(line 37)
* gsl_eigen_symmv_sort: Sorting Eigenvalues and Eigenvectors.
(line 8)
* GSL_ERROR: Using GSL error reporting in your own functions.
(line 16)
* GSL_ERROR_VAL: Using GSL error reporting in your own functions.
(line 36)
* gsl_expm1: Elementary Functions.
(line 18)
* gsl_fcmp: Approximate Comparison of Floating Point Numbers.
(line 13)
* gsl_fft_complex_backward: Mixed-radix FFT routines for complex data.
(line 130)
* gsl_fft_complex_forward: Mixed-radix FFT routines for complex data.
(line 122)
* gsl_fft_complex_inverse: Mixed-radix FFT routines for complex data.
(line 134)
* gsl_fft_complex_radix2_backward: Radix-2 FFT routines for complex data.
(line 23)
* gsl_fft_complex_radix2_dif_backward: Radix-2 FFT routines for complex data.
(line 43)
* gsl_fft_complex_radix2_dif_forward: Radix-2 FFT routines for complex data.
(line 38)
* gsl_fft_complex_radix2_dif_inverse: Radix-2 FFT routines for complex data.
(line 45)
* gsl_fft_complex_radix2_dif_transform: Radix-2 FFT routines for complex data.
(line 41)
* gsl_fft_complex_radix2_forward: Radix-2 FFT routines for complex data.
(line 18)
* gsl_fft_complex_radix2_inverse: Radix-2 FFT routines for complex data.
(line 25)
* gsl_fft_complex_radix2_transform: Radix-2 FFT routines for complex data.
(line 21)
* gsl_fft_complex_transform: Mixed-radix FFT routines for complex data.
(line 126)
* gsl_fft_complex_wavetable_alloc: Mixed-radix FFT routines for complex data.
(line 47)
* gsl_fft_complex_wavetable_free: Mixed-radix FFT routines for complex data.
(line 66)
* gsl_fft_complex_workspace_alloc: Mixed-radix FFT routines for complex data.
(line 107)
* gsl_fft_complex_workspace_free: Mixed-radix FFT routines for complex data.
(line 112)
* gsl_fft_halfcomplex_radix2_backward: Radix-2 FFT routines for real data.
(line 59)
* gsl_fft_halfcomplex_radix2_inverse: Radix-2 FFT routines for real data.
(line 57)
* gsl_fft_halfcomplex_radix2_unpack: Radix-2 FFT routines for real data.
(line 68)
* gsl_fft_halfcomplex_transform: Mixed-radix FFT routines for real data.
(line 125)
* gsl_fft_halfcomplex_unpack: Mixed-radix FFT routines for real data.
(line 155)
* gsl_fft_halfcomplex_wavetable_alloc: Mixed-radix FFT routines for real data.
(line 76)
* gsl_fft_halfcomplex_wavetable_free: Mixed-radix FFT routines for real data.
(line 97)
* gsl_fft_real_radix2_transform: Radix-2 FFT routines for real data.
(line 15)
* gsl_fft_real_transform: Mixed-radix FFT routines for real data.
(line 122)
* gsl_fft_real_unpack: Mixed-radix FFT routines for real data.
(line 139)
* gsl_fft_real_wavetable_alloc: Mixed-radix FFT routines for real data.
(line 74)
* gsl_fft_real_wavetable_free: Mixed-radix FFT routines for real data.
(line 95)
* gsl_fft_real_workspace_alloc: Mixed-radix FFT routines for real data.
(line 106)
* gsl_fft_real_workspace_free: Mixed-radix FFT routines for real data.
(line 112)
* gsl_finite: Infinities and Not-a-number.
(line 26)
* gsl_fit_linear: Linear regression. (line 13)
* gsl_fit_linear_est: Linear regression. (line 45)
* gsl_fit_mul: Linear fitting without a constant term.
(line 13)
* gsl_fit_mul_est: Linear fitting without a constant term.
(line 38)
* gsl_fit_wlinear: Linear regression. (line 30)
* gsl_fit_wmul: Linear fitting without a constant term.
(line 25)
* gsl_frexp: Elementary Functions.
(line 49)
* gsl_heapsort: Sorting objects. (line 18)
* gsl_heapsort_index: Sorting objects. (line 59)
* gsl_histogram2d_accumulate: Updating and accessing 2D histogram elements.
(line 27)
* gsl_histogram2d_add: 2D Histogram Operations.
(line 13)
* gsl_histogram2d_alloc: 2D Histogram allocation.
(line 16)
* gsl_histogram2d_clone: Copying 2D Histograms.
(line 14)
* gsl_histogram2d_cov: 2D Histogram Statistics.
(line 53)
* gsl_histogram2d_div: 2D Histogram Operations.
(line 33)
* gsl_histogram2d_equal_bins_p: 2D Histogram Operations.
(line 8)
* gsl_histogram2d_find: Searching 2D histogram ranges.
(line 11)
* gsl_histogram2d_fprintf: Reading and writing 2D histograms.
(line 30)
* gsl_histogram2d_fread: Reading and writing 2D histograms.
(line 19)
* gsl_histogram2d_free: 2D Histogram allocation.
(line 38)
* gsl_histogram2d_fscanf: Reading and writing 2D histograms.
(line 66)
* gsl_histogram2d_fwrite: Reading and writing 2D histograms.
(line 11)
* gsl_histogram2d_get: Updating and accessing 2D histogram elements.
(line 33)
* gsl_histogram2d_get_xrange: Updating and accessing 2D histogram elements.
(line 40)
* gsl_histogram2d_get_yrange: Updating and accessing 2D histogram elements.
(line 42)
* gsl_histogram2d_increment: Updating and accessing 2D histogram elements.
(line 14)
* gsl_histogram2d_max_bin: 2D Histogram Statistics.
(line 12)
* gsl_histogram2d_max_val: 2D Histogram Statistics.
(line 7)
* gsl_histogram2d_memcpy: Copying 2D Histograms.
(line 8)
* gsl_histogram2d_min_bin: 2D Histogram Statistics.
(line 23)
* gsl_histogram2d_min_val: 2D Histogram Statistics.
(line 18)
* gsl_histogram2d_mul: 2D Histogram Operations.
(line 26)
* gsl_histogram2d_nx: Updating and accessing 2D histogram elements.
(line 56)
* gsl_histogram2d_ny: Updating and accessing 2D histogram elements.
(line 59)
* gsl_histogram2d_pdf_alloc: Resampling from 2D histograms.
(line 43)
* gsl_histogram2d_pdf_free: Resampling from 2D histograms.
(line 58)
* gsl_histogram2d_pdf_init: Resampling from 2D histograms.
(line 51)
* gsl_histogram2d_pdf_sample: Resampling from 2D histograms.
(line 63)
* gsl_histogram2d_reset: Updating and accessing 2D histogram elements.
(line 65)
* gsl_histogram2d_scale: 2D Histogram Operations.
(line 40)
* gsl_histogram2d_set_ranges: 2D Histogram allocation.
(line 27)
* gsl_histogram2d_set_ranges_uniform: 2D Histogram allocation.
(line 33)
* gsl_histogram2d_shift: 2D Histogram Operations.
(line 45)
* gsl_histogram2d_sub: 2D Histogram Operations.
(line 19)
* gsl_histogram2d_sum: 2D Histogram Statistics.
(line 59)
* gsl_histogram2d_xmax: Updating and accessing 2D histogram elements.
(line 54)
* gsl_histogram2d_xmean: 2D Histogram Statistics.
(line 29)
* gsl_histogram2d_xmin: Updating and accessing 2D histogram elements.
(line 55)
* gsl_histogram2d_xsigma: 2D Histogram Statistics.
(line 41)
* gsl_histogram2d_ymax: Updating and accessing 2D histogram elements.
(line 57)
* gsl_histogram2d_ymean: 2D Histogram Statistics.
(line 35)
* gsl_histogram2d_ymin: Updating and accessing 2D histogram elements.
(line 58)
* gsl_histogram2d_ysigma: 2D Histogram Statistics.
(line 47)
* gsl_histogram_accumulate: Updating and accessing histogram elements.
(line 27)
* gsl_histogram_add: Histogram Operations.
(line 13)
* gsl_histogram_alloc: Histogram allocation.
(line 15)
* gsl_histogram_bins: Updating and accessing histogram elements.
(line 54)
* gsl_histogram_clone: Copying Histograms. (line 14)
* gsl_histogram_div: Histogram Operations.
(line 32)
* gsl_histogram_equal_bins_p: Histogram Operations.
(line 8)
* gsl_histogram_find: Searching histogram ranges.
(line 11)
* gsl_histogram_fprintf: Reading and writing histograms.
(line 29)
* gsl_histogram_fread: Reading and writing histograms.
(line 18)
* gsl_histogram_free: Histogram allocation.
(line 62)
* gsl_histogram_fscanf: Reading and writing histograms.
(line 53)
* gsl_histogram_fwrite: Reading and writing histograms.
(line 11)
* gsl_histogram_get: Updating and accessing histogram elements.
(line 33)
* gsl_histogram_get_range: Updating and accessing histogram elements.
(line 40)
* gsl_histogram_increment: Updating and accessing histogram elements.
(line 12)
* gsl_histogram_max: Updating and accessing histogram elements.
(line 52)
* gsl_histogram_max_bin: Histogram Statistics.
(line 11)
* gsl_histogram_max_val: Histogram Statistics.
(line 7)
* gsl_histogram_mean: Histogram Statistics.
(line 25)
* gsl_histogram_memcpy: Copying Histograms. (line 8)
* gsl_histogram_min: Updating and accessing histogram elements.
(line 53)
* gsl_histogram_min_bin: Histogram Statistics.
(line 20)
* gsl_histogram_min_val: Histogram Statistics.
(line 16)
* gsl_histogram_mul: Histogram Operations.
(line 25)
* gsl_histogram_pdf_alloc: The histogram probability distribution struct.
(line 36)
* gsl_histogram_pdf_free: The histogram probability distribution struct.
(line 50)
* gsl_histogram_pdf_init: The histogram probability distribution struct.
(line 44)
* gsl_histogram_pdf_sample: The histogram probability distribution struct.
(line 55)
* gsl_histogram_reset: Updating and accessing histogram elements.
(line 60)
* gsl_histogram_scale: Histogram Operations.
(line 38)
* gsl_histogram_set_ranges: Histogram allocation.
(line 25)
* gsl_histogram_set_ranges_uniform: Histogram allocation.
(line 50)
* gsl_histogram_shift: Histogram Operations.
(line 42)
* gsl_histogram_sigma: Histogram Statistics.
(line 31)
* gsl_histogram_sub: Histogram Operations.
(line 19)
* gsl_histogram_sum: Histogram Statistics.
(line 38)
* gsl_hypot: Elementary Functions.
(line 23)
* gsl_hypot3: Elementary Functions.
(line 29)
* gsl_ieee_env_setup: Setting up your IEEE environment.
(line 24)
* gsl_ieee_fprintf_double: Representation of floating point numbers.
(line 57)
* gsl_ieee_fprintf_float: Representation of floating point numbers.
(line 55)
* gsl_ieee_printf_double: Representation of floating point numbers.
(line 84)
* gsl_ieee_printf_float: Representation of floating point numbers.
(line 83)
* GSL_IMAG: Representation of complex numbers.
(line 36)
* gsl_integration_cquad: CQUAD doubly-adaptive integration.
(line 37)
* gsl_integration_cquad_workspace_alloc: CQUAD doubly-adaptive integration.
(line 22)
* gsl_integration_cquad_workspace_free: CQUAD doubly-adaptive integration.
(line 31)
* gsl_integration_glfixed: Fixed order Gauss-Legendre integration.
(line 24)
* gsl_integration_glfixed_point: Fixed order Gauss-Legendre integration.
(line 30)
* gsl_integration_glfixed_table_alloc: Fixed order Gauss-Legendre integration.
(line 16)
* gsl_integration_glfixed_table_free: Fixed order Gauss-Legendre integration.
(line 37)
* gsl_integration_qag: QAG adaptive integration.
(line 28)
* gsl_integration_qagi: QAGI adaptive integration on infinite intervals.
(line 10)
* gsl_integration_qagil: QAGI adaptive integration on infinite intervals.
(line 41)
* gsl_integration_qagiu: QAGI adaptive integration on infinite intervals.
(line 27)
* gsl_integration_qagp: QAGP adaptive integration with known singular points.
(line 10)
* gsl_integration_qags: QAGS adaptive integration with singularities.
(line 19)
* gsl_integration_qawc: QAWC adaptive integration for Cauchy principal values.
(line 10)
* gsl_integration_qawf: QAWF adaptive integration for Fourier integrals.
(line 12)
* gsl_integration_qawo: QAWO adaptive integration for oscillatory functions.
(line 59)
* gsl_integration_qawo_table_alloc: QAWO adaptive integration for oscillatory functions.
(line 14)
* gsl_integration_qawo_table_free: QAWO adaptive integration for oscillatory functions.
(line 52)
* gsl_integration_qawo_table_set: QAWO adaptive integration for oscillatory functions.
(line 42)
* gsl_integration_qawo_table_set_length: QAWO adaptive integration for oscillatory functions.
(line 47)
* gsl_integration_qaws: QAWS adaptive integration for singular functions.
(line 53)
* gsl_integration_qaws_table_alloc: QAWS adaptive integration for singular functions.
(line 14)
* gsl_integration_qaws_table_free: QAWS adaptive integration for singular functions.
(line 45)
* gsl_integration_qaws_table_set: QAWS adaptive integration for singular functions.
(line 40)
* gsl_integration_qng: QNG non-adaptive Gauss-Kronrod integration.
(line 13)
* gsl_integration_workspace_alloc: QAG adaptive integration.
(line 17)
* gsl_integration_workspace_free: QAG adaptive integration.
(line 22)
* gsl_interp_accel_alloc: Index Look-up and Acceleration.
(line 20)
* gsl_interp_accel_find: Index Look-up and Acceleration.
(line 27)
* gsl_interp_accel_free: Index Look-up and Acceleration.
(line 40)
* gsl_interp_accel_reset: Index Look-up and Acceleration.
(line 35)
* gsl_interp_akima: Interpolation Types. (line 36)
* gsl_interp_akima_periodic: Interpolation Types. (line 40)
* gsl_interp_alloc: Interpolation Functions.
(line 11)
* gsl_interp_bsearch: Index Look-up and Acceleration.
(line 14)
* gsl_interp_cspline: Interpolation Types. (line 20)
* gsl_interp_cspline_periodic: Interpolation Types. (line 26)
* gsl_interp_eval: Evaluation of Interpolating Functions.
(line 9)
* gsl_interp_eval_deriv: Evaluation of Interpolating Functions.
(line 21)
* gsl_interp_eval_deriv2: Evaluation of Interpolating Functions.
(line 31)
* gsl_interp_eval_deriv2_e: Evaluation of Interpolating Functions.
(line 34)
* gsl_interp_eval_deriv_e: Evaluation of Interpolating Functions.
(line 24)
* gsl_interp_eval_e: Evaluation of Interpolating Functions.
(line 12)
* gsl_interp_eval_integ: Evaluation of Interpolating Functions.
(line 41)
* gsl_interp_eval_integ_e: Evaluation of Interpolating Functions.
(line 44)
* gsl_interp_free: Interpolation Functions.
(line 25)
* gsl_interp_init: Interpolation Functions.
(line 16)
* gsl_interp_linear: Interpolation Types. (line 9)
* gsl_interp_min_size: Interpolation Types. (line 58)
* gsl_interp_name: Interpolation Types. (line 46)
* gsl_interp_polynomial: Interpolation Types. (line 13)
* gsl_interp_type_min_size: Interpolation Types. (line 60)
* GSL_IS_EVEN: Testing for Odd and Even Numbers.
(line 11)
* GSL_IS_ODD: Testing for Odd and Even Numbers.
(line 7)
* gsl_isinf: Infinities and Not-a-number.
(line 22)
* gsl_isnan: Infinities and Not-a-number.
(line 19)
* gsl_ldexp: Elementary Functions.
(line 45)
* gsl_linalg_balance_matrix: Balancing. (line 19)
* gsl_linalg_bidiag_decomp: Bidiagonalization. (line 17)
* gsl_linalg_bidiag_unpack: Bidiagonalization. (line 29)
* gsl_linalg_bidiag_unpack2: Bidiagonalization. (line 37)
* gsl_linalg_bidiag_unpack_B: Bidiagonalization. (line 44)
* gsl_linalg_cholesky_decomp: Cholesky Decomposition.
(line 19)
* gsl_linalg_cholesky_invert: Cholesky Decomposition.
(line 56)
* gsl_linalg_cholesky_solve: Cholesky Decomposition.
(line 37)
* gsl_linalg_cholesky_svx: Cholesky Decomposition.
(line 47)
* gsl_linalg_complex_cholesky_decomp: Cholesky Decomposition.
(line 21)
* gsl_linalg_complex_cholesky_invert: Cholesky Decomposition.
(line 58)
* gsl_linalg_complex_cholesky_solve: Cholesky Decomposition.
(line 40)
* gsl_linalg_complex_cholesky_svx: Cholesky Decomposition.
(line 49)
* gsl_linalg_complex_householder_hm: Householder Transformations.
(line 29)
* gsl_linalg_complex_householder_hv: Householder Transformations.
(line 45)
* gsl_linalg_complex_householder_mh: Householder Transformations.
(line 37)
* gsl_linalg_complex_householder_transform: Householder Transformations.
(line 20)
* gsl_linalg_complex_LU_decomp: LU Decomposition. (line 22)
* gsl_linalg_complex_LU_det: LU Decomposition. (line 86)
* gsl_linalg_complex_LU_invert: LU Decomposition. (line 74)
* gsl_linalg_complex_LU_lndet: LU Decomposition. (line 94)
* gsl_linalg_complex_LU_refine: LU Decomposition. (line 64)
* gsl_linalg_complex_LU_sgndet: LU Decomposition. (line 102)
* gsl_linalg_complex_LU_solve: LU Decomposition. (line 44)
* gsl_linalg_complex_LU_svx: LU Decomposition. (line 52)
* gsl_linalg_hermtd_decomp: Tridiagonal Decomposition of Hermitian Matrices.
(line 16)
* gsl_linalg_hermtd_unpack: Tridiagonal Decomposition of Hermitian Matrices.
(line 29)
* gsl_linalg_hermtd_unpack_T: Tridiagonal Decomposition of Hermitian Matrices.
(line 36)
* gsl_linalg_hessenberg_decomp: Hessenberg Decomposition of Real Matrices.
(line 18)
* gsl_linalg_hessenberg_set_zero: Hessenberg Decomposition of Real Matrices.
(line 44)
* gsl_linalg_hessenberg_unpack: Hessenberg Decomposition of Real Matrices.
(line 29)
* gsl_linalg_hessenberg_unpack_accum: Hessenberg Decomposition of Real Matrices.
(line 36)
* gsl_linalg_hesstri_decomp: Hessenberg-Triangular Decomposition of Real Matrices.
(line 19)
* gsl_linalg_HH_solve: Householder solver for linear systems.
(line 8)
* gsl_linalg_HH_svx: Householder solver for linear systems.
(line 14)
* gsl_linalg_householder_hm: Householder Transformations.
(line 27)
* gsl_linalg_householder_hv: Householder Transformations.
(line 43)
* gsl_linalg_householder_mh: Householder Transformations.
(line 35)
* gsl_linalg_householder_transform: Householder Transformations.
(line 18)
* gsl_linalg_LU_decomp: LU Decomposition. (line 20)
* gsl_linalg_LU_det: LU Decomposition. (line 84)
* gsl_linalg_LU_invert: LU Decomposition. (line 71)
* gsl_linalg_LU_lndet: LU Decomposition. (line 92)
* gsl_linalg_LU_refine: LU Decomposition. (line 60)
* gsl_linalg_LU_sgndet: LU Decomposition. (line 100)
* gsl_linalg_LU_solve: LU Decomposition. (line 41)
* gsl_linalg_LU_svx: LU Decomposition. (line 50)
* gsl_linalg_QR_decomp: QR Decomposition. (line 21)
* gsl_linalg_QR_lssolve: QR Decomposition. (line 55)
* gsl_linalg_QR_QRsolve: QR Decomposition. (line 106)
* gsl_linalg_QR_QTmat: QR Decomposition. (line 80)
* gsl_linalg_QR_QTvec: QR Decomposition. (line 65)
* gsl_linalg_QR_Qvec: QR Decomposition. (line 73)
* gsl_linalg_QR_Rsolve: QR Decomposition. (line 88)
* gsl_linalg_QR_Rsvx: QR Decomposition. (line 94)
* gsl_linalg_QR_solve: QR Decomposition. (line 38)
* gsl_linalg_QR_svx: QR Decomposition. (line 46)
* gsl_linalg_QR_unpack: QR Decomposition. (line 101)
* gsl_linalg_QR_update: QR Decomposition. (line 112)
* gsl_linalg_QRPT_decomp: QR Decomposition with Column Pivoting.
(line 20)
* gsl_linalg_QRPT_decomp2: QR Decomposition with Column Pivoting.
(line 42)
* gsl_linalg_QRPT_QRsolve: QR Decomposition with Column Pivoting.
(line 63)
* gsl_linalg_QRPT_Rsolve: QR Decomposition with Column Pivoting.
(line 78)
* gsl_linalg_QRPT_Rsvx: QR Decomposition with Column Pivoting.
(line 83)
* gsl_linalg_QRPT_solve: QR Decomposition with Column Pivoting.
(line 49)
* gsl_linalg_QRPT_svx: QR Decomposition with Column Pivoting.
(line 55)
* gsl_linalg_QRPT_update: QR Decomposition with Column Pivoting.
(line 70)
* gsl_linalg_R_solve: QR Decomposition. (line 119)
* gsl_linalg_R_svx: QR Decomposition. (line 124)
* gsl_linalg_solve_cyc_tridiag: Tridiagonal Systems. (line 43)
* gsl_linalg_solve_symm_cyc_tridiag: Tridiagonal Systems. (line 57)
* gsl_linalg_solve_symm_tridiag: Tridiagonal Systems. (line 30)
* gsl_linalg_solve_tridiag: Tridiagonal Systems. (line 16)
* gsl_linalg_SV_decomp: Singular Value Decomposition.
(line 39)
* gsl_linalg_SV_decomp_jacobi: Singular Value Decomposition.
(line 58)
* gsl_linalg_SV_decomp_mod: Singular Value Decomposition.
(line 52)
* gsl_linalg_SV_solve: Singular Value Decomposition.
(line 66)
* gsl_linalg_symmtd_decomp: Tridiagonal Decomposition of Real Symmetric Matrices.
(line 15)
* gsl_linalg_symmtd_unpack: Tridiagonal Decomposition of Real Symmetric Matrices.
(line 27)
* gsl_linalg_symmtd_unpack_T: Tridiagonal Decomposition of Real Symmetric Matrices.
(line 34)
* gsl_log1p: Elementary Functions.
(line 13)
* gsl_matrix_add: Matrix operations. (line 9)
* gsl_matrix_add_constant: Matrix operations. (line 40)
* gsl_matrix_alloc: Matrix allocation. (line 15)
* gsl_matrix_calloc: Matrix allocation. (line 22)
* gsl_matrix_column: Creating row and column views.
(line 26)
* gsl_matrix_const_column: Creating row and column views.
(line 28)
* gsl_matrix_const_diagonal: Creating row and column views.
(line 65)
* gsl_matrix_const_ptr: Accessing matrix elements.
(line 38)
* gsl_matrix_const_row: Creating row and column views.
(line 16)
* gsl_matrix_const_subcolumn: Creating row and column views.
(line 53)
* gsl_matrix_const_subdiagonal: Creating row and column views.
(line 78)
* gsl_matrix_const_submatrix: Matrix views. (line 22)
* gsl_matrix_const_subrow: Creating row and column views.
(line 40)
* gsl_matrix_const_superdiagonal: Creating row and column views.
(line 90)
* gsl_matrix_const_view_array: Matrix views. (line 54)
* gsl_matrix_const_view_array_with_tda: Matrix views. (line 79)
* gsl_matrix_const_view_vector: Matrix views. (line 105)
* gsl_matrix_const_view_vector_with_tda: Matrix views. (line 130)
* gsl_matrix_diagonal: Creating row and column views.
(line 63)
* gsl_matrix_div_elements: Matrix operations. (line 29)
* gsl_matrix_equal: Matrix properties. (line 22)
* gsl_matrix_fprintf: Reading and writing matrices.
(line 28)
* gsl_matrix_fread: Reading and writing matrices.
(line 18)
* gsl_matrix_free: Matrix allocation. (line 26)
* gsl_matrix_fscanf: Reading and writing matrices.
(line 35)
* gsl_matrix_fwrite: Reading and writing matrices.
(line 11)
* gsl_matrix_get: Accessing matrix elements.
(line 22)
* gsl_matrix_get_col: Copying rows and columns.
(line 21)
* gsl_matrix_get_row: Copying rows and columns.
(line 15)
* gsl_matrix_isneg: Matrix properties. (line 13)
* gsl_matrix_isnonneg: Matrix properties. (line 14)
* gsl_matrix_isnull: Matrix properties. (line 11)
* gsl_matrix_ispos: Matrix properties. (line 12)
* gsl_matrix_max: Finding maximum and minimum elements of matrices.
(line 9)
* gsl_matrix_max_index: Finding maximum and minimum elements of matrices.
(line 21)
* gsl_matrix_memcpy: Copying matrices. (line 8)
* gsl_matrix_min: Finding maximum and minimum elements of matrices.
(line 12)
* gsl_matrix_min_index: Finding maximum and minimum elements of matrices.
(line 28)
* gsl_matrix_minmax: Finding maximum and minimum elements of matrices.
(line 16)
* gsl_matrix_minmax_index: Finding maximum and minimum elements of matrices.
(line 35)
* gsl_matrix_mul_elements: Matrix operations. (line 22)
* gsl_matrix_ptr: Accessing matrix elements.
(line 36)
* gsl_matrix_row: Creating row and column views.
(line 14)
* gsl_matrix_scale: Matrix operations. (line 35)
* gsl_matrix_set: Accessing matrix elements.
(line 29)
* gsl_matrix_set_all: Initializing matrix elements.
(line 7)
* gsl_matrix_set_col: Copying rows and columns.
(line 33)
* gsl_matrix_set_identity: Initializing matrix elements.
(line 13)
* gsl_matrix_set_row: Copying rows and columns.
(line 27)
* gsl_matrix_set_zero: Initializing matrix elements.
(line 10)
* gsl_matrix_sub: Matrix operations. (line 15)
* gsl_matrix_subcolumn: Creating row and column views.
(line 51)
* gsl_matrix_subdiagonal: Creating row and column views.
(line 76)
* gsl_matrix_submatrix: Matrix views. (line 20)
* gsl_matrix_subrow: Creating row and column views.
(line 38)
* gsl_matrix_superdiagonal: Creating row and column views.
(line 88)
* gsl_matrix_swap: Copying matrices. (line 12)
* gsl_matrix_swap_columns: Exchanging rows and columns.
(line 16)
* gsl_matrix_swap_rowcol: Exchanging rows and columns.
(line 21)
* gsl_matrix_swap_rows: Exchanging rows and columns.
(line 11)
* gsl_matrix_transpose: Exchanging rows and columns.
(line 33)
* gsl_matrix_transpose_memcpy: Exchanging rows and columns.
(line 27)
* gsl_matrix_view_array: Matrix views. (line 52)
* gsl_matrix_view_array_with_tda: Matrix views. (line 76)
* gsl_matrix_view_vector: Matrix views. (line 103)
* gsl_matrix_view_vector_with_tda: Matrix views. (line 127)
* GSL_MAX: Maximum and Minimum functions.
(line 11)
* GSL_MAX_DBL: Maximum and Minimum functions.
(line 19)
* GSL_MAX_INT: Maximum and Minimum functions.
(line 33)
* GSL_MAX_LDBL: Maximum and Minimum functions.
(line 41)
* GSL_MIN: Maximum and Minimum functions.
(line 15)
* GSL_MIN_DBL: Maximum and Minimum functions.
(line 26)
* gsl_min_fminimizer_alloc: Initializing the Minimizer.
(line 8)
* gsl_min_fminimizer_brent: Minimization Algorithms.
(line 32)
* gsl_min_fminimizer_f_lower: Minimization Iteration.
(line 47)
* gsl_min_fminimizer_f_minimum: Minimization Iteration.
(line 43)
* gsl_min_fminimizer_f_upper: Minimization Iteration.
(line 45)
* gsl_min_fminimizer_free: Initializing the Minimizer.
(line 40)
* gsl_min_fminimizer_goldensection: Minimization Algorithms.
(line 14)
* gsl_min_fminimizer_iterate: Minimization Iteration.
(line 13)
* gsl_min_fminimizer_name: Initializing the Minimizer.
(line 44)
* gsl_min_fminimizer_quad_golden: Minimization Algorithms.
(line 48)
* gsl_min_fminimizer_set: Initializing the Minimizer.
(line 24)
* gsl_min_fminimizer_set_with_values: Initializing the Minimizer.
(line 35)
* gsl_min_fminimizer_x_lower: Minimization Iteration.
(line 38)
* gsl_min_fminimizer_x_minimum: Minimization Iteration.
(line 31)
* gsl_min_fminimizer_x_upper: Minimization Iteration.
(line 36)
* GSL_MIN_INT: Maximum and Minimum functions.
(line 34)
* GSL_MIN_LDBL: Maximum and Minimum functions.
(line 43)
* gsl_min_test_interval: Minimization Stopping Parameters.
(line 20)
* gsl_monte_miser_alloc: MISER. (line 47)
* gsl_monte_miser_free: MISER. (line 70)
* gsl_monte_miser_init: MISER. (line 52)
* gsl_monte_miser_integrate: MISER. (line 60)
* gsl_monte_miser_params_get: MISER. (line 78)
* gsl_monte_miser_params_set: MISER. (line 83)
* gsl_monte_plain_alloc: PLAIN Monte Carlo. (line 30)
* gsl_monte_plain_free: PLAIN Monte Carlo. (line 52)
* gsl_monte_plain_init: PLAIN Monte Carlo. (line 34)
* gsl_monte_plain_integrate: PLAIN Monte Carlo. (line 42)
* gsl_monte_vegas_alloc: VEGAS. (line 51)
* gsl_monte_vegas_chisq: VEGAS. (line 112)
* gsl_monte_vegas_free: VEGAS. (line 78)
* gsl_monte_vegas_init: VEGAS. (line 56)
* gsl_monte_vegas_integrate: VEGAS. (line 64)
* gsl_monte_vegas_params_get: VEGAS. (line 130)
* gsl_monte_vegas_params_set: VEGAS. (line 135)
* gsl_monte_vegas_runval: VEGAS. (line 121)
* gsl_multifit_covar: Computing the covariance matrix of best fit parameters.
(line 8)
* gsl_multifit_fdfsolver_alloc: Initializing the Nonlinear Least-Squares Solver.
(line 18)
* gsl_multifit_fdfsolver_free: Initializing the Nonlinear Least-Squares Solver.
(line 48)
* gsl_multifit_fdfsolver_iterate: Iteration of the Minimization Algorithm.
(line 16)
* gsl_multifit_fdfsolver_lmder: Minimization Algorithms using Derivatives.
(line 65)
* gsl_multifit_fdfsolver_lmsder: Minimization Algorithms using Derivatives.
(line 13)
* gsl_multifit_fdfsolver_name: Initializing the Nonlinear Least-Squares Solver.
(line 54)
* gsl_multifit_fdfsolver_position: Iteration of the Minimization Algorithm.
(line 45)
* gsl_multifit_fdfsolver_set: Initializing the Nonlinear Least-Squares Solver.
(line 42)
* gsl_multifit_fsolver_alloc: Initializing the Nonlinear Least-Squares Solver.
(line 8)
* gsl_multifit_fsolver_free: Initializing the Nonlinear Least-Squares Solver.
(line 46)
* gsl_multifit_fsolver_iterate: Iteration of the Minimization Algorithm.
(line 14)
* gsl_multifit_fsolver_name: Initializing the Nonlinear Least-Squares Solver.
(line 52)
* gsl_multifit_fsolver_position: Iteration of the Minimization Algorithm.
(line 43)
* gsl_multifit_fsolver_set: Initializing the Nonlinear Least-Squares Solver.
(line 37)
* gsl_multifit_gradient: Search Stopping Parameters for Minimization Algorithms.
(line 46)
* gsl_multifit_linear: Multi-parameter fitting.
(line 53)
* gsl_multifit_linear_alloc: Multi-parameter fitting.
(line 43)
* gsl_multifit_linear_est: Multi-parameter fitting.
(line 111)
* gsl_multifit_linear_free: Multi-parameter fitting.
(line 48)
* gsl_multifit_linear_residuals: Multi-parameter fitting.
(line 118)
* gsl_multifit_linear_svd: Multi-parameter fitting.
(line 89)
* gsl_multifit_linear_usvd: Multi-parameter fitting.
(line 101)
* gsl_multifit_test_delta: Search Stopping Parameters for Minimization Algorithms.
(line 21)
* gsl_multifit_test_gradient: Search Stopping Parameters for Minimization Algorithms.
(line 32)
* gsl_multifit_wlinear: Multi-parameter fitting.
(line 74)
* gsl_multifit_wlinear_svd: Multi-parameter fitting.
(line 93)
* gsl_multifit_wlinear_usvd: Multi-parameter fitting.
(line 105)
* gsl_multimin_fdfminimizer_alloc: Initializing the Multidimensional Minimizer.
(line 13)
* gsl_multimin_fdfminimizer_conjugate_fr: Multimin Algorithms with Derivatives.
(line 12)
* gsl_multimin_fdfminimizer_conjugate_pr: Multimin Algorithms with Derivatives.
(line 29)
* gsl_multimin_fdfminimizer_free: Initializing the Multidimensional Minimizer.
(line 47)
* gsl_multimin_fdfminimizer_gradient: Multimin Iteration. (line 36)
* gsl_multimin_fdfminimizer_iterate: Multimin Iteration. (line 13)
* gsl_multimin_fdfminimizer_minimum: Multimin Iteration. (line 32)
* gsl_multimin_fdfminimizer_name: Initializing the Multidimensional Minimizer.
(line 53)
* gsl_multimin_fdfminimizer_restart: Multimin Iteration. (line 45)
* gsl_multimin_fdfminimizer_set: Initializing the Multidimensional Minimizer.
(line 24)
* gsl_multimin_fdfminimizer_steepest_descent: Multimin Algorithms with Derivatives.
(line 57)
* gsl_multimin_fdfminimizer_vector_bfgs: Multimin Algorithms with Derivatives.
(line 37)
* gsl_multimin_fdfminimizer_vector_bfgs2: Multimin Algorithms with Derivatives.
(line 36)
* gsl_multimin_fdfminimizer_x: Multimin Iteration. (line 28)
* gsl_multimin_fminimizer_alloc: Initializing the Multidimensional Minimizer.
(line 15)
* gsl_multimin_fminimizer_free: Initializing the Multidimensional Minimizer.
(line 49)
* gsl_multimin_fminimizer_iterate: Multimin Iteration. (line 15)
* gsl_multimin_fminimizer_minimum: Multimin Iteration. (line 34)
* gsl_multimin_fminimizer_name: Initializing the Multidimensional Minimizer.
(line 55)
* gsl_multimin_fminimizer_nmsimplex: Multimin Algorithms without Derivatives.
(line 11)
* gsl_multimin_fminimizer_nmsimplex2: Multimin Algorithms without Derivatives.
(line 10)
* gsl_multimin_fminimizer_nmsimplex2rand: Multimin Algorithms without Derivatives.
(line 50)
* gsl_multimin_fminimizer_set: Initializing the Multidimensional Minimizer.
(line 40)
* gsl_multimin_fminimizer_size: Multimin Iteration. (line 38)
* gsl_multimin_fminimizer_x: Multimin Iteration. (line 30)
* gsl_multimin_test_gradient: Multimin Stopping Criteria.
(line 20)
* gsl_multimin_test_size: Multimin Stopping Criteria.
(line 34)
* gsl_multiroot_fdfsolver_alloc: Initializing the Multidimensional Solver.
(line 29)
* gsl_multiroot_fdfsolver_dx: Iteration of the multidimensional solver.
(line 50)
* gsl_multiroot_fdfsolver_f: Iteration of the multidimensional solver.
(line 43)
* gsl_multiroot_fdfsolver_free: Initializing the Multidimensional Solver.
(line 56)
* gsl_multiroot_fdfsolver_gnewton: Algorithms using Derivatives.
(line 92)
* gsl_multiroot_fdfsolver_hybridj: Algorithms using Derivatives.
(line 67)
* gsl_multiroot_fdfsolver_hybridsj: Algorithms using Derivatives.
(line 14)
* gsl_multiroot_fdfsolver_iterate: Iteration of the multidimensional solver.
(line 16)
* gsl_multiroot_fdfsolver_name: Initializing the Multidimensional Solver.
(line 62)
* gsl_multiroot_fdfsolver_newton: Algorithms using Derivatives.
(line 73)
* gsl_multiroot_fdfsolver_root: Iteration of the multidimensional solver.
(line 36)
* gsl_multiroot_fdfsolver_set: Initializing the Multidimensional Solver.
(line 47)
* gsl_multiroot_fsolver_alloc: Initializing the Multidimensional Solver.
(line 13)
* gsl_multiroot_fsolver_broyden: Algorithms without Derivatives.
(line 45)
* gsl_multiroot_fsolver_dnewton: Algorithms without Derivatives.
(line 26)
* gsl_multiroot_fsolver_dx: Iteration of the multidimensional solver.
(line 48)
* gsl_multiroot_fsolver_f: Iteration of the multidimensional solver.
(line 41)
* gsl_multiroot_fsolver_free: Initializing the Multidimensional Solver.
(line 54)
* gsl_multiroot_fsolver_hybrid: Algorithms without Derivatives.
(line 22)
* gsl_multiroot_fsolver_hybrids: Algorithms without Derivatives.
(line 14)
* gsl_multiroot_fsolver_iterate: Iteration of the multidimensional solver.
(line 14)
* gsl_multiroot_fsolver_name: Initializing the Multidimensional Solver.
(line 60)
* gsl_multiroot_fsolver_root: Iteration of the multidimensional solver.
(line 34)
* gsl_multiroot_fsolver_set: Initializing the Multidimensional Solver.
(line 45)
* gsl_multiroot_test_delta: Search Stopping Parameters for the multidimensional solver.
(line 22)
* gsl_multiroot_test_residual: Search Stopping Parameters for the multidimensional solver.
(line 33)
* gsl_multiset_alloc: Multiset allocation. (line 7)
* gsl_multiset_calloc: Multiset allocation. (line 15)
* gsl_multiset_data: Multiset properties. (line 13)
* gsl_multiset_fprintf: Reading and writing multisets.
(line 28)
* gsl_multiset_fread: Reading and writing multisets.
(line 18)
* gsl_multiset_free: Multiset allocation. (line 29)
* gsl_multiset_fscanf: Reading and writing multisets.
(line 36)
* gsl_multiset_fwrite: Reading and writing multisets.
(line 11)
* gsl_multiset_get: Accessing multiset elements.
(line 10)
* gsl_multiset_init_first: Multiset allocation. (line 21)
* gsl_multiset_init_last: Multiset allocation. (line 25)
* gsl_multiset_k: Multiset properties. (line 10)
* gsl_multiset_memcpy: Multiset allocation. (line 33)
* gsl_multiset_n: Multiset properties. (line 7)
* gsl_multiset_next: Multiset functions. (line 7)
* gsl_multiset_prev: Multiset functions. (line 15)
* gsl_multiset_valid: Multiset properties. (line 17)
* gsl_ntuple_bookdata: Writing ntuples. (line 11)
* gsl_ntuple_close: Closing an ntuple file.
(line 7)
* gsl_ntuple_create: Creating ntuples. (line 8)
* gsl_ntuple_open: Opening an existing ntuple file.
(line 8)
* gsl_ntuple_project: Histogramming ntuple values.
(line 38)
* gsl_ntuple_read: Reading ntuples. (line 7)
* gsl_ntuple_write: Writing ntuples. (line 7)
* gsl_odeiv2_control_alloc: Adaptive Step-size Control.
(line 75)
* gsl_odeiv2_control_errlevel: Adaptive Step-size Control.
(line 118)
* gsl_odeiv2_control_free: Adaptive Step-size Control.
(line 87)
* gsl_odeiv2_control_hadjust: Adaptive Step-size Control.
(line 93)
* gsl_odeiv2_control_init: Adaptive Step-size Control.
(line 82)
* gsl_odeiv2_control_name: Adaptive Step-size Control.
(line 107)
* gsl_odeiv2_control_scaled_new: Adaptive Step-size Control.
(line 63)
* gsl_odeiv2_control_set_driver: Adaptive Step-size Control.
(line 124)
* gsl_odeiv2_control_standard_new: Adaptive Step-size Control.
(line 12)
* gsl_odeiv2_control_y_new: Adaptive Step-size Control.
(line 46)
* gsl_odeiv2_control_yp_new: Adaptive Step-size Control.
(line 54)
* gsl_odeiv2_driver_alloc_scaled_new: Driver. (line 24)
* gsl_odeiv2_driver_alloc_standard_new: Driver. (line 19)
* gsl_odeiv2_driver_alloc_y_new: Driver. (line 12)
* gsl_odeiv2_driver_alloc_yp_new: Driver. (line 15)
* gsl_odeiv2_driver_apply: Driver. (line 48)
* gsl_odeiv2_driver_apply_fixed_step: Driver. (line 65)
* gsl_odeiv2_driver_free: Driver. (line 74)
* gsl_odeiv2_driver_reset: Driver. (line 71)
* gsl_odeiv2_driver_set_hmax: Driver. (line 38)
* gsl_odeiv2_driver_set_hmin: Driver. (line 33)
* gsl_odeiv2_driver_set_nmax: Driver. (line 43)
* gsl_odeiv2_evolve_alloc: Evolution. (line 11)
* gsl_odeiv2_evolve_apply: Evolution. (line 18)
* gsl_odeiv2_evolve_apply_fixed_step: Evolution. (line 56)
* gsl_odeiv2_evolve_free: Evolution. (line 69)
* gsl_odeiv2_evolve_reset: Evolution. (line 64)
* gsl_odeiv2_evolve_set_driver: Evolution. (line 74)
* gsl_odeiv2_step_alloc: Stepping Functions. (line 12)
* gsl_odeiv2_step_apply: Stepping Functions. (line 54)
* gsl_odeiv2_step_bsimp: Stepping Functions. (line 124)
* gsl_odeiv2_step_free: Stepping Functions. (line 24)
* gsl_odeiv2_step_msadams: Stepping Functions. (line 129)
* gsl_odeiv2_step_msbdf: Stepping Functions. (line 137)
* gsl_odeiv2_step_name: Stepping Functions. (line 29)
* gsl_odeiv2_step_order: Stepping Functions. (line 39)
* gsl_odeiv2_step_reset: Stepping Functions. (line 19)
* gsl_odeiv2_step_rk1imp: Stepping Functions. (line 106)
* gsl_odeiv2_step_rk2: Stepping Functions. (line 88)
* gsl_odeiv2_step_rk2imp: Stepping Functions. (line 112)
* gsl_odeiv2_step_rk4: Stepping Functions. (line 91)
* gsl_odeiv2_step_rk4imp: Stepping Functions. (line 118)
* gsl_odeiv2_step_rk8pd: Stepping Functions. (line 103)
* gsl_odeiv2_step_rkck: Stepping Functions. (line 100)
* gsl_odeiv2_step_rkf45: Stepping Functions. (line 96)
* gsl_odeiv2_step_set_driver: Stepping Functions. (line 45)
* gsl_permutation_alloc: Permutation allocation.
(line 7)
* gsl_permutation_calloc: Permutation allocation.
(line 15)
* gsl_permutation_canonical_cycles: Permutations in cyclic form.
(line 71)
* gsl_permutation_canonical_to_linear: Permutations in cyclic form.
(line 53)
* gsl_permutation_data: Permutation properties.
(line 10)
* gsl_permutation_fprintf: Reading and writing permutations.
(line 29)
* gsl_permutation_fread: Reading and writing permutations.
(line 19)
* gsl_permutation_free: Permutation allocation.
(line 24)
* gsl_permutation_fscanf: Reading and writing permutations.
(line 38)
* gsl_permutation_fwrite: Reading and writing permutations.
(line 11)
* gsl_permutation_get: Accessing permutation elements.
(line 11)
* gsl_permutation_init: Permutation allocation.
(line 20)
* gsl_permutation_inverse: Permutation functions.
(line 11)
* gsl_permutation_inversions: Permutations in cyclic form.
(line 58)
* gsl_permutation_linear_cycles: Permutations in cyclic form.
(line 66)
* gsl_permutation_linear_to_canonical: Permutations in cyclic form.
(line 48)
* gsl_permutation_memcpy: Permutation allocation.
(line 28)
* gsl_permutation_mul: Applying Permutations.
(line 37)
* gsl_permutation_next: Permutation functions.
(line 15)
* gsl_permutation_prev: Permutation functions.
(line 23)
* gsl_permutation_reverse: Permutation functions.
(line 7)
* gsl_permutation_size: Permutation properties.
(line 7)
* gsl_permutation_swap: Accessing permutation elements.
(line 18)
* gsl_permutation_valid: Permutation properties.
(line 14)
* gsl_permute: Applying Permutations.
(line 8)
* gsl_permute_inverse: Applying Permutations.
(line 13)
* gsl_permute_vector: Applying Permutations.
(line 18)
* gsl_permute_vector_inverse: Applying Permutations.
(line 27)
* gsl_poly_complex_eval: Polynomial Evaluation.
(line 18)
* gsl_poly_complex_solve: General Polynomial Equations.
(line 27)
* gsl_poly_complex_solve_cubic: Cubic Equations. (line 26)
* gsl_poly_complex_solve_quadratic: Quadratic Equations. (line 32)
* gsl_poly_complex_workspace_alloc: General Polynomial Equations.
(line 13)
* gsl_poly_complex_workspace_free: General Polynomial Equations.
(line 23)
* gsl_poly_dd_eval: Divided Difference Representation of Polynomials.
(line 20)
* gsl_poly_dd_init: Divided Difference Representation of Polynomials.
(line 12)
* gsl_poly_dd_taylor: Divided Difference Representation of Polynomials.
(line 27)
* gsl_poly_eval: Polynomial Evaluation.
(line 13)
* gsl_poly_eval_derivs: Polynomial Evaluation.
(line 28)
* gsl_poly_solve_cubic: Cubic Equations. (line 8)
* gsl_poly_solve_quadratic: Quadratic Equations. (line 8)
* gsl_pow_2: Small integer powers.
(line 20)
* gsl_pow_3: Small integer powers.
(line 21)
* gsl_pow_4: Small integer powers.
(line 22)
* gsl_pow_5: Small integer powers.
(line 23)
* gsl_pow_6: Small integer powers.
(line 24)
* gsl_pow_7: Small integer powers.
(line 25)
* gsl_pow_8: Small integer powers.
(line 26)
* gsl_pow_9: Small integer powers.
(line 27)
* gsl_pow_int: Small integer powers.
(line 12)
* gsl_pow_uint: Small integer powers.
(line 13)
* gsl_qrng_alloc: Quasi-random number generator initialization.
(line 8)
* gsl_qrng_clone: Saving and restoring quasi-random number generator state.
(line 13)
* gsl_qrng_free: Quasi-random number generator initialization.
(line 15)
* gsl_qrng_get: Sampling from a quasi-random number generator.
(line 7)
* gsl_qrng_halton: Quasi-random number generator algorithms.
(line 19)
* gsl_qrng_init: Quasi-random number generator initialization.
(line 18)
* gsl_qrng_memcpy: Saving and restoring quasi-random number generator state.
(line 8)
* gsl_qrng_name: Auxiliary quasi-random number generator functions.
(line 7)
* gsl_qrng_niederreiter_2: Quasi-random number generator algorithms.
(line 9)
* gsl_qrng_reversehalton: Quasi-random number generator algorithms.
(line 20)
* gsl_qrng_size: Auxiliary quasi-random number generator functions.
(line 10)
* gsl_qrng_sobol: Quasi-random number generator algorithms.
(line 14)
* gsl_qrng_state: Auxiliary quasi-random number generator functions.
(line 11)
* gsl_ran_bernoulli: The Bernoulli Distribution.
(line 8)
* gsl_ran_bernoulli_pdf: The Bernoulli Distribution.
(line 17)
* gsl_ran_beta: The Beta Distribution.
(line 8)
* gsl_ran_beta_pdf: The Beta Distribution.
(line 16)
* gsl_ran_binomial: The Binomial Distribution.
(line 9)
* gsl_ran_binomial_pdf: The Binomial Distribution.
(line 20)
* gsl_ran_bivariate_gaussian: The Bivariate Gaussian Distribution.
(line 9)
* gsl_ran_bivariate_gaussian_pdf: The Bivariate Gaussian Distribution.
(line 21)
* gsl_ran_cauchy: The Cauchy Distribution.
(line 7)
* gsl_ran_cauchy_pdf: The Cauchy Distribution.
(line 17)
* gsl_ran_chisq: The Chi-squared Distribution.
(line 15)
* gsl_ran_chisq_pdf: The Chi-squared Distribution.
(line 24)
* gsl_ran_choose: Shuffling and Sampling.
(line 36)
* gsl_ran_dir_2d: Spherical Vector Distributions.
(line 12)
* gsl_ran_dir_2d_trig_method: Spherical Vector Distributions.
(line 14)
* gsl_ran_dir_3d: Spherical Vector Distributions.
(line 33)
* gsl_ran_dir_nd: Spherical Vector Distributions.
(line 42)
* gsl_ran_dirichlet: The Dirichlet Distribution.
(line 8)
* gsl_ran_dirichlet_lnpdf: The Dirichlet Distribution.
(line 32)
* gsl_ran_dirichlet_pdf: The Dirichlet Distribution.
(line 26)
* gsl_ran_discrete: General Discrete Distributions.
(line 62)
* gsl_ran_discrete_free: General Discrete Distributions.
(line 74)
* gsl_ran_discrete_pdf: General Discrete Distributions.
(line 67)
* gsl_ran_discrete_preproc: General Discrete Distributions.
(line 52)
* gsl_ran_exponential: The Exponential Distribution.
(line 8)
* gsl_ran_exponential_pdf: The Exponential Distribution.
(line 16)
* gsl_ran_exppow: The Exponential Power Distribution.
(line 8)
* gsl_ran_exppow_pdf: The Exponential Power Distribution.
(line 19)
* gsl_ran_fdist: The F-distribution. (line 16)
* gsl_ran_fdist_pdf: The F-distribution. (line 29)
* gsl_ran_flat: The Flat (Uniform) Distribution.
(line 8)
* gsl_ran_flat_pdf: The Flat (Uniform) Distribution.
(line 16)
* gsl_ran_gamma: The Gamma Distribution.
(line 9)
* gsl_ran_gamma_knuth: The Gamma Distribution.
(line 25)
* gsl_ran_gamma_pdf: The Gamma Distribution.
(line 29)
* gsl_ran_gaussian: The Gaussian Distribution.
(line 7)
* gsl_ran_gaussian_pdf: The Gaussian Distribution.
(line 20)
* gsl_ran_gaussian_ratio_method: The Gaussian Distribution.
(line 29)
* gsl_ran_gaussian_tail: The Gaussian Tail Distribution.
(line 8)
* gsl_ran_gaussian_tail_pdf: The Gaussian Tail Distribution.
(line 27)
* gsl_ran_gaussian_ziggurat: The Gaussian Distribution.
(line 27)
* gsl_ran_geometric: The Geometric Distribution.
(line 8)
* gsl_ran_geometric_pdf: The Geometric Distribution.
(line 20)
* gsl_ran_gumbel1: The Type-1 Gumbel Distribution.
(line 8)
* gsl_ran_gumbel1_pdf: The Type-1 Gumbel Distribution.
(line 16)
* gsl_ran_gumbel2: The Type-2 Gumbel Distribution.
(line 8)
* gsl_ran_gumbel2_pdf: The Type-2 Gumbel Distribution.
(line 16)
* gsl_ran_hypergeometric: The Hypergeometric Distribution.
(line 8)
* gsl_ran_hypergeometric_pdf: The Hypergeometric Distribution.
(line 24)
* gsl_ran_landau: The Landau Distribution.
(line 8)
* gsl_ran_landau_pdf: The Landau Distribution.
(line 19)
* gsl_ran_laplace: The Laplace Distribution.
(line 7)
* gsl_ran_laplace_pdf: The Laplace Distribution.
(line 15)
* gsl_ran_levy: The Levy alpha-Stable Distributions.
(line 9)
* gsl_ran_levy_skew: The Levy skew alpha-Stable Distribution.
(line 9)
* gsl_ran_logarithmic: The Logarithmic Distribution.
(line 8)
* gsl_ran_logarithmic_pdf: The Logarithmic Distribution.
(line 17)
* gsl_ran_logistic: The Logistic Distribution.
(line 7)
* gsl_ran_logistic_pdf: The Logistic Distribution.
(line 15)
* gsl_ran_lognormal: The Lognormal Distribution.
(line 8)
* gsl_ran_lognormal_pdf: The Lognormal Distribution.
(line 17)
* gsl_ran_multinomial: The Multinomial Distribution.
(line 8)
* gsl_ran_multinomial_lnpdf: The Multinomial Distribution.
(line 35)
* gsl_ran_multinomial_pdf: The Multinomial Distribution.
(line 29)
* gsl_ran_negative_binomial: The Negative Binomial Distribution.
(line 8)
* gsl_ran_negative_binomial_pdf: The Negative Binomial Distribution.
(line 19)
* gsl_ran_pareto: The Pareto Distribution.
(line 8)
* gsl_ran_pareto_pdf: The Pareto Distribution.
(line 16)
* gsl_ran_pascal: The Pascal Distribution.
(line 8)
* gsl_ran_pascal_pdf: The Pascal Distribution.
(line 18)
* gsl_ran_poisson: The Poisson Distribution.
(line 8)
* gsl_ran_poisson_pdf: The Poisson Distribution.
(line 17)
* gsl_ran_rayleigh: The Rayleigh Distribution.
(line 7)
* gsl_ran_rayleigh_pdf: The Rayleigh Distribution.
(line 15)
* gsl_ran_rayleigh_tail: The Rayleigh Tail Distribution.
(line 8)
* gsl_ran_rayleigh_tail_pdf: The Rayleigh Tail Distribution.
(line 18)
* gsl_ran_sample: Shuffling and Sampling.
(line 64)
* gsl_ran_shuffle: Shuffling and Sampling.
(line 16)
* gsl_ran_tdist: The t-distribution. (line 15)
* gsl_ran_tdist_pdf: The t-distribution. (line 24)
* gsl_ran_ugaussian: The Gaussian Distribution.
(line 35)
* gsl_ran_ugaussian_pdf: The Gaussian Distribution.
(line 36)
* gsl_ran_ugaussian_ratio_method: The Gaussian Distribution.
(line 37)
* gsl_ran_ugaussian_tail: The Gaussian Tail Distribution.
(line 34)
* gsl_ran_ugaussian_tail_pdf: The Gaussian Tail Distribution.
(line 35)
* gsl_ran_weibull: The Weibull Distribution.
(line 8)
* gsl_ran_weibull_pdf: The Weibull Distribution.
(line 16)
* GSL_REAL: Representation of complex numbers.
(line 35)
* gsl_rng_alloc: Random number generator initialization.
(line 7)
* gsl_rng_borosh13: Other random number generators.
(line 175)
* gsl_rng_clone: Copying random number generator state.
(line 17)
* gsl_rng_cmrg: Random number generator algorithms.
(line 97)
* gsl_rng_coveyou: Other random number generators.
(line 203)
* gsl_rng_env_setup: Random number environment variables.
(line 12)
* gsl_rng_fishman18: Other random number generators.
(line 176)
* gsl_rng_fishman20: Other random number generators.
(line 177)
* gsl_rng_fishman2x: Other random number generators.
(line 192)
* gsl_rng_fread: Reading and writing random number generator state.
(line 18)
* gsl_rng_free: Random number generator initialization.
(line 46)
* gsl_rng_fwrite: Reading and writing random number generator state.
(line 10)
* gsl_rng_get: Sampling from a random number generator.
(line 12)
* gsl_rng_gfsr4: Random number generator algorithms.
(line 172)
* gsl_rng_knuthran: Other random number generators.
(line 168)
* gsl_rng_knuthran2: Other random number generators.
(line 158)
* gsl_rng_knuthran2002: Other random number generators.
(line 167)
* gsl_rng_lecuyer21: Other random number generators.
(line 178)
* gsl_rng_max: Auxiliary random number generator functions.
(line 20)
* gsl_rng_memcpy: Copying random number generator state.
(line 12)
* gsl_rng_min: Auxiliary random number generator functions.
(line 24)
* gsl_rng_minstd: Other random number generators.
(line 114)
* gsl_rng_mrg: Random number generator algorithms.
(line 119)
* gsl_rng_mt19937: Random number generator algorithms.
(line 20)
* gsl_rng_name: Auxiliary random number generator functions.
(line 11)
* gsl_rng_r250: Other random number generators.
(line 60)
* gsl_rng_rand: Unix random number generators.
(line 18)
* gsl_rng_rand48: Unix random number generators.
(line 58)
* gsl_rng_random_bsd: Unix random number generators.
(line 27)
* gsl_rng_random_glibc2: Unix random number generators.
(line 29)
* gsl_rng_random_libc5: Unix random number generators.
(line 28)
* gsl_rng_randu: Other random number generators.
(line 105)
* gsl_rng_ranf: Other random number generators.
(line 23)
* gsl_rng_ranlux: Random number generator algorithms.
(line 70)
* gsl_rng_ranlux389: Random number generator algorithms.
(line 71)
* gsl_rng_ranlxd1: Random number generator algorithms.
(line 64)
* gsl_rng_ranlxd2: Random number generator algorithms.
(line 65)
* gsl_rng_ranlxs0: Random number generator algorithms.
(line 45)
* gsl_rng_ranlxs1: Random number generator algorithms.
(line 46)
* gsl_rng_ranlxs2: Random number generator algorithms.
(line 47)
* gsl_rng_ranmar: Other random number generators.
(line 54)
* gsl_rng_set: Random number generator initialization.
(line 26)
* gsl_rng_size: Auxiliary random number generator functions.
(line 31)
* gsl_rng_slatec: Other random number generators.
(line 141)
* gsl_rng_state: Auxiliary random number generator functions.
(line 30)
* gsl_rng_taus: Random number generator algorithms.
(line 136)
* gsl_rng_taus2: Random number generator algorithms.
(line 137)
* gsl_rng_transputer: Other random number generators.
(line 96)
* gsl_rng_tt800: Other random number generators.
(line 75)
* gsl_rng_types_setup: Auxiliary random number generator functions.
(line 41)
* gsl_rng_uni: Other random number generators.
(line 134)
* gsl_rng_uni32: Other random number generators.
(line 135)
* gsl_rng_uniform: Sampling from a random number generator.
(line 19)
* gsl_rng_uniform_int: Sampling from a random number generator.
(line 38)
* gsl_rng_uniform_pos: Sampling from a random number generator.
(line 29)
* gsl_rng_vax: Other random number generators.
(line 87)
* gsl_rng_waterman14: Other random number generators.
(line 179)
* gsl_rng_zuf: Other random number generators.
(line 145)
* gsl_root_fdfsolver_alloc: Initializing the Solver.
(line 23)
* gsl_root_fdfsolver_free: Initializing the Solver.
(line 49)
* gsl_root_fdfsolver_iterate: Root Finding Iteration.
(line 14)
* gsl_root_fdfsolver_name: Initializing the Solver.
(line 55)
* gsl_root_fdfsolver_newton: Root Finding Algorithms using Derivatives.
(line 16)
* gsl_root_fdfsolver_root: Root Finding Iteration.
(line 35)
* gsl_root_fdfsolver_secant: Root Finding Algorithms using Derivatives.
(line 30)
* gsl_root_fdfsolver_set: Initializing the Solver.
(line 44)
* gsl_root_fdfsolver_steffenson: Root Finding Algorithms using Derivatives.
(line 61)
* gsl_root_fsolver_alloc: Initializing the Solver.
(line 8)
* gsl_root_fsolver_bisection: Root Bracketing Algorithms.
(line 17)
* gsl_root_fsolver_brent: Root Bracketing Algorithms.
(line 50)
* gsl_root_fsolver_falsepos: Root Bracketing Algorithms.
(line 33)
* gsl_root_fsolver_free: Initializing the Solver.
(line 48)
* gsl_root_fsolver_iterate: Root Finding Iteration.
(line 13)
* gsl_root_fsolver_name: Initializing the Solver.
(line 53)
* gsl_root_fsolver_root: Root Finding Iteration.
(line 33)
* gsl_root_fsolver_set: Initializing the Solver.
(line 38)
* gsl_root_fsolver_x_lower: Root Finding Iteration.
(line 40)
* gsl_root_fsolver_x_upper: Root Finding Iteration.
(line 42)
* gsl_root_test_delta: Search Stopping Parameters.
(line 44)
* gsl_root_test_interval: Search Stopping Parameters.
(line 21)
* gsl_root_test_residual: Search Stopping Parameters.
(line 53)
* GSL_SET_COMPLEX: Representation of complex numbers.
(line 40)
* gsl_set_error_handler: Error Handlers. (line 44)
* gsl_set_error_handler_off: Error Handlers. (line 68)
* GSL_SET_IMAG: Representation of complex numbers.
(line 50)
* GSL_SET_REAL: Representation of complex numbers.
(line 49)
* gsl_sf_airy_Ai: Airy Functions. (line 7)
* gsl_sf_airy_Ai_deriv: Derivatives of Airy Functions.
(line 7)
* gsl_sf_airy_Ai_deriv_e: Derivatives of Airy Functions.
(line 9)
* gsl_sf_airy_Ai_deriv_scaled: Derivatives of Airy Functions.
(line 20)
* gsl_sf_airy_Ai_deriv_scaled_e: Derivatives of Airy Functions.
(line 22)
* gsl_sf_airy_Ai_e: Airy Functions. (line 9)
* gsl_sf_airy_Ai_scaled: Airy Functions. (line 19)
* gsl_sf_airy_Ai_scaled_e: Airy Functions. (line 21)
* gsl_sf_airy_Bi: Airy Functions. (line 13)
* gsl_sf_airy_Bi_deriv: Derivatives of Airy Functions.
(line 13)
* gsl_sf_airy_Bi_deriv_e: Derivatives of Airy Functions.
(line 15)
* gsl_sf_airy_Bi_deriv_scaled: Derivatives of Airy Functions.
(line 28)
* gsl_sf_airy_Bi_deriv_scaled_e: Derivatives of Airy Functions.
(line 30)
* gsl_sf_airy_Bi_e: Airy Functions. (line 15)
* gsl_sf_airy_Bi_scaled: Airy Functions. (line 26)
* gsl_sf_airy_Bi_scaled_e: Airy Functions. (line 28)
* gsl_sf_airy_zero_Ai: Zeros of Airy Functions.
(line 7)
* gsl_sf_airy_zero_Ai_deriv: Zeros of Derivatives of Airy Functions.
(line 7)
* gsl_sf_airy_zero_Ai_deriv_e: Zeros of Derivatives of Airy Functions.
(line 9)
* gsl_sf_airy_zero_Ai_e: Zeros of Airy Functions.
(line 9)
* gsl_sf_airy_zero_Bi: Zeros of Airy Functions.
(line 13)
* gsl_sf_airy_zero_Bi_deriv: Zeros of Derivatives of Airy Functions.
(line 13)
* gsl_sf_airy_zero_Bi_deriv_e: Zeros of Derivatives of Airy Functions.
(line 15)
* gsl_sf_airy_zero_Bi_e: Zeros of Airy Functions.
(line 15)
* gsl_sf_angle_restrict_pos: Restriction Functions.
(line 16)
* gsl_sf_angle_restrict_pos_e: Restriction Functions.
(line 17)
* gsl_sf_angle_restrict_symm: Restriction Functions.
(line 7)
* gsl_sf_angle_restrict_symm_e: Restriction Functions.
(line 8)
* gsl_sf_atanint: Arctangent Integral. (line 7)
* gsl_sf_atanint_e: Arctangent Integral. (line 8)
* gsl_sf_bessel_I0: Regular Modified Cylindrical Bessel Functions.
(line 7)
* gsl_sf_bessel_I0_e: Regular Modified Cylindrical Bessel Functions.
(line 8)
* gsl_sf_bessel_i0_scaled: Regular Modified Spherical Bessel Functions.
(line 11)
* gsl_sf_bessel_I0_scaled: Regular Modified Cylindrical Bessel Functions.
(line 32)
* gsl_sf_bessel_i0_scaled_e: Regular Modified Spherical Bessel Functions.
(line 13)
* gsl_sf_bessel_I0_scaled_e: Regular Modified Cylindrical Bessel Functions.
(line 34)
* gsl_sf_bessel_I1: Regular Modified Cylindrical Bessel Functions.
(line 12)
* gsl_sf_bessel_I1_e: Regular Modified Cylindrical Bessel Functions.
(line 13)
* gsl_sf_bessel_i1_scaled: Regular Modified Spherical Bessel Functions.
(line 17)
* gsl_sf_bessel_I1_scaled: Regular Modified Cylindrical Bessel Functions.
(line 38)
* gsl_sf_bessel_i1_scaled_e: Regular Modified Spherical Bessel Functions.
(line 19)
* gsl_sf_bessel_I1_scaled_e: Regular Modified Cylindrical Bessel Functions.
(line 40)
* gsl_sf_bessel_i2_scaled: Regular Modified Spherical Bessel Functions.
(line 23)
* gsl_sf_bessel_i2_scaled_e: Regular Modified Spherical Bessel Functions.
(line 25)
* gsl_sf_bessel_il_scaled: Regular Modified Spherical Bessel Functions.
(line 29)
* gsl_sf_bessel_il_scaled_array: Regular Modified Spherical Bessel Functions.
(line 36)
* gsl_sf_bessel_il_scaled_e: Regular Modified Spherical Bessel Functions.
(line 31)
* gsl_sf_bessel_In: Regular Modified Cylindrical Bessel Functions.
(line 17)
* gsl_sf_bessel_In_array: Regular Modified Cylindrical Bessel Functions.
(line 24)
* gsl_sf_bessel_In_e: Regular Modified Cylindrical Bessel Functions.
(line 19)
* gsl_sf_bessel_In_scaled: Regular Modified Cylindrical Bessel Functions.
(line 44)
* gsl_sf_bessel_In_scaled_array: Regular Modified Cylindrical Bessel Functions.
(line 51)
* gsl_sf_bessel_In_scaled_e: Regular Modified Cylindrical Bessel Functions.
(line 46)
* gsl_sf_bessel_Inu: Regular Modified Bessel Functions - Fractional Order.
(line 7)
* gsl_sf_bessel_Inu_e: Regular Modified Bessel Functions - Fractional Order.
(line 9)
* gsl_sf_bessel_Inu_scaled: Regular Modified Bessel Functions - Fractional Order.
(line 13)
* gsl_sf_bessel_Inu_scaled_e: Regular Modified Bessel Functions - Fractional Order.
(line 15)
* gsl_sf_bessel_j0: Regular Spherical Bessel Functions.
(line 7)
* gsl_sf_bessel_J0: Regular Cylindrical Bessel Functions.
(line 7)
* gsl_sf_bessel_j0_e: Regular Spherical Bessel Functions.
(line 8)
* gsl_sf_bessel_J0_e: Regular Cylindrical Bessel Functions.
(line 8)
* gsl_sf_bessel_j1: Regular Spherical Bessel Functions.
(line 12)
* gsl_sf_bessel_J1: Regular Cylindrical Bessel Functions.
(line 12)
* gsl_sf_bessel_j1_e: Regular Spherical Bessel Functions.
(line 13)
* gsl_sf_bessel_J1_e: Regular Cylindrical Bessel Functions.
(line 13)
* gsl_sf_bessel_j2: Regular Spherical Bessel Functions.
(line 17)
* gsl_sf_bessel_j2_e: Regular Spherical Bessel Functions.
(line 18)
* gsl_sf_bessel_jl: Regular Spherical Bessel Functions.
(line 22)
* gsl_sf_bessel_jl_array: Regular Spherical Bessel Functions.
(line 29)
* gsl_sf_bessel_jl_e: Regular Spherical Bessel Functions.
(line 24)
* gsl_sf_bessel_jl_steed_array: Regular Spherical Bessel Functions.
(line 37)
* gsl_sf_bessel_Jn: Regular Cylindrical Bessel Functions.
(line 17)
* gsl_sf_bessel_Jn_array: Regular Cylindrical Bessel Functions.
(line 24)
* gsl_sf_bessel_Jn_e: Regular Cylindrical Bessel Functions.
(line 19)
* gsl_sf_bessel_Jnu: Regular Bessel Function - Fractional Order.
(line 7)
* gsl_sf_bessel_Jnu_e: Regular Bessel Function - Fractional Order.
(line 9)
* gsl_sf_bessel_K0: Irregular Modified Cylindrical Bessel Functions.
(line 7)
* gsl_sf_bessel_K0_e: Irregular Modified Cylindrical Bessel Functions.
(line 8)
* gsl_sf_bessel_k0_scaled: Irregular Modified Spherical Bessel Functions.
(line 11)
* gsl_sf_bessel_K0_scaled: Irregular Modified Cylindrical Bessel Functions.
(line 33)
* gsl_sf_bessel_k0_scaled_e: Irregular Modified Spherical Bessel Functions.
(line 13)
* gsl_sf_bessel_K0_scaled_e: Irregular Modified Cylindrical Bessel Functions.
(line 35)
* gsl_sf_bessel_K1: Irregular Modified Cylindrical Bessel Functions.
(line 12)
* gsl_sf_bessel_K1_e: Irregular Modified Cylindrical Bessel Functions.
(line 13)
* gsl_sf_bessel_k1_scaled: Irregular Modified Spherical Bessel Functions.
(line 17)
* gsl_sf_bessel_K1_scaled: Irregular Modified Cylindrical Bessel Functions.
(line 39)
* gsl_sf_bessel_k1_scaled_e: Irregular Modified Spherical Bessel Functions.
(line 19)
* gsl_sf_bessel_K1_scaled_e: Irregular Modified Cylindrical Bessel Functions.
(line 41)
* gsl_sf_bessel_k2_scaled: Irregular Modified Spherical Bessel Functions.
(line 23)
* gsl_sf_bessel_k2_scaled_e: Irregular Modified Spherical Bessel Functions.
(line 25)
* gsl_sf_bessel_kl_scaled: Irregular Modified Spherical Bessel Functions.
(line 29)
* gsl_sf_bessel_kl_scaled_array: Irregular Modified Spherical Bessel Functions.
(line 36)
* gsl_sf_bessel_kl_scaled_e: Irregular Modified Spherical Bessel Functions.
(line 31)
* gsl_sf_bessel_Kn: Irregular Modified Cylindrical Bessel Functions.
(line 17)
* gsl_sf_bessel_Kn_array: Irregular Modified Cylindrical Bessel Functions.
(line 24)
* gsl_sf_bessel_Kn_e: Irregular Modified Cylindrical Bessel Functions.
(line 19)
* gsl_sf_bessel_Kn_scaled: Irregular Modified Cylindrical Bessel Functions.
(line 45)
* gsl_sf_bessel_Kn_scaled_array: Irregular Modified Cylindrical Bessel Functions.
(line 52)
* gsl_sf_bessel_Kn_scaled_e: Irregular Modified Cylindrical Bessel Functions.
(line 47)
* gsl_sf_bessel_Knu: Irregular Modified Bessel Functions - Fractional Order.
(line 7)
* gsl_sf_bessel_Knu_e: Irregular Modified Bessel Functions - Fractional Order.
(line 9)
* gsl_sf_bessel_Knu_scaled: Irregular Modified Bessel Functions - Fractional Order.
(line 20)
* gsl_sf_bessel_Knu_scaled_e: Irregular Modified Bessel Functions - Fractional Order.
(line 22)
* gsl_sf_bessel_lnKnu: Irregular Modified Bessel Functions - Fractional Order.
(line 13)
* gsl_sf_bessel_lnKnu_e: Irregular Modified Bessel Functions - Fractional Order.
(line 15)
* gsl_sf_bessel_sequence_Jnu_e: Regular Bessel Function - Fractional Order.
(line 14)
* gsl_sf_bessel_y0: Irregular Spherical Bessel Functions.
(line 7)
* gsl_sf_bessel_Y0: Irregular Cylindrical Bessel Functions.
(line 7)
* gsl_sf_bessel_y0_e: Irregular Spherical Bessel Functions.
(line 8)
* gsl_sf_bessel_Y0_e: Irregular Cylindrical Bessel Functions.
(line 8)
* gsl_sf_bessel_y1: Irregular Spherical Bessel Functions.
(line 12)
* gsl_sf_bessel_Y1: Irregular Cylindrical Bessel Functions.
(line 12)
* gsl_sf_bessel_y1_e: Irregular Spherical Bessel Functions.
(line 13)
* gsl_sf_bessel_Y1_e: Irregular Cylindrical Bessel Functions.
(line 13)
* gsl_sf_bessel_y2: Irregular Spherical Bessel Functions.
(line 17)
* gsl_sf_bessel_y2_e: Irregular Spherical Bessel Functions.
(line 18)
* gsl_sf_bessel_yl: Irregular Spherical Bessel Functions.
(line 22)
* gsl_sf_bessel_yl_array: Irregular Spherical Bessel Functions.
(line 29)
* gsl_sf_bessel_yl_e: Irregular Spherical Bessel Functions.
(line 24)
* gsl_sf_bessel_Yn: Irregular Cylindrical Bessel Functions.
(line 17)
* gsl_sf_bessel_Yn_array: Irregular Cylindrical Bessel Functions.
(line 24)
* gsl_sf_bessel_Yn_e: Irregular Cylindrical Bessel Functions.
(line 19)
* gsl_sf_bessel_Ynu: Irregular Bessel Functions - Fractional Order.
(line 7)
* gsl_sf_bessel_Ynu_e: Irregular Bessel Functions - Fractional Order.
(line 9)
* gsl_sf_bessel_zero_J0: Zeros of Regular Bessel Functions.
(line 7)
* gsl_sf_bessel_zero_J0_e: Zeros of Regular Bessel Functions.
(line 9)
* gsl_sf_bessel_zero_J1: Zeros of Regular Bessel Functions.
(line 13)
* gsl_sf_bessel_zero_J1_e: Zeros of Regular Bessel Functions.
(line 15)
* gsl_sf_bessel_zero_Jnu: Zeros of Regular Bessel Functions.
(line 19)
* gsl_sf_bessel_zero_Jnu_e: Zeros of Regular Bessel Functions.
(line 21)
* gsl_sf_beta: Beta Functions. (line 7)
* gsl_sf_beta_e: Beta Functions. (line 9)
* gsl_sf_beta_inc: Incomplete Beta Function.
(line 7)
* gsl_sf_beta_inc_e: Incomplete Beta Function.
(line 9)
* gsl_sf_Chi: Hyperbolic Integrals.
(line 12)
* gsl_sf_Chi_e: Hyperbolic Integrals.
(line 13)
* gsl_sf_choose: Factorials. (line 40)
* gsl_sf_choose_e: Factorials. (line 42)
* gsl_sf_Ci: Trigonometric Integrals.
(line 12)
* gsl_sf_Ci_e: Trigonometric Integrals.
(line 13)
* gsl_sf_clausen: Clausen Functions. (line 15)
* gsl_sf_clausen_e: Clausen Functions. (line 16)
* gsl_sf_complex_cos_e: Trigonometric Functions for Complex Arguments.
(line 13)
* gsl_sf_complex_dilog_e: Complex Argument. (line 8)
* gsl_sf_complex_log_e: Logarithm and Related Functions.
(line 21)
* gsl_sf_complex_logsin_e: Trigonometric Functions for Complex Arguments.
(line 18)
* gsl_sf_complex_sin_e: Trigonometric Functions for Complex Arguments.
(line 8)
* gsl_sf_conicalP_0: Conical Functions. (line 23)
* gsl_sf_conicalP_0_e: Conical Functions. (line 25)
* gsl_sf_conicalP_1: Conical Functions. (line 29)
* gsl_sf_conicalP_1_e: Conical Functions. (line 31)
* gsl_sf_conicalP_cyl_reg: Conical Functions. (line 43)
* gsl_sf_conicalP_cyl_reg_e: Conical Functions. (line 45)
* gsl_sf_conicalP_half: Conical Functions. (line 11)
* gsl_sf_conicalP_half_e: Conical Functions. (line 13)
* gsl_sf_conicalP_mhalf: Conical Functions. (line 17)
* gsl_sf_conicalP_mhalf_e: Conical Functions. (line 19)
* gsl_sf_conicalP_sph_reg: Conical Functions. (line 36)
* gsl_sf_conicalP_sph_reg_e: Conical Functions. (line 38)
* gsl_sf_cos: Circular Trigonometric Functions.
(line 11)
* gsl_sf_cos_e: Circular Trigonometric Functions.
(line 12)
* gsl_sf_cos_err_e: Trigonometric Functions With Error Estimates.
(line 15)
* gsl_sf_coulomb_CL_array: Coulomb Wave Function Normalization Constant.
(line 16)
* gsl_sf_coulomb_CL_e: Coulomb Wave Function Normalization Constant.
(line 11)
* gsl_sf_coulomb_wave_F_array: Coulomb Wave Functions.
(line 36)
* gsl_sf_coulomb_wave_FG_array: Coulomb Wave Functions.
(line 43)
* gsl_sf_coulomb_wave_FG_e: Coulomb Wave Functions.
(line 24)
* gsl_sf_coulomb_wave_FGp_array: Coulomb Wave Functions.
(line 52)
* gsl_sf_coulomb_wave_sphF_array: Coulomb Wave Functions.
(line 61)
* gsl_sf_coupling_3j: 3-j Symbols. (line 8)
* gsl_sf_coupling_3j_e: 3-j Symbols. (line 11)
* gsl_sf_coupling_6j: 6-j Symbols. (line 8)
* gsl_sf_coupling_6j_e: 6-j Symbols. (line 11)
* gsl_sf_coupling_9j: 9-j Symbols. (line 9)
* gsl_sf_coupling_9j_e: 9-j Symbols. (line 12)
* gsl_sf_dawson: Dawson Function. (line 12)
* gsl_sf_dawson_e: Dawson Function. (line 13)
* gsl_sf_debye_1: Debye Functions. (line 14)
* gsl_sf_debye_1_e: Debye Functions. (line 15)
* gsl_sf_debye_2: Debye Functions. (line 19)
* gsl_sf_debye_2_e: Debye Functions. (line 20)
* gsl_sf_debye_3: Debye Functions. (line 24)
* gsl_sf_debye_3_e: Debye Functions. (line 25)
* gsl_sf_debye_4: Debye Functions. (line 29)
* gsl_sf_debye_4_e: Debye Functions. (line 30)
* gsl_sf_debye_5: Debye Functions. (line 34)
* gsl_sf_debye_5_e: Debye Functions. (line 35)
* gsl_sf_debye_6: Debye Functions. (line 39)
* gsl_sf_debye_6_e: Debye Functions. (line 40)
* gsl_sf_dilog: Real Argument. (line 7)
* gsl_sf_dilog_e: Real Argument. (line 8)
* gsl_sf_doublefact: Factorials. (line 20)
* gsl_sf_doublefact_e: Factorials. (line 22)
* gsl_sf_ellint_D: Legendre Form of Incomplete Elliptic Integrals.
(line 36)
* gsl_sf_ellint_D_e: Legendre Form of Incomplete Elliptic Integrals.
(line 38)
* gsl_sf_ellint_E: Legendre Form of Incomplete Elliptic Integrals.
(line 17)
* gsl_sf_ellint_E_e: Legendre Form of Incomplete Elliptic Integrals.
(line 19)
* gsl_sf_ellint_Ecomp: Legendre Form of Complete Elliptic Integrals.
(line 15)
* gsl_sf_ellint_Ecomp_e: Legendre Form of Complete Elliptic Integrals.
(line 17)
* gsl_sf_ellint_F: Legendre Form of Incomplete Elliptic Integrals.
(line 8)
* gsl_sf_ellint_F_e: Legendre Form of Incomplete Elliptic Integrals.
(line 10)
* gsl_sf_ellint_Kcomp: Legendre Form of Complete Elliptic Integrals.
(line 7)
* gsl_sf_ellint_Kcomp_e: Legendre Form of Complete Elliptic Integrals.
(line 9)
* gsl_sf_ellint_P: Legendre Form of Incomplete Elliptic Integrals.
(line 26)
* gsl_sf_ellint_P_e: Legendre Form of Incomplete Elliptic Integrals.
(line 28)
* gsl_sf_ellint_Pcomp: Legendre Form of Complete Elliptic Integrals.
(line 24)
* gsl_sf_ellint_Pcomp_e: Legendre Form of Complete Elliptic Integrals.
(line 26)
* gsl_sf_ellint_RC: Carlson Forms. (line 8)
* gsl_sf_ellint_RC_e: Carlson Forms. (line 10)
* gsl_sf_ellint_RD: Carlson Forms. (line 15)
* gsl_sf_ellint_RD_e: Carlson Forms. (line 17)
* gsl_sf_ellint_RF: Carlson Forms. (line 22)
* gsl_sf_ellint_RF_e: Carlson Forms. (line 24)
* gsl_sf_ellint_RJ: Carlson Forms. (line 29)
* gsl_sf_ellint_RJ_e: Carlson Forms. (line 31)
* gsl_sf_elljac_e: Elliptic Functions (Jacobi).
(line 12)
* gsl_sf_erf: Error Function. (line 7)
* gsl_sf_erf_e: Error Function. (line 8)
* gsl_sf_erf_Q: Probability functions.
(line 15)
* gsl_sf_erf_Q_e: Probability functions.
(line 16)
* gsl_sf_erf_Z: Probability functions.
(line 10)
* gsl_sf_erf_Z_e: Probability functions.
(line 11)
* gsl_sf_erfc: Complementary Error Function.
(line 7)
* gsl_sf_erfc_e: Complementary Error Function.
(line 8)
* gsl_sf_eta: Eta Function. (line 13)
* gsl_sf_eta_e: Eta Function. (line 14)
* gsl_sf_eta_int: Eta Function. (line 9)
* gsl_sf_eta_int_e: Eta Function. (line 10)
* gsl_sf_exp: Exponential Function.
(line 7)
* gsl_sf_exp_e: Exponential Function.
(line 8)
* gsl_sf_exp_e10_e: Exponential Function.
(line 13)
* gsl_sf_exp_err_e: Exponentiation With Error Estimate.
(line 8)
* gsl_sf_exp_err_e10_e: Exponentiation With Error Estimate.
(line 12)
* gsl_sf_exp_mult: Exponential Function.
(line 19)
* gsl_sf_exp_mult_e: Exponential Function.
(line 21)
* gsl_sf_exp_mult_e10_e: Exponential Function.
(line 26)
* gsl_sf_exp_mult_err_e: Exponentiation With Error Estimate.
(line 18)
* gsl_sf_exp_mult_err_e10_e: Exponentiation With Error Estimate.
(line 23)
* gsl_sf_expint_3: Ei_3(x). (line 7)
* gsl_sf_expint_3_e: Ei_3(x). (line 8)
* gsl_sf_expint_E1: Exponential Integral.
(line 7)
* gsl_sf_expint_E1_e: Exponential Integral.
(line 8)
* gsl_sf_expint_E2: Exponential Integral.
(line 15)
* gsl_sf_expint_E2_e: Exponential Integral.
(line 16)
* gsl_sf_expint_Ei: Ei(x). (line 7)
* gsl_sf_expint_Ei_e: Ei(x). (line 8)
* gsl_sf_expint_En: Exponential Integral.
(line 24)
* gsl_sf_expint_En_e: Exponential Integral.
(line 26)
* gsl_sf_expm1: Relative Exponential Functions.
(line 7)
* gsl_sf_expm1_e: Relative Exponential Functions.
(line 8)
* gsl_sf_exprel: Relative Exponential Functions.
(line 12)
* gsl_sf_exprel_2: Relative Exponential Functions.
(line 19)
* gsl_sf_exprel_2_e: Relative Exponential Functions.
(line 20)
* gsl_sf_exprel_e: Relative Exponential Functions.
(line 13)
* gsl_sf_exprel_n: Relative Exponential Functions.
(line 26)
* gsl_sf_exprel_n_e: Relative Exponential Functions.
(line 28)
* gsl_sf_fact: Factorials. (line 13)
* gsl_sf_fact_e: Factorials. (line 14)
* gsl_sf_fermi_dirac_0: Complete Fermi-Dirac Integrals.
(line 20)
* gsl_sf_fermi_dirac_0_e: Complete Fermi-Dirac Integrals.
(line 22)
* gsl_sf_fermi_dirac_1: Complete Fermi-Dirac Integrals.
(line 26)
* gsl_sf_fermi_dirac_1_e: Complete Fermi-Dirac Integrals.
(line 28)
* gsl_sf_fermi_dirac_2: Complete Fermi-Dirac Integrals.
(line 32)
* gsl_sf_fermi_dirac_2_e: Complete Fermi-Dirac Integrals.
(line 34)
* gsl_sf_fermi_dirac_3half: Complete Fermi-Dirac Integrals.
(line 57)
* gsl_sf_fermi_dirac_3half_e: Complete Fermi-Dirac Integrals.
(line 59)
* gsl_sf_fermi_dirac_half: Complete Fermi-Dirac Integrals.
(line 51)
* gsl_sf_fermi_dirac_half_e: Complete Fermi-Dirac Integrals.
(line 53)
* gsl_sf_fermi_dirac_inc_0: Incomplete Fermi-Dirac Integrals.
(line 11)
* gsl_sf_fermi_dirac_inc_0_e: Incomplete Fermi-Dirac Integrals.
(line 13)
* gsl_sf_fermi_dirac_int: Complete Fermi-Dirac Integrals.
(line 38)
* gsl_sf_fermi_dirac_int_e: Complete Fermi-Dirac Integrals.
(line 40)
* gsl_sf_fermi_dirac_m1: Complete Fermi-Dirac Integrals.
(line 13)
* gsl_sf_fermi_dirac_m1_e: Complete Fermi-Dirac Integrals.
(line 15)
* gsl_sf_fermi_dirac_mhalf: Complete Fermi-Dirac Integrals.
(line 45)
* gsl_sf_fermi_dirac_mhalf_e: Complete Fermi-Dirac Integrals.
(line 47)
* gsl_sf_gamma: Gamma Functions. (line 15)
* gsl_sf_gamma_e: Gamma Functions. (line 16)
* gsl_sf_gamma_inc: Incomplete Gamma Functions.
(line 7)
* gsl_sf_gamma_inc_e: Incomplete Gamma Functions.
(line 9)
* gsl_sf_gamma_inc_P: Incomplete Gamma Functions.
(line 21)
* gsl_sf_gamma_inc_P_e: Incomplete Gamma Functions.
(line 23)
* gsl_sf_gamma_inc_Q: Incomplete Gamma Functions.
(line 14)
* gsl_sf_gamma_inc_Q_e: Incomplete Gamma Functions.
(line 16)
* gsl_sf_gammainv: Gamma Functions. (line 50)
* gsl_sf_gammainv_e: Gamma Functions. (line 51)
* gsl_sf_gammastar: Gamma Functions. (line 41)
* gsl_sf_gammastar_e: Gamma Functions. (line 42)
* gsl_sf_gegenpoly_1: Gegenbauer Functions.
(line 12)
* gsl_sf_gegenpoly_1_e: Gegenbauer Functions.
(line 16)
* gsl_sf_gegenpoly_2: Gegenbauer Functions.
(line 13)
* gsl_sf_gegenpoly_2_e: Gegenbauer Functions.
(line 18)
* gsl_sf_gegenpoly_3: Gegenbauer Functions.
(line 14)
* gsl_sf_gegenpoly_3_e: Gegenbauer Functions.
(line 20)
* gsl_sf_gegenpoly_array: Gegenbauer Functions.
(line 32)
* gsl_sf_gegenpoly_n: Gegenbauer Functions.
(line 24)
* gsl_sf_gegenpoly_n_e: Gegenbauer Functions.
(line 26)
* gsl_sf_hazard: Probability functions.
(line 28)
* gsl_sf_hazard_e: Probability functions.
(line 29)
* gsl_sf_hydrogenicR: Normalized Hydrogenic Bound States.
(line 14)
* gsl_sf_hydrogenicR_1: Normalized Hydrogenic Bound States.
(line 7)
* gsl_sf_hydrogenicR_1_e: Normalized Hydrogenic Bound States.
(line 9)
* gsl_sf_hydrogenicR_e: Normalized Hydrogenic Bound States.
(line 16)
* gsl_sf_hyperg_0F1: Hypergeometric Functions.
(line 11)
* gsl_sf_hyperg_0F1_e: Hypergeometric Functions.
(line 13)
* gsl_sf_hyperg_1F1: Hypergeometric Functions.
(line 22)
* gsl_sf_hyperg_1F1_e: Hypergeometric Functions.
(line 24)
* gsl_sf_hyperg_1F1_int: Hypergeometric Functions.
(line 16)
* gsl_sf_hyperg_1F1_int_e: Hypergeometric Functions.
(line 18)
* gsl_sf_hyperg_2F0: Hypergeometric Functions.
(line 86)
* gsl_sf_hyperg_2F0_e: Hypergeometric Functions.
(line 88)
* gsl_sf_hyperg_2F1: Hypergeometric Functions.
(line 53)
* gsl_sf_hyperg_2F1_conj: Hypergeometric Functions.
(line 65)
* gsl_sf_hyperg_2F1_conj_e: Hypergeometric Functions.
(line 67)
* gsl_sf_hyperg_2F1_conj_renorm: Hypergeometric Functions.
(line 79)
* gsl_sf_hyperg_2F1_conj_renorm_e: Hypergeometric Functions.
(line 81)
* gsl_sf_hyperg_2F1_e: Hypergeometric Functions.
(line 55)
* gsl_sf_hyperg_2F1_renorm: Hypergeometric Functions.
(line 72)
* gsl_sf_hyperg_2F1_renorm_e: Hypergeometric Functions.
(line 74)
* gsl_sf_hyperg_U: Hypergeometric Functions.
(line 40)
* gsl_sf_hyperg_U_e: Hypergeometric Functions.
(line 42)
* gsl_sf_hyperg_U_e10_e: Hypergeometric Functions.
(line 47)
* gsl_sf_hyperg_U_int: Hypergeometric Functions.
(line 28)
* gsl_sf_hyperg_U_int_e: Hypergeometric Functions.
(line 30)
* gsl_sf_hyperg_U_int_e10_e: Hypergeometric Functions.
(line 35)
* gsl_sf_hypot: Circular Trigonometric Functions.
(line 15)
* gsl_sf_hypot_e: Circular Trigonometric Functions.
(line 17)
* gsl_sf_hzeta: Hurwitz Zeta Function.
(line 10)
* gsl_sf_hzeta_e: Hurwitz Zeta Function.
(line 12)
* gsl_sf_laguerre_1: Laguerre Functions. (line 17)
* gsl_sf_laguerre_1_e: Laguerre Functions. (line 21)
* gsl_sf_laguerre_2: Laguerre Functions. (line 18)
* gsl_sf_laguerre_2_e: Laguerre Functions. (line 23)
* gsl_sf_laguerre_3: Laguerre Functions. (line 19)
* gsl_sf_laguerre_3_e: Laguerre Functions. (line 25)
* gsl_sf_laguerre_n: Laguerre Functions. (line 30)
* gsl_sf_laguerre_n_e: Laguerre Functions. (line 32)
* gsl_sf_lambert_W0: Lambert W Functions. (line 14)
* gsl_sf_lambert_W0_e: Lambert W Functions. (line 15)
* gsl_sf_lambert_Wm1: Lambert W Functions. (line 19)
* gsl_sf_lambert_Wm1_e: Lambert W Functions. (line 21)
* gsl_sf_legendre_array_size: Associated Legendre Polynomials and Spherical Harmonics.
(line 50)
* gsl_sf_legendre_H3d: Radial Functions for Hyperbolic Space.
(line 32)
* gsl_sf_legendre_H3d_0: Radial Functions for Hyperbolic Space.
(line 12)
* gsl_sf_legendre_H3d_0_e: Radial Functions for Hyperbolic Space.
(line 14)
* gsl_sf_legendre_H3d_1: Radial Functions for Hyperbolic Space.
(line 21)
* gsl_sf_legendre_H3d_1_e: Radial Functions for Hyperbolic Space.
(line 23)
* gsl_sf_legendre_H3d_array: Radial Functions for Hyperbolic Space.
(line 41)
* gsl_sf_legendre_H3d_e: Radial Functions for Hyperbolic Space.
(line 34)
* gsl_sf_legendre_P1: Legendre Polynomials.
(line 7)
* gsl_sf_legendre_P1_e: Legendre Polynomials.
(line 11)
* gsl_sf_legendre_P2: Legendre Polynomials.
(line 8)
* gsl_sf_legendre_P2_e: Legendre Polynomials.
(line 13)
* gsl_sf_legendre_P3: Legendre Polynomials.
(line 9)
* gsl_sf_legendre_P3_e: Legendre Polynomials.
(line 15)
* gsl_sf_legendre_Pl: Legendre Polynomials.
(line 19)
* gsl_sf_legendre_Pl_array: Legendre Polynomials.
(line 26)
* gsl_sf_legendre_Pl_deriv_array: Legendre Polynomials.
(line 28)
* gsl_sf_legendre_Pl_e: Legendre Polynomials.
(line 21)
* gsl_sf_legendre_Plm: Associated Legendre Polynomials and Spherical Harmonics.
(line 18)
* gsl_sf_legendre_Plm_array: Associated Legendre Polynomials and Spherical Harmonics.
(line 25)
* gsl_sf_legendre_Plm_deriv_array: Associated Legendre Polynomials and Spherical Harmonics.
(line 27)
* gsl_sf_legendre_Plm_e: Associated Legendre Polynomials and Spherical Harmonics.
(line 20)
* gsl_sf_legendre_Q0: Legendre Polynomials.
(line 32)
* gsl_sf_legendre_Q0_e: Legendre Polynomials.
(line 34)
* gsl_sf_legendre_Q1: Legendre Polynomials.
(line 38)
* gsl_sf_legendre_Q1_e: Legendre Polynomials.
(line 40)
* gsl_sf_legendre_Ql: Legendre Polynomials.
(line 44)
* gsl_sf_legendre_Ql_e: Legendre Polynomials.
(line 46)
* gsl_sf_legendre_sphPlm: Associated Legendre Polynomials and Spherical Harmonics.
(line 32)
* gsl_sf_legendre_sphPlm_array: Associated Legendre Polynomials and Spherical Harmonics.
(line 42)
* gsl_sf_legendre_sphPlm_deriv_array: Associated Legendre Polynomials and Spherical Harmonics.
(line 44)
* gsl_sf_legendre_sphPlm_e: Associated Legendre Polynomials and Spherical Harmonics.
(line 34)
* gsl_sf_lnbeta: Beta Functions. (line 14)
* gsl_sf_lnbeta_e: Beta Functions. (line 16)
* gsl_sf_lnchoose: Factorials. (line 46)
* gsl_sf_lnchoose_e: Factorials. (line 48)
* gsl_sf_lncosh: Hyperbolic Trigonometric Functions.
(line 11)
* gsl_sf_lncosh_e: Hyperbolic Trigonometric Functions.
(line 12)
* gsl_sf_lndoublefact: Factorials. (line 34)
* gsl_sf_lndoublefact_e: Factorials. (line 36)
* gsl_sf_lnfact: Factorials. (line 27)
* gsl_sf_lnfact_e: Factorials. (line 29)
* gsl_sf_lngamma: Gamma Functions. (line 23)
* gsl_sf_lngamma_complex_e: Gamma Functions. (line 56)
* gsl_sf_lngamma_e: Gamma Functions. (line 24)
* gsl_sf_lngamma_sgn_e: Gamma Functions. (line 32)
* gsl_sf_lnpoch: Pochhammer Symbol. (line 16)
* gsl_sf_lnpoch_e: Pochhammer Symbol. (line 18)
* gsl_sf_lnpoch_sgn_e: Pochhammer Symbol. (line 23)
* gsl_sf_lnsinh: Hyperbolic Trigonometric Functions.
(line 7)
* gsl_sf_lnsinh_e: Hyperbolic Trigonometric Functions.
(line 8)
* gsl_sf_log: Logarithm and Related Functions.
(line 11)
* gsl_sf_log_1plusx: Logarithm and Related Functions.
(line 26)
* gsl_sf_log_1plusx_e: Logarithm and Related Functions.
(line 27)
* gsl_sf_log_1plusx_mx: Logarithm and Related Functions.
(line 31)
* gsl_sf_log_1plusx_mx_e: Logarithm and Related Functions.
(line 33)
* gsl_sf_log_abs: Logarithm and Related Functions.
(line 15)
* gsl_sf_log_abs_e: Logarithm and Related Functions.
(line 16)
* gsl_sf_log_e: Logarithm and Related Functions.
(line 12)
* gsl_sf_log_erfc: Log Complementary Error Function.
(line 7)
* gsl_sf_log_erfc_e: Log Complementary Error Function.
(line 8)
* gsl_sf_mathieu_a: Mathieu Function Characteristic Values.
(line 8)
* gsl_sf_mathieu_a_array: Mathieu Function Characteristic Values.
(line 16)
* gsl_sf_mathieu_alloc: Mathieu Function Workspace.
(line 12)
* gsl_sf_mathieu_b: Mathieu Function Characteristic Values.
(line 10)
* gsl_sf_mathieu_b_array: Mathieu Function Characteristic Values.
(line 19)
* gsl_sf_mathieu_ce: Angular Mathieu Functions.
(line 8)
* gsl_sf_mathieu_ce_array: Angular Mathieu Functions.
(line 16)
* gsl_sf_mathieu_free: Mathieu Function Workspace.
(line 19)
* gsl_sf_mathieu_Mc: Radial Mathieu Functions.
(line 8)
* gsl_sf_mathieu_Mc_array: Radial Mathieu Functions.
(line 21)
* gsl_sf_mathieu_Ms: Radial Mathieu Functions.
(line 10)
* gsl_sf_mathieu_Ms_array: Radial Mathieu Functions.
(line 24)
* gsl_sf_mathieu_se: Angular Mathieu Functions.
(line 10)
* gsl_sf_mathieu_se_array: Angular Mathieu Functions.
(line 19)
* gsl_sf_multiply_e: Elementary Operations.
(line 12)
* gsl_sf_multiply_err_e: Elementary Operations.
(line 17)
* gsl_sf_poch: Pochhammer Symbol. (line 7)
* gsl_sf_poch_e: Pochhammer Symbol. (line 9)
* gsl_sf_pochrel: Pochhammer Symbol. (line 29)
* gsl_sf_pochrel_e: Pochhammer Symbol. (line 31)
* gsl_sf_polar_to_rect: Conversion Functions.
(line 8)
* gsl_sf_pow_int: Power Function. (line 11)
* gsl_sf_pow_int_e: Power Function. (line 13)
* gsl_sf_psi: Digamma Function. (line 12)
* gsl_sf_psi_1: Trigamma Function. (line 12)
* gsl_sf_psi_1_e: Trigamma Function. (line 13)
* gsl_sf_psi_1_int: Trigamma Function. (line 7)
* gsl_sf_psi_1_int_e: Trigamma Function. (line 8)
* gsl_sf_psi_1piy: Digamma Function. (line 17)
* gsl_sf_psi_1piy_e: Digamma Function. (line 18)
* gsl_sf_psi_e: Digamma Function. (line 13)
* gsl_sf_psi_int: Digamma Function. (line 7)
* gsl_sf_psi_int_e: Digamma Function. (line 8)
* gsl_sf_psi_n: Polygamma Function. (line 7)
* gsl_sf_psi_n_e: Polygamma Function. (line 9)
* gsl_sf_rect_to_polar: Conversion Functions.
(line 14)
* gsl_sf_Shi: Hyperbolic Integrals.
(line 7)
* gsl_sf_Shi_e: Hyperbolic Integrals.
(line 8)
* gsl_sf_Si: Trigonometric Integrals.
(line 7)
* gsl_sf_Si_e: Trigonometric Integrals.
(line 8)
* gsl_sf_sin: Circular Trigonometric Functions.
(line 7)
* gsl_sf_sin_e: Circular Trigonometric Functions.
(line 8)
* gsl_sf_sin_err_e: Trigonometric Functions With Error Estimates.
(line 8)
* gsl_sf_sinc: Circular Trigonometric Functions.
(line 21)
* gsl_sf_sinc_e: Circular Trigonometric Functions.
(line 22)
* gsl_sf_synchrotron_1: Synchrotron Functions.
(line 10)
* gsl_sf_synchrotron_1_e: Synchrotron Functions.
(line 12)
* gsl_sf_synchrotron_2: Synchrotron Functions.
(line 16)
* gsl_sf_synchrotron_2_e: Synchrotron Functions.
(line 18)
* gsl_sf_taylorcoeff: Factorials. (line 52)
* gsl_sf_taylorcoeff_e: Factorials. (line 54)
* gsl_sf_transport_2: Transport Functions. (line 11)
* gsl_sf_transport_2_e: Transport Functions. (line 13)
* gsl_sf_transport_3: Transport Functions. (line 16)
* gsl_sf_transport_3_e: Transport Functions. (line 18)
* gsl_sf_transport_4: Transport Functions. (line 21)
* gsl_sf_transport_4_e: Transport Functions. (line 23)
* gsl_sf_transport_5: Transport Functions. (line 26)
* gsl_sf_transport_5_e: Transport Functions. (line 28)
* gsl_sf_zeta: Riemann Zeta Function.
(line 15)
* gsl_sf_zeta_e: Riemann Zeta Function.
(line 16)
* gsl_sf_zeta_int: Riemann Zeta Function.
(line 10)
* gsl_sf_zeta_int_e: Riemann Zeta Function.
(line 11)
* gsl_sf_zetam1: Riemann Zeta Function Minus One.
(line 15)
* gsl_sf_zetam1_e: Riemann Zeta Function Minus One.
(line 16)
* gsl_sf_zetam1_int: Riemann Zeta Function Minus One.
(line 11)
* gsl_sf_zetam1_int_e: Riemann Zeta Function Minus One.
(line 12)
* GSL_SIGN: Testing the Sign of Numbers.
(line 7)
* gsl_siman_solve: Simulated Annealing functions.
(line 13)
* gsl_sort: Sorting vectors. (line 24)
* gsl_sort_index: Sorting vectors. (line 33)
* gsl_sort_largest: Selecting the k smallest or largest elements.
(line 25)
* gsl_sort_largest_index: Selecting the k smallest or largest elements.
(line 51)
* gsl_sort_smallest: Selecting the k smallest or largest elements.
(line 18)
* gsl_sort_smallest_index: Selecting the k smallest or largest elements.
(line 43)
* gsl_sort_vector: Sorting vectors. (line 28)
* gsl_sort_vector_index: Sorting vectors. (line 43)
* gsl_sort_vector_largest: Selecting the k smallest or largest elements.
(line 34)
* gsl_sort_vector_largest_index: Selecting the k smallest or largest elements.
(line 61)
* gsl_sort_vector_smallest: Selecting the k smallest or largest elements.
(line 32)
* gsl_sort_vector_smallest_index: Selecting the k smallest or largest elements.
(line 59)
* gsl_spline_alloc: Higher-level Interface.
(line 16)
* gsl_spline_eval: Higher-level Interface.
(line 29)
* gsl_spline_eval_deriv: Higher-level Interface.
(line 34)
* gsl_spline_eval_deriv2: Higher-level Interface.
(line 39)
* gsl_spline_eval_deriv2_e: Higher-level Interface.
(line 41)
* gsl_spline_eval_deriv_e: Higher-level Interface.
(line 36)
* gsl_spline_eval_e: Higher-level Interface.
(line 31)
* gsl_spline_eval_integ: Higher-level Interface.
(line 44)
* gsl_spline_eval_integ_e: Higher-level Interface.
(line 46)
* gsl_spline_free: Higher-level Interface.
(line 21)
* gsl_spline_init: Higher-level Interface.
(line 19)
* gsl_spline_min_size: Higher-level Interface.
(line 26)
* gsl_spline_name: Higher-level Interface.
(line 23)
* gsl_stats_absdev: Absolute deviation. (line 8)
* gsl_stats_absdev_m: Absolute deviation. (line 21)
* gsl_stats_correlation: Correlation. (line 9)
* gsl_stats_covariance: Covariance. (line 9)
* gsl_stats_covariance_m: Covariance. (line 18)
* gsl_stats_kurtosis: Higher moments (skewness and kurtosis).
(line 31)
* gsl_stats_kurtosis_m_sd: Higher moments (skewness and kurtosis).
(line 42)
* gsl_stats_lag1_autocorrelation: Autocorrelation. (line 8)
* gsl_stats_lag1_autocorrelation_m: Autocorrelation. (line 18)
* gsl_stats_max: Maximum and Minimum values.
(line 14)
* gsl_stats_max_index: Maximum and Minimum values.
(line 39)
* gsl_stats_mean: Mean and standard deviation and variance.
(line 8)
* gsl_stats_median_from_sorted_data: Median and Percentiles.
(line 12)
* gsl_stats_min: Maximum and Minimum values.
(line 24)
* gsl_stats_min_index: Maximum and Minimum values.
(line 47)
* gsl_stats_minmax: Maximum and Minimum values.
(line 34)
* gsl_stats_minmax_index: Maximum and Minimum values.
(line 55)
* gsl_stats_quantile_from_sorted_data: Median and Percentiles.
(line 27)
* gsl_stats_sd: Mean and standard deviation and variance.
(line 46)
* gsl_stats_sd_m: Mean and standard deviation and variance.
(line 48)
* gsl_stats_sd_with_fixed_mean: Mean and standard deviation and variance.
(line 75)
* gsl_stats_skew: Higher moments (skewness and kurtosis).
(line 8)
* gsl_stats_skew_m_sd: Higher moments (skewness and kurtosis).
(line 21)
* gsl_stats_tss: Mean and standard deviation and variance.
(line 54)
* gsl_stats_tss_m: Mean and standard deviation and variance.
(line 56)
* gsl_stats_variance: Mean and standard deviation and variance.
(line 20)
* gsl_stats_variance_m: Mean and standard deviation and variance.
(line 38)
* gsl_stats_variance_with_fixed_mean: Mean and standard deviation and variance.
(line 65)
* gsl_stats_wabsdev: Weighted Samples. (line 85)
* gsl_stats_wabsdev_m: Weighted Samples. (line 94)
* gsl_stats_wkurtosis: Weighted Samples. (line 112)
* gsl_stats_wkurtosis_m_sd: Weighted Samples. (line 119)
* gsl_stats_wmean: Weighted Samples. (line 16)
* gsl_stats_wsd: Weighted Samples. (line 44)
* gsl_stats_wsd_m: Weighted Samples. (line 50)
* gsl_stats_wsd_with_fixed_mean: Weighted Samples. (line 67)
* gsl_stats_wskew: Weighted Samples. (line 99)
* gsl_stats_wskew_m_sd: Weighted Samples. (line 106)
* gsl_stats_wtss: Weighted Samples. (line 73)
* gsl_stats_wtss_m: Weighted Samples. (line 76)
* gsl_stats_wvariance: Weighted Samples. (line 24)
* gsl_stats_wvariance_m: Weighted Samples. (line 39)
* gsl_stats_wvariance_with_fixed_mean: Weighted Samples. (line 56)
* gsl_strerror: Error Codes. (line 38)
* gsl_sum_levin_u_accel: Acceleration functions.
(line 34)
* gsl_sum_levin_u_alloc: Acceleration functions.
(line 25)
* gsl_sum_levin_u_free: Acceleration functions.
(line 29)
* gsl_sum_levin_utrunc_accel: Acceleration functions without error estimation.
(line 35)
* gsl_sum_levin_utrunc_alloc: Acceleration functions without error estimation.
(line 24)
* gsl_sum_levin_utrunc_free: Acceleration functions without error estimation.
(line 30)
* gsl_vector_add: Vector operations. (line 7)
* gsl_vector_add_constant: Vector operations. (line 33)
* gsl_vector_alloc: Vector allocation. (line 15)
* gsl_vector_calloc: Vector allocation. (line 22)
* gsl_vector_complex_const_imag: Vector views. (line 110)
* gsl_vector_complex_const_real: Vector views. (line 99)
* gsl_vector_complex_imag: Vector views. (line 108)
* gsl_vector_complex_real: Vector views. (line 97)
* gsl_vector_const_ptr: Accessing vector elements.
(line 54)
* gsl_vector_const_subvector: Vector views. (line 31)
* gsl_vector_const_subvector_with_stride: Vector views. (line 62)
* gsl_vector_const_view_array: Vector views. (line 121)
* gsl_vector_const_view_array_with_stride: Vector views. (line 145)
* gsl_vector_div: Vector operations. (line 23)
* gsl_vector_equal: Vector properties. (line 20)
* gsl_vector_fprintf: Reading and writing vectors.
(line 28)
* gsl_vector_fread: Reading and writing vectors.
(line 18)
* gsl_vector_free: Vector allocation. (line 26)
* gsl_vector_fscanf: Reading and writing vectors.
(line 35)
* gsl_vector_fwrite: Reading and writing vectors.
(line 11)
* gsl_vector_get: Accessing vector elements.
(line 40)
* gsl_vector_isneg: Vector properties. (line 13)
* gsl_vector_isnonneg: Vector properties. (line 14)
* gsl_vector_isnull: Vector properties. (line 11)
* gsl_vector_ispos: Vector properties. (line 12)
* gsl_vector_max: Finding maximum and minimum elements of vectors.
(line 9)
* gsl_vector_max_index: Finding maximum and minimum elements of vectors.
(line 20)
* gsl_vector_memcpy: Copying vectors. (line 14)
* gsl_vector_min: Finding maximum and minimum elements of vectors.
(line 12)
* gsl_vector_min_index: Finding maximum and minimum elements of vectors.
(line 25)
* gsl_vector_minmax: Finding maximum and minimum elements of vectors.
(line 16)
* gsl_vector_minmax_index: Finding maximum and minimum elements of vectors.
(line 31)
* gsl_vector_mul: Vector operations. (line 18)
* gsl_vector_ptr: Accessing vector elements.
(line 52)
* gsl_vector_reverse: Exchanging elements. (line 15)
* gsl_vector_scale: Vector operations. (line 28)
* gsl_vector_set: Accessing vector elements.
(line 46)
* gsl_vector_set_all: Initializing vector elements.
(line 7)
* gsl_vector_set_basis: Initializing vector elements.
(line 13)
* gsl_vector_set_zero: Initializing vector elements.
(line 10)
* gsl_vector_sub: Vector operations. (line 12)
* gsl_vector_subvector: Vector views. (line 29)
* gsl_vector_subvector_with_stride: Vector views. (line 59)
* gsl_vector_swap: Copying vectors. (line 18)
* gsl_vector_swap_elements: Exchanging elements. (line 11)
* gsl_vector_view_array: Vector views. (line 119)
* gsl_vector_view_array_with_stride: Vector views. (line 142)
* gsl_wavelet2d_nstransform: DWT in two dimension.
(line 66)
* gsl_wavelet2d_nstransform_forward: DWT in two dimension.
(line 69)
* gsl_wavelet2d_nstransform_inverse: DWT in two dimension.
(line 72)
* gsl_wavelet2d_nstransform_matrix: DWT in two dimension.
(line 78)
* gsl_wavelet2d_nstransform_matrix_forward: DWT in two dimension.
(line 80)
* gsl_wavelet2d_nstransform_matrix_inverse: DWT in two dimension.
(line 82)
* gsl_wavelet2d_transform: DWT in two dimension.
(line 32)
* gsl_wavelet2d_transform_forward: DWT in two dimension.
(line 35)
* gsl_wavelet2d_transform_inverse: DWT in two dimension.
(line 38)
* gsl_wavelet2d_transform_matrix: DWT in two dimension.
(line 56)
* gsl_wavelet2d_transform_matrix_forward: DWT in two dimension.
(line 58)
* gsl_wavelet2d_transform_matrix_inverse: DWT in two dimension.
(line 60)
* gsl_wavelet_alloc: DWT Initialization. (line 11)
* gsl_wavelet_bspline: DWT Initialization. (line 30)
* gsl_wavelet_bspline_centered: DWT Initialization. (line 31)
* gsl_wavelet_daubechies: DWT Initialization. (line 19)
* gsl_wavelet_daubechies_centered: DWT Initialization. (line 20)
* gsl_wavelet_free: DWT Initialization. (line 45)
* gsl_wavelet_haar: DWT Initialization. (line 25)
* gsl_wavelet_haar_centered: DWT Initialization. (line 26)
* gsl_wavelet_name: DWT Initialization. (line 41)
* gsl_wavelet_transform: DWT in one dimension.
(line 9)
* gsl_wavelet_transform_forward: DWT in one dimension.
(line 12)
* gsl_wavelet_transform_inverse: DWT in one dimension.
(line 15)
* gsl_wavelet_workspace_alloc: DWT Initialization. (line 53)
* gsl_wavelet_workspace_free: DWT Initialization. (line 63)
File: gsl-ref.info, Node: Variable Index, Next: Type Index, Prev: Function Index, Up: Top
Variable Index
**************
[index]
* Menu:
* alpha <1>: VEGAS. (line 146)
* alpha: MISER. (line 115)
* dither: MISER. (line 131)
* estimate_frac: MISER. (line 94)
* GSL_C99_INLINE <1>: Accessing vector elements.
(line 26)
* GSL_C99_INLINE: Inline functions. (line 6)
* gsl_check_range: Accessing vector elements.
(line 31)
* GSL_EDOM: Error Codes. (line 14)
* GSL_EINVAL: Error Codes. (line 30)
* GSL_ENOMEM: Error Codes. (line 24)
* GSL_ERANGE: Error Codes. (line 19)
* GSL_IEEE_MODE: Setting up your IEEE environment.
(line 24)
* GSL_NAN: Infinities and Not-a-number.
(line 15)
* GSL_NEGINF: Infinities and Not-a-number.
(line 11)
* GSL_POSINF: Infinities and Not-a-number.
(line 7)
* GSL_RANGE_CHECK_OFF: Accessing vector elements.
(line 17)
* gsl_rng_default: Random number environment variables.
(line 12)
* gsl_rng_default_seed <1>: Random number environment variables.
(line 12)
* gsl_rng_default_seed: Random number generator initialization.
(line 17)
* GSL_RNG_SEED <1>: Random number environment variables.
(line 12)
* GSL_RNG_SEED: Random number generator initialization.
(line 17)
* GSL_RNG_TYPE: Random number environment variables.
(line 12)
* HAVE_INLINE: Inline functions. (line 6)
* iterations: VEGAS. (line 151)
* min_calls: MISER. (line 99)
* min_calls_per_bisection: MISER. (line 107)
* mode: VEGAS. (line 169)
* ostream: VEGAS. (line 179)
* stage: VEGAS. (line 155)
* verbose: VEGAS. (line 178)
File: gsl-ref.info, Node: Type Index, Next: Concept Index, Prev: Variable Index, Up: Top
Type Index
**********
[index]
* Menu:
* gsl_block: Blocks. (line 6)
* gsl_bspline_deriv_workspace: Initializing the B-splines solver.
(line 22)
* gsl_bspline_workspace: Initializing the B-splines solver.
(line 12)
* gsl_cheb_series: Chebyshev Definitions.
(line 6)
* gsl_combination: The Combination struct.
(line 6)
* gsl_complex: Representation of complex numbers.
(line 6)
* gsl_dht: Discrete Hankel Transform Functions.
(line 7)
* gsl_eigen_gen_workspace: Real Generalized Nonsymmetric Eigensystems.
(line 43)
* gsl_eigen_genherm_workspace: Complex Generalized Hermitian-Definite Eigensystems.
(line 19)
* gsl_eigen_genhermv_workspace: Complex Generalized Hermitian-Definite Eigensystems.
(line 37)
* gsl_eigen_gensymm_workspace: Real Generalized Symmetric-Definite Eigensystems.
(line 24)
* gsl_eigen_gensymmv_workspace: Real Generalized Symmetric-Definite Eigensystems.
(line 41)
* gsl_eigen_genv_workspace: Real Generalized Nonsymmetric Eigensystems.
(line 98)
* gsl_eigen_herm_workspace: Complex Hermitian Matrices.
(line 11)
* gsl_eigen_hermv_workspace: Complex Hermitian Matrices.
(line 30)
* gsl_eigen_nonsymm_workspace: Real Nonsymmetric Matrices.
(line 19)
* gsl_eigen_nonsymmv_workspace: Real Nonsymmetric Matrices.
(line 80)
* gsl_eigen_symm_workspace: Real Symmetric Matrices.
(line 14)
* gsl_eigen_symmv_workspace: Real Symmetric Matrices.
(line 32)
* gsl_error_handler_t: Error Handlers. (line 24)
* gsl_fft_complex_wavetable: Mixed-radix FFT routines for complex data.
(line 79)
* gsl_fft_complex_workspace: Mixed-radix FFT routines for complex data.
(line 107)
* gsl_fft_halfcomplex_wavetable: Mixed-radix FFT routines for real data.
(line 76)
* gsl_fft_real_wavetable: Mixed-radix FFT routines for real data.
(line 76)
* gsl_fft_real_workspace: Mixed-radix FFT routines for real data.
(line 106)
* gsl_function: Providing the function to solve.
(line 12)
* gsl_function_fdf: Providing the function to solve.
(line 51)
* gsl_histogram: The histogram struct.
(line 9)
* gsl_histogram2d: The 2D histogram struct.
(line 9)
* gsl_histogram2d_pdf: Resampling from 2D histograms.
(line 21)
* gsl_histogram_pdf: The histogram probability distribution struct.
(line 19)
* gsl_integration_cquad_workspace: CQUAD doubly-adaptive integration.
(line 22)
* gsl_integration_glfixed_table: Fixed order Gauss-Legendre integration.
(line 16)
* gsl_integration_qawo_table: QAWO adaptive integration for oscillatory functions.
(line 14)
* gsl_integration_qaws_table: QAWS adaptive integration for singular functions.
(line 14)
* gsl_integration_workspace: QAG adaptive integration.
(line 17)
* gsl_interp: Interpolation Functions.
(line 11)
* gsl_interp_accel: Index Look-up and Acceleration.
(line 20)
* gsl_interp_type: Interpolation Types. (line 6)
* gsl_matrix: Matrices. (line 10)
* gsl_matrix_const_view: Matrix views. (line 6)
* gsl_matrix_view: Matrix views. (line 6)
* gsl_min_fminimizer: Initializing the Minimizer.
(line 8)
* gsl_min_fminimizer_type: Initializing the Minimizer.
(line 8)
* gsl_monte_function: Monte Carlo Interface.
(line 28)
* gsl_monte_miser_state: MISER. (line 47)
* gsl_monte_plain_state: PLAIN Monte Carlo. (line 30)
* gsl_monte_vegas_state: VEGAS. (line 51)
* gsl_multifit_fdfsolver: Initializing the Nonlinear Least-Squares Solver.
(line 18)
* gsl_multifit_fdfsolver_type: Initializing the Nonlinear Least-Squares Solver.
(line 18)
* gsl_multifit_fsolver: Initializing the Nonlinear Least-Squares Solver.
(line 8)
* gsl_multifit_fsolver_type: Initializing the Nonlinear Least-Squares Solver.
(line 8)
* gsl_multifit_function: Providing the Function to be Minimized.
(line 11)
* gsl_multifit_function_fdf: Providing the Function to be Minimized.
(line 31)
* gsl_multifit_linear_workspace: Multi-parameter fitting.
(line 43)
* gsl_multimin_fdfminimizer: Initializing the Multidimensional Minimizer.
(line 15)
* gsl_multimin_fdfminimizer_type: Initializing the Multidimensional Minimizer.
(line 15)
* gsl_multimin_fminimizer: Initializing the Multidimensional Minimizer.
(line 15)
* gsl_multimin_fminimizer_type: Initializing the Multidimensional Minimizer.
(line 15)
* gsl_multimin_function: Providing a function to minimize.
(line 43)
* gsl_multimin_function_fdf: Providing a function to minimize.
(line 14)
* gsl_multiroot_fdfsolver: Initializing the Multidimensional Solver.
(line 29)
* gsl_multiroot_fdfsolver_type: Initializing the Multidimensional Solver.
(line 29)
* gsl_multiroot_fsolver: Initializing the Multidimensional Solver.
(line 13)
* gsl_multiroot_fsolver_type: Initializing the Multidimensional Solver.
(line 13)
* gsl_multiroot_function: Providing the multidimensional system of equations to solve.
(line 11)
* gsl_multiroot_function_fdf: Providing the multidimensional system of equations to solve.
(line 58)
* gsl_multiset: The Multiset struct. (line 6)
* gsl_ntuple: The ntuple struct. (line 6)
* gsl_ntuple_select_fn: Histogramming ntuple values.
(line 13)
* gsl_ntuple_value_fn: Histogramming ntuple values.
(line 24)
* gsl_odeiv2_control: Adaptive Step-size Control.
(line 12)
* gsl_odeiv2_control_type: Adaptive Step-size Control.
(line 12)
* gsl_odeiv2_evolve: Evolution. (line 11)
* gsl_odeiv2_step: Stepping Functions. (line 12)
* gsl_odeiv2_step_type: Stepping Functions. (line 12)
* gsl_odeiv2_system: Defining the ODE System.
(line 15)
* gsl_permutation: The Permutation struct.
(line 6)
* gsl_poly_complex_workspace: General Polynomial Equations.
(line 13)
* gsl_qrng: Quasi-random number generator initialization.
(line 8)
* gsl_qrng_type: Quasi-random number generator initialization.
(line 8)
* gsl_ran_discrete_t: General Discrete Distributions.
(line 52)
* gsl_rng: Random number generator initialization.
(line 7)
* gsl_rng_type: The Random Number Generator Interface.
(line 18)
* gsl_root_fdfsolver: Initializing the Solver.
(line 23)
* gsl_root_fdfsolver_type: Initializing the Solver.
(line 23)
* gsl_root_fsolver: Initializing the Solver.
(line 8)
* gsl_root_fsolver_type: Initializing the Solver.
(line 8)
* gsl_sf_mathieu_workspace: Mathieu Function Workspace.
(line 12)
* gsl_sf_result: The gsl_sf_result struct.
(line 6)
* gsl_sf_result_e10: The gsl_sf_result struct.
(line 6)
* gsl_siman_copy_construct_t: Simulated Annealing functions.
(line 82)
* gsl_siman_copy_t: Simulated Annealing functions.
(line 77)
* gsl_siman_destroy_t: Simulated Annealing functions.
(line 88)
* gsl_siman_Efunc_t: Simulated Annealing functions.
(line 52)
* gsl_siman_metric_t: Simulated Annealing functions.
(line 65)
* gsl_siman_params_t: Simulated Annealing functions.
(line 94)
* gsl_siman_print_t: Simulated Annealing functions.
(line 71)
* gsl_siman_step_t: Simulated Annealing functions.
(line 57)
* gsl_spline: Higher-level Interface.
(line 16)
* gsl_sum_levin_u_workspace: Acceleration functions.
(line 25)
* gsl_sum_levin_utrunc_workspace: Acceleration functions without error estimation.
(line 24)
* gsl_vector: Vectors. (line 11)
* gsl_vector_const_view: Vector views. (line 12)
* gsl_vector_view: Vector views. (line 12)
* gsl_wavelet: DWT Initialization. (line 11)
* gsl_wavelet_type: DWT Initialization. (line 6)
* gsl_wavelet_workspace: DWT Initialization. (line 53)
This is gsl-ref.info, produced by makeinfo version 4.13 from
gsl-ref.texi.
INFO-DIR-SECTION Software libraries
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref). GNU Scientific Library - Reference
END-INFO-DIR-ENTRY
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2005, 2006, 2007, 2008, 2009, 2010, 2011 The GSL Team.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License" and "Free Software
Needs Free Documentation", the Front-Cover text being "A GNU Manual",
and with the Back-Cover Text being (a) (see below). A copy of the
license is included in the section entitled "GNU Free Documentation
License".
(a) The Back-Cover Text is: "You have the freedom to copy and modify
this GNU Manual."
File: gsl-ref.info, Node: Initializing the Multidimensional Solver, Next: Providing the multidimensional system of equations to solve, Prev: Overview of Multidimensional Root Finding, Up: Multidimensional Root-Finding
35.2 Initializing the Solver
============================
The following functions initialize a multidimensional solver, either
with or without derivatives. The solver itself depends only on the
dimension of the problem and the algorithm and can be reused for
different problems.
-- Function: gsl_multiroot_fsolver * gsl_multiroot_fsolver_alloc
(const gsl_multiroot_fsolver_type * T, size_t N)
This function returns a pointer to a newly allocated instance of a
solver of type T for a system of N dimensions. For example, the
following code creates an instance of a hybrid solver, to solve a
3-dimensional system of equations.
const gsl_multiroot_fsolver_type * T
= gsl_multiroot_fsolver_hybrid;
gsl_multiroot_fsolver * s
= gsl_multiroot_fsolver_alloc (T, 3);
If there is insufficient memory to create the solver then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: gsl_multiroot_fdfsolver * gsl_multiroot_fdfsolver_alloc
(const gsl_multiroot_fdfsolver_type * T, size_t N)
This function returns a pointer to a newly allocated instance of a
derivative solver of type T for a system of N dimensions. For
example, the following code creates an instance of a
Newton-Raphson solver, for a 2-dimensional system of equations.
const gsl_multiroot_fdfsolver_type * T
= gsl_multiroot_fdfsolver_newton;
gsl_multiroot_fdfsolver * s =
gsl_multiroot_fdfsolver_alloc (T, 2);
If there is insufficient memory to create the solver then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: int gsl_multiroot_fsolver_set (gsl_multiroot_fsolver * S,
gsl_multiroot_function * F, const gsl_vector * X)
-- Function: int gsl_multiroot_fdfsolver_set (gsl_multiroot_fdfsolver
* S, gsl_multiroot_function_fdf * FDF, const gsl_vector * X)
These functions set, or reset, an existing solver S to use the
function F or function and derivative FDF, and the initial guess
X. Note that the initial position is copied from X; this argument
is not modified by subsequent iterations.
-- Function: void gsl_multiroot_fsolver_free (gsl_multiroot_fsolver *
S)
-- Function: void gsl_multiroot_fdfsolver_free
(gsl_multiroot_fdfsolver * S)
These functions free all the memory associated with the solver S.
-- Function: const char * gsl_multiroot_fsolver_name (const
gsl_multiroot_fsolver * S)
-- Function: const char * gsl_multiroot_fdfsolver_name (const
gsl_multiroot_fdfsolver * S)
These functions return a pointer to the name of the solver. For
example,
printf ("s is a '%s' solver\n",
gsl_multiroot_fdfsolver_name (s));
would print something like `s is a 'newton' solver'.
File: gsl-ref.info, Node: Providing the multidimensional system of equations to solve, Next: Iteration of the multidimensional solver, Prev: Initializing the Multidimensional Solver, Up: Multidimensional Root-Finding
35.3 Providing the function to solve
====================================
You must provide n functions of n variables for the root finders to
operate on. In order to allow for general parameters the functions are
defined by the following data types:
-- Data Type: gsl_multiroot_function
This data type defines a general system of functions with
parameters.
`int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
this function should store the vector result f(x,params) in F
for argument X and parameters PARAMS, returning an
appropriate error code if the function cannot be computed.
`size_t n'
the dimension of the system, i.e. the number of components of
the vectors X and F.
`void * params'
a pointer to the parameters of the function.
Here is an example using Powell's test function,
f_1(x) = A x_0 x_1 - 1,
f_2(x) = exp(-x_0) + exp(-x_1) - (1 + 1/A)
with A = 10^4. The following code defines a `gsl_multiroot_function'
system `F' which you could pass to a solver:
struct powell_params { double A; };
int
powell (const gsl_vector * x, void * p, gsl_vector * f) {
struct powell_params * params
= (struct powell_params *) p;
const double A = (params->A);
const double x0 = gsl_vector_get(x,0);
const double x1 = gsl_vector_get(x,1);
gsl_vector_set (f, 0, A * x0 * x1 - 1);
gsl_vector_set (f, 1, (exp(-x0) + exp(-x1)
- (1.0 + 1.0/A)));
return GSL_SUCCESS;
}
gsl_multiroot_function F;
struct powell_params params = { 10000.0 };
F.f = &powell;
F.n = 2;
F.params = &params;
-- Data Type: gsl_multiroot_function_fdf
This data type defines a general system of functions with
parameters and the corresponding Jacobian matrix of derivatives,
`int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
this function should store the vector result f(x,params) in F
for argument X and parameters PARAMS, returning an
appropriate error code if the function cannot be computed.
`int (* df) (const gsl_vector * X, void * PARAMS, gsl_matrix * J)'
this function should store the N-by-N matrix result J_ij = d
f_i(x,params) / d x_j in J for argument X and parameters
PARAMS, returning an appropriate error code if the function
cannot be computed.
`int (* fdf) (const gsl_vector * X, void * PARAMS, gsl_vector * F, gsl_matrix * J)'
This function should set the values of the F and J as above,
for arguments X and parameters PARAMS. This function
provides an optimization of the separate functions for f(x)
and J(x)--it is always faster to compute the function and its
derivative at the same time.
`size_t n'
the dimension of the system, i.e. the number of components of
the vectors X and F.
`void * params'
a pointer to the parameters of the function.
The example of Powell's test function defined above can be extended to
include analytic derivatives using the following code,
int
powell_df (const gsl_vector * x, void * p, gsl_matrix * J)
{
struct powell_params * params
= (struct powell_params *) p;
const double A = (params->A);
const double x0 = gsl_vector_get(x,0);
const double x1 = gsl_vector_get(x,1);
gsl_matrix_set (J, 0, 0, A * x1);
gsl_matrix_set (J, 0, 1, A * x0);
gsl_matrix_set (J, 1, 0, -exp(-x0));
gsl_matrix_set (J, 1, 1, -exp(-x1));
return GSL_SUCCESS;
}
int
powell_fdf (const gsl_vector * x, void * p,
gsl_vector * f, gsl_matrix * J) {
struct powell_params * params
= (struct powell_params *) p;
const double A = (params->A);
const double x0 = gsl_vector_get(x,0);
const double x1 = gsl_vector_get(x,1);
const double u0 = exp(-x0);
const double u1 = exp(-x1);
gsl_vector_set (f, 0, A * x0 * x1 - 1);
gsl_vector_set (f, 1, u0 + u1 - (1 + 1/A));
gsl_matrix_set (J, 0, 0, A * x1);
gsl_matrix_set (J, 0, 1, A * x0);
gsl_matrix_set (J, 1, 0, -u0);
gsl_matrix_set (J, 1, 1, -u1);
return GSL_SUCCESS;
}
gsl_multiroot_function_fdf FDF;
struct powell_params params = { 10000.0 };
FDF.f = &powell;
FDF.df = &powell_df;
FDF.fdf = &powell_fdf;
FDF.n = 2;
FDF.params = &params;
Note that the function `powell_fdf' is able to reuse existing terms
from the function when calculating the Jacobian, thus saving time.
File: gsl-ref.info, Node: Iteration of the multidimensional solver, Next: Search Stopping Parameters for the multidimensional solver, Prev: Providing the multidimensional system of equations to solve, Up: Multidimensional Root-Finding
35.4 Iteration
==============
The following functions drive the iteration of each algorithm. Each
function performs one iteration to update the state of any solver of the
corresponding type. The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.
-- Function: int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver
* S)
-- Function: int gsl_multiroot_fdfsolver_iterate
(gsl_multiroot_fdfsolver * S)
These functions perform a single iteration of the solver S. If the
iteration encounters an unexpected problem then an error code will
be returned,
`GSL_EBADFUNC'
the iteration encountered a singular point where the function
or its derivative evaluated to `Inf' or `NaN'.
`GSL_ENOPROG'
the iteration is not making any progress, preventing the
algorithm from continuing.
The solver maintains a current best estimate of the root `s->x' and
its function value `s->f' at all times. This information can be
accessed with the following auxiliary functions,
-- Function: gsl_vector * gsl_multiroot_fsolver_root (const
gsl_multiroot_fsolver * S)
-- Function: gsl_vector * gsl_multiroot_fdfsolver_root (const
gsl_multiroot_fdfsolver * S)
These functions return the current estimate of the root for the
solver S, given by `s->x'.
-- Function: gsl_vector * gsl_multiroot_fsolver_f (const
gsl_multiroot_fsolver * S)
-- Function: gsl_vector * gsl_multiroot_fdfsolver_f (const
gsl_multiroot_fdfsolver * S)
These functions return the function value f(x) at the current
estimate of the root for the solver S, given by `s->f'.
-- Function: gsl_vector * gsl_multiroot_fsolver_dx (const
gsl_multiroot_fsolver * S)
-- Function: gsl_vector * gsl_multiroot_fdfsolver_dx (const
gsl_multiroot_fdfsolver * S)
These functions return the last step dx taken by the solver S,
given by `s->dx'.
File: gsl-ref.info, Node: Search Stopping Parameters for the multidimensional solver, Next: Algorithms using Derivatives, Prev: Iteration of the multidimensional solver, Up: Multidimensional Root-Finding
35.5 Search Stopping Parameters
===============================
A root finding procedure should stop when one of the following
conditions is true:
* A multidimensional root has been found to within the
user-specified precision.
* A user-specified maximum number of iterations has been reached.
* An error has occurred.
The handling of these conditions is under user control. The functions
below allow the user to test the precision of the current result in
several standard ways.
-- Function: int gsl_multiroot_test_delta (const gsl_vector * DX,
const gsl_vector * X, double EPSABS, double EPSREL)
This function tests for the convergence of the sequence by
comparing the last step DX with the absolute error EPSABS and
relative error EPSREL to the current position X. The test returns
`GSL_SUCCESS' if the following condition is achieved,
|dx_i| < epsabs + epsrel |x_i|
for each component of X and returns `GSL_CONTINUE' otherwise.
-- Function: int gsl_multiroot_test_residual (const gsl_vector * F,
double EPSABS)
This function tests the residual value F against the absolute
error bound EPSABS. The test returns `GSL_SUCCESS' if the
following condition is achieved,
\sum_i |f_i| < epsabs
and returns `GSL_CONTINUE' otherwise. This criterion is suitable
for situations where the precise location of the root, x, is
unimportant provided a value can be found where the residual is
small enough.
File: gsl-ref.info, Node: Algorithms using Derivatives, Next: Algorithms without Derivatives, Prev: Search Stopping Parameters for the multidimensional solver, Up: Multidimensional Root-Finding
35.6 Algorithms using Derivatives
=================================
The root finding algorithms described in this section make use of both
the function and its derivative. They require an initial guess for the
location of the root, but there is no absolute guarantee of
convergence--the function must be suitable for this technique and the
initial guess must be sufficiently close to the root for it to work.
When the conditions are satisfied then convergence is quadratic.
-- Derivative Solver: gsl_multiroot_fdfsolver_hybridsj
This is a modified version of Powell's Hybrid method as
implemented in the HYBRJ algorithm in MINPACK. Minpack was
written by Jorge J. More', Burton S. Garbow and Kenneth E.
Hillstrom. The Hybrid algorithm retains the fast convergence of
Newton's method but will also reduce the residual when Newton's
method is unreliable.
The algorithm uses a generalized trust region to keep each step
under control. In order to be accepted a proposed new position x'
must satisfy the condition |D (x' - x)| < \delta, where D is a
diagonal scaling matrix and \delta is the size of the trust
region. The components of D are computed internally, using the
column norms of the Jacobian to estimate the sensitivity of the
residual to each component of x. This improves the behavior of the
algorithm for badly scaled functions.
On each iteration the algorithm first determines the standard
Newton step by solving the system J dx = - f. If this step falls
inside the trust region it is used as a trial step in the next
stage. If not, the algorithm uses the linear combination of the
Newton and gradient directions which is predicted to minimize the
norm of the function while staying inside the trust region,
dx = - \alpha J^{-1} f(x) - \beta \nabla |f(x)|^2.
This combination of Newton and gradient directions is referred to
as a "dogleg step".
The proposed step is now tested by evaluating the function at the
resulting point, x'. If the step reduces the norm of the function
sufficiently then it is accepted and size of the trust region is
increased. If the proposed step fails to improve the solution
then the size of the trust region is decreased and another trial
step is computed.
The speed of the algorithm is increased by computing the changes
to the Jacobian approximately, using a rank-1 update. If two
successive attempts fail to reduce the residual then the full
Jacobian is recomputed. The algorithm also monitors the progress
of the solution and returns an error if several steps fail to make
any improvement,
`GSL_ENOPROG'
the iteration is not making any progress, preventing the
algorithm from continuing.
`GSL_ENOPROGJ'
re-evaluations of the Jacobian indicate that the iteration is
not making any progress, preventing the algorithm from
continuing.
-- Derivative Solver: gsl_multiroot_fdfsolver_hybridj
This algorithm is an unscaled version of `hybridsj'. The steps are
controlled by a spherical trust region |x' - x| < \delta, instead
of a generalized region. This can be useful if the generalized
region estimated by `hybridsj' is inappropriate.
-- Derivative Solver: gsl_multiroot_fdfsolver_newton
Newton's Method is the standard root-polishing algorithm. The
algorithm begins with an initial guess for the location of the
solution. On each iteration a linear approximation to the
function F is used to estimate the step which will zero all the
components of the residual. The iteration is defined by the
following sequence,
x -> x' = x - J^{-1} f(x)
where the Jacobian matrix J is computed from the derivative
functions provided by F. The step dx is obtained by solving the
linear system,
J dx = - f(x)
using LU decomposition. If the Jacobian matrix is singular, an
error code of `GSL_EDOM' is returned.
-- Derivative Solver: gsl_multiroot_fdfsolver_gnewton
This is a modified version of Newton's method which attempts to
improve global convergence by requiring every step to reduce the
Euclidean norm of the residual, |f(x)|. If the Newton step leads
to an increase in the norm then a reduced step of relative size,
t = (\sqrt(1 + 6 r) - 1) / (3 r)
is proposed, with r being the ratio of norms |f(x')|^2/|f(x)|^2.
This procedure is repeated until a suitable step size is found.
File: gsl-ref.info, Node: Algorithms without Derivatives, Next: Example programs for Multidimensional Root finding, Prev: Algorithms using Derivatives, Up: Multidimensional Root-Finding
35.7 Algorithms without Derivatives
===================================
The algorithms described in this section do not require any derivative
information to be supplied by the user. Any derivatives needed are
approximated by finite differences. Note that if the
finite-differencing step size chosen by these routines is inappropriate,
an explicit user-supplied numerical derivative can always be used with
the algorithms described in the previous section.
-- Solver: gsl_multiroot_fsolver_hybrids
This is a version of the Hybrid algorithm which replaces calls to
the Jacobian function by its finite difference approximation. The
finite difference approximation is computed using
`gsl_multiroots_fdjac' with a relative step size of
`GSL_SQRT_DBL_EPSILON'. Note that this step size will not be
suitable for all problems.
-- Solver: gsl_multiroot_fsolver_hybrid
This is a finite difference version of the Hybrid algorithm without
internal scaling.
-- Solver: gsl_multiroot_fsolver_dnewton
The "discrete Newton algorithm" is the simplest method of solving a
multidimensional system. It uses the Newton iteration
x -> x - J^{-1} f(x)
where the Jacobian matrix J is approximated by taking finite
differences of the function F. The approximation scheme used by
this implementation is,
J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j
where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon
being the machine precision (\epsilon \approx 2.22 \times 10^-16).
The order of convergence of Newton's algorithm is quadratic, but
the finite differences require n^2 function evaluations on each
iteration. The algorithm may become unstable if the finite
differences are not a good approximation to the true derivatives.
-- Solver: gsl_multiroot_fsolver_broyden
The "Broyden algorithm" is a version of the discrete Newton
algorithm which attempts to avoid the expensive update of the
Jacobian matrix on each iteration. The changes to the Jacobian
are also approximated, using a rank-1 update,
J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / dx^T J^{-1} df
where the vectors dx and df are the changes in x and f. On the
first iteration the inverse Jacobian is estimated using finite
differences, as in the discrete Newton algorithm.
This approximation gives a fast update but is unreliable if the
changes are not small, and the estimate of the inverse Jacobian
becomes worse as time passes. The algorithm has a tendency to
become unstable unless it starts close to the root. The Jacobian
is refreshed if this instability is detected (consult the source
for details).
This algorithm is included only for demonstration purposes, and is
not recommended for serious use.
File: gsl-ref.info, Node: Example programs for Multidimensional Root finding, Next: References and Further Reading for Multidimensional Root Finding, Prev: Algorithms without Derivatives, Up: Multidimensional Root-Finding
35.8 Examples
=============
The multidimensional solvers are used in a similar way to the
one-dimensional root finding algorithms. This first example
demonstrates the `hybrids' scaled-hybrid algorithm, which does not
require derivatives. The program solves the Rosenbrock system of
equations,
f_1 (x, y) = a (1 - x)
f_2 (x, y) = b (y - x^2)
with a = 1, b = 10. The solution of this system lies at (x,y) = (1,1)
in a narrow valley.
The first stage of the program is to define the system of equations,
#include <stdlib.h>
#include <stdio.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multiroots.h>
int print_state (size_t iter, gsl_multiroot_fsolver * s);
struct rparams
{
double a;
double b;
};
int
rosenbrock_f (const gsl_vector * x, void *params,
gsl_vector * f)
{
double a = ((struct rparams *) params)->a;
double b = ((struct rparams *) params)->b;
const double x0 = gsl_vector_get (x, 0);
const double x1 = gsl_vector_get (x, 1);
const double y0 = a * (1 - x0);
const double y1 = b * (x1 - x0 * x0);
gsl_vector_set (f, 0, y0);
gsl_vector_set (f, 1, y1);
return GSL_SUCCESS;
}
The main program begins by creating the function object `f', with the
arguments `(x,y)' and parameters `(a,b)'. The solver `s' is initialized
to use this function, with the `hybrids' method.
int
main (void)
{
const gsl_multiroot_fsolver_type *T;
gsl_multiroot_fsolver *s;
int status;
size_t iter = 0;
const size_t n = 2;
struct rparams p = {1.0, 10.0};
gsl_multiroot_function f = {&rosenbrock_f, n, &p};
double x_init[2] = {-10.0, -5.0};
gsl_vector *x = gsl_vector_alloc (n);
gsl_vector_set (x, 0, x_init[0]);
gsl_vector_set (x, 1, x_init[1]);
T = gsl_multiroot_fsolver_hybrids;
s = gsl_multiroot_fsolver_alloc (T, 2);
gsl_multiroot_fsolver_set (s, &f, x);
print_state (iter, s);
do
{
iter++;
status = gsl_multiroot_fsolver_iterate (s);
print_state (iter, s);
if (status) /* check if solver is stuck */
break;
status =
gsl_multiroot_test_residual (s->f, 1e-7);
}
while (status == GSL_CONTINUE && iter < 1000);
printf ("status = %s\n", gsl_strerror (status));
gsl_multiroot_fsolver_free (s);
gsl_vector_free (x);
return 0;
}
Note that it is important to check the return status of each solver
step, in case the algorithm becomes stuck. If an error condition is
detected, indicating that the algorithm cannot proceed, then the error
can be reported to the user, a new starting point chosen or a different
algorithm used.
The intermediate state of the solution is displayed by the following
function. The solver state contains the vector `s->x' which is the
current position, and the vector `s->f' with corresponding function
values.
int
print_state (size_t iter, gsl_multiroot_fsolver * s)
{
printf ("iter = %3lu x = % .3f % .3f "
"f(x) = % .3e % .3e\n",
(unsigned long) iter,
gsl_vector_get (s->x, 0),
gsl_vector_get (s->x, 1),
gsl_vector_get (s->f, 0),
gsl_vector_get (s->f, 1));
return 0;
}
Here are the results of running the program. The algorithm is started at
(-10,-5) far from the solution. Since the solution is hidden in a
narrow valley the earliest steps follow the gradient of the function
downhill, in an attempt to reduce the large value of the residual. Once
the root has been approximately located, on iteration 8, the Newton
behavior takes over and convergence is very rapid.
iter = 0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
iter = 1 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
iter = 2 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
iter = 3 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
iter = 4 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01
iter = 5 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01
iter = 6 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01
iter = 7 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00
iter = 8 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00
iter = 9 x = 1.000 0.878 f(x) = 1.268e-10 -1.218e+00
iter = 10 x = 1.000 0.989 f(x) = 1.124e-11 -1.080e-01
iter = 11 x = 1.000 1.000 f(x) = 0.000e+00 0.000e+00
status = success
Note that the algorithm does not update the location on every
iteration. Some iterations are used to adjust the trust-region
parameter, after trying a step which was found to be divergent, or to
recompute the Jacobian, when poor convergence behavior is detected.
The next example program adds derivative information, in order to
accelerate the solution. There are two derivative functions
`rosenbrock_df' and `rosenbrock_fdf'. The latter computes both the
function and its derivative simultaneously. This allows the
optimization of any common terms. For simplicity we substitute calls to
the separate `f' and `df' functions at this point in the code below.
int
rosenbrock_df (const gsl_vector * x, void *params,
gsl_matrix * J)
{
const double a = ((struct rparams *) params)->a;
const double b = ((struct rparams *) params)->b;
const double x0 = gsl_vector_get (x, 0);
const double df00 = -a;
const double df01 = 0;
const double df10 = -2 * b * x0;
const double df11 = b;
gsl_matrix_set (J, 0, 0, df00);
gsl_matrix_set (J, 0, 1, df01);
gsl_matrix_set (J, 1, 0, df10);
gsl_matrix_set (J, 1, 1, df11);
return GSL_SUCCESS;
}
int
rosenbrock_fdf (const gsl_vector * x, void *params,
gsl_vector * f, gsl_matrix * J)
{
rosenbrock_f (x, params, f);
rosenbrock_df (x, params, J);
return GSL_SUCCESS;
}
The main program now makes calls to the corresponding `fdfsolver'
versions of the functions,
int
main (void)
{
const gsl_multiroot_fdfsolver_type *T;
gsl_multiroot_fdfsolver *s;
int status;
size_t iter = 0;
const size_t n = 2;
struct rparams p = {1.0, 10.0};
gsl_multiroot_function_fdf f = {&rosenbrock_f,
&rosenbrock_df,
&rosenbrock_fdf,
n, &p};
double x_init[2] = {-10.0, -5.0};
gsl_vector *x = gsl_vector_alloc (n);
gsl_vector_set (x, 0, x_init[0]);
gsl_vector_set (x, 1, x_init[1]);
T = gsl_multiroot_fdfsolver_gnewton;
s = gsl_multiroot_fdfsolver_alloc (T, n);
gsl_multiroot_fdfsolver_set (s, &f, x);
print_state (iter, s);
do
{
iter++;
status = gsl_multiroot_fdfsolver_iterate (s);
print_state (iter, s);
if (status)
break;
status = gsl_multiroot_test_residual (s->f, 1e-7);
}
while (status == GSL_CONTINUE && iter < 1000);
printf ("status = %s\n", gsl_strerror (status));
gsl_multiroot_fdfsolver_free (s);
gsl_vector_free (x);
return 0;
}
The addition of derivative information to the `hybrids' solver does not
make any significant difference to its behavior, since it is able to
approximate the Jacobian numerically with sufficient accuracy. To
illustrate the behavior of a different derivative solver we switch to
`gnewton'. This is a traditional Newton solver with the constraint that
it scales back its step if the full step would lead "uphill". Here is
the output for the `gnewton' algorithm,
iter = 0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03
iter = 1 x = -4.231 -65.317 f(x) = 5.231e+00 -8.321e+02
iter = 2 x = 1.000 -26.358 f(x) = -8.882e-16 -2.736e+02
iter = 3 x = 1.000 1.000 f(x) = -2.220e-16 -4.441e-15
status = success
The convergence is much more rapid, but the solver takes a wide
excursion out to the point (-4.23,-65.3). This could cause the algorithm to go astray in
a realistic application. The hybrid algorithm follows the downhill
path to the solution more reliably.
File: gsl-ref.info, Node: References and Further Reading for Multidimensional Root Finding, Prev: Example programs for Multidimensional Root finding, Up: Multidimensional Root-Finding
35.9 References and Further Reading
===================================
The original version of the Hybrid method is described in the following
articles by Powell,
M.J.D. Powell, "A Hybrid Method for Nonlinear Equations" (Chap 6, p
87-114) and "A Fortran Subroutine for Solving systems of Nonlinear
Algebraic Equations" (Chap 7, p 115-161), in `Numerical Methods for
Nonlinear Algebraic Equations', P. Rabinowitz, editor. Gordon and
Breach, 1970.
The following papers are also relevant to the algorithms described in
this section,
J.J. More', M.Y. Cosnard, "Numerical Solution of Nonlinear
Equations", `ACM Transactions on Mathematical Software', Vol 5, No
1, (1979), p 64-85
C.G. Broyden, "A Class of Methods for Solving Nonlinear
Simultaneous Equations", `Mathematics of Computation', Vol 19
(1965), p 577-593
J.J. More', B.S. Garbow, K.E. Hillstrom, "Testing Unconstrained
Optimization Software", ACM Transactions on Mathematical Software,
Vol 7, No 1 (1981), p 17-41
File: gsl-ref.info, Node: Multidimensional Minimization, Next: Least-Squares Fitting, Prev: Multidimensional Root-Finding, Up: Top
36 Multidimensional Minimization
********************************
This chapter describes routines for finding minima of arbitrary
multidimensional functions. The library provides low level components
for a variety of iterative minimizers and convergence tests. These can
be combined by the user to achieve the desired solution, while providing
full access to the intermediate steps of the algorithms. Each class of
methods uses the same framework, so that you can switch between
minimizers at runtime without needing to recompile your program. Each
instance of a minimizer keeps track of its own state, allowing the
minimizers to be used in multi-threaded programs. The minimization
algorithms can be used to maximize a function by inverting its sign.
The header file `gsl_multimin.h' contains prototypes for the
minimization functions and related declarations.
* Menu:
* Multimin Overview::
* Multimin Caveats::
* Initializing the Multidimensional Minimizer::
* Providing a function to minimize::
* Multimin Iteration::
* Multimin Stopping Criteria::
* Multimin Algorithms with Derivatives::
* Multimin Algorithms without Derivatives::
* Multimin Examples::
* Multimin References and Further Reading::
File: gsl-ref.info, Node: Multimin Overview, Next: Multimin Caveats, Up: Multidimensional Minimization
36.1 Overview
=============
The problem of multidimensional minimization requires finding a point x
such that the scalar function,
f(x_1, ..., x_n)
takes a value which is lower than at any neighboring point. For smooth
functions the gradient g = \nabla f vanishes at the minimum. In general
there are no bracketing methods available for the minimization of
n-dimensional functions. The algorithms proceed from an initial guess
using a search algorithm which attempts to move in a downhill direction.
Algorithms making use of the gradient of the function perform a
one-dimensional line minimisation along this direction until the lowest
point is found to a suitable tolerance. The search direction is then
updated with local information from the function and its derivatives,
and the whole process repeated until the true n-dimensional minimum is
found.
Algorithms which do not require the gradient of the function use
different strategies. For example, the Nelder-Mead Simplex algorithm
maintains n+1 trial parameter vectors as the vertices of an
n-dimensional simplex. On each iteration it tries to improve the worst
vertex of the simplex by geometrical transformations. The iterations
are continued until the overall size of the simplex has decreased
sufficiently.
Both types of algorithms use a standard framework. The user provides
a high-level driver for the algorithms, and the library provides the
individual functions necessary for each of the steps. There are three
main phases of the iteration. The steps are,
* initialize minimizer state, S, for algorithm T
* update S using the iteration T
* test S for convergence, and repeat iteration if necessary
Each iteration step consists either of an improvement to the
line-minimisation in the current direction or an update to the search
direction itself. The state for the minimizers is held in a
`gsl_multimin_fdfminimizer' struct or a `gsl_multimin_fminimizer'
struct.
File: gsl-ref.info, Node: Multimin Caveats, Next: Initializing the Multidimensional Minimizer, Prev: Multimin Overview, Up: Multidimensional Minimization
36.2 Caveats
============
Note that the minimization algorithms can only search for one local
minimum at a time. When there are several local minima in the search
area, the first minimum to be found will be returned; however it is
difficult to predict which of the minima this will be. In most cases,
no error will be reported if you try to find a local minimum in an area
where there is more than one.
It is also important to note that the minimization algorithms find
local minima; there is no way to determine whether a minimum is a global
minimum of the function in question.
File: gsl-ref.info, Node: Initializing the Multidimensional Minimizer, Next: Providing a function to minimize, Prev: Multimin Caveats, Up: Multidimensional Minimization
36.3 Initializing the Multidimensional Minimizer
================================================
The following function initializes a multidimensional minimizer. The
minimizer itself depends only on the dimension of the problem and the
algorithm and can be reused for different problems.
-- Function: gsl_multimin_fdfminimizer *
gsl_multimin_fdfminimizer_alloc (const gsl_multimin_fdfminimizer_type *
T, size_t N)
-- Function: gsl_multimin_fminimizer * gsl_multimin_fminimizer_alloc
(const gsl_multimin_fminimizer_type * T, size_t N)
This function returns a pointer to a newly allocated instance of a
minimizer of type T for an N-dimensional function. If there is
insufficient memory to create the minimizer then the function
returns a null pointer and the error handler is invoked with an
error code of `GSL_ENOMEM'.
-- Function: int gsl_multimin_fdfminimizer_set
(gsl_multimin_fdfminimizer * S, gsl_multimin_function_fdf *
FDF, const gsl_vector * X, double STEP_SIZE, double TOL)
This function initializes the minimizer S to minimize the function
FDF starting from the initial point X. The size of the first
trial step is given by STEP_SIZE. The accuracy of the line
minimization is specified by TOL. The precise meaning of this
parameter depends on the method used. Typically the line
minimization is considered successful if the gradient of the
function g is orthogonal to the current search direction p to a
relative accuracy of TOL, where dot(p,g) < tol |p| |g|. A TOL
value of 0.1 is suitable for most purposes, since line
minimization only needs to be carried out approximately. Note
that setting TOL to zero will force the use of "exact"
line-searches, which are extremely expensive.
-- Function: int gsl_multimin_fminimizer_set (gsl_multimin_fminimizer
* S, gsl_multimin_function * F, const gsl_vector * X, const
gsl_vector * STEP_SIZE)
This function initializes the minimizer S to minimize the function
F, starting from the initial point X. The size of the initial
trial steps is given in vector STEP_SIZE. The precise meaning of
this parameter depends on the method used.
-- Function: void gsl_multimin_fdfminimizer_free
(gsl_multimin_fdfminimizer * S)
-- Function: void gsl_multimin_fminimizer_free
(gsl_multimin_fminimizer * S)
These functions free all the memory associated with the minimizer S.
-- Function: const char * gsl_multimin_fdfminimizer_name (const
gsl_multimin_fdfminimizer * S)
-- Function: const char * gsl_multimin_fminimizer_name (const
gsl_multimin_fminimizer * S)
These functions return a pointer to the name of the minimizer. For
example,
printf ("s is a '%s' minimizer\n",
        gsl_multimin_fdfminimizer_name (s));
would print something like `s is a 'conjugate_pr' minimizer'.
File: gsl-ref.info, Node: Providing a function to minimize, Next: Multimin Iteration, Prev: Initializing the Multidimensional Minimizer, Up: Multidimensional Minimization
36.4 Providing a function to minimize
=====================================
You must provide a parametric function of n variables for the
minimizers to operate on. You may also need to provide a routine which
calculates the gradient of the function and a third routine which
calculates both the function value and the gradient together. In order
to allow for general parameters the functions are defined by the
following data types:
-- Data Type: gsl_multimin_function_fdf
This data type defines a general function of n variables with
parameters and the corresponding gradient vector of derivatives,
`double (* f) (const gsl_vector * X, void * PARAMS)'
this function should return the result f(x,params) for
argument X and parameters PARAMS. If the function cannot be
computed, an error value of `GSL_NAN' should be returned.
`void (* df) (const gsl_vector * X, void * PARAMS, gsl_vector * G)'
this function should store the N-dimensional gradient g_i = d
f(x,params) / d x_i in the vector G for argument X and
parameters PARAMS, returning an appropriate error code if the
function cannot be computed.
`void (* fdf) (const gsl_vector * X, void * PARAMS, double * f, gsl_vector * G)'
This function should set the values of F and G as above, for
arguments X and parameters PARAMS. This function provides an
optimization over calling the separate functions for f(x) and
g(x)--it is always faster to compute the function and its
derivative at the same time.
`size_t n'
the dimension of the system, i.e. the number of components of
the vectors X.
`void * params'
a pointer to the parameters of the function.
-- Data Type: gsl_multimin_function
This data type defines a general function of n variables with
parameters,
`double (* f) (const gsl_vector * X, void * PARAMS)'
this function should return the result f(x,params) for
argument X and parameters PARAMS. If the function cannot be
computed, an error value of `GSL_NAN' should be returned.
`size_t n'
the dimension of the system, i.e. the number of components of
the vectors X.
`void * params'
a pointer to the parameters of the function.
The following example function defines a simple two-dimensional
paraboloid with five parameters,
/* Paraboloid centered on (p[0],p[1]), with
   scale factors (p[2],p[3]) and minimum p[4] */
double
my_f (const gsl_vector *v, void *params)
{
  double x, y;
  double *p = (double *)params;

  x = gsl_vector_get(v, 0);
  y = gsl_vector_get(v, 1);

  return p[2] * (x - p[0]) * (x - p[0]) +
         p[3] * (y - p[1]) * (y - p[1]) + p[4];
}

/* The gradient of f, df = (df/dx, df/dy). */
void
my_df (const gsl_vector *v, void *params,
       gsl_vector *df)
{
  double x, y;
  double *p = (double *)params;

  x = gsl_vector_get(v, 0);
  y = gsl_vector_get(v, 1);

  gsl_vector_set(df, 0, 2.0 * p[2] * (x - p[0]));
  gsl_vector_set(df, 1, 2.0 * p[3] * (y - p[1]));
}

/* Compute both f and df together. */
void
my_fdf (const gsl_vector *x, void *params,
        double *f, gsl_vector *df)
{
  *f = my_f(x, params);
  my_df(x, params, df);
}
The function can be initialized using the following code,
gsl_multimin_function_fdf my_func;

/* Paraboloid center at (1,2), scale factors (10, 20),
   minimum value 30 */
double p[5] = { 1.0, 2.0, 10.0, 20.0, 30.0 };

my_func.n = 2;  /* number of function components */
my_func.f = &my_f;
my_func.df = &my_df;
my_func.fdf = &my_fdf;
my_func.params = (void *)p;
File: gsl-ref.info, Node: Multimin Iteration, Next: Multimin Stopping Criteria, Prev: Providing a function to minimize, Up: Multidimensional Minimization
36.5 Iteration
==============
The following function drives the iteration of each algorithm. The
function performs one iteration to update the state of the minimizer.
The same function works for all minimizers so that different methods can
be substituted at runtime without modifications to the code.
-- Function: int gsl_multimin_fdfminimizer_iterate
(gsl_multimin_fdfminimizer * S)
-- Function: int gsl_multimin_fminimizer_iterate
(gsl_multimin_fminimizer * S)
These functions perform a single iteration of the minimizer S. If
the iteration encounters an unexpected problem then an error code
will be returned. The error code `GSL_ENOPROG' signifies that the
minimizer is unable to improve on its current estimate, either due
to numerical difficulty or because a genuine local minimum has been
reached.
The minimizer maintains a current best estimate of the minimum at all
times. This information can be accessed with the following auxiliary
functions,
-- Function: gsl_vector * gsl_multimin_fdfminimizer_x (const
gsl_multimin_fdfminimizer * S)
-- Function: gsl_vector * gsl_multimin_fminimizer_x (const
gsl_multimin_fminimizer * S)
-- Function: double gsl_multimin_fdfminimizer_minimum (const
gsl_multimin_fdfminimizer * S)
-- Function: double gsl_multimin_fminimizer_minimum (const
gsl_multimin_fminimizer * S)
-- Function: gsl_vector * gsl_multimin_fdfminimizer_gradient (const
gsl_multimin_fdfminimizer * S)
-- Function: double gsl_multimin_fminimizer_size (const
gsl_multimin_fminimizer * S)
These functions return the current best estimate of the location
of the minimum, the value of the function at that point, its
gradient, and the minimizer-specific characteristic size for the
minimizer S.
-- Function: int gsl_multimin_fdfminimizer_restart
(gsl_multimin_fdfminimizer * S)
This function resets the minimizer S to use the current point as a
new starting point.
File: gsl-ref.info, Node: Multimin Stopping Criteria, Next: Multimin Algorithms with Derivatives, Prev: Multimin Iteration, Up: Multidimensional Minimization
36.6 Stopping Criteria
======================
A minimization procedure should stop when one of the following
conditions is true:
* A minimum has been found to within the user-specified precision.
* A user-specified maximum number of iterations has been reached.
* An error has occurred.
The handling of these conditions is under user control. The functions
below allow the user to test the precision of the current result.
-- Function: int gsl_multimin_test_gradient (const gsl_vector * G,
double EPSABS)
This function tests the norm of the gradient G against the
absolute tolerance EPSABS. The gradient of a multidimensional
function goes to zero at a minimum. The test returns `GSL_SUCCESS'
if the following condition is achieved,
|g| < epsabs
and returns `GSL_CONTINUE' otherwise. A suitable choice of EPSABS
can be made from the desired accuracy in the function for small
variations in x. The relationship between these quantities is
given by \delta f = g \delta x.
-- Function: int gsl_multimin_test_size (const double SIZE, double
EPSABS)
This function tests the minimizer-specific characteristic size (if
applicable to the minimizer in use) against the absolute tolerance
EPSABS. The test returns `GSL_SUCCESS' if the size is smaller
than the tolerance, otherwise `GSL_CONTINUE' is returned.
File: gsl-ref.info, Node: Multimin Algorithms with Derivatives, Next: Multimin Algorithms without Derivatives, Prev: Multimin Stopping Criteria, Up: Multidimensional Minimization
36.7 Algorithms with Derivatives
================================
There are several minimization methods available. The best choice of
algorithm depends on the problem. The algorithms described in this
section use the value of the function and its gradient at each
evaluation point.
-- Minimizer: gsl_multimin_fdfminimizer_conjugate_fr
This is the Fletcher-Reeves conjugate gradient algorithm. The
conjugate gradient algorithm proceeds as a succession of line
minimizations. The sequence of search directions is used to build
up an approximation to the curvature of the function in the
neighborhood of the minimum.
An initial search direction P is chosen using the gradient, and
line minimization is carried out in that direction. The accuracy
of the line minimization is specified by the parameter TOL. The
minimum along this line occurs when the function gradient G and
the search direction P are orthogonal. The line minimization
terminates when dot(p,g) < tol |p| |g|. The search direction is
updated using the Fletcher-Reeves formula p' = g' - \beta g where
\beta=-|g'|^2/|g|^2, and the line minimization is then repeated
for the new search direction.
-- Minimizer: gsl_multimin_fdfminimizer_conjugate_pr
This is the Polak-Ribiere conjugate gradient algorithm. It is
similar to the Fletcher-Reeves method, differing only in the
choice of the coefficient \beta. Both methods work well when the
evaluation point is close enough to the minimum of the objective
function that it is well approximated by a quadratic hypersurface.
-- Minimizer: gsl_multimin_fdfminimizer_vector_bfgs2
-- Minimizer: gsl_multimin_fdfminimizer_vector_bfgs
These methods use the vector Broyden-Fletcher-Goldfarb-Shanno
(BFGS) algorithm. This is a quasi-Newton method which builds up
an approximation to the second derivatives of the function f using
the difference between successive gradient vectors. By combining
the first and second derivatives the algorithm is able to take
Newton-type steps towards the function minimum, assuming quadratic
behavior in that region.
The `bfgs2' version of this minimizer is the most efficient
version available, and is a faithful implementation of the line
minimization scheme described in Fletcher's `Practical Methods of
Optimization', Algorithms 2.6.2 and 2.6.4. It supersedes the
original `bfgs' routine and requires substantially fewer function
and gradient evaluations. The user-supplied tolerance TOL
corresponds to the parameter \sigma used by Fletcher. A value of
0.1 is recommended for typical use (larger values correspond to
less accurate line searches).
-- Minimizer: gsl_multimin_fdfminimizer_steepest_descent
The steepest descent algorithm follows the downhill gradient of the
function at each step. When a downhill step is successful the
step-size is increased by a factor of two. If the downhill step
leads to a higher function value then the algorithm backtracks and
the step size is decreased using the parameter TOL. A suitable
value of TOL for most applications is 0.1. The steepest descent
method is inefficient and is included only for demonstration
purposes.
File: gsl-ref.info, Node: Multimin Algorithms without Derivatives, Next: Multimin Examples, Prev: Multimin Algorithms with Derivatives, Up: Multidimensional Minimization
36.8 Algorithms without Derivatives
===================================
The algorithms described in this section use only the value of the
function at each evaluation point.
-- Minimizer: gsl_multimin_fminimizer_nmsimplex2
-- Minimizer: gsl_multimin_fminimizer_nmsimplex
These methods use the Simplex algorithm of Nelder and Mead.
Starting from the initial vector X = p_0, the algorithm constructs
an additional n vectors p_i using the step size vector s =
STEP_SIZE as follows:
p_0 = (x_0, x_1, ... , x_n)
p_1 = (x_0 + s_0, x_1, ... , x_n)
p_2 = (x_0, x_1 + s_1, ... , x_n)
... = ...
p_n = (x_0, x_1, ... , x_n + s_n)
These vectors form the n+1 vertices of a simplex in n dimensions.
On each iteration the algorithm uses simple geometrical
transformations to update the vector corresponding to the highest
function value. The geometric transformations are reflection,
reflection followed by expansion, contraction and multiple
contraction. Using these transformations the simplex moves through
the space towards the minimum, where it contracts itself.
After each iteration, the best vertex is returned. Note that, due
to the nature of the algorithm, not every step improves the current
best parameter vector; usually several iterations are required.
The minimizer-specific characteristic size is calculated as the
average distance from the geometrical center of the simplex to all
its vertices. This size can be used as a stopping criterion, as the
simplex contracts itself near the minimum. The size is returned by
the function `gsl_multimin_fminimizer_size'.
The `nmsimplex2' version of this minimizer is a new O(N) operations
implementation of the earlier O(N^2) operations `nmsimplex'
minimizer. It uses the same underlying algorithm, but the simplex
updates are computed more efficiently for high-dimensional
problems. In addition, the size of the simplex is calculated as the
RMS distance of each vertex from the center rather than the mean
distance, allowing a linear update of this quantity on each step.
The memory usage is O(N^2) for both algorithms.
-- Minimizer: gsl_multimin_fminimizer_nmsimplex2rand
This method is a variant of `nmsimplex2' which initialises the
simplex around the starting point X using a randomly-oriented set
of basis vectors instead of the fixed coordinate axes. The final
dimensions of the simplex are scaled along the coordinate axes by
the vector STEP_SIZE. The randomisation uses a simple
deterministic generator so that repeated calls to
`gsl_multimin_fminimizer_set' for a given solver object will vary
the orientation in a well-defined way.
File: gsl-ref.info, Node: Multimin Examples, Next: Multimin References and Further Reading, Prev: Multimin Algorithms without Derivatives, Up: Multidimensional Minimization
36.9 Examples
=============
This example program finds the minimum of the paraboloid function
defined earlier. The location of the minimum is offset from the origin
in x and y, and the function value at the minimum is non-zero. The main
program is given below; it requires the example function given earlier
in this chapter.
int
main (void)
{
  size_t iter = 0;
  int status;

  const gsl_multimin_fdfminimizer_type *T;
  gsl_multimin_fdfminimizer *s;

  /* Position of the minimum (1,2), scale factors
     10,20, height 30. */
  double par[5] = { 1.0, 2.0, 10.0, 20.0, 30.0 };

  gsl_vector *x;
  gsl_multimin_function_fdf my_func;

  my_func.n = 2;
  my_func.f = my_f;
  my_func.df = my_df;
  my_func.fdf = my_fdf;
  my_func.params = par;

  /* Starting point, x = (5,7) */
  x = gsl_vector_alloc (2);
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, 7.0);

  T = gsl_multimin_fdfminimizer_conjugate_fr;
  s = gsl_multimin_fdfminimizer_alloc (T, 2);

  gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 1e-4);

  do
    {
      iter++;
      status = gsl_multimin_fdfminimizer_iterate (s);

      if (status)
        break;

      status = gsl_multimin_test_gradient (s->gradient, 1e-3);

      if (status == GSL_SUCCESS)
        printf ("Minimum found at:\n");

      printf ("%5d %.5f %.5f %10.5f\n", (int) iter,
              gsl_vector_get (s->x, 0),
              gsl_vector_get (s->x, 1),
              s->f);
    }
  while (status == GSL_CONTINUE && iter < 100);

  gsl_multimin_fdfminimizer_free (s);
  gsl_vector_free (x);

  return 0;
}
The initial step-size is chosen as 0.01, a conservative estimate in this
case, and the line minimization parameter is set at 0.0001. The program
terminates when the norm of the gradient has been reduced below 0.001.
The output of the program is shown below,
x y f
1 4.99629 6.99072 687.84780
2 4.98886 6.97215 683.55456
3 4.97400 6.93501 675.01278
4 4.94429 6.86073 658.10798
5 4.88487 6.71217 625.01340
6 4.76602 6.41506 561.68440
7 4.52833 5.82083 446.46694
8 4.05295 4.63238 261.79422
9 3.10219 2.25548 75.49762
10 2.85185 1.62963 67.03704
11 2.19088 1.76182 45.31640
12 0.86892 2.02622 30.18555
Minimum found at:
13 1.00000 2.00000 30.00000
Note that the algorithm gradually increases the step size as it
successfully moves downhill, as can be seen by plotting the successive
points.
The conjugate gradient algorithm finds the minimum on its second
direction because the function is purely quadratic. Additional
iterations would be needed for a more complicated function.
Here is another example, using the Nelder-Mead Simplex algorithm to
minimize the same example objective function as above.
int
main (void)
{
  double par[5] = { 1.0, 2.0, 10.0, 20.0, 30.0 };

  const gsl_multimin_fminimizer_type *T =
    gsl_multimin_fminimizer_nmsimplex2;
  gsl_multimin_fminimizer *s = NULL;
  gsl_vector *ss, *x;
  gsl_multimin_function minex_func;

  size_t iter = 0;
  int status;
  double size;

  /* Starting point */
  x = gsl_vector_alloc (2);
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, 7.0);

  /* Set initial step sizes to 1 */
  ss = gsl_vector_alloc (2);
  gsl_vector_set_all (ss, 1.0);

  /* Initialize method and iterate */
  minex_func.n = 2;
  minex_func.f = my_f;
  minex_func.params = par;

  s = gsl_multimin_fminimizer_alloc (T, 2);
  gsl_multimin_fminimizer_set (s, &minex_func, x, ss);

  do
    {
      iter++;
      status = gsl_multimin_fminimizer_iterate(s);

      if (status)
        break;

      size = gsl_multimin_fminimizer_size (s);
      status = gsl_multimin_test_size (size, 1e-2);

      if (status == GSL_SUCCESS)
        {
          printf ("converged to minimum at\n");
        }

      printf ("%5d %10.3e %10.3e f() = %7.3f size = %.3f\n",
              (int) iter,
              gsl_vector_get (s->x, 0),
              gsl_vector_get (s->x, 1),
              s->fval, size);
    }
  while (status == GSL_CONTINUE && iter < 100);

  gsl_vector_free(x);
  gsl_vector_free(ss);
  gsl_multimin_fminimizer_free (s);

  return status;
}
The minimum search stops when the Simplex size drops to 0.01. The
output is shown below.
1 6.500e+00 5.000e+00 f() = 512.500 size = 1.130
2 5.250e+00 4.000e+00 f() = 290.625 size = 1.409
3 5.250e+00 4.000e+00 f() = 290.625 size = 1.409
4 5.500e+00 1.000e+00 f() = 252.500 size = 1.409
5 2.625e+00 3.500e+00 f() = 101.406 size = 1.847
6 2.625e+00 3.500e+00 f() = 101.406 size = 1.847
7 0.000e+00 3.000e+00 f() = 60.000 size = 1.847
8 2.094e+00 1.875e+00 f() = 42.275 size = 1.321
9 2.578e-01 1.906e+00 f() = 35.684 size = 1.069
10 5.879e-01 2.445e+00 f() = 35.664 size = 0.841
11 1.258e+00 2.025e+00 f() = 30.680 size = 0.476
12 1.258e+00 2.025e+00 f() = 30.680 size = 0.367
13 1.093e+00 1.849e+00 f() = 30.539 size = 0.300
14 8.830e-01 2.004e+00 f() = 30.137 size = 0.172
15 8.830e-01 2.004e+00 f() = 30.137 size = 0.126
16 9.582e-01 2.060e+00 f() = 30.090 size = 0.106
17 1.022e+00 2.004e+00 f() = 30.005 size = 0.063
18 1.022e+00 2.004e+00 f() = 30.005 size = 0.043
19 1.022e+00 2.004e+00 f() = 30.005 size = 0.043
20 1.022e+00 2.004e+00 f() = 30.005 size = 0.027
21 1.022e+00 2.004e+00 f() = 30.005 size = 0.022
22 9.920e-01 1.997e+00 f() = 30.001 size = 0.016
23 9.920e-01 1.997e+00 f() = 30.001 size = 0.013
converged to minimum at
24 9.920e-01 1.997e+00 f() = 30.001 size = 0.008
The simplex size first increases, while the simplex moves towards the
minimum. After a while the size begins to decrease as the simplex
contracts around the minimum.
File: gsl-ref.info, Node: Multimin References and Further Reading, Prev: Multimin Examples, Up: Multidimensional Minimization
36.10 References and Further Reading
====================================
The conjugate gradient and BFGS methods are described in detail in the
following book,
R. Fletcher, `Practical Methods of Optimization (Second Edition)'
Wiley (1987), ISBN 0471915475.
A brief description of multidimensional minimization algorithms and
more recent references can be found in,
C.W. Ueberhuber, `Numerical Computation (Volume 2)', Chapter 14,
Section 4.4 "Minimization Methods", p. 325-335, Springer (1997),
ISBN 3-540-62057-5.
The simplex algorithm is described in the following paper,
J.A. Nelder and R. Mead, `A simplex method for function
minimization', Computer Journal vol. 7 (1965), 308-313.
File: gsl-ref.info, Node: Least-Squares Fitting, Next: Nonlinear Least-Squares Fitting, Prev: Multidimensional Minimization, Up: Top
37 Least-Squares Fitting
************************
This chapter describes routines for performing least squares fits to
experimental data using linear combinations of functions. The data may
be weighted or unweighted, i.e. with known or unknown errors. For
weighted data the functions compute the best fit parameters and their
associated covariance matrix. For unweighted data the covariance
matrix is estimated from the scatter of the points, giving a
variance-covariance matrix.
The functions are divided into separate versions for simple one- or
two-parameter regression and multiple-parameter fits. The functions
are declared in the header file `gsl_fit.h'.
* Menu:
* Fitting Overview::
* Linear regression::
* Linear fitting without a constant term::
* Multi-parameter fitting::
* Fitting Examples::
* Fitting References and Further Reading::
File: gsl-ref.info, Node: Fitting Overview, Next: Linear regression, Up: Least-Squares Fitting
37.1 Overview
=============
Least-squares fits are found by minimizing \chi^2 (chi-squared), the
weighted sum of squared residuals over n experimental datapoints (x_i,
y_i) for the model Y(c,x),
\chi^2 = \sum_i w_i (y_i - Y(c, x_i))^2
The p parameters of the model are c = {c_0, c_1, ...}. The weight
factors w_i are given by w_i = 1/\sigma_i^2, where \sigma_i is the
experimental error on the data-point y_i. The errors are assumed to be
Gaussian and uncorrelated. For unweighted data the chi-squared sum is
computed without any weight factors.
The fitting routines return the best-fit parameters c and their p
\times p covariance matrix. The covariance matrix measures the
statistical errors on the best-fit parameters resulting from the errors
on the data, \sigma_i, and is defined as C_{ab} = <\delta c_a \delta
c_b> where < > denotes an average over the Gaussian error distributions
of the underlying datapoints.
The covariance matrix is calculated by error propagation from the
data errors \sigma_i. The change in a fitted parameter \delta c_a
caused by a small change in the data \delta y_i is given by
\delta c_a = \sum_i (dc_a/dy_i) \delta y_i
allowing the covariance matrix to be written in terms of the errors on
the data,
C_{ab} = \sum_{i,j} (dc_a/dy_i) (dc_b/dy_j) <\delta y_i \delta y_j>
For uncorrelated data the fluctuations of the underlying datapoints
satisfy <\delta y_i \delta y_j> = \sigma_i^2 \delta_{ij}, giving a
corresponding parameter covariance matrix of
C_{ab} = \sum_i (1/w_i) (dc_a/dy_i) (dc_b/dy_i)
When computing the covariance matrix for unweighted data, i.e. data
with unknown errors, the weight factors w_i in this sum are replaced by
the single estimate w = 1/\sigma^2, where \sigma^2 is the computed
variance of the residuals about the best-fit model, \sigma^2 = \sum
(y_i - Y(c,x_i))^2 / (n-p). This is referred to as the
"variance-covariance matrix".
The standard deviations of the best-fit parameters are given by the
square root of the corresponding diagonal elements of the covariance
matrix, \sigma_{c_a} = \sqrt{C_{aa}}. The correlation coefficient of
the fit parameters c_a and c_b is given by \rho_{ab} = C_{ab} /
\sqrt{C_{aa} C_{bb}}.
File: gsl-ref.info, Node: Linear regression, Next: Linear fitting without a constant term, Prev: Fitting Overview, Up: Least-Squares Fitting
37.2 Linear regression
======================
The functions described in this section can be used to perform
least-squares fits to a straight line model, Y(c,x) = c_0 + c_1 x.
-- Function: int gsl_fit_linear (const double * X, const size_t
XSTRIDE, const double * Y, const size_t YSTRIDE, size_t N,
double * C0, double * C1, double * COV00, double * COV01,
double * COV11, double * SUMSQ)
This function computes the best-fit linear regression coefficients
(C0,C1) of the model Y = c_0 + c_1 X for the dataset (X, Y), two
vectors of length N with strides XSTRIDE and YSTRIDE. The errors
on Y are assumed unknown so the variance-covariance matrix for the
parameters (C0, C1) is estimated from the scatter of the points
around the best-fit line and returned via the parameters (COV00,
COV01, COV11). The sum of squares of the residuals from the
best-fit line is returned in SUMSQ. Note: the correlation
coefficient of the data can be computed using
`gsl_stats_correlation' (*note Correlation::), it does not depend
on the fit.
-- Function: int gsl_fit_wlinear (const double * X, const size_t
XSTRIDE, const double * W, const size_t WSTRIDE, const double
* Y, const size_t YSTRIDE, size_t N, double * C0, double *
C1, double * COV00, double * COV01, double * COV11, double *
CHISQ)
This function computes the best-fit linear regression coefficients
(C0,C1) of the model Y = c_0 + c_1 X for the weighted dataset (X,
Y), two vectors of length N with strides XSTRIDE and YSTRIDE. The
vector W, of length N and stride WSTRIDE, specifies the weight of
each datapoint. The weight is the reciprocal of the variance for
each datapoint in Y.
The covariance matrix for the parameters (C0, C1) is computed
using the weights and returned via the parameters (COV00, COV01,
COV11). The weighted sum of squares of the residuals from the
best-fit line, \chi^2, is returned in CHISQ.
-- Function: int gsl_fit_linear_est (double X, double C0, double C1,
double COV00, double COV01, double COV11, double * Y, double
* Y_ERR)
This function uses the best-fit linear regression coefficients C0,
C1 and their covariance COV00, COV01, COV11 to compute the fitted
function Y and its standard deviation Y_ERR for the model Y = c_0
+ c_1 X at the point X.
File: gsl-ref.info, Node: Linear fitting without a constant term, Next: Multi-parameter fitting, Prev: Linear regression, Up: Least-Squares Fitting
37.3 Linear fitting without a constant term
===========================================
The functions described in this section can be used to perform
least-squares fits to a straight line model without a constant term, Y
= c_1 X.
-- Function: int gsl_fit_mul (const double * X, const size_t XSTRIDE,
const double * Y, const size_t YSTRIDE, size_t N, double *
C1, double * COV11, double * SUMSQ)
This function computes the best-fit linear regression coefficient
C1 of the model Y = c_1 X for the datasets (X, Y), two vectors of
length N with strides XSTRIDE and YSTRIDE. The errors on Y are
assumed unknown so the variance of the parameter C1 is estimated
from the scatter of the points around the best-fit line and
returned via the parameter COV11. The sum of squares of the
residuals from the best-fit line is returned in SUMSQ.
-- Function: int gsl_fit_wmul (const double * X, const size_t XSTRIDE,
const double * W, const size_t WSTRIDE, const double * Y,
const size_t YSTRIDE, size_t N, double * C1, double * COV11,
double * SUMSQ)
This function computes the best-fit linear regression coefficient
C1 of the model Y = c_1 X for the weighted datasets (X, Y), two
vectors of length N with strides XSTRIDE and YSTRIDE. The vector
W, of length N and stride WSTRIDE, specifies the weight of each
datapoint. The weight is the reciprocal of the variance for each
datapoint in Y.
The variance of the parameter C1 is computed using the weights and
returned via the parameter COV11. The weighted sum of squares of
the residuals from the best-fit line, \chi^2, is returned in SUMSQ.
-- Function: int gsl_fit_mul_est (double X, double C1, double COV11,
double * Y, double * Y_ERR)
This function uses the best-fit linear regression coefficient C1
and its covariance COV11 to compute the fitted function Y and its
standard deviation Y_ERR for the model Y = c_1 X at the point X.
File: gsl-ref.info, Node: Multi-parameter fitting, Next: Fitting Examples, Prev: Linear fitting without a constant term, Up: Least-Squares Fitting
37.4 Multi-parameter fitting
============================
The functions described in this section perform least-squares fits to a
general linear model, y = X c where y is a vector of n observations, X
is an n by p matrix of predictor variables, and the elements of the
vector c are the p unknown best-fit parameters which are to be
estimated. The chi-squared value is given by \chi^2 = \sum_i w_i (y_i
- \sum_j X_{ij} c_j)^2.
This formulation can be used for fits to any number of functions
and/or variables by preparing the n-by-p matrix X appropriately. For
example, to fit to a p-th order polynomial in X, use the following
matrix,
X_{ij} = x_i^j
where the index i runs over the observations and the index j runs from
0 to p-1.
To fit to a set of p sinusoidal functions with fixed frequencies
\omega_1, \omega_2, ..., \omega_p, use,
X_{ij} = sin(\omega_j x_i)
To fit to p independent variables x_1, x_2, ..., x_p, use,
X_{ij} = x_j(i)
where x_j(i) is the i-th value of the predictor variable x_j.
The functions described in this section are declared in the header
file `gsl_multifit.h'.
The solution of the general linear least-squares system requires an
additional working space for intermediate results, such as the singular
value decomposition of the matrix X.
-- Function: gsl_multifit_linear_workspace * gsl_multifit_linear_alloc
(size_t N, size_t P)
This function allocates a workspace for fitting a model to N
observations using P parameters.
-- Function: void gsl_multifit_linear_free
(gsl_multifit_linear_workspace * WORK)
This function frees the memory associated with the workspace WORK.
-- Function: int gsl_multifit_linear (const gsl_matrix * X, const
gsl_vector * Y, gsl_vector * C, gsl_matrix * COV, double *
CHISQ, gsl_multifit_linear_workspace * WORK)
This function computes the best-fit parameters C of the model y =
X c for the observations Y and the matrix of predictor variables
X, using the preallocated workspace provided in WORK. The
variance-covariance matrix of the model parameters COV is
estimated from the scatter of the observations about the best-fit.
The sum of squares of the residuals from the best-fit, \chi^2, is
returned in CHISQ. If the coefficient of determination is desired,
it can be computed from the expression R^2 = 1 - \chi^2 / TSS,
where the total sum of squares (TSS) of the observations Y may be
computed from `gsl_stats_tss'.
The best-fit is found by singular value decomposition of the matrix
X using the modified Golub-Reinsch SVD algorithm, with column
scaling to improve the accuracy of the singular values. Any
components which have zero singular value (to machine precision)
are discarded from the fit.
-- Function: int gsl_multifit_wlinear (const gsl_matrix * X, const
gsl_vector * W, const gsl_vector * Y, gsl_vector * C,
gsl_matrix * COV, double * CHISQ,
gsl_multifit_linear_workspace * WORK)
This function computes the best-fit parameters C of the weighted
model y = X c for the observations Y with weights W and the matrix
of predictor variables X, using the preallocated workspace
provided in WORK. The covariance matrix of the model parameters
COV is computed with the given weights. The weighted sum of
squares of the residuals from the best-fit, \chi^2, is returned in
CHISQ. If the coefficient of determination is desired, it can be
computed from the expression R^2 = 1 - \chi^2 / WTSS, where the
weighted total sum of squares (WTSS) of the observations Y may be
computed from `gsl_stats_wtss'.
-- Function: int gsl_multifit_linear_svd (const gsl_matrix * X, const
gsl_vector * Y, double TOL, size_t * RANK, gsl_vector * C,
gsl_matrix * COV, double * CHISQ,
gsl_multifit_linear_workspace * WORK)
-- Function: int gsl_multifit_wlinear_svd (const gsl_matrix * X, const
gsl_vector * W, const gsl_vector * Y, double TOL, size_t *
RANK, gsl_vector * C, gsl_matrix * COV, double * CHISQ,
gsl_multifit_linear_workspace * WORK)
In these functions components of the fit are discarded if the
ratio of singular values s_i/s_0 falls below the user-specified
tolerance TOL, and the effective rank is returned in RANK.
-- Function: int gsl_multifit_linear_usvd (const gsl_matrix * X, const
gsl_vector * Y, double TOL, size_t * RANK, gsl_vector * C,
gsl_matrix * COV, double * CHISQ,
gsl_multifit_linear_workspace * WORK)
-- Function: int gsl_multifit_wlinear_usvd (const gsl_matrix * X,
const gsl_vector * W, const gsl_vector * Y, double TOL,
size_t * RANK, gsl_vector * C, gsl_matrix * COV, double *
CHISQ, gsl_multifit_linear_workspace * WORK)
These functions compute the fit using an SVD without column
scaling.
-- Function: int gsl_multifit_linear_est (const gsl_vector * X, const
gsl_vector * C, const gsl_matrix * COV, double * Y, double *
Y_ERR)
This function uses the best-fit multilinear regression coefficients
C and their covariance matrix COV to compute the fitted function
value Y and its standard deviation Y_ERR for the model y = x.c at
the point X.
-- Function: int gsl_multifit_linear_residuals (const gsl_matrix * X,
const gsl_vector * Y, const gsl_vector * C, gsl_vector * R)
This function computes the vector of residuals r = y - X c for the
observations Y, coefficients C and matrix of predictor variables X.
File: gsl-ref.info, Node: Fitting Examples, Next: Fitting References and Further Reading, Prev: Multi-parameter fitting, Up: Least-Squares Fitting
37.5 Examples
=============
The following program computes a least squares straight-line fit to a
simple dataset, and outputs the best-fit line and its associated one
standard-deviation error bars.
#include <stdio.h>
#include <gsl/gsl_fit.h>
int
main (void)
{
int i, n = 4;
double x[4] = { 1970, 1980, 1990, 2000 };
double y[4] = { 12, 11, 14, 13 };
double w[4] = { 0.1, 0.2, 0.3, 0.4 };
double c0, c1, cov00, cov01, cov11, chisq;
gsl_fit_wlinear (x, 1, w, 1, y, 1, n,
&c0, &c1, &cov00, &cov01, &cov11,
&chisq);
printf ("# best fit: Y = %g + %g X\n", c0, c1);
printf ("# covariance matrix:\n");
printf ("# [ %g, %g\n# %g, %g]\n",
cov00, cov01, cov01, cov11);
printf ("# chisq = %g\n", chisq);
for (i = 0; i < n; i++)
printf ("data: %g %g %g\n",
x[i], y[i], 1/sqrt(w[i]));
printf ("\n");
for (i = -30; i < 130; i++)
{
double xf = x[0] + (i/100.0) * (x[n-1] - x[0]);
double yf, yf_err;
gsl_fit_linear_est (xf,
c0, c1,
cov00, cov01, cov11,
&yf, &yf_err);
printf ("fit: %g %g\n", xf, yf);
printf ("hi : %g %g\n", xf, yf + yf_err);
printf ("lo : %g %g\n", xf, yf - yf_err);
}
return 0;
}
The following commands extract the data from the output of the program
and display it using the GNU plotutils `graph' utility,
$ ./demo > tmp
$ more tmp
# best fit: Y = -106.6 + 0.06 X
# covariance matrix:
# [ 39602, -19.9
# -19.9, 0.01]
# chisq = 0.8
$ for n in data fit hi lo ;
do
grep "^$n" tmp | cut -d: -f2 > $n ;
done
$ graph -T X -X x -Y y -y 0 20 -m 0 -S 2 -Ie data
-S 0 -I a -m 1 fit -m 2 hi -m 2 lo
The next program performs a quadratic fit y = c_0 + c_1 x + c_2 x^2
to a weighted dataset using the generalised linear fitting function
`gsl_multifit_wlinear'. The model matrix X for a quadratic fit is
given by,
X = [ 1 , x_0 , x_0^2 ;
1 , x_1 , x_1^2 ;
1 , x_2 , x_2^2 ;
... , ... , ... ]
where the column of ones corresponds to the constant term c_0. The two
remaining columns correspond to the terms c_1 x and c_2 x^2.
The program reads N lines of data in the format (X, Y, ERR) where
ERR is the error (standard deviation) in the value Y.
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_multifit.h>
int
main (int argc, char **argv)
{
int i, n;
double xi, yi, ei, chisq;
gsl_matrix *X, *cov;
gsl_vector *y, *w, *c;
if (argc != 2)
{
fprintf (stderr,"usage: fit n < data\n");
exit (-1);
}
n = atoi (argv[1]);
X = gsl_matrix_alloc (n, 3);
y = gsl_vector_alloc (n);
w = gsl_vector_alloc (n);
c = gsl_vector_alloc (3);
cov = gsl_matrix_alloc (3, 3);
for (i = 0; i < n; i++)
{
int count = fscanf (stdin, "%lg %lg %lg",
&xi, &yi, &ei);
if (count != 3)
{
fprintf (stderr, "error reading file\n");
exit (-1);
}
printf ("%g %g +/- %g\n", xi, yi, ei);
gsl_matrix_set (X, i, 0, 1.0);
gsl_matrix_set (X, i, 1, xi);
gsl_matrix_set (X, i, 2, xi*xi);
gsl_vector_set (y, i, yi);
gsl_vector_set (w, i, 1.0/(ei*ei));
}
{
gsl_multifit_linear_workspace * work
= gsl_multifit_linear_alloc (n, 3);
gsl_multifit_wlinear (X, w, y, c, cov,
&chisq, work);
gsl_multifit_linear_free (work);
}
#define C(i) (gsl_vector_get(c,(i)))
#define COV(i,j) (gsl_matrix_get(cov,(i),(j)))
{
printf ("# best fit: Y = %g + %g X + %g X^2\n",
C(0), C(1), C(2));
printf ("# covariance matrix:\n");
printf ("[ %+.5e, %+.5e, %+.5e \n",
COV(0,0), COV(0,1), COV(0,2));
printf (" %+.5e, %+.5e, %+.5e \n",
COV(1,0), COV(1,1), COV(1,2));
printf (" %+.5e, %+.5e, %+.5e ]\n",
COV(2,0), COV(2,1), COV(2,2));
printf ("# chisq = %g\n", chisq);
}
gsl_matrix_free (X);
gsl_vector_free (y);
gsl_vector_free (w);
gsl_vector_free (c);
gsl_matrix_free (cov);
return 0;
}
A suitable set of data for fitting can be generated using the following
program. It outputs a set of points with gaussian errors from the curve
y = e^x in the region 0 < x < 2.
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_randist.h>
int
main (void)
{
double x;
const gsl_rng_type * T;
gsl_rng * r;
gsl_rng_env_setup ();
T = gsl_rng_default;
r = gsl_rng_alloc (T);
for (x = 0.1; x < 2; x+= 0.1)
{
double y0 = exp (x);
double sigma = 0.1 * y0;
double dy = gsl_ran_gaussian (r, sigma);
printf ("%g %g %g\n", x, y0 + dy, sigma);
}
gsl_rng_free(r);
return 0;
}
The data can be prepared by running the resulting executable program,
$ ./generate > exp.dat
$ more exp.dat
0.1 0.97935 0.110517
0.2 1.3359 0.12214
0.3 1.52573 0.134986
0.4 1.60318 0.149182
0.5 1.81731 0.164872
0.6 1.92475 0.182212
....
To fit the data use the previous program, with the number of data points
given as the first argument. In this case there are 19 data points.
$ ./fit 19 < exp.dat
0.1 0.97935 +/- 0.110517
0.2 1.3359 +/- 0.12214
...
# best fit: Y = 1.02318 + 0.956201 X + 0.876796 X^2
# covariance matrix:
[ +1.25612e-02, -3.64387e-02, +1.94389e-02
-3.64387e-02, +1.42339e-01, -8.48761e-02
+1.94389e-02, -8.48761e-02, +5.60243e-02 ]
# chisq = 23.0987
The parameters of the quadratic fit match the coefficients of the
expansion of e^x, taking into account the errors on the parameters and
the O(x^3) difference between the exponential and quadratic functions
for the larger values of x. The errors on the parameters are given by
the square-root of the corresponding diagonal elements of the
covariance matrix. The chi-squared per degree of freedom is 1.4,
indicating a reasonable fit to the data.
File: gsl-ref.info, Node: Fitting References and Further Reading, Prev: Fitting Examples, Up: Least-Squares Fitting
37.6 References and Further Reading
===================================
A summary of formulas and techniques for least squares fitting can be
found in the "Statistics" chapter of the Annual Review of Particle
Physics prepared by the Particle Data Group,
`Review of Particle Properties', R.M. Barnett et al., Physical
Review D54, 1 (1996) `http://pdg.lbl.gov/'
The Review of Particle Physics is available online at the website given
above.
The tests used to prepare these routines are based on the NIST
Statistical Reference Datasets. The datasets and their documentation are
available from NIST at the following website,
`http://www.nist.gov/itl/div898/strd/index.html'.
File: gsl-ref.info, Node: Nonlinear Least-Squares Fitting, Next: Basis Splines, Prev: Least-Squares Fitting, Up: Top
38 Nonlinear Least-Squares Fitting
**********************************
This chapter describes functions for multidimensional nonlinear
least-squares fitting. The library provides low level components for a
variety of iterative solvers and convergence tests. These can be
combined by the user to achieve the desired solution, with full access
to the intermediate steps of the iteration. Each class of methods uses
the same framework, so that you can switch between solvers at runtime
without needing to recompile your program. Each instance of a solver
keeps track of its own state, allowing the solvers to be used in
multi-threaded programs.
The header file `gsl_multifit_nlin.h' contains prototypes for the
multidimensional nonlinear fitting functions and related declarations.
* Menu:
* Overview of Nonlinear Least-Squares Fitting::
* Initializing the Nonlinear Least-Squares Solver::
* Providing the Function to be Minimized::
* Iteration of the Minimization Algorithm::
* Search Stopping Parameters for Minimization Algorithms::
* Minimization Algorithms using Derivatives::
* Minimization Algorithms without Derivatives::
* Computing the covariance matrix of best fit parameters::
* Example programs for Nonlinear Least-Squares Fitting::
* References and Further Reading for Nonlinear Least-Squares Fitting::
File: gsl-ref.info, Node: Overview of Nonlinear Least-Squares Fitting, Next: Initializing the Nonlinear Least-Squares Solver, Up: Nonlinear Least-Squares Fitting
38.1 Overview
=============
The problem of multidimensional nonlinear least-squares fitting requires
the minimization of the squared residuals of n functions, f_i, in p
parameters, x_i,
\Phi(x) = (1/2) || F(x) ||^2
= (1/2) \sum_{i=1}^{n} f_i(x_1, ..., x_p)^2
All algorithms proceed from an initial guess using the linearization,
\psi(p) = || F(x+p) || ~=~ || F(x) + J p ||
where x is the initial point, p is the proposed step and J is the
Jacobian matrix J_{ij} = d f_i / d x_j. Additional strategies are used
to enlarge the region of convergence. These include requiring a
decrease in the norm ||F|| on each step or using a trust region to
avoid steps which fall outside the linear regime.
To perform a weighted least-squares fit of a nonlinear model Y(x,t)
to data (t_i, y_i) with independent Gaussian errors \sigma_i, use
function components of the following form,
f_i = (Y(x, t_i) - y_i) / \sigma_i
Note that the model parameters are denoted by x in this chapter since
the non-linear least-squares algorithms are described geometrically
(i.e. finding the minimum of a surface). The independent variable of
any data to be fitted is denoted by t.
With the definition above the Jacobian is J_{ij} = (1/\sigma_i) d
Y_i / d x_j, where Y_i = Y(x,t_i).
File: gsl-ref.info, Node: Initializing the Nonlinear Least-Squares Solver, Next: Providing the Function to be Minimized, Prev: Overview of Nonlinear Least-Squares Fitting, Up: Nonlinear Least-Squares Fitting
38.2 Initializing the Solver
============================
-- Function: gsl_multifit_fsolver * gsl_multifit_fsolver_alloc (const
gsl_multifit_fsolver_type * T, size_t N, size_t P)
This function returns a pointer to a newly allocated instance of a
solver of type T for N observations and P parameters. The number
of observations N must be greater than or equal to the number of
parameters P.
If there is insufficient memory to create the solver then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: gsl_multifit_fdfsolver * gsl_multifit_fdfsolver_alloc
(const gsl_multifit_fdfsolver_type * T, size_t N, size_t P)
This function returns a pointer to a newly allocated instance of a
derivative solver of type T for N observations and P parameters.
For example, the following code creates an instance of a
Levenberg-Marquardt solver for 100 data points and 3 parameters,
const gsl_multifit_fdfsolver_type * T
= gsl_multifit_fdfsolver_lmder;
gsl_multifit_fdfsolver * s
= gsl_multifit_fdfsolver_alloc (T, 100, 3);
The number of observations N must be greater than or equal to the
number of parameters P.
If there is insufficient memory to create the solver then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: int gsl_multifit_fsolver_set (gsl_multifit_fsolver * S,
gsl_multifit_function * F, const gsl_vector * X)
This function initializes, or reinitializes, an existing solver S
to use the function F and the initial guess X.
-- Function: int gsl_multifit_fdfsolver_set (gsl_multifit_fdfsolver *
S, gsl_multifit_function_fdf * FDF, const gsl_vector * X)
This function initializes, or reinitializes, an existing solver S
to use the function and derivative FDF and the initial guess X.
-- Function: void gsl_multifit_fsolver_free (gsl_multifit_fsolver * S)
-- Function: void gsl_multifit_fdfsolver_free (gsl_multifit_fdfsolver
* S)
These functions free all the memory associated with the solver S.
-- Function: const char * gsl_multifit_fsolver_name (const
gsl_multifit_fsolver * S)
-- Function: const char * gsl_multifit_fdfsolver_name (const
gsl_multifit_fdfsolver * S)
These functions return a pointer to the name of the solver. For
example,
printf ("s is a '%s' solver\n",
gsl_multifit_fdfsolver_name (s));
would print something like `s is a 'lmder' solver'.
File: gsl-ref.info, Node: Providing the Function to be Minimized, Next: Iteration of the Minimization Algorithm, Prev: Initializing the Nonlinear Least-Squares Solver, Up: Nonlinear Least-Squares Fitting
38.3 Providing the Function to be Minimized
===========================================
You must provide n functions of p variables for the minimization
algorithms to operate on. In order to allow for arbitrary parameters
the functions are defined by the following data types:
-- Data Type: gsl_multifit_function
This data type defines a general system of functions with
arbitrary parameters.
`int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
this function should store the vector result f(x,params) in F
for argument X and arbitrary parameters PARAMS, returning an
appropriate error code if the function cannot be computed.
`size_t n'
the number of functions, i.e. the number of components of the
vector F.
`size_t p'
the number of independent variables, i.e. the number of
components of the vector X.
`void * params'
a pointer to the arbitrary parameters of the function.
-- Data Type: gsl_multifit_function_fdf
This data type defines a general system of functions with
arbitrary parameters and the corresponding Jacobian matrix of
derivatives,
`int (* f) (const gsl_vector * X, void * PARAMS, gsl_vector * F)'
this function should store the vector result f(x,params) in F
for argument X and arbitrary parameters PARAMS, returning an
appropriate error code if the function cannot be computed.
`int (* df) (const gsl_vector * X, void * PARAMS, gsl_matrix * J)'
this function should store the N-by-P matrix result J_ij = d
f_i(x,params) / d x_j in J for argument X and arbitrary
parameters PARAMS, returning an appropriate error code if the
function cannot be computed.
`int (* fdf) (const gsl_vector * X, void * PARAMS, gsl_vector * F, gsl_matrix * J)'
This function should set the values of the F and J as above,
for arguments X and arbitrary parameters PARAMS. This
function provides an optimization of the separate functions
for f(x) and J(x)--it is always faster to compute the
function and its derivative at the same time.
`size_t n'
the number of functions, i.e. the number of components of the
vector F.
`size_t p'
the number of independent variables, i.e. the number of
components of the vector X.
`void * params'
a pointer to the arbitrary parameters of the function.
Note that when fitting a non-linear model against experimental data,
the data is passed to the functions above using the PARAMS argument and
the trial best-fit parameters through the X argument.
File: gsl-ref.info, Node: Iteration of the Minimization Algorithm, Next: Search Stopping Parameters for Minimization Algorithms, Prev: Providing the Function to be Minimized, Up: Nonlinear Least-Squares Fitting
38.4 Iteration
==============
The following functions drive the iteration of each algorithm. Each
function performs one iteration to update the state of any solver of the
corresponding type. The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.
-- Function: int gsl_multifit_fsolver_iterate (gsl_multifit_fsolver *
S)
-- Function: int gsl_multifit_fdfsolver_iterate
(gsl_multifit_fdfsolver * S)
These functions perform a single iteration of the solver S. If
the iteration encounters an unexpected problem then an error code
will be returned. The solver maintains a current estimate of the
best-fit parameters at all times.
The solver struct S contains the following entries, which can be
used to track the progress of the solution:
`gsl_vector * x'
The current position.
`gsl_vector * f'
The function value at the current position.
`gsl_vector * dx'
The difference between the current position and the previous
position, i.e. the last step, taken as a vector.
`gsl_matrix * J'
The Jacobian matrix at the current position (for the
`gsl_multifit_fdfsolver' struct only)
The best-fit information can also be accessed with the following
auxiliary functions,
-- Function: gsl_vector * gsl_multifit_fsolver_position (const
gsl_multifit_fsolver * S)
-- Function: gsl_vector * gsl_multifit_fdfsolver_position (const
gsl_multifit_fdfsolver * S)
These functions return the current position (i.e. best-fit
parameters) `s->x' of the solver S.
File: gsl-ref.info, Node: Search Stopping Parameters for Minimization Algorithms, Next: Minimization Algorithms using Derivatives, Prev: Iteration of the Minimization Algorithm, Up: Nonlinear Least-Squares Fitting
38.5 Search Stopping Parameters
===============================
A minimization procedure should stop when one of the following
conditions is true:
* A minimum has been found to within the user-specified precision.
* A user-specified maximum number of iterations has been reached.
* An error has occurred.
The handling of these conditions is under user control. The functions
below allow the user to test the current estimate of the best-fit
parameters in several standard ways.
-- Function: int gsl_multifit_test_delta (const gsl_vector * DX, const
gsl_vector * X, double EPSABS, double EPSREL)
This function tests for the convergence of the sequence by
comparing the last step DX with the absolute error EPSABS and
relative error EPSREL to the current position X. The test returns
`GSL_SUCCESS' if the following condition is achieved,
|dx_i| < epsabs + epsrel |x_i|
for each component of X and returns `GSL_CONTINUE' otherwise.
-- Function: int gsl_multifit_test_gradient (const gsl_vector * G,
double EPSABS)
This function tests the residual gradient G against the absolute
error bound EPSABS. Mathematically, the gradient should be
exactly zero at the minimum. The test returns `GSL_SUCCESS' if the
following condition is achieved,
\sum_i |g_i| < epsabs
and returns `GSL_CONTINUE' otherwise. This criterion is suitable
for situations where the precise location of the minimum, x, is
unimportant provided a value can be found where the gradient is
small enough.
-- Function: int gsl_multifit_gradient (const gsl_matrix * J, const
gsl_vector * F, gsl_vector * G)
This function computes the gradient G of \Phi(x) = (1/2)
||F(x)||^2 from the Jacobian matrix J and the function values F,
using the formula g = J^T f.
File: gsl-ref.info, Node: Minimization Algorithms using Derivatives, Next: Minimization Algorithms without Derivatives, Prev: Search Stopping Parameters for Minimization Algorithms, Up: Nonlinear Least-Squares Fitting
38.6 Minimization Algorithms using Derivatives
==============================================
The minimization algorithms described in this section make use of both
the function and its derivative. They require an initial guess for the
location of the minimum. There is no absolute guarantee of
convergence--the function must be suitable for this technique and the
initial guess must be sufficiently close to the minimum for it to work.
-- Derivative Solver: gsl_multifit_fdfsolver_lmsder
This is a robust and efficient version of the Levenberg-Marquardt
algorithm as implemented in the scaled LMDER routine in MINPACK.
Minpack was written by Jorge J. More', Burton S. Garbow and
Kenneth E. Hillstrom.
The algorithm uses a generalized trust region to keep each step
under control. In order to be accepted a proposed new position x'
must satisfy the condition |D (x' - x)| < \delta, where D is a
diagonal scaling matrix and \delta is the size of the trust
region. The components of D are computed internally, using the
column norms of the Jacobian to estimate the sensitivity of the
residual to each component of x. This improves the behavior of the
algorithm for badly scaled functions.
On each iteration the algorithm attempts to minimize the linearized
system |F + J p| subject to the constraint |D p| < \delta. The
solution to this constrained linear system is found using the
Levenberg-Marquardt method.
The proposed step is now tested by evaluating the function at the
resulting point, x'. If the step reduces the norm of the function
sufficiently, and follows the predicted behavior of the function
within the trust region, then it is accepted and the size of the
trust region is increased. If the proposed step fails to improve
the solution, or differs significantly from the expected behavior
within the trust region, then the size of the trust region is
decreased and another trial step is computed.
The algorithm also monitors the progress of the solution and
returns an error if the changes in the solution are smaller than
the machine precision. The possible error codes are,
`GSL_ETOLF'
the decrease in the function falls below machine precision
`GSL_ETOLX'
the change in the position vector falls below machine
precision
`GSL_ETOLG'
the norm of the gradient, relative to the norm of the
function, falls below machine precision
`GSL_ENOPROG'
the routine has made 10 or more attempts to find a suitable
trial step without success (but subsequent calls can be made
to continue the search).(1)
These error codes indicate that further iterations will be
unlikely to change the solution from its current value.
-- Derivative Solver: gsl_multifit_fdfsolver_lmder
This is an unscaled version of the LMDER algorithm. The elements
of the diagonal scaling matrix D are set to 1. This algorithm may
be useful in circumstances where the scaled version of LMDER
converges too slowly, or the function is already scaled
appropriately.
---------- Footnotes ----------
(1) The return code `GSL_CONTINUE' was used for this case in
versions prior to 1.14.
File: gsl-ref.info, Node: Minimization Algorithms without Derivatives, Next: Computing the covariance matrix of best fit parameters, Prev: Minimization Algorithms using Derivatives, Up: Nonlinear Least-Squares Fitting
38.7 Minimization Algorithms without Derivatives
================================================
There are no algorithms implemented in this section at the moment.
File: gsl-ref.info, Node: Computing the covariance matrix of best fit parameters, Next: Example programs for Nonlinear Least-Squares Fitting, Prev: Minimization Algorithms without Derivatives, Up: Nonlinear Least-Squares Fitting
38.8 Computing the covariance matrix of best fit parameters
===========================================================
-- Function: int gsl_multifit_covar (const gsl_matrix * J, double
EPSREL, gsl_matrix * COVAR)
This function uses the Jacobian matrix J to compute the covariance
matrix of the best-fit parameters, COVAR. The parameter EPSREL is
used to remove linearly-dependent columns when J is rank deficient.
The covariance matrix is given by,
covar = (J^T J)^{-1}
and is computed by QR decomposition of J with column-pivoting. Any
columns of R which satisfy
|R_{kk}| <= epsrel |R_{11}|
are considered linearly-dependent and are excluded from the
covariance matrix (the corresponding rows and columns of the
covariance matrix are set to zero).
If the minimisation uses the weighted least-squares function f_i =
(Y(x, t_i) - y_i) / \sigma_i then the covariance matrix above
gives the statistical error on the best-fit parameters resulting
from the Gaussian errors \sigma_i on the underlying data y_i.
This can be verified from the relation \delta f = J \delta c and
the fact that the fluctuations in f from the data y_i are
normalised by \sigma_i and so satisfy <\delta f \delta f^T> = I.
For an unweighted least-squares function f_i = (Y(x, t_i) - y_i)
the covariance matrix above should be multiplied by the variance
of the residuals about the best-fit \sigma^2 = \sum (y_i -
Y(x,t_i))^2 / (n-p) to give the variance-covariance matrix
\sigma^2 C. This estimates the statistical error on the best-fit
parameters from the scatter of the underlying data.
For more information about covariance matrices see *note Fitting
Overview::.
File: gsl-ref.info, Node: Example programs for Nonlinear Least-Squares Fitting, Next: References and Further Reading for Nonlinear Least-Squares Fitting, Prev: Computing the covariance matrix of best fit parameters, Up: Nonlinear Least-Squares Fitting
38.9 Examples
=============
The following example program fits a weighted exponential model with
background to experimental data, Y = A \exp(-\lambda t) + b. The first
part of the program sets up the functions `expb_f' and `expb_df' to
calculate the model and its Jacobian. The appropriate fitting function
is given by,
f_i = ((A \exp(-\lambda t_i) + b) - y_i)/\sigma_i
where we have chosen t_i = i. The Jacobian matrix J is the derivative
of these functions with respect to the three parameters (A, \lambda,
b). It is given by,
J_{ij} = d f_i / d x_j
where x_0 = A, x_1 = \lambda and x_2 = b.
/* expfit.c -- model functions for exponential + background */
struct data {
size_t n;
double * y;
double * sigma;
};
int
expb_f (const gsl_vector * x, void *data,
gsl_vector * f)
{
size_t n = ((struct data *)data)->n;
double *y = ((struct data *)data)->y;
double *sigma = ((struct data *) data)->sigma;
double A = gsl_vector_get (x, 0);
double lambda = gsl_vector_get (x, 1);
double b = gsl_vector_get (x, 2);
size_t i;
for (i = 0; i < n; i++)
{
/* Model Yi = A * exp(-lambda * i) + b */
double t = i;
double Yi = A * exp (-lambda * t) + b;
gsl_vector_set (f, i, (Yi - y[i])/sigma[i]);
}
return GSL_SUCCESS;
}
int
expb_df (const gsl_vector * x, void *data,
gsl_matrix * J)
{
size_t n = ((struct data *)data)->n;
double *sigma = ((struct data *) data)->sigma;
double A = gsl_vector_get (x, 0);
double lambda = gsl_vector_get (x, 1);
size_t i;
for (i = 0; i < n; i++)
{
/* Jacobian matrix J(i,j) = dfi / dxj, */
/* where fi = (Yi - yi)/sigma[i], */
/* Yi = A * exp(-lambda * i) + b */
/* and the xj are the parameters (A,lambda,b) */
double t = i;
double s = sigma[i];
double e = exp(-lambda * t);
gsl_matrix_set (J, i, 0, e/s);
gsl_matrix_set (J, i, 1, -t * A * e/s);
gsl_matrix_set (J, i, 2, 1/s);
}
return GSL_SUCCESS;
}
int
expb_fdf (const gsl_vector * x, void *data,
gsl_vector * f, gsl_matrix * J)
{
expb_f (x, data, f);
expb_df (x, data, J);
return GSL_SUCCESS;
}
The main part of the program sets up a Levenberg-Marquardt solver and
some simulated random data. The data uses the known parameters
(1.0,5.0,0.1) combined with Gaussian noise (standard deviation = 0.1)
over a range of 40 timesteps. The initial guess for the parameters is
chosen as (1.0, 0.0, 0.0).
#include <stdlib.h>
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_multifit_nlin.h>
#include "expfit.c"
#define N 40
void print_state (size_t iter, gsl_multifit_fdfsolver * s);
int
main (void)
{
const gsl_multifit_fdfsolver_type *T;
gsl_multifit_fdfsolver *s;
int status;
unsigned int i, iter = 0;
const size_t n = N;
const size_t p = 3;
gsl_matrix *covar = gsl_matrix_alloc (p, p);
double y[N], sigma[N];
struct data d = { n, y, sigma};
gsl_multifit_function_fdf f;
double x_init[3] = { 1.0, 0.0, 0.0 };
gsl_vector_view x = gsl_vector_view_array (x_init, p);
const gsl_rng_type * type;
gsl_rng * r;
gsl_rng_env_setup();
type = gsl_rng_default;
r = gsl_rng_alloc (type);
f.f = &expb_f;
f.df = &expb_df;
f.fdf = &expb_fdf;
f.n = n;
f.p = p;
f.params = &d;
/* This is the data to be fitted */
for (i = 0; i < n; i++)
{
double t = i;
y[i] = 1.0 + 5 * exp (-0.1 * t)
+ gsl_ran_gaussian (r, 0.1);
sigma[i] = 0.1;
printf ("data: %u %g %g\n", i, y[i], sigma[i]);
};
T = gsl_multifit_fdfsolver_lmsder;
s = gsl_multifit_fdfsolver_alloc (T, n, p);
gsl_multifit_fdfsolver_set (s, &f, &x.vector);
print_state (iter, s);
do
{
iter++;
status = gsl_multifit_fdfsolver_iterate (s);
printf ("status = %s\n", gsl_strerror (status));
print_state (iter, s);
if (status)
break;
status = gsl_multifit_test_delta (s->dx, s->x,
1e-4, 1e-4);
}
while (status == GSL_CONTINUE && iter < 500);
gsl_multifit_covar (s->J, 0.0, covar);
#define FIT(i) gsl_vector_get(s->x, i)
#define ERR(i) sqrt(gsl_matrix_get(covar,i,i))
{
double chi = gsl_blas_dnrm2(s->f);
double dof = n - p;
double c = GSL_MAX_DBL(1, chi / sqrt(dof));
printf("chisq/dof = %g\n", pow(chi, 2.0) / dof);
printf ("A = %.5f +/- %.5f\n", FIT(0), c*ERR(0));
printf ("lambda = %.5f +/- %.5f\n", FIT(1), c*ERR(1));
printf ("b = %.5f +/- %.5f\n", FIT(2), c*ERR(2));
}
printf ("status = %s\n", gsl_strerror (status));
gsl_multifit_fdfsolver_free (s);
gsl_matrix_free (covar);
gsl_rng_free (r);
return 0;
}
void
print_state (size_t iter, gsl_multifit_fdfsolver * s)
{
printf ("iter: %3u x = % 15.8f % 15.8f % 15.8f "
"|f(x)| = %g\n",
(unsigned int) iter,
gsl_vector_get (s->x, 0),
gsl_vector_get (s->x, 1),
gsl_vector_get (s->x, 2),
gsl_blas_dnrm2 (s->f));
}
The iteration terminates when the change in x is smaller than 0.0001, as
both an absolute and relative change. Here are the results of running
the program:
iter: 0 x = 1.00000000 0.00000000 0.00000000 |f(x)| = 117.349
status = success
iter: 1 x = 1.64659312 0.01814772 0.64659312 |f(x)| = 76.4578
status = success
iter: 2 x = 2.85876037 0.08092095 1.44796363 |f(x)| = 37.6838
status = success
iter: 3 x = 4.94899512 0.11942928 1.09457665 |f(x)| = 9.58079
status = success
iter: 4 x = 5.02175572 0.10287787 1.03388354 |f(x)| = 5.63049
status = success
iter: 5 x = 5.04520433 0.10405523 1.01941607 |f(x)| = 5.44398
status = success
iter: 6 x = 5.04535782 0.10404906 1.01924871 |f(x)| = 5.44397
chisq/dof = 0.800996
A = 5.04536 +/- 0.06028
lambda = 0.10405 +/- 0.00316
b = 1.01925 +/- 0.03782
status = success
The approximate values of the parameters are found correctly, and the
chi-squared value indicates a good fit (the chi-squared per degree of
freedom is approximately 1). In this case the errors on the parameters
can be estimated from the square roots of the diagonal elements of the
covariance matrix.
If the chi-squared value shows a poor fit (i.e. chi^2/dof >> 1) then
the error estimates obtained from the covariance matrix will be too
small. In the example program the error estimates are multiplied by
\sqrt{\chi^2/dof} in this case, a common way of increasing the errors
for a poor fit. Note that a poor fit will result from the use of an
inappropriate model, and the scaled error estimates may then be outside
the range of validity for Gaussian errors.
File: gsl-ref.info, Node: References and Further Reading for Nonlinear Least-Squares Fitting, Prev: Example programs for Nonlinear Least-Squares Fitting, Up: Nonlinear Least-Squares Fitting
38.10 References and Further Reading
====================================
The MINPACK algorithm is described in the following article,
J.J. More', `The Levenberg-Marquardt Algorithm: Implementation and
Theory', Lecture Notes in Mathematics, v630 (1978), ed G. Watson.
The following paper is also relevant to the algorithms described in this
section,
J.J. More', B.S. Garbow, K.E. Hillstrom, "Testing Unconstrained
Optimization Software", ACM Transactions on Mathematical Software,
Vol 7, No 1 (1981), p 17-41.
File: gsl-ref.info, Node: Basis Splines, Next: Physical Constants, Prev: Nonlinear Least-Squares Fitting, Up: Top
39 Basis Splines
****************
This chapter describes functions for the computation of smoothing basis
splines (B-splines). A smoothing spline differs from an interpolating
spline in that the resulting curve is not required to pass through each
datapoint. *Note Interpolation::, for information about interpolating
splines.
The header file `gsl_bspline.h' contains the prototypes for the
bspline functions and related declarations.
* Menu:
* Overview of B-splines::
* Initializing the B-splines solver::
* Constructing the knots vector::
* Evaluation of B-spline basis functions::
* Evaluation of B-spline basis function derivatives::
* Obtaining Greville abscissae for B-spline basis functions::
* Example programs for B-splines::
* References and Further Reading::
File: gsl-ref.info, Node: Overview of B-splines, Next: Initializing the B-splines solver, Up: Basis Splines
39.1 Overview
=============
B-splines are commonly used as basis functions to fit smoothing curves
to large data sets. To do this, the abscissa axis is broken up into
some number of intervals, where the endpoints of each interval are
called "breakpoints". These breakpoints are then converted to "knots"
by imposing various continuity and smoothness conditions at each
interface. Given a nondecreasing knot vector t = {t_0, t_1, ...,
t_{n+k-1}}, the n basis splines of order k are defined by
B_(i,1)(x) = (1, t_i <= x < t_(i+1)
(0, else
B_(i,k)(x) = [(x - t_i)/(t_(i+k-1) - t_i)] B_(i,k-1)(x)
+ [(t_(i+k) - x)/(t_(i+k) - t_(i+1))] B_(i+1,k-1)(x)
for i = 0, ..., n-1. The common case of cubic B-splines is given by k =
4. The above recurrence relation can be evaluated in a numerically
stable way by the de Boor algorithm.
If we define appropriate knots on an interval [a,b] then the
B-spline basis functions form a complete set on that interval.
Therefore we can expand a smoothing function as
f(x) = \sum_i c_i B_(i,k)(x)
given enough (x_j, f(x_j)) data pairs. The coefficients c_i can be
readily obtained from a least-squares fit.
File: gsl-ref.info, Node: Initializing the B-splines solver, Next: Constructing the knots vector, Prev: Overview of B-splines, Up: Basis Splines
39.2 Initializing the B-splines solver
======================================
The computation of B-spline functions requires a preallocated workspace
of type `gsl_bspline_workspace'. If B-spline derivatives are also
required, an additional `gsl_bspline_deriv_workspace' is needed.
-- Function: gsl_bspline_workspace * gsl_bspline_alloc (const size_t
K, const size_t NBREAK)
This function allocates a workspace for computing B-splines of
order K. The number of breakpoints is given by NBREAK. This leads
to n = nbreak + k - 2 basis functions. Cubic B-splines are
specified by k = 4. The size of the workspace is O(5k + nbreak).
-- Function: void gsl_bspline_free (gsl_bspline_workspace * W)
This function frees the memory associated with the workspace W.
-- Function: gsl_bspline_deriv_workspace * gsl_bspline_deriv_alloc
(const size_t K)
This function allocates a workspace for computing the derivatives
of a B-spline basis function of order K. The size of the workspace
is O(2k^2).
-- Function: void gsl_bspline_deriv_free (gsl_bspline_deriv_workspace
* W)
This function frees the memory associated with the derivative
workspace W.
File: gsl-ref.info, Node: Constructing the knots vector, Next: Evaluation of B-spline basis functions, Prev: Initializing the B-splines solver, Up: Basis Splines
39.3 Constructing the knots vector
==================================
-- Function: int gsl_bspline_knots (const gsl_vector * BREAKPTS,
gsl_bspline_workspace * W)
This function computes the knots associated with the given
breakpoints and stores them internally in `w->knots'.
-- Function: int gsl_bspline_knots_uniform (const double A, const
double B, gsl_bspline_workspace * W)
This function assumes uniformly spaced breakpoints on [a,b] and
constructs the corresponding knot vector using the previously
specified NBREAK parameter. The knots are stored in `w->knots'.
File: gsl-ref.info, Node: Evaluation of B-spline basis functions, Next: Evaluation of B-spline basis function derivatives, Prev: Constructing the knots vector, Up: Basis Splines
39.4 Evaluation of B-splines
============================
-- Function: int gsl_bspline_eval (const double X, gsl_vector * B,
gsl_bspline_workspace * W)
This function evaluates all B-spline basis functions at the
position X and stores them in the vector B, so that the i-th
element is B_i(x). The vector B must be of length n = nbreak + k -
2. This value may also be obtained by calling
`gsl_bspline_ncoeffs'. Computing all the basis functions at once
is more efficient than computing them individually, due to the
nature of the defining recurrence relation.
-- Function: int gsl_bspline_eval_nonzero (const double X, gsl_vector
* BK, size_t * ISTART, size_t * IEND, gsl_bspline_workspace *
W)
This function evaluates all potentially nonzero B-spline basis
functions at the position X and stores them in the vector BK, so
that the i-th element is B_(istart+i)(x). The last element of BK
is B_(iend)(x). The vector BK must be of length k. By returning
only the nonzero basis functions, this function allows quantities
involving linear combinations of the B_i(x) to be computed without
unnecessary terms (such linear combinations occur, for example,
when evaluating an interpolated function).
-- Function: size_t gsl_bspline_ncoeffs (gsl_bspline_workspace * W)
This function returns the number of B-spline coefficients given by
n = nbreak + k - 2.
File: gsl-ref.info, Node: Evaluation of B-spline basis function derivatives, Next: Obtaining Greville abscissae for B-spline basis functions, Prev: Evaluation of B-spline basis functions, Up: Basis Splines
39.5 Evaluation of B-spline derivatives
=======================================
-- Function: int gsl_bspline_deriv_eval (const double X, const size_t
NDERIV, gsl_matrix * DB, gsl_bspline_workspace * W,
gsl_bspline_deriv_workspace * DW)
This function evaluates all B-spline basis function derivatives of
orders 0 through nderiv (inclusive) at the position X and stores
them in the matrix DB. The (i,j)-th element of DB is
d^jB_i(x)/dx^j. The matrix DB must be of size n = nbreak + k - 2
by nderiv + 1. The value n may also be obtained by calling
`gsl_bspline_ncoeffs'. Note that function evaluations are
included as the zeroth order derivatives in DB. Computing all the
basis function derivatives at once is more efficient than
computing them individually, due to the nature of the defining
recurrence relation.
-- Function: int gsl_bspline_deriv_eval_nonzero (const double X, const
size_t NDERIV, gsl_matrix * DB, size_t * ISTART, size_t *
IEND, gsl_bspline_workspace * W, gsl_bspline_deriv_workspace
* DW)
This function evaluates all potentially nonzero B-spline basis
function derivatives of orders 0 through nderiv (inclusive) at the
position X and stores them in the matrix DB. The (i,j)-th element
of DB is d^j/dx^j B_(istart+i)(x). The last row of DB contains
d^j/dx^j B_(iend)(x). The matrix DB must be of size k by at least
nderiv + 1. Note that function evaluations are included as the
zeroth order derivatives in DB. By returning only the nonzero
basis functions, this function allows quantities involving linear
combinations of the B_i(x) and their derivatives to be computed
without unnecessary terms.
File: gsl-ref.info, Node: Obtaining Greville abscissae for B-spline basis functions, Next: Example programs for B-splines, Prev: Evaluation of B-spline basis function derivatives, Up: Basis Splines
39.6 Greville abscissae
=======================
The Greville abscissae are defined to be the mean location of k-1
consecutive knots in the knot vector for each basis spline function of
order k. Note that the first and last knots in the knot vector are
excluded when applying this definition; consequently there are
`gsl_bspline_ncoeffs' Greville abscissae. They are often used in
B-spline collocation applications and may also be called
Marsden-Schoenberg points.
The above definition is undefined for k=1. The implementation
chooses to return interval midpoints in the degenerate k=1 case.
 -- Function: double gsl_bspline_greville_abscissa (size_t I,
          gsl_bspline_workspace * W)
Returns the location of the i-th Greville abscissa for the given
spline basis. Here, i = 0, ..., `gsl_bspline_ncoeffs(w) - 1'.
File: gsl-ref.info, Node: Example programs for B-splines, Next: References and Further Reading, Prev: Obtaining Greville abscissae for B-spline basis functions, Up: Basis Splines
39.7 Examples
=============
The following program computes a linear least squares fit to data using
cubic B-spline basis functions with uniform breakpoints. The data is
generated from the curve y(x) = \cos(x) \exp(-x/10) on the interval [0,
15] with Gaussian noise added.
     #include <stdio.h>
     #include <stdlib.h>
     #include <math.h>
     #include <gsl/gsl_bspline.h>
     #include <gsl/gsl_multifit.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_statistics.h>
/* number of data points to fit */
#define N 200
/* number of fit coefficients */
#define NCOEFFS 12
/* nbreak = ncoeffs + 2 - k = ncoeffs - 2 since k = 4 */
#define NBREAK (NCOEFFS - 2)
int
main (void)
{
const size_t n = N;
const size_t ncoeffs = NCOEFFS;
const size_t nbreak = NBREAK;
size_t i, j;
gsl_bspline_workspace *bw;
gsl_vector *B;
double dy;
gsl_rng *r;
gsl_vector *c, *w;
gsl_vector *x, *y;
gsl_matrix *X, *cov;
gsl_multifit_linear_workspace *mw;
double chisq, Rsq, dof, tss;
gsl_rng_env_setup();
r = gsl_rng_alloc(gsl_rng_default);
/* allocate a cubic bspline workspace (k = 4) */
bw = gsl_bspline_alloc(4, nbreak);
B = gsl_vector_alloc(ncoeffs);
x = gsl_vector_alloc(n);
y = gsl_vector_alloc(n);
X = gsl_matrix_alloc(n, ncoeffs);
c = gsl_vector_alloc(ncoeffs);
w = gsl_vector_alloc(n);
cov = gsl_matrix_alloc(ncoeffs, ncoeffs);
mw = gsl_multifit_linear_alloc(n, ncoeffs);
printf("#m=0,S=0\n");
/* this is the data to be fitted */
for (i = 0; i < n; ++i)
{
double sigma;
double xi = (15.0 / (N - 1)) * i;
double yi = cos(xi) * exp(-0.1 * xi);
sigma = 0.1 * yi;
dy = gsl_ran_gaussian(r, sigma);
yi += dy;
gsl_vector_set(x, i, xi);
gsl_vector_set(y, i, yi);
gsl_vector_set(w, i, 1.0 / (sigma * sigma));
printf("%f %f\n", xi, yi);
}
/* use uniform breakpoints on [0, 15] */
gsl_bspline_knots_uniform(0.0, 15.0, bw);
/* construct the fit matrix X */
for (i = 0; i < n; ++i)
{
double xi = gsl_vector_get(x, i);
/* compute B_j(xi) for all j */
gsl_bspline_eval(xi, B, bw);
/* fill in row i of X */
for (j = 0; j < ncoeffs; ++j)
{
double Bj = gsl_vector_get(B, j);
gsl_matrix_set(X, i, j, Bj);
}
}
/* do the fit */
gsl_multifit_wlinear(X, w, y, c, cov, &chisq, mw);
dof = n - ncoeffs;
tss = gsl_stats_wtss(w->data, 1, y->data, 1, y->size);
Rsq = 1.0 - chisq / tss;
fprintf(stderr, "chisq/dof = %e, Rsq = %f\n",
chisq / dof, Rsq);
/* output the smoothed curve */
{
double xi, yi, yerr;
printf("#m=1,S=0\n");
for (xi = 0.0; xi < 15.0; xi += 0.1)
{
gsl_bspline_eval(xi, B, bw);
gsl_multifit_linear_est(B, c, cov, &yi, &yerr);
printf("%f %f\n", xi, yi);
}
}
gsl_rng_free(r);
gsl_bspline_free(bw);
gsl_vector_free(B);
gsl_vector_free(x);
gsl_vector_free(y);
gsl_matrix_free(X);
gsl_vector_free(c);
gsl_vector_free(w);
gsl_matrix_free(cov);
gsl_multifit_linear_free(mw);
return 0;
} /* main() */
The output can be plotted with GNU `graph'.
$ ./a.out > bspline.dat
chisq/dof = 1.118217e+00, Rsq = 0.989771
$ graph -T ps -X x -Y y -x 0 15 -y -1 1.3 < bspline.dat > bspline.ps
File: gsl-ref.info, Node: References and Further Reading, Prev: Example programs for B-splines, Up: Basis Splines
39.8 References and Further Reading
===================================
Further information on the algorithms described in this section can be
found in the following book,
C. de Boor, `A Practical Guide to Splines' (1978), Springer-Verlag,
ISBN 0-387-90356-9.
Further information on Greville abscissae and B-spline collocation
can be found in the following paper,
Richard W. Johnson, Higher order B-spline collocation at the
Greville abscissae. `Applied Numerical Mathematics'. vol. 52,
2005, 63-75.
A large collection of B-spline routines is available in the PPPACK
library available at `http://www.netlib.org/pppack', which is also part
of SLATEC.
File: gsl-ref.info, Node: Physical Constants, Next: IEEE floating-point arithmetic, Prev: Basis Splines, Up: Top
40 Physical Constants
*********************
This chapter describes macros for the values of physical constants, such
as the speed of light, c, and gravitational constant, G. The values
are available in different unit systems, including the standard MKSA
system (meters, kilograms, seconds, amperes) and the CGSM system
(centimeters, grams, seconds, gauss), which is commonly used in
astronomy.
The definitions of constants in the MKSA system are available in the
file `gsl_const_mksa.h'. The constants in the CGSM system are defined
in `gsl_const_cgsm.h'. Dimensionless constants, such as the fine
structure constant, which are pure numbers are defined in
`gsl_const_num.h'.
* Menu:
* Fundamental Constants::
* Astronomy and Astrophysics::
* Atomic and Nuclear Physics::
* Measurement of Time::
* Imperial Units::
* Speed and Nautical Units::
* Printers Units::
* Volume Area and Length::
* Mass and Weight::
* Thermal Energy and Power::
* Pressure::
* Viscosity::
* Light and Illumination::
* Radioactivity::
* Force and Energy::
* Prefixes::
* Physical Constant Examples::
* Physical Constant References and Further Reading::
The full list of constants is described briefly below. Consult the
header files themselves for the values of the constants used in the
library.
File: gsl-ref.info, Node: Fundamental Constants, Next: Astronomy and Astrophysics, Up: Physical Constants
40.1 Fundamental Constants
==========================
`GSL_CONST_MKSA_SPEED_OF_LIGHT'
The speed of light in vacuum, c.
`GSL_CONST_MKSA_VACUUM_PERMEABILITY'
The permeability of free space, \mu_0. This constant is defined in
the MKSA system only.
`GSL_CONST_MKSA_VACUUM_PERMITTIVITY'
The permittivity of free space, \epsilon_0. This constant is
defined in the MKSA system only.
`GSL_CONST_MKSA_PLANCKS_CONSTANT_H'
Planck's constant, h.
`GSL_CONST_MKSA_PLANCKS_CONSTANT_HBAR'
Planck's constant divided by 2\pi, \hbar.
`GSL_CONST_NUM_AVOGADRO'
Avogadro's number, N_a.
`GSL_CONST_MKSA_FARADAY'
The molar charge of 1 Faraday.
`GSL_CONST_MKSA_BOLTZMANN'
The Boltzmann constant, k.
`GSL_CONST_MKSA_MOLAR_GAS'
The molar gas constant, R_0.
`GSL_CONST_MKSA_STANDARD_GAS_VOLUME'
The standard gas volume, V_0.
`GSL_CONST_MKSA_STEFAN_BOLTZMANN_CONSTANT'
The Stefan-Boltzmann radiation constant, \sigma.
`GSL_CONST_MKSA_GAUSS'
The magnetic field of 1 Gauss.
File: gsl-ref.info, Node: Astronomy and Astrophysics, Next: Atomic and Nuclear Physics, Prev: Fundamental Constants, Up: Physical Constants
40.2 Astronomy and Astrophysics
===============================
`GSL_CONST_MKSA_ASTRONOMICAL_UNIT'
The length of 1 astronomical unit (mean earth-sun distance), au.
`GSL_CONST_MKSA_GRAVITATIONAL_CONSTANT'
The gravitational constant, G.
`GSL_CONST_MKSA_LIGHT_YEAR'
The distance of 1 light-year, ly.
`GSL_CONST_MKSA_PARSEC'
The distance of 1 parsec, pc.
`GSL_CONST_MKSA_GRAV_ACCEL'
The standard gravitational acceleration on Earth, g.
`GSL_CONST_MKSA_SOLAR_MASS'
The mass of the Sun.
File: gsl-ref.info, Node: Atomic and Nuclear Physics, Next: Measurement of Time, Prev: Astronomy and Astrophysics, Up: Physical Constants
40.3 Atomic and Nuclear Physics
===============================
`GSL_CONST_MKSA_ELECTRON_CHARGE'
The charge of the electron, e.
`GSL_CONST_MKSA_ELECTRON_VOLT'
The energy of 1 electron volt, eV.
`GSL_CONST_MKSA_UNIFIED_ATOMIC_MASS'
The unified atomic mass, amu.
`GSL_CONST_MKSA_MASS_ELECTRON'
The mass of the electron, m_e.
`GSL_CONST_MKSA_MASS_MUON'
The mass of the muon, m_\mu.
`GSL_CONST_MKSA_MASS_PROTON'
The mass of the proton, m_p.
`GSL_CONST_MKSA_MASS_NEUTRON'
The mass of the neutron, m_n.
`GSL_CONST_NUM_FINE_STRUCTURE'
The electromagnetic fine structure constant \alpha.
`GSL_CONST_MKSA_RYDBERG'
The Rydberg constant, Ry, in units of energy. This is related to
the Rydberg inverse wavelength R_\infty by Ry = h c R_\infty.
`GSL_CONST_MKSA_BOHR_RADIUS'
The Bohr radius, a_0.
`GSL_CONST_MKSA_ANGSTROM'
The length of 1 angstrom.
`GSL_CONST_MKSA_BARN'
The area of 1 barn.
`GSL_CONST_MKSA_BOHR_MAGNETON'
The Bohr Magneton, \mu_B.
`GSL_CONST_MKSA_NUCLEAR_MAGNETON'
The Nuclear Magneton, \mu_N.
`GSL_CONST_MKSA_ELECTRON_MAGNETIC_MOMENT'
The absolute value of the magnetic moment of the electron, \mu_e.
The physical magnetic moment of the electron is negative.
`GSL_CONST_MKSA_PROTON_MAGNETIC_MOMENT'
The magnetic moment of the proton, \mu_p.
`GSL_CONST_MKSA_THOMSON_CROSS_SECTION'
The Thomson cross section, \sigma_T.
`GSL_CONST_MKSA_DEBYE'
The electric dipole moment of 1 Debye, D.
File: gsl-ref.info, Node: Measurement of Time, Next: Imperial Units, Prev: Atomic and Nuclear Physics, Up: Physical Constants
40.4 Measurement of Time
========================
`GSL_CONST_MKSA_MINUTE'
The number of seconds in 1 minute.
`GSL_CONST_MKSA_HOUR'
The number of seconds in 1 hour.
`GSL_CONST_MKSA_DAY'
The number of seconds in 1 day.
`GSL_CONST_MKSA_WEEK'
The number of seconds in 1 week.
File: gsl-ref.info, Node: Imperial Units, Next: Speed and Nautical Units, Prev: Measurement of Time, Up: Physical Constants
40.5 Imperial Units
===================
`GSL_CONST_MKSA_INCH'
The length of 1 inch.
`GSL_CONST_MKSA_FOOT'
The length of 1 foot.
`GSL_CONST_MKSA_YARD'
The length of 1 yard.
`GSL_CONST_MKSA_MILE'
The length of 1 mile.
`GSL_CONST_MKSA_MIL'
The length of 1 mil (1/1000th of an inch).
File: gsl-ref.info, Node: Speed and Nautical Units, Next: Printers Units, Prev: Imperial Units, Up: Physical Constants
40.6 Speed and Nautical Units
=============================
`GSL_CONST_MKSA_KILOMETERS_PER_HOUR'
The speed of 1 kilometer per hour.
`GSL_CONST_MKSA_MILES_PER_HOUR'
The speed of 1 mile per hour.
`GSL_CONST_MKSA_NAUTICAL_MILE'
The length of 1 nautical mile.
`GSL_CONST_MKSA_FATHOM'
The length of 1 fathom.
`GSL_CONST_MKSA_KNOT'
The speed of 1 knot.
File: gsl-ref.info, Node: Printers Units, Next: Volume Area and Length, Prev: Speed and Nautical Units, Up: Physical Constants
40.7 Printers Units
===================
`GSL_CONST_MKSA_POINT'
The length of 1 printer's point (1/72 inch).
`GSL_CONST_MKSA_TEXPOINT'
The length of 1 TeX point (1/72.27 inch).
File: gsl-ref.info, Node: Volume Area and Length, Next: Mass and Weight, Prev: Printers Units, Up: Physical Constants
40.8 Volume, Area and Length
============================
`GSL_CONST_MKSA_MICRON'
The length of 1 micron.
`GSL_CONST_MKSA_HECTARE'
The area of 1 hectare.
`GSL_CONST_MKSA_ACRE'
The area of 1 acre.
`GSL_CONST_MKSA_LITER'
The volume of 1 liter.
`GSL_CONST_MKSA_US_GALLON'
The volume of 1 US gallon.
`GSL_CONST_MKSA_CANADIAN_GALLON'
The volume of 1 Canadian gallon.
`GSL_CONST_MKSA_UK_GALLON'
The volume of 1 UK gallon.
`GSL_CONST_MKSA_QUART'
The volume of 1 quart.
`GSL_CONST_MKSA_PINT'
The volume of 1 pint.
File: gsl-ref.info, Node: Mass and Weight, Next: Thermal Energy and Power, Prev: Volume Area and Length, Up: Physical Constants
40.9 Mass and Weight
====================
`GSL_CONST_MKSA_POUND_MASS'
The mass of 1 pound.
`GSL_CONST_MKSA_OUNCE_MASS'
The mass of 1 ounce.
`GSL_CONST_MKSA_TON'
The mass of 1 ton.
`GSL_CONST_MKSA_METRIC_TON'
The mass of 1 metric ton (1000 kg).
`GSL_CONST_MKSA_UK_TON'
The mass of 1 UK ton.
`GSL_CONST_MKSA_TROY_OUNCE'
The mass of 1 troy ounce.
`GSL_CONST_MKSA_CARAT'
The mass of 1 carat.
`GSL_CONST_MKSA_GRAM_FORCE'
The force of 1 gram weight.
`GSL_CONST_MKSA_POUND_FORCE'
The force of 1 pound weight.
`GSL_CONST_MKSA_KILOPOUND_FORCE'
The force of 1 kilopound weight.
`GSL_CONST_MKSA_POUNDAL'
The force of 1 poundal.
File: gsl-ref.info, Node: Thermal Energy and Power, Next: Pressure, Prev: Mass and Weight, Up: Physical Constants
40.10 Thermal Energy and Power
==============================
`GSL_CONST_MKSA_CALORIE'
The energy of 1 calorie.
`GSL_CONST_MKSA_BTU'
The energy of 1 British Thermal Unit, btu.
`GSL_CONST_MKSA_THERM'
The energy of 1 Therm.
`GSL_CONST_MKSA_HORSEPOWER'
The power of 1 horsepower.
File: gsl-ref.info, Node: Pressure, Next: Viscosity, Prev: Thermal Energy and Power, Up: Physical Constants
40.11 Pressure
==============
`GSL_CONST_MKSA_BAR'
The pressure of 1 bar.
`GSL_CONST_MKSA_STD_ATMOSPHERE'
The pressure of 1 standard atmosphere.
`GSL_CONST_MKSA_TORR'
The pressure of 1 torr.
`GSL_CONST_MKSA_METER_OF_MERCURY'
The pressure of 1 meter of mercury.
`GSL_CONST_MKSA_INCH_OF_MERCURY'
The pressure of 1 inch of mercury.
`GSL_CONST_MKSA_INCH_OF_WATER'
The pressure of 1 inch of water.
`GSL_CONST_MKSA_PSI'
The pressure of 1 pound per square inch.
File: gsl-ref.info, Node: Viscosity, Next: Light and Illumination, Prev: Pressure, Up: Physical Constants
40.12 Viscosity
===============
`GSL_CONST_MKSA_POISE'
The dynamic viscosity of 1 poise.
`GSL_CONST_MKSA_STOKES'
The kinematic viscosity of 1 stokes.
File: gsl-ref.info, Node: Light and Illumination, Next: Radioactivity, Prev: Viscosity, Up: Physical Constants
40.13 Light and Illumination
============================
`GSL_CONST_MKSA_STILB'
The luminance of 1 stilb.
`GSL_CONST_MKSA_LUMEN'
The luminous flux of 1 lumen.
`GSL_CONST_MKSA_LUX'
The illuminance of 1 lux.
`GSL_CONST_MKSA_PHOT'
The illuminance of 1 phot.
`GSL_CONST_MKSA_FOOTCANDLE'
The illuminance of 1 footcandle.
`GSL_CONST_MKSA_LAMBERT'
The luminance of 1 lambert.
`GSL_CONST_MKSA_FOOTLAMBERT'
The luminance of 1 footlambert.
File: gsl-ref.info, Node: Radioactivity, Next: Force and Energy, Prev: Light and Illumination, Up: Physical Constants
40.14 Radioactivity
===================
`GSL_CONST_MKSA_CURIE'
The activity of 1 curie.
`GSL_CONST_MKSA_ROENTGEN'
The exposure of 1 roentgen.
`GSL_CONST_MKSA_RAD'
The absorbed dose of 1 rad.
File: gsl-ref.info, Node: Force and Energy, Next: Prefixes, Prev: Radioactivity, Up: Physical Constants
40.15 Force and Energy
======================
`GSL_CONST_MKSA_NEWTON'
The SI unit of force, 1 Newton.
`GSL_CONST_MKSA_DYNE'
The force of 1 Dyne = 10^-5 Newton.
`GSL_CONST_MKSA_JOULE'
The SI unit of energy, 1 Joule.
`GSL_CONST_MKSA_ERG'
The energy 1 erg = 10^-7 Joule.
File: gsl-ref.info, Node: Prefixes, Next: Physical Constant Examples, Prev: Force and Energy, Up: Physical Constants
40.16 Prefixes
==============
These constants are dimensionless scaling factors.
`GSL_CONST_NUM_YOTTA'
10^24
`GSL_CONST_NUM_ZETTA'
10^21
`GSL_CONST_NUM_EXA'
10^18
`GSL_CONST_NUM_PETA'
10^15
`GSL_CONST_NUM_TERA'
10^12
`GSL_CONST_NUM_GIGA'
10^9
`GSL_CONST_NUM_MEGA'
10^6
`GSL_CONST_NUM_KILO'
10^3
`GSL_CONST_NUM_MILLI'
10^-3
`GSL_CONST_NUM_MICRO'
10^-6
`GSL_CONST_NUM_NANO'
10^-9
`GSL_CONST_NUM_PICO'
10^-12
`GSL_CONST_NUM_FEMTO'
10^-15
`GSL_CONST_NUM_ATTO'
10^-18
`GSL_CONST_NUM_ZEPTO'
10^-21
`GSL_CONST_NUM_YOCTO'
10^-24
File: gsl-ref.info, Node: Physical Constant Examples, Next: Physical Constant References and Further Reading, Prev: Prefixes, Up: Physical Constants
40.17 Examples
==============
The following program demonstrates the use of the physical constants in
a calculation. In this case, the goal is to calculate the range of
light-travel times from Earth to Mars.
The required data is the average distance of each planet from the
Sun in astronomical units (the eccentricities and inclinations of the
orbits will be neglected for the purposes of this calculation). The
average radius of the orbit of Mars is 1.52 astronomical units, and for
the orbit of Earth it is 1 astronomical unit (by definition). These
values are combined with the MKSA values of the constants for the speed
of light and the length of an astronomical unit to produce a result for
the shortest and longest light-travel times in seconds. The figures are
converted into minutes before being displayed.
     #include <stdio.h>
     #include <gsl/gsl_const_mksa.h>
int
main (void)
{
double c = GSL_CONST_MKSA_SPEED_OF_LIGHT;
double au = GSL_CONST_MKSA_ASTRONOMICAL_UNIT;
double minutes = GSL_CONST_MKSA_MINUTE;
/* distance stored in meters */
double r_earth = 1.00 * au;
double r_mars = 1.52 * au;
double t_min, t_max;
t_min = (r_mars - r_earth) / c;
t_max = (r_mars + r_earth) / c;
printf ("light travel time from Earth to Mars:\n");
printf ("minimum = %.1f minutes\n", t_min / minutes);
printf ("maximum = %.1f minutes\n", t_max / minutes);
return 0;
}
Here is the output from the program,
light travel time from Earth to Mars:
minimum = 4.3 minutes
maximum = 21.0 minutes
File: gsl-ref.info, Node: Physical Constant References and Further Reading, Prev: Physical Constant Examples, Up: Physical Constants
40.18 References and Further Reading
====================================
The authoritative sources for physical constants are the 2006 CODATA
recommended values, published in the article below. Further information
on the values of physical constants is also available from the NIST
website.
P.J. Mohr, B.N. Taylor, D.B. Newell, "CODATA Recommended Values of
the Fundamental Physical Constants: 2006", Reviews of Modern
Physics, 80(2), pp. 633-730 (2008).
`http://www.physics.nist.gov/cuu/Constants/index.html'
`http://physics.nist.gov/Pubs/SP811/appenB9.html'
File: gsl-ref.info, Node: IEEE floating-point arithmetic, Next: Debugging Numerical Programs, Prev: Physical Constants, Up: Top
41 IEEE floating-point arithmetic
*********************************
This chapter describes functions for examining the representation of
floating point numbers and controlling the floating point environment of
your program. The functions described in this chapter are declared in
the header file `gsl_ieee_utils.h'.
* Menu:
* Representation of floating point numbers::
* Setting up your IEEE environment::
* IEEE References and Further Reading::
File: gsl-ref.info, Node: Representation of floating point numbers, Next: Setting up your IEEE environment, Up: IEEE floating-point arithmetic
41.1 Representation of floating point numbers
=============================================
The IEEE Standard for Binary Floating-Point Arithmetic defines binary
formats for single and double precision numbers. Each number is
composed of three parts: a "sign bit" (s), an "exponent" (E) and a
"fraction" (f). The numerical value of the combination (s,E,f) is
given by the following formula,
(-1)^s (1.fffff...) 2^E
The sign bit is either zero or one. The exponent ranges from a minimum
value E_min to a maximum value E_max depending on the precision. The
exponent is converted to an unsigned number e, known as the "biased
exponent", for storage by adding a "bias" parameter, e = E + bias. The
sequence fffff... represents the digits of the binary fraction f. The
binary digits are stored in "normalized form", by adjusting the
exponent to give a leading digit of 1. Since the leading digit is
always 1 for normalized numbers it is assumed implicitly and does not
have to be stored. Numbers smaller than 2^(E_min) are stored in
"denormalized form" with a leading zero,
(-1)^s (0.fffff...) 2^(E_min)
This allows gradual underflow down to 2^(E_min - p) for p bits of
precision. A zero is encoded with the special exponent of E_min - 1
and infinities with the exponent of E_max + 1.
The format for single precision numbers uses 32 bits divided in the
following way,
seeeeeeeefffffffffffffffffffffff
s = sign bit, 1 bit
e = exponent, 8 bits (E_min=-126, E_max=127, bias=127)
f = fraction, 23 bits
The format for double precision numbers uses 64 bits divided in the
following way,
seeeeeeeeeeeffffffffffffffffffffffffffffffffffffffffffffffffffff
s = sign bit, 1 bit
e = exponent, 11 bits (E_min=-1022, E_max=1023, bias=1023)
f = fraction, 52 bits
It is often useful to be able to investigate the behavior of a
calculation at the bit-level and the library provides functions for
printing the IEEE representations in a human-readable form.
-- Function: void gsl_ieee_fprintf_float (FILE * STREAM, const float *
X)
-- Function: void gsl_ieee_fprintf_double (FILE * STREAM, const double
* X)
These functions output a formatted version of the IEEE
floating-point number pointed to by X to the stream STREAM. A
pointer is used to pass the number indirectly, to avoid any
undesired promotion from `float' to `double'. The output takes
one of the following forms,
`NaN'
the Not-a-Number symbol
`Inf, -Inf'
positive or negative infinity
`1.fffff...*2^E, -1.fffff...*2^E'
a normalized floating point number
`0.fffff...*2^E, -0.fffff...*2^E'
a denormalized floating point number
`0, -0'
positive or negative zero
The output can be used directly in GNU Emacs Calc mode by
preceding it with `2#' to indicate binary.
-- Function: void gsl_ieee_printf_float (const float * X)
-- Function: void gsl_ieee_printf_double (const double * X)
These functions output a formatted version of the IEEE
floating-point number pointed to by X to the stream `stdout'.
The following program demonstrates the use of the functions by printing
the single and double precision representations of the fraction 1/3.
For comparison the representation of the value promoted from single to
double precision is also printed.
     #include <stdio.h>
     #include <gsl/gsl_ieee_utils.h>
int
main (void)
{
float f = 1.0/3.0;
double d = 1.0/3.0;
double fd = f; /* promote from float to double */
printf (" f="); gsl_ieee_printf_float(&f);
printf ("\n");
printf ("fd="); gsl_ieee_printf_double(&fd);
printf ("\n");
printf (" d="); gsl_ieee_printf_double(&d);
printf ("\n");
return 0;
}
The binary representation of 1/3 is 0.01010101... . The output below
shows that the IEEE format normalizes this fraction to give a leading
digit of 1,
f= 1.01010101010101010101011*2^-2
fd= 1.0101010101010101010101100000000000000000000000000000*2^-2
d= 1.0101010101010101010101010101010101010101010101010101*2^-2
The output also shows that a single-precision number is promoted to
double-precision by adding zeros in the binary representation.
File: gsl-ref.info, Node: Setting up your IEEE environment, Next: IEEE References and Further Reading, Prev: Representation of floating point numbers, Up: IEEE floating-point arithmetic
41.2 Setting up your IEEE environment
=====================================
The IEEE standard defines several "modes" for controlling the behavior
of floating point operations. These modes specify the important
properties of computer arithmetic: the direction used for rounding (e.g.
whether numbers should be rounded up, down or to the nearest number),
the rounding precision and how the program should handle arithmetic
exceptions, such as division by zero.
Many of these features can now be controlled via standard functions
such as `fpsetround', which should be used whenever they are available.
Unfortunately in the past there has been no universal API for
controlling their behavior--each system has had its own low-level way
of accessing them. To help you write portable programs GSL allows you
to specify modes in a platform-independent way using the environment
variable `GSL_IEEE_MODE'. The library then takes care of all the
necessary machine-specific initializations for you when you call the
function `gsl_ieee_env_setup'.
-- Function: void gsl_ieee_env_setup ()
This function reads the environment variable `GSL_IEEE_MODE' and
attempts to set up the corresponding specified IEEE modes. The
environment variable should be a list of keywords, separated by
commas, like this,
`GSL_IEEE_MODE' = "KEYWORD,KEYWORD,..."
where KEYWORD is one of the following mode-names,
`single-precision'
`double-precision'
`extended-precision'
`round-to-nearest'
`round-down'
`round-up'
`round-to-zero'
`mask-all'
`mask-invalid'
`mask-denormalized'
`mask-division-by-zero'
`mask-overflow'
`mask-underflow'
`trap-inexact'
`trap-common'
If `GSL_IEEE_MODE' is empty or undefined then the function returns
immediately and no attempt is made to change the system's IEEE
mode. When the modes from `GSL_IEEE_MODE' are turned on the
function prints a short message showing the new settings to remind
you that the results of the program will be affected.
If the requested modes are not supported by the platform being
used then the function calls the error handler and returns an
error code of `GSL_EUNSUP'.
When options are specified using this method, the resulting mode is
based on a default setting of the highest available precision
(double precision or extended precision, depending on the
platform) in round-to-nearest mode, with all exceptions enabled
apart from the INEXACT exception. The INEXACT exception is
generated whenever rounding occurs, so it must generally be
disabled in typical scientific calculations. All other
floating-point exceptions are enabled by default, including
underflows and the use of denormalized numbers, for safety. They
can be disabled with the individual `mask-' settings or together
using `mask-all'.
The following adjusted combination of modes is convenient for many
purposes,
GSL_IEEE_MODE="double-precision,"\
"mask-underflow,"\
"mask-denormalized"
This choice ignores any errors relating to small numbers (either
denormalized, or underflowing to zero) but traps overflows,
division by zero and invalid operations.
Note that on the x86 series of processors this function sets both
the original x87 mode and the newer MXCSR mode, which controls SSE
floating-point operations. The SSE floating-point units do not
have a precision-control bit, and always work in double-precision.
The single-precision and extended-precision keywords have no
effect in this case.
To demonstrate the effects of different rounding modes consider the
following program which computes e, the base of natural logarithms, by
summing a rapidly-decreasing series,
e = 1 + 1/2! + 1/3! + 1/4! + ...
= 2.71828182846...
     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_ieee_utils.h>

     int
     main (void)
     {
       double x = 1, oldsum = 0, sum = 0;
       int i = 0;

       gsl_ieee_env_setup (); /* read GSL_IEEE_MODE */

       do
         {
           i++;
           oldsum = sum;
           sum += x;
           x = x / i;

           printf ("i=%2d sum=%.18f error=%g\n",
                   i, sum, sum - M_E);

           if (i > 30)
             break;
         }
       while (sum != oldsum);

       return 0;
     }
Here are the results of running the program in `round-to-nearest' mode.
This is the IEEE default so it isn't really necessary to specify it
here,
$ GSL_IEEE_MODE="round-to-nearest" ./a.out
i= 1 sum=1.000000000000000000 error=-1.71828
i= 2 sum=2.000000000000000000 error=-0.718282
....
i=18 sum=2.718281828459045535 error=4.44089e-16
i=19 sum=2.718281828459045535 error=4.44089e-16
After nineteen terms the sum converges to within 4 \times 10^-16 of the
correct value. If we now change the rounding mode to `round-down' the
final result is less accurate,
$ GSL_IEEE_MODE="round-down" ./a.out
i= 1 sum=1.000000000000000000 error=-1.71828
....
i=19 sum=2.718281828459041094 error=-3.9968e-15
The result is about 4 \times 10^-15 below the correct value, an order
of magnitude worse than the result obtained in the `round-to-nearest'
mode.
If we change the rounding mode to `round-up' then the final result is
higher than the correct value (when we add each term to the sum the
final result is always rounded up, which increases the sum by at least
one tick until the added term underflows to zero).  To avoid this
problem we would need to use a safer convergence criterion, such as
`while (fabs(sum - oldsum) > epsilon)', with a suitably chosen value
of epsilon.
Finally we can see the effect of computing the sum using
single-precision rounding, in the default `round-to-nearest' mode. In
this case the program thinks it is still using double precision numbers
but the CPU rounds the result of each floating point operation to
single-precision accuracy. This simulates the effect of writing the
program using single-precision `float' variables instead of `double'
variables.  The iteration stops after about half as many terms and the
final result is much less accurate,
$ GSL_IEEE_MODE="single-precision" ./a.out
....
i=12 sum=2.718281984329223633 error=1.5587e-07
with an error of O(10^-7), which corresponds to single precision
accuracy (about 1 part in 10^7). Continuing the iterations further
does not decrease the error because all the subsequent results are
rounded to the same value.
File: gsl-ref.info, Node: IEEE References and Further Reading, Prev: Setting up your IEEE environment, Up: IEEE floating-point arithmetic
41.3 References and Further Reading
===================================
The reference for the IEEE standard is,
ANSI/IEEE Std 754-1985, IEEE Standard for Binary Floating-Point
Arithmetic.
A more pedagogical introduction to the standard can be found in the
following paper,
David Goldberg: What Every Computer Scientist Should Know About
Floating-Point Arithmetic. `ACM Computing Surveys', Vol. 23, No. 1
(March 1991), pages 5-48.
Corrigendum: `ACM Computing Surveys', Vol. 23, No. 3 (September
1991), page 413. and see also the sections by B. A. Wichmann and
Charles B. Dunham in Surveyor's Forum: "What Every Computer
Scientist Should Know About Floating-Point Arithmetic". `ACM
Computing Surveys', Vol. 24, No. 3 (September 1992), page 319.
A detailed textbook on IEEE arithmetic and its practical use is
available from SIAM Press,
Michael L. Overton, `Numerical Computing with IEEE Floating Point
Arithmetic', SIAM Press, ISBN 0898715717.
File: gsl-ref.info, Node: Debugging Numerical Programs, Next: Contributors to GSL, Prev: IEEE floating-point arithmetic, Up: Top
Appendix A Debugging Numerical Programs
***************************************
This chapter describes some tips and tricks for debugging numerical
programs which use GSL.
* Menu:
* Using gdb::
* Examining floating point registers::
* Handling floating point exceptions::
* GCC warning options for numerical programs::
* Debugging References::
File: gsl-ref.info, Node: Using gdb, Next: Examining floating point registers, Up: Debugging Numerical Programs
A.1 Using gdb
=============
Any errors reported by the library are passed to the function
`gsl_error'. By running your programs under gdb and setting a
breakpoint in this function you can automatically catch any library
errors. You can add a breakpoint for every session by putting
break gsl_error
into your `.gdbinit' file in the directory where your program is
started.
If the breakpoint catches an error then you can use a backtrace
(`bt') to see the call-tree, and the arguments which possibly caused
the error. By moving up into the calling function you can investigate
the values of variables at that point. Here is an example from the
program `fft/test_trap', which contains the following line,
status = gsl_fft_complex_wavetable_alloc (0, &complex_wavetable);
The function `gsl_fft_complex_wavetable_alloc' takes the length of an
FFT as its first argument. When this line is executed an error will be
generated because the length of an FFT is not allowed to be zero.
To debug this problem we start `gdb', using the file `.gdbinit' to
define a breakpoint in `gsl_error',
$ gdb test_trap
GDB is free software and you are welcome to distribute copies
of it under certain conditions; type "show copying" to see
the conditions. There is absolutely no warranty for GDB;
type "show warranty" for details. GDB 4.16 (i586-debian-linux),
Copyright 1996 Free Software Foundation, Inc.
Breakpoint 1 at 0x8050b1e: file error.c, line 14.
When we run the program this breakpoint catches the error and shows the
reason for it.
(gdb) run
Starting program: test_trap
Breakpoint 1, gsl_error (reason=0x8052b0d
"length n must be positive integer",
file=0x8052b04 "c_init.c", line=108, gsl_errno=1)
at error.c:14
14 if (gsl_error_handler)
The first argument of `gsl_error' is always a string describing the
error. Now we can look at the backtrace to see what caused the problem,
(gdb) bt
#0 gsl_error (reason=0x8052b0d
"length n must be positive integer",
file=0x8052b04 "c_init.c", line=108, gsl_errno=1)
at error.c:14
#1 0x8049376 in gsl_fft_complex_wavetable_alloc (n=0,
wavetable=0xbffff778) at c_init.c:108
#2 0x8048a00 in main (argc=1, argv=0xbffff9bc)
at test_trap.c:94
#3 0x80488be in ___crt_dummy__ ()
We can see that the error was generated in the function
`gsl_fft_complex_wavetable_alloc' when it was called with an argument
of N=0. The original call came from line 94 in the file `test_trap.c'.
By moving up to the level of the original call we can find the line
that caused the error,
(gdb) up
#1 0x8049376 in gsl_fft_complex_wavetable_alloc (n=0,
wavetable=0xbffff778) at c_init.c:108
108 GSL_ERROR ("length n must be positive integer", GSL_EDOM);
(gdb) up
#2 0x8048a00 in main (argc=1, argv=0xbffff9bc)
at test_trap.c:94
94 status = gsl_fft_complex_wavetable_alloc (0,
&complex_wavetable);
Thus we have found the line that caused the problem. From this point we
could also print out the values of other variables such as
`complex_wavetable'.
File: gsl-ref.info, Node: Examining floating point registers, Next: Handling floating point exceptions, Prev: Using gdb, Up: Debugging Numerical Programs
A.2 Examining floating point registers
======================================
The contents of floating point registers can be examined using the
command `info float' (on supported platforms).
(gdb) info float
st0: 0xc4018b895aa17a945000 Valid Normal -7.838871e+308
st1: 0x3ff9ea3f50e4d7275000 Valid Normal 0.0285946
st2: 0x3fe790c64ce27dad4800 Valid Normal 6.7415931e-08
st3: 0x3ffaa3ef0df6607d7800 Spec Normal 0.0400229
st4: 0x3c028000000000000000 Valid Normal 4.4501477e-308
st5: 0x3ffef5412c22219d9000 Zero Normal 0.9580257
st6: 0x3fff8000000000000000 Valid Normal 1
st7: 0xc4028b65a1f6d243c800 Valid Normal -1.566206e+309
fctrl: 0x0272 53 bit; NEAR; mask DENOR UNDER LOS;
fstat: 0xb9ba flags 0001; top 7; excep DENOR OVERF UNDER LOS
ftag: 0x3fff
fip: 0x08048b5c
fcs: 0x051a0023
fopoff: 0x08086820
fopsel: 0x002b
Individual registers can be examined using the variables $REG, where
REG is the register name.
(gdb) p $st1
$1 = 0.02859464454261210347719
File: gsl-ref.info, Node: Handling floating point exceptions, Next: GCC warning options for numerical programs, Prev: Examining floating point registers, Up: Debugging Numerical Programs
A.3 Handling floating point exceptions
======================================
It is possible to stop the program whenever a `SIGFPE' floating point
exception occurs. This can be useful for finding the cause of an
unexpected infinity or `NaN'. The current handler settings can be
shown with the command `info signal SIGFPE'.
(gdb) info signal SIGFPE
Signal Stop Print Pass to program Description
SIGFPE Yes Yes Yes Arithmetic exception
Unless the program uses a signal handler the default setting should be
changed so that SIGFPE is not passed to the program, as this would cause
it to exit. The command `handle SIGFPE stop nopass' prevents this.
(gdb) handle SIGFPE stop nopass
Signal Stop Print Pass to program Description
SIGFPE Yes Yes No Arithmetic exception
Depending on the platform it may be necessary to instruct the kernel to
generate signals for floating point exceptions. For programs using GSL
this can be achieved using the `GSL_IEEE_MODE' environment variable in
conjunction with the function `gsl_ieee_env_setup' as described in
*note IEEE floating-point arithmetic::.
(gdb) set env GSL_IEEE_MODE=double-precision
File: gsl-ref.info, Node: GCC warning options for numerical programs, Next: Debugging References, Prev: Handling floating point exceptions, Up: Debugging Numerical Programs
A.4 GCC warning options for numerical programs
==============================================
Writing reliable numerical programs in C requires great care. The
following GCC warning options are recommended when compiling numerical
programs:
gcc -ansi -pedantic -Werror -Wall -W
-Wmissing-prototypes -Wstrict-prototypes
-Wconversion -Wshadow -Wpointer-arith
-Wcast-qual -Wcast-align
-Wwrite-strings -Wnested-externs
-fshort-enums -fno-common -Dinline= -g -O2
For details of each option consult the manual `Using and Porting GCC'.
The following table gives a brief explanation of what types of errors
these options catch.
`-ansi -pedantic'
Use ANSI C, and reject any non-ANSI extensions. These flags help
in writing portable programs that will compile on other systems.
`-Werror'
Consider warnings to be errors, so that compilation stops. This
prevents warnings from scrolling off the top of the screen and
being lost. You won't be able to compile the program until it is
completely warning-free.
`-Wall'
This turns on a set of warnings for common programming problems.
You need `-Wall', but it is not enough on its own.
`-O2'
Turn on optimization. The warnings for uninitialized variables in
`-Wall' rely on the optimizer to analyze the code. If there is no
optimization then these warnings aren't generated.
`-W'
This turns on some extra warnings not included in `-Wall', such as
missing return values and comparisons between signed and unsigned
integers.
`-Wmissing-prototypes -Wstrict-prototypes'
Warn if there are any missing or inconsistent prototypes. Without
prototypes it is harder to detect problems with incorrect
arguments.
`-Wconversion'
The main use of this option is to warn about conversions from
signed to unsigned integers. For example, `unsigned int x = -1'.
If you need to perform such a conversion you can use an explicit
cast.
`-Wshadow'
This warns whenever a local variable shadows another local
variable. If two variables have the same name then it is a
potential source of confusion.
`-Wpointer-arith -Wcast-qual -Wcast-align'
These options warn if you try to do pointer arithmetic for types
which don't have a size, such as `void', if you remove a `const'
cast from a pointer, or if you cast a pointer to a type which has a
different size, causing an invalid alignment.
`-Wwrite-strings'
This option gives string constants a `const' qualifier so that it
will be a compile-time error to attempt to overwrite them.
`-fshort-enums'
This option makes the type of `enum' as short as possible.
Normally this makes an `enum' different from an `int'.
Consequently any attempts to assign a pointer-to-int to a
pointer-to-enum will generate a cast-alignment warning.
`-fno-common'
This option prevents global variables being simultaneously defined
in different object files (you get an error at link time). Such a
variable should be defined in one file and referred to in other
files with an `extern' declaration.
`-Wnested-externs'
This warns if an `extern' declaration is encountered within a
function.
`-Dinline='
The `inline' keyword is not part of ANSI C. Thus if you want to use
`-ansi' with a program which uses inline functions you can use this
preprocessor definition to remove the `inline' keywords.
`-g'
It always makes sense to put debugging symbols in the executable
so that you can debug it using `gdb'. The only effect of
debugging symbols is to increase the size of the file, and you can
use the `strip' command to remove them later if necessary.
File: gsl-ref.info, Node: Debugging References, Prev: GCC warning options for numerical programs, Up: Debugging Numerical Programs
A.5 References and Further Reading
==================================
The following books are essential reading for anyone writing and
debugging numerical programs with GCC and GDB.
R.M. Stallman, `Using and Porting GNU CC', Free Software
Foundation, ISBN 1882114388
R.M. Stallman, R.H. Pesch, `Debugging with GDB: The GNU
Source-Level Debugger', Free Software Foundation, ISBN 1882114779
For a tutorial introduction to the GNU C Compiler and related programs,
see
B.J. Gough, `An Introduction to GCC', Network Theory Ltd, ISBN
0954161793
File: gsl-ref.info, Node: Contributors to GSL, Next: Autoconf Macros, Prev: Debugging Numerical Programs, Up: Top
Appendix B Contributors to GSL
******************************
(See the AUTHORS file in the distribution for up-to-date information.)
*Mark Galassi*
Conceived GSL (with James Theiler) and wrote the design document.
Wrote the simulated annealing package and the relevant chapter in
the manual.
*James Theiler*
Conceived GSL (with Mark Galassi). Wrote the random number
generators and the relevant chapter in this manual.
*Jim Davies*
Wrote the statistical routines and the relevant chapter in this
manual.
*Brian Gough*
FFTs, numerical integration, random number generators and
distributions, root finding, minimization and fitting, polynomial
solvers, complex numbers, physical constants, permutations, vector
and matrix functions, histograms, statistics, ieee-utils, revised
CBLAS Level 2 & 3, matrix decompositions, eigensystems, cumulative
distribution functions, testing, documentation and releases.
*Reid Priedhorsky*
Wrote and documented the initial version of the root finding
routines while at Los Alamos National Laboratory, Mathematical
Modeling and Analysis Group.
*Gerard Jungman*
Special Functions, Series acceleration, ODEs, BLAS, Linear Algebra,
Eigensystems, Hankel Transforms.
*Mike Booth*
Wrote the Monte Carlo library.
*Jorma Olavi Tähtinen*
Wrote the initial complex arithmetic functions.
*Thomas Walter*
Wrote the initial heapsort routines and Cholesky decomposition.
*Fabrice Rossi*
Multidimensional minimization.
*Carlo Perassi*
Implementation of the random number generators in Knuth's
`Seminumerical Algorithms', 3rd Ed.
*Szymon Jaroszewicz*
Wrote the routines for generating combinations.
*Nicolas Darnis*
Wrote the cyclic functions and the initial functions for canonical
permutations.
*Jason H. Stover*
Wrote the major cumulative distribution functions.
*Ivo Alxneit*
Wrote the routines for wavelet transforms.
*Tuomo Keskitalo*
Improved the implementation of the ODE solvers and wrote the
ode-initval2 routines.
*Lowell Johnson*
Implementation of the Mathieu functions.
*Patrick Alken*
Implementation of non-symmetric and generalized eigensystems and
B-splines.
*Rhys Ulerich*
Wrote the multiset routines.
*Pavel Holoborodko*
Wrote the fixed order Gauss-Legendre quadrature routines.
*Pedro Gonnet*
Wrote the CQUAD integration routines.
Thanks to Nigel Lowry for help in proofreading the manual.
The non-symmetric eigensystems routines contain code based on the
LAPACK linear algebra library. LAPACK is distributed under the
following license:
Copyright (c) 1992-2006 The University of Tennessee. All rights
reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer listed
in this license in the documentation and/or other materials
provided with the distribution.
* Neither the name of the copyright holders nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
File: gsl-ref.info, Node: Autoconf Macros, Next: GSL CBLAS Library, Prev: Contributors to GSL, Up: Top
Appendix C Autoconf Macros
**************************
For applications using `autoconf' the standard macro `AC_CHECK_LIB' can
be used to link with GSL automatically from a `configure' script. The
library itself depends on the presence of a CBLAS and math library as
well, so these must also be located before linking with the main
`libgsl' file. The following commands should be placed in the
`configure.ac' file to perform these tests,
AC_CHECK_LIB([m],[cos])
AC_CHECK_LIB([gslcblas],[cblas_dgemm])
AC_CHECK_LIB([gsl],[gsl_blas_dgemm])
It is important to check for `libm' and `libgslcblas' before `libgsl',
otherwise the tests will fail. Assuming the libraries are found the
output during the configure stage looks like this,
checking for cos in -lm... yes
checking for cblas_dgemm in -lgslcblas... yes
checking for gsl_blas_dgemm in -lgsl... yes
If the library is found then the tests will define the macros
`HAVE_LIBGSL', `HAVE_LIBGSLCBLAS', `HAVE_LIBM' and add the options
`-lgsl -lgslcblas -lm' to the variable `LIBS'.
The tests above will find any version of the library. They are
suitable for general use, where the versions of the functions are not
important. An alternative macro is available in the file `gsl.m4' to
test for a specific version of the library. To use this macro simply
add the following line to your `configure.ac' file instead of the tests
above:
AX_PATH_GSL(GSL_VERSION,
[action-if-found],
[action-if-not-found])
The argument `GSL_VERSION' should be the two or three digit MAJOR.MINOR
or MAJOR.MINOR.MICRO version number of the release you require. A
suitable choice for `action-if-not-found' is,
AC_MSG_ERROR(could not find required version of GSL)
Then you can add the variables `GSL_LIBS' and `GSL_CFLAGS' to your
Makefile.am files to obtain the correct compiler flags.  `GSL_LIBS' is
equal to the output of the `gsl-config --libs' command and
`GSL_CFLAGS' is equal to the output of the `gsl-config --cflags'
command.  For example,
libfoo_la_LDFLAGS = -lfoo $(GSL_LIBS) -lgslcblas
Note that the macro `AX_PATH_GSL' needs to use the C compiler so it
should appear in the `configure.ac' file before the macro
`AC_LANG_CPLUSPLUS' for programs that use C++.
To test for `inline' the following test should be placed in your
`configure.ac' file,
AC_C_INLINE
if test "$ac_cv_c_inline" != no ; then
AC_DEFINE(HAVE_INLINE,1)
AC_SUBST(HAVE_INLINE)
fi
and the macro will then be defined in the compilation flags or by
including the file `config.h' before any library headers.
The following autoconf test will check for `extern inline',
dnl Check for "extern inline", using a modified version
dnl of the test for AC_C_INLINE from acspecific.mt
dnl
AC_CACHE_CHECK([for extern inline], ac_cv_c_extern_inline,
[ac_cv_c_extern_inline=no
AC_TRY_COMPILE([extern $ac_cv_c_inline double foo(double x);
extern $ac_cv_c_inline double foo(double x) { return x+1.0; };
double foo (double x) { return x + 1.0; };],
[ foo(1.0) ],
[ac_cv_c_extern_inline="yes"])
])
if test "$ac_cv_c_extern_inline" != no ; then
AC_DEFINE(HAVE_INLINE,1)
AC_SUBST(HAVE_INLINE)
fi
The substitution of portability functions can be made automatically
if you use `autoconf'. For example, to test whether the BSD function
`hypot' is available you can include the following line in the
configure file `configure.ac' for your application,
AC_CHECK_FUNCS(hypot)
and place the following macro definitions in the file `config.h.in',
/* Substitute gsl_hypot for missing system hypot */
#ifndef HAVE_HYPOT
#define hypot gsl_hypot
#endif
The application source files can then use the include command
`#include <config.h>' to substitute `gsl_hypot' for each occurrence of
`hypot' when `hypot' is not available.
File: gsl-ref.info, Node: GSL CBLAS Library, Next: Free Software Needs Free Documentation, Prev: Autoconf Macros, Up: Top
Appendix D GSL CBLAS Library
****************************
The prototypes for the low-level CBLAS functions are declared in the
file `gsl_cblas.h'. For the definition of the functions consult the
documentation available from Netlib (*note BLAS References and Further
Reading::).
* Menu:
* Level 1 CBLAS Functions::
* Level 2 CBLAS Functions::
* Level 3 CBLAS Functions::
* GSL CBLAS Examples::
File: gsl-ref.info, Node: Level 1 CBLAS Functions, Next: Level 2 CBLAS Functions, Up: GSL CBLAS Library
D.1 Level 1
===========
-- Function: float cblas_sdsdot (const int N, const float ALPHA, const
float * X, const int INCX, const float * Y, const int INCY)
-- Function: double cblas_dsdot (const int N, const float * X, const
int INCX, const float * Y, const int INCY)
-- Function: float cblas_sdot (const int N, const float * X, const int
INCX, const float * Y, const int INCY)
-- Function: double cblas_ddot (const int N, const double * X, const
int INCX, const double * Y, const int INCY)
-- Function: void cblas_cdotu_sub (const int N, const void * X, const
int INCX, const void * Y, const int INCY, void * DOTU)
-- Function: void cblas_cdotc_sub (const int N, const void * X, const
int INCX, const void * Y, const int INCY, void * DOTC)
-- Function: void cblas_zdotu_sub (const int N, const void * X, const
int INCX, const void * Y, const int INCY, void * DOTU)
-- Function: void cblas_zdotc_sub (const int N, const void * X, const
int INCX, const void * Y, const int INCY, void * DOTC)
-- Function: float cblas_snrm2 (const int N, const float * X, const
int INCX)
-- Function: float cblas_sasum (const int N, const float * X, const
int INCX)
-- Function: double cblas_dnrm2 (const int N, const double * X, const
int INCX)
-- Function: double cblas_dasum (const int N, const double * X, const
int INCX)
-- Function: float cblas_scnrm2 (const int N, const void * X, const
int INCX)
-- Function: float cblas_scasum (const int N, const void * X, const
int INCX)
-- Function: double cblas_dznrm2 (const int N, const void * X, const
int INCX)
-- Function: double cblas_dzasum (const int N, const void * X, const
int INCX)
-- Function: CBLAS_INDEX cblas_isamax (const int N, const float * X,
const int INCX)
-- Function: CBLAS_INDEX cblas_idamax (const int N, const double * X,
const int INCX)
-- Function: CBLAS_INDEX cblas_icamax (const int N, const void * X,
const int INCX)
-- Function: CBLAS_INDEX cblas_izamax (const int N, const void * X,
const int INCX)
-- Function: void cblas_sswap (const int N, float * X, const int INCX,
float * Y, const int INCY)
-- Function: void cblas_scopy (const int N, const float * X, const int
INCX, float * Y, const int INCY)
-- Function: void cblas_saxpy (const int N, const float ALPHA, const
float * X, const int INCX, float * Y, const int INCY)
-- Function: void cblas_dswap (const int N, double * X, const int
INCX, double * Y, const int INCY)
-- Function: void cblas_dcopy (const int N, const double * X, const
int INCX, double * Y, const int INCY)
-- Function: void cblas_daxpy (const int N, const double ALPHA, const
double * X, const int INCX, double * Y, const int INCY)
-- Function: void cblas_cswap (const int N, void * X, const int INCX,
void * Y, const int INCY)
-- Function: void cblas_ccopy (const int N, const void * X, const int
INCX, void * Y, const int INCY)
-- Function: void cblas_caxpy (const int N, const void * ALPHA, const
void * X, const int INCX, void * Y, const int INCY)
-- Function: void cblas_zswap (const int N, void * X, const int INCX,
void * Y, const int INCY)
-- Function: void cblas_zcopy (const int N, const void * X, const int
INCX, void * Y, const int INCY)
-- Function: void cblas_zaxpy (const int N, const void * ALPHA, const
void * X, const int INCX, void * Y, const int INCY)
-- Function: void cblas_srotg (float * A, float * B, float * C, float
* S)
-- Function: void cblas_srotmg (float * D1, float * D2, float * B1,
const float B2, float * P)
-- Function: void cblas_srot (const int N, float * X, const int INCX,
float * Y, const int INCY, const float C, const float S)
-- Function: void cblas_srotm (const int N, float * X, const int INCX,
float * Y, const int INCY, const float * P)
-- Function: void cblas_drotg (double * A, double * B, double * C,
double * S)
-- Function: void cblas_drotmg (double * D1, double * D2, double * B1,
const double B2, double * P)
-- Function: void cblas_drot (const int N, double * X, const int INCX,
double * Y, const int INCY, const double C, const double S)
-- Function: void cblas_drotm (const int N, double * X, const int
INCX, double * Y, const int INCY, const double * P)
-- Function: void cblas_sscal (const int N, const float ALPHA, float *
X, const int INCX)
-- Function: void cblas_dscal (const int N, const double ALPHA, double
* X, const int INCX)
-- Function: void cblas_cscal (const int N, const void * ALPHA, void *
X, const int INCX)
-- Function: void cblas_zscal (const int N, const void * ALPHA, void *
X, const int INCX)
-- Function: void cblas_csscal (const int N, const float ALPHA, void *
X, const int INCX)
-- Function: void cblas_zdscal (const int N, const double ALPHA, void
* X, const int INCX)
File: gsl-ref.info, Node: Level 2 CBLAS Functions, Next: Level 3 CBLAS Functions, Prev: Level 1 CBLAS Functions, Up: GSL CBLAS Library
D.2 Level 2
===========
-- Function: void cblas_sgemv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
float ALPHA, const float * A, const int LDA, const float * X,
const int INCX, const float BETA, float * Y, const int INCY)
-- Function: void cblas_sgbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
int KL, const int KU, const float ALPHA, const float * A,
const int LDA, const float * X, const int INCX, const float
BETA, float * Y, const int INCY)
-- Function: void cblas_strmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const float * A,
const int LDA, float * X, const int INCX)
-- Function: void cblas_stbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
float * A, const int LDA, float * X, const int INCX)
-- Function: void cblas_stpmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const float * AP,
float * X, const int INCX)
-- Function: void cblas_strsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const float * A,
const int LDA, float * X, const int INCX)
-- Function: void cblas_stbsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
float * A, const int LDA, float * X, const int INCX)
-- Function: void cblas_stpsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const float * AP,
float * X, const int INCX)
-- Function: void cblas_dgemv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
double ALPHA, const double * A, const int LDA, const double *
X, const int INCX, const double BETA, double * Y, const int
INCY)
-- Function: void cblas_dgbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
int KL, const int KU, const double ALPHA, const double * A,
const int LDA, const double * X, const int INCX, const double
BETA, double * Y, const int INCY)
-- Function: void cblas_dtrmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const double * A,
const int LDA, double * X, const int INCX)
-- Function: void cblas_dtbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
double * A, const int LDA, double * X, const int INCX)
-- Function: void cblas_dtpmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const double * AP,
double * X, const int INCX)
-- Function: void cblas_dtrsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const double * A,
const int LDA, double * X, const int INCX)
-- Function: void cblas_dtbsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
double * A, const int LDA, double * X, const int INCX)
-- Function: void cblas_dtpsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const double * AP,
double * X, const int INCX)
-- Function: void cblas_cgemv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
void * ALPHA, const void * A, const int LDA, const void * X,
const int INCX, const void * BETA, void * Y, const int INCY)
-- Function: void cblas_cgbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
int KL, const int KU, const void * ALPHA, const void * A,
const int LDA, const void * X, const int INCX, const void *
BETA, void * Y, const int INCY)
-- Function: void cblas_ctrmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * A,
const int LDA, void * X, const int INCX)
-- Function: void cblas_ctbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
void * A, const int LDA, void * X, const int INCX)
-- Function: void cblas_ctpmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * AP,
void * X, const int INCX)
-- Function: void cblas_ctrsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * A,
const int LDA, void * X, const int INCX)
-- Function: void cblas_ctbsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
void * A, const int LDA, void * X, const int INCX)
-- Function: void cblas_ctpsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * AP,
void * X, const int INCX)
-- Function: void cblas_zgemv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
void * ALPHA, const void * A, const int LDA, const void * X,
const int INCX, const void * BETA, void * Y, const int INCY)
-- Function: void cblas_zgbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const int M, const int N, const
int KL, const int KU, const void * ALPHA, const void * A,
const int LDA, const void * X, const int INCX, const void *
BETA, void * Y, const int INCY)
-- Function: void cblas_ztrmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * A,
const int LDA, void * X, const int INCX)
-- Function: void cblas_ztbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
void * A, const int LDA, void * X, const int INCX)
-- Function: void cblas_ztpmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * AP,
void * X, const int INCX)
-- Function: void cblas_ztrsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * A,
const int LDA, void * X, const int INCX)
-- Function: void cblas_ztbsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const int K, const
void * A, const int LDA, void * X, const int INCX)
-- Function: void cblas_ztpsv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANSA,
const enum CBLAS_DIAG DIAG, const int N, const void * AP,
void * X, const int INCX)
-- Function: void cblas_ssymv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const float ALPHA, const
float * A, const int LDA, const float * X, const int INCX,
const float BETA, float * Y, const int INCY)
-- Function: void cblas_ssbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const int K, const float
ALPHA, const float * A, const int LDA, const float * X, const
int INCX, const float BETA, float * Y, const int INCY)
-- Function: void cblas_sspmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const float ALPHA, const
float * AP, const float * X, const int INCX, const float
BETA, float * Y, const int INCY)
-- Function: void cblas_sger (const enum CBLAS_ORDER ORDER, const int
M, const int N, const float ALPHA, const float * X, const int
INCX, const float * Y, const int INCY, float * A, const int
LDA)
-- Function: void cblas_ssyr (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const float ALPHA, const float
* X, const int INCX, float * A, const int LDA)
-- Function: void cblas_sspr (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const float ALPHA, const float
* X, const int INCX, float * AP)
-- Function: void cblas_ssyr2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const float ALPHA, const
float * X, const int INCX, const float * Y, const int INCY,
float * A, const int LDA)
-- Function: void cblas_sspr2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const float ALPHA, const
float * X, const int INCX, const float * Y, const int INCY,
float * A)
-- Function: void cblas_dsymv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const double ALPHA, const
double * A, const int LDA, const double * X, const int INCX,
const double BETA, double * Y, const int INCY)
-- Function: void cblas_dsbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const int K, const double
ALPHA, const double * A, const int LDA, const double * X,
const int INCX, const double BETA, double * Y, const int INCY)
-- Function: void cblas_dspmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const double ALPHA, const
double * AP, const double * X, const int INCX, const double
BETA, double * Y, const int INCY)
-- Function: void cblas_dger (const enum CBLAS_ORDER ORDER, const int
M, const int N, const double ALPHA, const double * X, const
int INCX, const double * Y, const int INCY, double * A, const
int LDA)
-- Function: void cblas_dsyr (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const double ALPHA, const
double * X, const int INCX, double * A, const int LDA)
-- Function: void cblas_dspr (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const double ALPHA, const
double * X, const int INCX, double * AP)
-- Function: void cblas_dsyr2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const double ALPHA, const
double * X, const int INCX, const double * Y, const int INCY,
double * A, const int LDA)
-- Function: void cblas_dspr2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const double ALPHA, const
double * X, const int INCX, const double * Y, const int INCY,
double * A)
-- Function: void cblas_chemv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * A, const int LDA, const void * X, const int INCX,
const void * BETA, void * Y, const int INCY)
-- Function: void cblas_chbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const int K, const void *
ALPHA, const void * A, const int LDA, const void * X, const
int INCX, const void * BETA, void * Y, const int INCY)
-- Function: void cblas_chpmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * AP, const void * X, const int INCX, const void * BETA,
void * Y, const int INCY)
-- Function: void cblas_cgeru (const enum CBLAS_ORDER ORDER, const int
M, const int N, const void * ALPHA, const void * X, const int
INCX, const void * Y, const int INCY, void * A, const int LDA)
-- Function: void cblas_cgerc (const enum CBLAS_ORDER ORDER, const int
M, const int N, const void * ALPHA, const void * X, const int
INCX, const void * Y, const int INCY, void * A, const int LDA)
-- Function: void cblas_cher (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const float ALPHA, const void *
X, const int INCX, void * A, const int LDA)
-- Function: void cblas_chpr (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const float ALPHA, const void *
X, const int INCX, void * A)
-- Function: void cblas_cher2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * X, const int INCX, const void * Y, const int INCY,
void * A, const int LDA)
-- Function: void cblas_chpr2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * X, const int INCX, const void * Y, const int INCY,
void * AP)
-- Function: void cblas_zhemv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * A, const int LDA, const void * X, const int INCX,
const void * BETA, void * Y, const int INCY)
-- Function: void cblas_zhbmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const int K, const void *
ALPHA, const void * A, const int LDA, const void * X, const
int INCX, const void * BETA, void * Y, const int INCY)
-- Function: void cblas_zhpmv (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * AP, const void * X, const int INCX, const void * BETA,
void * Y, const int INCY)
-- Function: void cblas_zgeru (const enum CBLAS_ORDER ORDER, const int
M, const int N, const void * ALPHA, const void * X, const int
INCX, const void * Y, const int INCY, void * A, const int LDA)
-- Function: void cblas_zgerc (const enum CBLAS_ORDER ORDER, const int
M, const int N, const void * ALPHA, const void * X, const int
INCX, const void * Y, const int INCY, void * A, const int LDA)
-- Function: void cblas_zher (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const double ALPHA, const void
* X, const int INCX, void * A, const int LDA)
-- Function: void cblas_zhpr (const enum CBLAS_ORDER ORDER, const enum
CBLAS_UPLO UPLO, const int N, const double ALPHA, const void
* X, const int INCX, void * A)
-- Function: void cblas_zher2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * X, const int INCX, const void * Y, const int INCY,
void * A, const int LDA)
-- Function: void cblas_zhpr2 (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const int N, const void * ALPHA, const
void * X, const int INCX, const void * Y, const int INCY,
void * AP)
File: gsl-ref.info, Node: Level 3 CBLAS Functions, Next: GSL CBLAS Examples, Prev: Level 2 CBLAS Functions, Up: GSL CBLAS Library
D.3 Level 3
===========
-- Function: void cblas_sgemm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const enum CBLAS_TRANSPOSE
TRANSB, const int M, const int N, const int K, const float
ALPHA, const float * A, const int LDA, const float * B, const
int LDB, const float BETA, float * C, const int LDC)
-- Function: void cblas_ssymm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const int
M, const int N, const float ALPHA, const float * A, const int
LDA, const float * B, const int LDB, const float BETA, float
* C, const int LDC)
-- Function: void cblas_ssyrk (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const float ALPHA, const float * A, const
int LDA, const float BETA, float * C, const int LDC)
-- Function: void cblas_ssyr2k (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const float ALPHA, const float * A, const
int LDA, const float * B, const int LDB, const float BETA,
float * C, const int LDC)
-- Function: void cblas_strmm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const float ALPHA, const float * A, const int
LDA, float * B, const int LDB)
-- Function: void cblas_strsm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const float ALPHA, const float * A, const int
LDA, float * B, const int LDB)
-- Function: void cblas_dgemm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const enum CBLAS_TRANSPOSE
TRANSB, const int M, const int N, const int K, const double
ALPHA, const double * A, const int LDA, const double * B,
const int LDB, const double BETA, double * C, const int LDC)
-- Function: void cblas_dsymm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const int
M, const int N, const double ALPHA, const double * A, const
int LDA, const double * B, const int LDB, const double BETA,
double * C, const int LDC)
-- Function: void cblas_dsyrk (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const double ALPHA, const double * A,
const int LDA, const double BETA, double * C, const int LDC)
-- Function: void cblas_dsyr2k (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const double ALPHA, const double * A,
const int LDA, const double * B, const int LDB, const double
BETA, double * C, const int LDC)
-- Function: void cblas_dtrmm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const double ALPHA, const double * A, const
int LDA, double * B, const int LDB)
-- Function: void cblas_dtrsm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const double ALPHA, const double * A, const
int LDA, double * B, const int LDB)
-- Function: void cblas_cgemm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const enum CBLAS_TRANSPOSE
TRANSB, const int M, const int N, const int K, const void *
ALPHA, const void * A, const int LDA, const void * B, const
int LDB, const void * BETA, void * C, const int LDC)
-- Function: void cblas_csymm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, const void * B, const int LDB, const void * BETA, void *
C, const int LDC)
-- Function: void cblas_csyrk (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const void * ALPHA, const void * A, const
int LDA, const void * BETA, void * C, const int LDC)
-- Function: void cblas_csyr2k (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const void * ALPHA, const void * A, const
int LDA, const void * B, const int LDB, const void * BETA,
void * C, const int LDC)
-- Function: void cblas_ctrmm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, void * B, const int LDB)
-- Function: void cblas_ctrsm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, void * B, const int LDB)
-- Function: void cblas_zgemm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_TRANSPOSE TRANSA, const enum CBLAS_TRANSPOSE
TRANSB, const int M, const int N, const int K, const void *
ALPHA, const void * A, const int LDA, const void * B, const
int LDB, const void * BETA, void * C, const int LDC)
-- Function: void cblas_zsymm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, const void * B, const int LDB, const void * BETA, void *
C, const int LDC)
-- Function: void cblas_zsyrk (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const void * ALPHA, const void * A, const
int LDA, const void * BETA, void * C, const int LDC)
-- Function: void cblas_zsyr2k (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const void * ALPHA, const void * A, const
int LDA, const void * B, const int LDB, const void * BETA,
void * C, const int LDC)
-- Function: void cblas_ztrmm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, void * B, const int LDB)
-- Function: void cblas_ztrsm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const enum
CBLAS_TRANSPOSE TRANSA, const enum CBLAS_DIAG DIAG, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, void * B, const int LDB)
-- Function: void cblas_chemm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, const void * B, const int LDB, const void * BETA, void *
C, const int LDC)
-- Function: void cblas_cherk (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const float ALPHA, const void * A, const
int LDA, const float BETA, void * C, const int LDC)
-- Function: void cblas_cher2k (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const void * ALPHA, const void * A, const
int LDA, const void * B, const int LDB, const float BETA,
void * C, const int LDC)
-- Function: void cblas_zhemm (const enum CBLAS_ORDER ORDER, const
enum CBLAS_SIDE SIDE, const enum CBLAS_UPLO UPLO, const int
M, const int N, const void * ALPHA, const void * A, const int
LDA, const void * B, const int LDB, const void * BETA, void *
C, const int LDC)
-- Function: void cblas_zherk (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const double ALPHA, const void * A, const
int LDA, const double BETA, void * C, const int LDC)
-- Function: void cblas_zher2k (const enum CBLAS_ORDER ORDER, const
enum CBLAS_UPLO UPLO, const enum CBLAS_TRANSPOSE TRANS, const
int N, const int K, const void * ALPHA, const void * A, const
int LDA, const void * B, const int LDB, const double BETA,
void * C, const int LDC)
-- Function: void cblas_xerbla (int P, const char * ROUT, const char *
FORM, ...)
File: gsl-ref.info, Node: GSL CBLAS Examples, Prev: Level 3 CBLAS Functions, Up: GSL CBLAS Library
D.4 Examples
============
The following program computes the product of two matrices using the
Level-3 BLAS function SGEMM,
[ 0.11 0.12 0.13 ] [ 1011 1012 ] [ 367.76 368.12 ]
[ 0.21 0.22 0.23 ] [ 1021 1022 ] = [ 674.06 674.72 ]
[ 1031 1032 ]
The matrices are stored in row major order but could be stored in column
major order if the first argument of the call to `cblas_sgemm' was
changed to `CblasColMajor'.
#include <stdio.h>
#include <gsl/gsl_cblas.h>
int
main (void)
{
int lda = 3;
float A[] = { 0.11, 0.12, 0.13,
0.21, 0.22, 0.23 };
int ldb = 2;
float B[] = { 1011, 1012,
1021, 1022,
1031, 1032 };
int ldc = 2;
float C[] = { 0.00, 0.00,
0.00, 0.00 };
/* Compute C = A B */
cblas_sgemm (CblasRowMajor,
CblasNoTrans, CblasNoTrans, 2, 2, 3,
1.0, A, lda, B, ldb, 0.0, C, ldc);
printf ("[ %g, %g\n", C[0], C[1]);
printf (" %g, %g ]\n", C[2], C[3]);
return 0;
}
To compile the program use the following command line,
$ gcc -Wall demo.c -lgslcblas
There is no need to link with the main library `-lgsl' in this case as
the CBLAS library is an independent unit. Here is the output from the
program,
$ ./a.out
[ 367.76, 368.12
674.06, 674.72 ]
File: gsl-ref.info, Node: Free Software Needs Free Documentation, Next: GNU General Public License, Prev: GSL CBLAS Library, Up: Top
Free Software Needs Free Documentation
**************************************
The following article was written by Richard Stallman, founder of
the GNU Project.
The biggest deficiency in the free software community today is not in
the software--it is the lack of good free documentation that we can
include with the free software. Many of our most important programs do
not come with free reference manuals and free introductory texts.
Documentation is an essential part of any software package; when an
important free software package does not come with a free manual and a
free tutorial, that is a major gap. We have many such gaps today.
Consider Perl, for instance. The tutorial manuals that people
normally use are non-free. How did this come about? Because the
authors of those manuals published them with restrictive terms--no
copying, no modification, source files not available--which exclude
them from the free software world.
That wasn't the first time this sort of thing happened, and it was
far from the last. Many times we have heard a GNU user eagerly
describe a manual that he is writing, his intended contribution to the
community, only to learn that he had ruined everything by signing a
publication contract to make it non-free.
Free documentation, like free software, is a matter of freedom, not
price. The problem with the non-free manual is not that publishers
charge a price for printed copies--that in itself is fine. (The Free
Software Foundation sells printed copies of manuals, too.) The problem
is the restrictions on the use of the manual. Free manuals are
available in source code form, and give you permission to copy and
modify. Non-free manuals do not allow this.
The criteria of freedom for a free manual are roughly the same as for
free software. Redistribution (including the normal kinds of
commercial redistribution) must be permitted, so that the manual can
accompany every copy of the program, both on-line and on paper.
Permission for modification of the technical content is crucial too.
When people modify the software, adding or changing features, if they
are conscientious they will change the manual too--so they can provide
accurate and clear documentation for the modified program. A manual
that leaves you no choice but to write a new manual to document a
changed version of the program is not really available to our community.
Some kinds of limits on the way modification is handled are
acceptable. For example, requirements to preserve the original
author's copyright notice, the distribution terms, or the list of
authors, are ok. It is also no problem to require modified versions to
include notice that they were modified. Even entire sections that may
not be deleted or changed are acceptable, as long as they deal with
nontechnical topics (like this one). These kinds of restrictions are
acceptable because they don't obstruct the community's normal use of
the manual.
However, it must be possible to modify all the _technical_ content
of the manual, and then distribute the result in all the usual media,
through all the usual channels. Otherwise, the restrictions obstruct
the use of the manual, it is not free, and we need another manual to
replace it.
Please spread the word about this issue. Our community continues to
lose manuals to proprietary publishing. If we spread the word that
free software needs free reference manuals and free tutorials, perhaps
the next person who wants to contribute by writing documentation will
realize, before it is too late, that only free manuals contribute to
the free software community.
If you are writing documentation, please insist on publishing it
under the GNU Free Documentation License or another free documentation
license. Remember that this decision requires your approval--you don't
have to let the publisher decide. Some commercial publishers will use
a free license if you insist, but they will not propose the option; it
is up to you to raise the issue and say firmly that this is what you
want. If the publisher you are dealing with refuses, please try other
publishers. If you're not sure whether a proposed license is free,
write to .
You can encourage commercial publishers to sell more free, copylefted
manuals and tutorials by buying them, and particularly by buying copies
from the publishers that paid for their writing or for major
improvements. Meanwhile, try to avoid buying non-free documentation at
all. Check the distribution terms of a manual before you buy it, and
insist that whoever seeks your business must respect your freedom.
Check the history of the book, and try to reward the publishers that have
paid or pay the authors to work on it.
The Free Software Foundation maintains a list of free documentation
published by other publishers:
`http://www.fsf.org/doc/other-free-books.html'
File: gsl-ref.info, Node: GNU General Public License, Next: GNU Free Documentation License, Prev: Free Software Needs Free Documentation, Up: Top
GNU General Public License
**************************
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. `http://fsf.org/'
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Preamble
========
The GNU General Public License is a free, copyleft license for software
and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you
have certain responsibilities if you distribute copies of the software,
or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the
manufacturer can do so. This is fundamentally incompatible with the
aim of protecting users' freedom to change the software. The
systematic pattern of such abuse occurs in the area of products for
individuals to use, which is precisely where it is most unacceptable.
Therefore, we have designed this version of the GPL to prohibit the
practice for those products. If such problems arise substantially in
other domains, we stand ready to extend this provision to those domains
in future versions of the GPL, as needed to protect the freedom of
users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
====================
0. Definitions.
"This License" refers to version 3 of the GNU General Public
License.
"Copyright" also means copyright-like laws that apply to other
kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the
work in a fashion requiring copyright permission, other than the
making of an exact copy. The resulting work is called a "modified
version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work
based on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it
on a computer or modifying a private copy. Propagation includes
copying, distribution (with or without modification), making
available to the public, and in some countries other activities as
well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user
through a computer network, with no transfer of a copy, is not
conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to
the extent that warranties are provided), that licensees may
convey the work under this License, and how to view a copy of this
License. If the interface presents a list of user commands or
options, such as a menu, a prominent item in the list meets this
criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any
non-source form of a work.
A "Standard Interface" means an interface that either is an
official standard defined by a recognized standards body, or, in
the case of interfaces specified for a particular programming
language, one that is widely used among developers working in that
language.
The "System Libraries" of an executable work include anything,
other than the work as a whole, that (a) is included in the normal
form of packaging a Major Component, but which is not part of that
Major Component, and (b) serves only to enable use of the work
with that Major Component, or to implement a Standard Interface
for which an implementation is available to the public in source
code form. A "Major Component", in this context, means a major
essential component (kernel, window system, and so on) of the
specific operating system (if any) on which the executable work
runs, or a compiler used to produce the work, or an object code
interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including
scripts to control those activities. However, it does not include
the work's System Libraries, or general-purpose tools or generally
available free programs which are used unmodified in performing
those activities but which are not part of the work. For example,
Corresponding Source includes interface definition files
associated with source files for the work, and the source code for
shared libraries and dynamically linked subprograms that the work
is specifically designed to require, such as by intimate data
communication or control flow between those subprograms and other
parts of the work.
The Corresponding Source need not include anything that users can
regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running
a covered work is covered by this License only if the output,
given its content, constitutes a covered work. This License
acknowledges your rights of fair use or other equivalent, as
provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise
remains in force. You may convey covered works to others for the
sole purpose of having them make modifications exclusively for
you, or provide you with facilities for running those works,
provided that you comply with the terms of this License in
conveying all material for which you do not control copyright.
Those thus making or running the covered works for you must do so
exclusively on your behalf, under your direction and control, on
terms that prohibit them from making any copies of your
copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section
10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under
article 11 of the WIPO copyright treaty adopted on 20 December
1996, or similar laws prohibiting or restricting circumvention of
such measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such
circumvention is effected by exercising rights under this License
with respect to the covered work, and you disclaim any intention
to limit operation or modification of the work as a means of
enforcing, against the work's users, your or third parties' legal
rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the
code; keep intact all notices of the absence of any warranty; and
give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these
conditions:
a. The work must carry prominent notices stating that you
modified it, and giving a relevant date.
b. The work must carry prominent notices stating that it is
released under this License and any conditions added under
section 7. This requirement modifies the requirement in
section 4 to "keep intact all notices".
c. You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable
section 7 additional terms, to the whole of the work, and all
its parts, regardless of how they are packaged. This License
gives no permission to license the work in any other way, but
it does not invalidate such permission if you have separately
received it.
d. If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has
interactive interfaces that do not display Appropriate Legal
Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered
work, and which are not combined with it such as to form a larger
program, in or on a volume of a storage or distribution medium, is
called an "aggregate" if the compilation and its resulting
copyright are not used to limit the access or legal rights of the
compilation's users beyond what the individual works permit.
Inclusion of a covered work in an aggregate does not cause this
License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this
License, in one of these ways:
a. Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b. Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for
as long as you offer spare parts or customer support for that
product model, to give anyone who possesses the object code
either (1) a copy of the Corresponding Source for all the
software in the product that is covered by this License, on a
durable physical medium customarily used for software
interchange, for a price no more than your reasonable cost of
physically performing this conveying of source, or (2) access
to copy the Corresponding Source from a network server at no
charge.
c. Convey individual copies of the object code with a copy of
the written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially,
and only if you received the object code with such an offer,
in accord with subsection 6b.
d. Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access
to the Corresponding Source in the same way through the same
place at no further charge. You need not require recipients
to copy the Corresponding Source along with the object code.
If the place to copy the object code is a network server, the
Corresponding Source may be on a different server (operated
by you or a third party) that supports equivalent copying
facilities, provided you maintain clear directions next to
the object code saying where to find the Corresponding Source.
Regardless of what server hosts the Corresponding Source, you
remain obligated to ensure that it is available for as long
as needed to satisfy these requirements.
e. Convey the object code using peer-to-peer transmission,
provided you inform other peers where the object code and
Corresponding Source of the work are being offered to the
general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is
excluded from the Corresponding Source as a System Library, need
not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means
any tangible personal property which is normally used for personal,
family, or household purposes, or (2) anything designed or sold for
incorporation into a dwelling. In determining whether a product
is a consumer product, doubtful cases shall be resolved in favor of
coverage. For a particular product received by a particular user,
"normally used" refers to a typical or common use of that class of
product, regardless of the status of the particular user or of the
way in which the particular user actually uses, or expects or is
expected to use, the product. A product is a consumer product
regardless of whether the product has substantial commercial,
industrial or non-consumer uses, unless such uses represent the
only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to
install and execute modified versions of a covered work in that
User Product from a modified version of its Corresponding Source.
The information must suffice to ensure that the continued
functioning of the modified object code is in no case prevented or
interfered with solely because modification has been made.
If you convey an object code work under this section in, or with,
or specifically for use in, a User Product, and the conveying
occurs as part of a transaction in which the right of possession
and use of the User Product is transferred to the recipient in
perpetuity or for a fixed term (regardless of how the transaction
is characterized), the Corresponding Source conveyed under this
section must be accompanied by the Installation Information. But
this requirement does not apply if neither you nor any third party
retains the ability to install modified object code on the User
Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not
include a requirement to continue to provide support service,
warranty, or updates for a work that has been modified or
installed by the recipient, or for the User Product in which it
has been modified or installed. Access to a network may be denied
when the modification itself materially and adversely affects the
operation of the network or violates the rules and protocols for
communication across the network.
Corresponding Source conveyed, and Installation Information
provided, in accord with this section must be in a format that is
publicly documented (and with an implementation available to the
public in source code form), and must require no special password
or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of
this License by making exceptions from one or more of its
conditions. Additional permissions that are applicable to the
entire Program shall be treated as though they were included in
this License, to the extent that they are valid under applicable
law. If additional permissions apply only to part of the Program,
that part may be used separately under those permissions, but the
entire Program remains governed by this License without regard to
the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part
of it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material
you add to a covered work, you may (if authorized by the copyright
holders of that material) supplement the terms of this License
with terms:
a. Disclaiming warranty or limiting liability differently from
the terms of sections 15 and 16 of this License; or
b. Requiring preservation of specified reasonable legal notices
or author attributions in that material or in the Appropriate
Legal Notices displayed by works containing it; or
c. Prohibiting misrepresentation of the origin of that material,
or requiring that modified versions of such material be
marked in reasonable ways as different from the original
version; or
d. Limiting the use for publicity purposes of names of licensors
or authors of the material; or
e. Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f. Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified
versions of it) with contractual assumptions of liability to
the recipient, for any liability that these contractual
assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as
you received it, or any part of it, contains a notice stating that
it is governed by this License along with a term that is a further
restriction, you may remove that term. If a license document
contains a further restriction but permits relicensing or
conveying under this License, you may add to a covered work
material governed by the terms of that license document, provided
that the further restriction does not survive such relicensing or
conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in
the form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights
under this License (including any patent licenses granted under
the third paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly
and finally terminates your license, and (b) permanently, if the
copyright holder fails to notify you of the violation by some
reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from
that copyright holder, and you cure the violation prior to 30 days
after your receipt of the notice.
Termination of your rights under this section does not terminate
the licenses of parties who have received copies or rights from
you under this License. If your rights have been terminated and
not permanently reinstated, you do not qualify to receive new
licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer
transmission to receive a copy likewise does not require
acceptance. However, nothing other than this License grants you
permission to propagate or modify any covered work. These actions
infringe copyright if you do not accept this License. Therefore,
by modifying or propagating a covered work, you indicate your
acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not
responsible for enforcing compliance by third parties with this
License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a
covered work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or
could give under the previous paragraph, plus a right to
possession of the Corresponding Source of the work from the
predecessor in interest, if the predecessor has it or can get it
with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you
may not impose a license fee, royalty, or other charge for
exercise of rights granted under this License, and you may not
initiate litigation (including a cross-claim or counterclaim in a
lawsuit) alleging that any patent claim is infringed by making,
using, selling, offering for sale, or importing the Program or any
portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.
The work thus licensed is called the contributor's "contributor
version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner,
permitted by this License, of making, using, or selling its
contributor version, but do not include claims that would be
infringed only as a consequence of further modification of the
contributor version. For purposes of this definition, "control"
includes the right to grant patent sublicenses in a manner
consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide,
royalty-free patent license under the contributor's essential
patent claims, to make, use, sell, offer for sale, import and
otherwise run, modify and propagate the contents of its
contributor version.
In the following three paragraphs, a "patent license" is any
express agreement or commitment, however denominated, not to
enforce a patent (such as an express permission to practice a
patent or covenant not to sue for patent infringement). To
"grant" such a patent license to a party means to make such an
agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent
license, and the Corresponding Source of the work is not available
for anyone to copy, free of charge and under the terms of this
License, through a publicly available network server or other
readily accessible means, then you must either (1) cause the
Corresponding Source to be so available, or (2) arrange to deprive
yourself of the benefit of the patent license for this particular
work, or (3) arrange, in a manner consistent with the requirements
of this License, to extend the patent license to downstream
recipients. "Knowingly relying" means you have actual knowledge
that, but for the patent license, your conveying the covered work
in a country, or your recipient's use of the covered work in a
country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate,
modify or convey a specific copy of the covered work, then the
patent license you grant is automatically extended to all
recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that
are specifically granted under this License. You may not convey a
covered work if you are a party to an arrangement with a third
party that is in the business of distributing software, under
which you make payment to the third party based on the extent of
your activity of conveying the work, and under which the third
party grants, to any of the parties who would receive the covered
work from you, a discriminatory patent license (a) in connection
with copies of the covered work conveyed by you (or copies made
from those copies), or (b) primarily for and in connection with
specific products or compilations that contain the covered work,
unless you entered into that arrangement, or that patent license
was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order,
agreement or otherwise) that contradict the conditions of this
License, they do not excuse you from the conditions of this
License. If you cannot convey a covered work so as to satisfy
simultaneously your obligations under this License and any other
pertinent obligations, then as a consequence you may not convey it
at all. For example, if you agree to terms that obligate you to
collect a royalty for further conveying from those to whom you
convey the Program, the only way you could satisfy both those
terms and this License would be to refrain entirely from conveying
the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a
single combined work, and to convey the resulting work. The terms
of this License will continue to apply to the part which is the
covered work, but the special requirements of the GNU Affero
General Public License, section 13, concerning interaction through
a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new
versions of the GNU General Public License from time to time.
Such new versions will be similar in spirit to the present
version, but may differ in detail to address new problems or
concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU
General Public License "or any later version" applies to it, you
have the option of following the terms and conditions either of
that numbered version or of any later version published by the
Free Software Foundation. If the Program does not specify a
version number of the GNU General Public License, you may choose
any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that
proxy's public statement of acceptance of a version permanently
authorizes you to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE
COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE
RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.
SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES
AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE
THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA
BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely
approximates an absolute waiver of all civil liability in
connection with the Program, unless a warranty or assumption of
liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
===========================
How to Apply These Terms to Your New Programs
=============================================
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these
terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
one line to give the program's name and a brief idea
of what it does.
Copyright (C) YEAR NAME OF AUTHOR
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see `http://www.gnu.org/licenses/'.
Also add information on how to contact you by electronic and paper
mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
PROGRAM Copyright (C) YEAR NAME OF AUTHOR
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License. Of course, your
program's commands might be different; for a GUI interface, you would
use an "about box".
You should also get your employer (if you work as a programmer) or
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. For more information on this, and how to apply and follow
the GNU GPL, see `http://www.gnu.org/licenses/'.
The GNU General Public License does not permit incorporating your
program into proprietary programs. If your program is a subroutine
library, you may consider it more useful to permit linking proprietary
applications with the library. If this is what you want to do, use the
GNU Lesser General Public License instead of this License. But first,
please read `http://www.gnu.org/philosophy/why-not-lgpl.html'.
File: gsl-ref.info, Node: GNU Free Documentation License, Next: Function Index, Prev: GNU General Public License, Up: Top
GNU Free Documentation License
******************************
Version 1.3, 3 November 2008
Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
`http://fsf.org/'
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is
not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or
noncommercially. Secondarily, this License preserves for the
author and publisher a way to get credit for their work, while not
being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative
works of the document must themselves be free in the same sense.
It complements the GNU General Public License, which is a copyleft
license designed for free software.
We have designed this License in order to use it for manuals for
free software, because free software needs free documentation: a
free program should come with manuals providing the same freedoms
that the software does. But this License is not limited to
software manuals; it can be used for any textual work, regardless
of subject matter or whether it is published as a printed book.
We recommend this License principally for works whose purpose is
instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium,
that contains a notice placed by the copyright holder saying it
can be distributed under the terms of this License. Such a notice
grants a world-wide, royalty-free license, unlimited in duration,
to use that work under the conditions stated herein. The
"Document", below, refers to any such manual or work. Any member
of the public is a licensee, and is addressed as "you". You
accept the license if you copy, modify or distribute the work in a
way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the
Document or a portion of it, either copied verbatim, or with
modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section
of the Document that deals exclusively with the relationship of the
publishers or authors of the Document to the Document's overall
subject (or to related matters) and contains nothing that could
fall directly within that overall subject. (Thus, if the Document
is in part a textbook of mathematics, a Secondary Section may not
explain any mathematics.) The relationship could be a matter of
historical connection with the subject or with related matters, or
of legal, commercial, philosophical, ethical or political position
regarding them.
The "Invariant Sections" are certain Secondary Sections whose
titles are designated, as being those of Invariant Sections, in
the notice that says that the Document is released under this
License. If a section does not fit the above definition of
Secondary then it is not allowed to be designated as Invariant.
The Document may contain zero Invariant Sections. If the Document
does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are
listed, as Front-Cover Texts or Back-Cover Texts, in the notice
that says that the Document is released under this License. A
Front-Cover Text may be at most 5 words, and a Back-Cover Text may
be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy,
represented in a format whose specification is available to the
general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images
composed of pixels) generic paint programs or (for drawings) some
widely available drawing editor, and that is suitable for input to
text formatters or for automatic translation to a variety of
formats suitable for input to text formatters. A copy made in an
otherwise Transparent file format whose markup, or absence of
markup, has been arranged to thwart or discourage subsequent
modification by readers is not Transparent. An image format is
not Transparent if used for any substantial amount of text. A
copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain
ASCII without markup, Texinfo input format, LaTeX input format,
SGML or XML using a publicly available DTD, and
standard-conforming simple HTML, PostScript or PDF designed for
human modification. Examples of transparent image formats include
PNG, XCF and JPG. Opaque formats include proprietary formats that
can be read and edited only by proprietary word processors, SGML or
XML for which the DTD and/or processing tools are not generally
available, and the machine-generated HTML, PostScript or PDF
produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself,
plus such following pages as are needed to hold, legibly, the
material this License requires to appear in the title page. For
works in formats which do not have any title page as such, "Title
Page" means the text near the most prominent appearance of the
work's title, preceding the beginning of the body of the text.
The "publisher" means any person or entity that distributes copies
of the Document to the public.
A section "Entitled XYZ" means a named subunit of the Document
whose title either is precisely XYZ or contains XYZ in parentheses
following text that translates XYZ in another language. (Here XYZ
stands for a specific section name mentioned below, such as
"Acknowledgements", "Dedications", "Endorsements", or "History".)
To "Preserve the Title" of such a section when you modify the
Document means that it remains a section "Entitled XYZ" according
to this definition.
The Document may include Warranty Disclaimers next to the notice
which states that this License applies to the Document. These
Warranty Disclaimers are considered to be included by reference in
this License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and
has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License
applies to the Document are reproduced in all copies, and that you
add no other conditions whatsoever to those of this License. You
may not use technical measures to obstruct or control the reading
or further copying of the copies you make or distribute. However,
you may accept compensation in exchange for copies. If you
distribute a large enough number of copies you must also follow
the conditions in section 3.
You may also lend copies, under the same conditions stated above,
and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly
have printed covers) of the Document, numbering more than 100, and
the Document's license notice requires Cover Texts, you must
enclose the copies in covers that carry, clearly and legibly, all
these Cover Texts: Front-Cover Texts on the front cover, and
Back-Cover Texts on the back cover. Both covers must also clearly
and legibly identify you as the publisher of these copies. The
front cover must present the full title with all words of the
title equally prominent and visible. You may add other material
on the covers in addition. Copying with changes limited to the
covers, as long as they preserve the title of the Document and
satisfy these conditions, can be treated as verbatim copying in
other respects.
If the required texts for either cover are too voluminous to fit
legibly, you should put the first ones listed (as many as fit
reasonably) on the actual cover, and continue the rest onto
adjacent pages.
If you publish or distribute Opaque copies of the Document
numbering more than 100, you must either include a
machine-readable Transparent copy along with each Opaque copy, or
state in or with each Opaque copy a computer-network location from
which the general network-using public has access to download
using public-standard network protocols a complete Transparent
copy of the Document, free of added material. If you use the
latter option, you must take reasonably prudent steps, when you
begin distribution of Opaque copies in quantity, to ensure that
this Transparent copy will remain thus accessible at the stated
location until at least one year after the last time you
distribute an Opaque copy (directly or through your agents or
retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of
the Document well before redistributing any large number of
copies, to give them a chance to provide you with an updated
version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document
under the conditions of sections 2 and 3 above, provided that you
release the Modified Version under precisely this License, with
the Modified Version filling the role of the Document, thus
licensing distribution and modification of the Modified Version to
whoever possesses a copy of it. In addition, you must do these
things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title
distinct from that of the Document, and from those of
previous versions (which should, if there were any, be listed
in the History section of the Document). You may use the
same title as a previous version if the original publisher of
that version gives permission.
B. List on the Title Page, as authors, one or more persons or
entities responsible for authorship of the modifications in
the Modified Version, together with at least five of the
principal authors of the Document (all of its principal
authors, if it has fewer than five), unless they release you
from this requirement.
C. State on the Title page the name of the publisher of the
Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications
adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license
notice giving the public permission to use the Modified
Version under the terms of this License, in the form shown in
the Addendum below.
G. Preserve in that license notice the full lists of Invariant
Sections and required Cover Texts given in the Document's
license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title,
and add to it an item stating at least the title, year, new
authors, and publisher of the Modified Version as given on
the Title Page. If there is no section Entitled "History" in
the Document, create one stating the title, year, authors,
and publisher of the Document as given on its Title Page,
then add an item describing the Modified Version as stated in
the previous sentence.
J. Preserve the network location, if any, given in the Document
for public access to a Transparent copy of the Document, and
likewise the network locations given in the Document for
previous versions it was based on. These may be placed in
the "History" section. You may omit a network location for a
work that was published at least four years before the
Document itself, or if the original publisher of the version
it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications",
Preserve the Title of the section, and preserve in the
section all the substance and tone of each of the contributor
acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document,
unaltered in their text and in their titles. Section numbers
or the equivalent are not considered part of the section
titles.
M. Delete any section Entitled "Endorsements". Such a section
may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled
"Endorsements" or to conflict in title with any Invariant
Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no
material copied from the Document, you may at your option
designate some or all of these sections as invariant. To do this,
add their titles to the list of Invariant Sections in the Modified
Version's license notice. These titles must be distinct from any
other section titles.
You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various
parties--for example, statements of peer review or that the text
has been approved by an organization as the authoritative
definition of a standard.
You may add a passage of up to five words as a Front-Cover Text,
and a passage of up to 25 words as a Back-Cover Text, to the end
of the list of Cover Texts in the Modified Version. Only one
passage of Front-Cover Text and one of Back-Cover Text may be
added by (or through arrangements made by) any one entity. If the
Document already includes a cover text for the same cover,
previously added by you or by arrangement made by the same entity
you are acting on behalf of, you may not add another; but you may
replace the old one, on explicit permission from the previous
publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this
License give permission to use their names for publicity for or to
assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under
this License, under the terms defined in section 4 above for
modified versions, provided that you include in the combination
all of the Invariant Sections of all of the original documents,
unmodified, and list them all as Invariant Sections of your
combined work in its license notice, and that you preserve all
their Warranty Disclaimers.
The combined work need only contain one copy of this License, and
multiple identical Invariant Sections may be replaced with a single
copy. If there are multiple Invariant Sections with the same name
but different contents, make the title of each such section unique
by adding at the end of it, in parentheses, the name of the
original author or publisher of that section if known, or else a
unique number. Make the same adjustment to the section titles in
the list of Invariant Sections in the license notice of the
combined work.
In the combination, you must combine any sections Entitled
"History" in the various original documents, forming one section
Entitled "History"; likewise combine any sections Entitled
"Acknowledgements", and any sections Entitled "Dedications". You
must delete all sections Entitled "Endorsements."
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other
documents released under this License, and replace the individual
copies of this License in the various documents with a single copy
that is included in the collection, provided that you follow the
rules of this License for verbatim copying of each of the
documents in all other respects.
You may extract a single document from such a collection, and
distribute it individually under this License, provided you insert
a copy of this License into the extracted document, and follow
this License in all other respects regarding verbatim copying of
that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other
separate and independent documents or works, in or on a volume of
a storage or distribution medium, is called an "aggregate" if the
copyright resulting from the compilation is not used to limit the
legal rights of the compilation's users beyond what the individual
works permit. When the Document is included in an aggregate, this
License does not apply to the other works in the aggregate which
are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these
copies of the Document, then if the Document is less than one half
of the entire aggregate, the Document's Cover Texts may be placed
on covers that bracket the Document within the aggregate, or the
electronic equivalent of covers if the Document is in electronic
form. Otherwise they must appear on printed covers that bracket
the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may
distribute translations of the Document under the terms of section
4. Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the
original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also
include the original English version of this License and the
original versions of those notices and disclaimers. In case of a
disagreement between the translation and the original version of
this License or a notice or disclaimer, the original version will
prevail.
If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to
Preserve its Title (section 1) will typically require changing the
actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense, or distribute it is void,
and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly
and finally terminates your license, and (b) permanently, if the
copyright holder fails to notify you of the violation by some
reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from
that copyright holder, and you cure the violation prior to 30 days
after your receipt of the notice.
Termination of your rights under this section does not terminate
the licenses of parties who have received copies or rights from
you under this License. If your rights have been terminated and
not permanently reinstated, receipt of a copy of some or all of
the same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of
the GNU Free Documentation License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns. See
`http://www.gnu.org/copyleft/'.
Each version of the License is given a distinguishing version
number. If the Document specifies that a particular numbered
version of this License "or any later version" applies to it, you
have the option of following the terms and conditions either of
that specified version or of any later version that has been
published (not as a draft) by the Free Software Foundation. If
the Document does not specify a version number of this License,
you may choose any version ever published (not as a draft) by the
Free Software Foundation. If the Document specifies that a proxy
can decide which future versions of this License can be used, that
proxy's public statement of acceptance of a version permanently
authorizes you to choose that version for the Document.
11. RELICENSING
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any
World Wide Web server that publishes copyrightable works and also
provides prominent facilities for anybody to edit those works. A
public wiki that anybody can edit is an example of such a server.
A "Massive Multiauthor Collaboration" (or "MMC") contained in the
site means any set of copyrightable works thus published on the MMC
site.
"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0
license published by Creative Commons Corporation, a not-for-profit
corporation with a principal place of business in San Francisco,
California, as well as future copyleft versions of that license
published by that same organization.
"Incorporate" means to publish or republish a Document, in whole or
in part, as part of another Document.
An MMC is "eligible for relicensing" if it is licensed under this
License, and if all works that were first published under this
License somewhere other than this MMC, and subsequently
incorporated in whole or in part into the MMC, (1) had no cover
texts or invariant sections, and (2) were thus incorporated prior
to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the
site under CC-BY-SA on the same site at any time before August 1,
2009, provided the MMC is eligible for relicensing.
ADDENDUM: How to use this License for your documents
====================================================
To use this License in a document you have written, include a copy of
the License in the document and put the following copyright and license
notices just after the title page:
Copyright (C) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify
this document under the terms of the GNU Free
Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no
Back-Cover Texts. A copy of the license is included in
the section entitled ``GNU Free Documentation License''.
If you have Invariant Sections, Front-Cover Texts and Back-Cover
Texts, replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR
TITLES, with the Front-Cover Texts being LIST, and
with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.
If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of
free software license, such as the GNU General Public License, to
permit their use in free software.
This is gsl-ref.info, produced by makeinfo version 4.13 from
gsl-ref.texi.
INFO-DIR-SECTION Software libraries
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref). GNU Scientific Library - Reference
END-INFO-DIR-ENTRY
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2005, 2006, 2007, 2008, 2009, 2010, 2011 The GSL Team.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License" and "Free Software
Needs Free Documentation", the Front-Cover text being "A GNU Manual",
and with the Back-Cover Text being (a) (see below). A copy of the
license is included in the section entitled "GNU Free Documentation
License".
(a) The Back-Cover Text is: "You have the freedom to copy and modify
this GNU Manual."
File: gsl-ref.info, Node: Spherical Vector Distributions, Next: The Weibull Distribution, Prev: The Pareto Distribution, Up: Random Number Distributions
20.23 Spherical Vector Distributions
====================================
The spherical distributions generate random vectors, located on a
spherical surface. They can be used as random directions, for example
in the steps of a random walk.
-- Function: void gsl_ran_dir_2d (const gsl_rng * R, double * X,
double * Y)
-- Function: void gsl_ran_dir_2d_trig_method (const gsl_rng * R,
double * X, double * Y)
This function returns a random direction vector v = (X,Y) in two
dimensions. The vector is normalized such that |v|^2 = x^2 + y^2
= 1. The obvious way to do this is to take a uniform random
number between 0 and 2\pi and let X and Y be the sine and cosine
respectively. Two trig functions would have been expensive in the
old days, but with modern hardware implementations, this is
sometimes the fastest way to go. This is the case for the Pentium
(but not the case for the Sun Sparcstation). One can avoid the
trig evaluations by choosing X and Y in the interior of a unit
circle (choose them at random from the interior of the enclosing
square, and then reject those that are outside the unit circle),
and then dividing by \sqrt{x^2 + y^2}. A much cleverer approach,
attributed to von Neumann (See Knuth, v2, 3rd ed, p140, exercise
23), requires neither trig nor a square root. In this approach, U
and V are chosen at random from the interior of a unit circle, and
then x=(u^2-v^2)/(u^2+v^2) and y=2uv/(u^2+v^2).
-- Function: void gsl_ran_dir_3d (const gsl_rng * R, double * X,
double * Y, double * Z)
This function returns a random direction vector v = (X,Y,Z) in
three dimensions. The vector is normalized such that |v|^2 = x^2
+ y^2 + z^2 = 1. The method employed is due to Robert E. Knop
(CACM 13, 326 (1970)), and explained in Knuth, v2, 3rd ed, p136.
It uses the surprising fact that the distribution projected along
any axis is actually uniform (this is only true for 3 dimensions).
-- Function: void gsl_ran_dir_nd (const gsl_rng * R, size_t N, double
* X)
This function returns a random direction vector v =
(x_1,x_2,...,x_n) in N dimensions. The vector is normalized such
that |v|^2 = x_1^2 + x_2^2 + ... + x_n^2 = 1. The method uses the
fact that a multivariate Gaussian distribution is spherically
symmetric. Each component is generated to have a Gaussian
distribution, and then the components are normalized. The method
is described by Knuth, v2, 3rd ed, p135-136, and attributed to G.
W. Brown, Modern Mathematics for the Engineer (1956).
File: gsl-ref.info, Node: The Weibull Distribution, Next: The Type-1 Gumbel Distribution, Prev: Spherical Vector Distributions, Up: Random Number Distributions
20.24 The Weibull Distribution
==============================
-- Function: double gsl_ran_weibull (const gsl_rng * R, double A,
double B)
This function returns a random variate from the Weibull
distribution. The distribution function is,
p(x) dx = {b \over a^b} x^{b-1} \exp(-(x/a)^b) dx
for x >= 0.
-- Function: double gsl_ran_weibull_pdf (double X, double A, double B)
This function computes the probability density p(x) at X for a
Weibull distribution with scale A and exponent B, using the
formula given above.
-- Function: double gsl_cdf_weibull_P (double X, double A, double B)
-- Function: double gsl_cdf_weibull_Q (double X, double A, double B)
-- Function: double gsl_cdf_weibull_Pinv (double P, double A, double B)
-- Function: double gsl_cdf_weibull_Qinv (double Q, double A, double B)
These functions compute the cumulative distribution functions
P(x), Q(x) and their inverses for the Weibull distribution with
scale A and exponent B.
File: gsl-ref.info, Node: The Type-1 Gumbel Distribution, Next: The Type-2 Gumbel Distribution, Prev: The Weibull Distribution, Up: Random Number Distributions
20.25 The Type-1 Gumbel Distribution
====================================
-- Function: double gsl_ran_gumbel1 (const gsl_rng * R, double A,
double B)
This function returns a random variate from the Type-1 Gumbel
distribution. The Type-1 Gumbel distribution function is,
p(x) dx = a b \exp(-(b \exp(-ax) + ax)) dx
for -\infty < x < \infty.
-- Function: double gsl_ran_gumbel1_pdf (double X, double A, double B)
This function computes the probability density p(x) at X for a
Type-1 Gumbel distribution with parameters A and B, using the
formula given above.
-- Function: double gsl_cdf_gumbel1_P (double X, double A, double B)
-- Function: double gsl_cdf_gumbel1_Q (double X, double A, double B)
-- Function: double gsl_cdf_gumbel1_Pinv (double P, double A, double B)
-- Function: double gsl_cdf_gumbel1_Qinv (double Q, double A, double B)
These functions compute the cumulative distribution functions
P(x), Q(x) and their inverses for the Type-1 Gumbel distribution
with parameters A and B.
File: gsl-ref.info, Node: The Type-2 Gumbel Distribution, Next: The Dirichlet Distribution, Prev: The Type-1 Gumbel Distribution, Up: Random Number Distributions
20.26 The Type-2 Gumbel Distribution
====================================
-- Function: double gsl_ran_gumbel2 (const gsl_rng * R, double A,
double B)
This function returns a random variate from the Type-2 Gumbel
distribution. The Type-2 Gumbel distribution function is,
p(x) dx = a b x^{-a-1} \exp(-b x^{-a}) dx
for 0 < x < \infty.
-- Function: double gsl_ran_gumbel2_pdf (double X, double A, double B)
This function computes the probability density p(x) at X for a
Type-2 Gumbel distribution with parameters A and B, using the
formula given above.
-- Function: double gsl_cdf_gumbel2_P (double X, double A, double B)
-- Function: double gsl_cdf_gumbel2_Q (double X, double A, double B)
-- Function: double gsl_cdf_gumbel2_Pinv (double P, double A, double B)
-- Function: double gsl_cdf_gumbel2_Qinv (double Q, double A, double B)
These functions compute the cumulative distribution functions
P(x), Q(x) and their inverses for the Type-2 Gumbel distribution
with parameters A and B.
File: gsl-ref.info, Node: The Dirichlet Distribution, Next: General Discrete Distributions, Prev: The Type-2 Gumbel Distribution, Up: Random Number Distributions
20.27 The Dirichlet Distribution
================================
-- Function: void gsl_ran_dirichlet (const gsl_rng * R, size_t K,
const double ALPHA[], double THETA[])
This function returns an array of K random variates from a
Dirichlet distribution of order K-1. The distribution function is
p(\theta_1, ..., \theta_K) d\theta_1 ... d\theta_K =
(1/Z) \prod_{i=1}^K \theta_i^{\alpha_i - 1} \delta(1 -\sum_{i=1}^K \theta_i) d\theta_1 ... d\theta_K
for theta_i >= 0 and alpha_i > 0. The delta function ensures that
\sum \theta_i = 1. The normalization factor Z is
Z = {\prod_{i=1}^K \Gamma(\alpha_i)} / {\Gamma( \sum_{i=1}^K \alpha_i)}
The random variates are generated by sampling K values from gamma
distributions with parameters a=alpha_i, b=1, and renormalizing.
See A.M. Law, W.D. Kelton, `Simulation Modeling and Analysis'
(1991).
-- Function: double gsl_ran_dirichlet_pdf (size_t K, const double
ALPHA[], const double THETA[])
This function computes the probability density p(\theta_1, ... ,
\theta_K) at THETA[K] for a Dirichlet distribution with parameters
ALPHA[K], using the formula given above.
-- Function: double gsl_ran_dirichlet_lnpdf (size_t K, const double
ALPHA[], const double THETA[])
This function computes the logarithm of the probability density
p(\theta_1, ... , \theta_K) for a Dirichlet distribution with
parameters ALPHA[K].
File: gsl-ref.info, Node: General Discrete Distributions, Next: The Poisson Distribution, Prev: The Dirichlet Distribution, Up: Random Number Distributions
20.28 General Discrete Distributions
====================================
Given K discrete events with different probabilities P[k], produce a
random value k consistent with its probability.
The obvious way to do this is to preprocess the probability list by
generating a cumulative probability array with K+1 elements:
C[0] = 0
C[k+1] = C[k]+P[k].
Note that this construction produces C[K]=1. Now choose a uniform
deviate u between 0 and 1, and find the value of k such that C[k] <= u
< C[k+1]. Although this in principle requires of order \log K steps per
random number generation, they are fast steps, and if you use something
like \lfloor uK \rfloor as a starting point, you can often do pretty
well.
But faster methods have been devised. Again, the idea is to
preprocess the probability list, and save the result in some form of
lookup table; then the individual calls for a random discrete event can
go rapidly. An approach invented by G. Marsaglia (Generating discrete
random variables in a computer, Comm ACM 6, 37-38 (1963)) is very
clever, and readers interested in examples of good algorithm design are
directed to this short and well-written paper. Unfortunately, for
large K, Marsaglia's lookup table can be quite large.
A much better approach is due to Alastair J. Walker (An efficient
method for generating discrete random variables with general
distributions, ACM Trans on Mathematical Software 3, 253-256 (1977);
see also Knuth, v2, 3rd ed, p120-121,139). This requires two lookup
tables, one floating point and one integer, but both only of size K.
After preprocessing, the random numbers are generated in O(1) time,
even for large K. The preprocessing suggested by Walker requires
O(K^2) effort, but that is not actually necessary, and the
implementation provided here only takes O(K) effort. In general, more
preprocessing leads to faster generation of the individual random
numbers, but a diminishing return is reached pretty early. Knuth points
out that the optimal preprocessing is combinatorially difficult for
large K.
This method can be used to speed up some of the discrete random
number generators below, such as the binomial distribution. To use it
for something like the Poisson Distribution, a modification would have
to be made, since it only takes a finite set of K outcomes.
-- Function: gsl_ran_discrete_t * gsl_ran_discrete_preproc (size_t K,
const double * P)
This function returns a pointer to a structure that contains the
lookup table for the discrete random number generator. The array
P[] contains the probabilities of the discrete events; these array
elements must all be positive, but they needn't add up to one (so
you can think of them more generally as "weights")--the
preprocessor will normalize appropriately. This return value is
used as an argument for the `gsl_ran_discrete' function below.
-- Function: size_t gsl_ran_discrete (const gsl_rng * R, const
gsl_ran_discrete_t * G)
After the preprocessor, above, has been called, you use this
function to get the discrete random numbers.
-- Function: double gsl_ran_discrete_pdf (size_t K, const
gsl_ran_discrete_t * G)
Returns the probability P[k] of observing the variable K. Since
P[k] is not stored as part of the lookup table, it must be
recomputed; this computation takes O(K), so if K is large and you
care about the original array P[k] used to create the lookup
table, then you should just keep this original array P[k] around.
-- Function: void gsl_ran_discrete_free (gsl_ran_discrete_t * G)
De-allocates the lookup table pointed to by G.
File: gsl-ref.info, Node: The Poisson Distribution, Next: The Bernoulli Distribution, Prev: General Discrete Distributions, Up: Random Number Distributions
20.29 The Poisson Distribution
==============================
-- Function: unsigned int gsl_ran_poisson (const gsl_rng * R, double
MU)
This function returns a random integer from the Poisson
distribution with mean MU. The probability distribution for
Poisson variates is,
p(k) = {\mu^k \over k!} \exp(-\mu)
for k >= 0.
-- Function: double gsl_ran_poisson_pdf (unsigned int K, double MU)
This function computes the probability p(k) of obtaining K from a
Poisson distribution with mean MU, using the formula given above.
-- Function: double gsl_cdf_poisson_P (unsigned int K, double MU)
-- Function: double gsl_cdf_poisson_Q (unsigned int K, double MU)
These functions compute the cumulative distribution functions
P(k), Q(k) for the Poisson distribution with parameter MU.
File: gsl-ref.info, Node: The Bernoulli Distribution, Next: The Binomial Distribution, Prev: The Poisson Distribution, Up: Random Number Distributions
20.30 The Bernoulli Distribution
================================
-- Function: unsigned int gsl_ran_bernoulli (const gsl_rng * R, double
P)
This function returns either 0 or 1, the result of a Bernoulli
trial with probability P. The probability distribution for a
Bernoulli trial is,
p(0) = 1 - p
p(1) = p
-- Function: double gsl_ran_bernoulli_pdf (unsigned int K, double P)
This function computes the probability p(k) of obtaining K from a
Bernoulli distribution with probability parameter P, using the
formula given above.
File: gsl-ref.info, Node: The Binomial Distribution, Next: The Multinomial Distribution, Prev: The Bernoulli Distribution, Up: Random Number Distributions
20.31 The Binomial Distribution
===============================
-- Function: unsigned int gsl_ran_binomial (const gsl_rng * R, double
P, unsigned int N)
This function returns a random integer from the binomial
distribution, the number of successes in N independent trials with
probability P. The probability distribution for binomial variates
is,
p(k) = {n! \over k! (n-k)! } p^k (1-p)^{n-k}
for 0 <= k <= n.
-- Function: double gsl_ran_binomial_pdf (unsigned int K, double P,
unsigned int N)
This function computes the probability p(k) of obtaining K from a
binomial distribution with parameters P and N, using the formula
given above.
-- Function: double gsl_cdf_binomial_P (unsigned int K, double P,
unsigned int N)
-- Function: double gsl_cdf_binomial_Q (unsigned int K, double P,
unsigned int N)
These functions compute the cumulative distribution functions
P(k), Q(k) for the binomial distribution with parameters P and N.
File: gsl-ref.info, Node: The Multinomial Distribution, Next: The Negative Binomial Distribution, Prev: The Binomial Distribution, Up: Random Number Distributions
20.32 The Multinomial Distribution
==================================
-- Function: void gsl_ran_multinomial (const gsl_rng * R, size_t K,
unsigned int N, const double P[], unsigned int N[])
This function computes a random sample N[] from the multinomial
distribution formed by N trials from an underlying distribution
P[K]. The distribution function for N[] is,
P(n_1, n_2, ..., n_K) =
(N!/(n_1! n_2! ... n_K!)) p_1^n_1 p_2^n_2 ... p_K^n_K
where (n_1, n_2, ..., n_K) are nonnegative integers with
sum_{k=1}^K n_k = N, and (p_1, p_2, ..., p_K) is a probability
distribution with \sum p_i = 1. If the array P[K] is not
normalized then its entries will be treated as weights and
normalized appropriately. The arrays N[] and P[] must both be of
length K.
Random variates are generated using the conditional binomial
method (see C.S. Davis, `The computer generation of multinomial
random variates', Comp. Stat. Data Anal. 16 (1993) 205-217 for
details).
-- Function: double gsl_ran_multinomial_pdf (size_t K, const double
P[], const unsigned int N[])
This function computes the probability P(n_1, n_2, ..., n_K) of
sampling N[K] from a multinomial distribution with parameters
P[K], using the formula given above.
-- Function: double gsl_ran_multinomial_lnpdf (size_t K, const double
P[], const unsigned int N[])
This function returns the logarithm of the probability for the
multinomial distribution P(n_1, n_2, ..., n_K) with parameters
P[K].
File: gsl-ref.info, Node: The Negative Binomial Distribution, Next: The Pascal Distribution, Prev: The Multinomial Distribution, Up: Random Number Distributions
20.33 The Negative Binomial Distribution
========================================
-- Function: unsigned int gsl_ran_negative_binomial (const gsl_rng *
R, double P, double N)
This function returns a random integer from the negative binomial
distribution, the number of failures occurring before N successes
in independent trials with probability P of success. The
probability distribution for negative binomial variates is,
p(k) = {\Gamma(n + k) \over \Gamma(k+1) \Gamma(n) } p^n (1-p)^k
Note that n is not required to be an integer.
-- Function: double gsl_ran_negative_binomial_pdf (unsigned int K,
double P, double N)
This function computes the probability p(k) of obtaining K from a
negative binomial distribution with parameters P and N, using the
formula given above.
-- Function: double gsl_cdf_negative_binomial_P (unsigned int K,
double P, double N)
-- Function: double gsl_cdf_negative_binomial_Q (unsigned int K,
double P, double N)
These functions compute the cumulative distribution functions
P(k), Q(k) for the negative binomial distribution with parameters
P and N.
File: gsl-ref.info, Node: The Pascal Distribution, Next: The Geometric Distribution, Prev: The Negative Binomial Distribution, Up: Random Number Distributions
20.34 The Pascal Distribution
=============================
-- Function: unsigned int gsl_ran_pascal (const gsl_rng * R, double P,
unsigned int N)
This function returns a random integer from the Pascal
distribution. The Pascal distribution is simply a negative
binomial distribution with an integer value of n.
p(k) = {(n + k - 1)! \over k! (n - 1)! } p^n (1-p)^k
for k >= 0.
-- Function: double gsl_ran_pascal_pdf (unsigned int K, double P,
unsigned int N)
This function computes the probability p(k) of obtaining K from a
Pascal distribution with parameters P and N, using the formula
given above.
-- Function: double gsl_cdf_pascal_P (unsigned int K, double P,
unsigned int N)
-- Function: double gsl_cdf_pascal_Q (unsigned int K, double P,
unsigned int N)
These functions compute the cumulative distribution functions
P(k), Q(k) for the Pascal distribution with parameters P and N.
File: gsl-ref.info, Node: The Geometric Distribution, Next: The Hypergeometric Distribution, Prev: The Pascal Distribution, Up: Random Number Distributions
20.35 The Geometric Distribution
================================
-- Function: unsigned int gsl_ran_geometric (const gsl_rng * R, double
P)
This function returns a random integer from the geometric
distribution, the number of independent trials with probability P
until the first success. The probability distribution for
geometric variates is,
p(k) = p (1-p)^(k-1)
for k >= 1. Note that the distribution begins with k=1 with this
definition. There is another convention in which the exponent k-1
is replaced by k.
-- Function: double gsl_ran_geometric_pdf (unsigned int K, double P)
This function computes the probability p(k) of obtaining K from a
geometric distribution with probability parameter P, using the
formula given above.
-- Function: double gsl_cdf_geometric_P (unsigned int K, double P)
-- Function: double gsl_cdf_geometric_Q (unsigned int K, double P)
These functions compute the cumulative distribution functions
P(k), Q(k) for the geometric distribution with parameter P.
File: gsl-ref.info, Node: The Hypergeometric Distribution, Next: The Logarithmic Distribution, Prev: The Geometric Distribution, Up: Random Number Distributions
20.36 The Hypergeometric Distribution
=====================================
-- Function: unsigned int gsl_ran_hypergeometric (const gsl_rng * R,
unsigned int N1, unsigned int N2, unsigned int T)
This function returns a random integer from the hypergeometric
distribution. The probability distribution for hypergeometric
random variates is,
p(k) = C(n_1, k) C(n_2, t - k) / C(n_1 + n_2, t)
where C(a,b) = a!/(b!(a-b)!) and t <= n_1 + n_2. The domain of k
is max(0,t-n_2), ..., min(t,n_1).
If a population contains n_1 elements of "type 1" and n_2 elements
of "type 2" then the hypergeometric distribution gives the
probability of obtaining k elements of "type 1" in t samples from
the population without replacement.
-- Function: double gsl_ran_hypergeometric_pdf (unsigned int K,
unsigned int N1, unsigned int N2, unsigned int T)
This function computes the probability p(k) of obtaining K from a
hypergeometric distribution with parameters N1, N2, T, using the
formula given above.
-- Function: double gsl_cdf_hypergeometric_P (unsigned int K, unsigned
int N1, unsigned int N2, unsigned int T)
-- Function: double gsl_cdf_hypergeometric_Q (unsigned int K, unsigned
int N1, unsigned int N2, unsigned int T)
These functions compute the cumulative distribution functions
P(k), Q(k) for the hypergeometric distribution with parameters N1,
N2 and T.
File: gsl-ref.info, Node: The Logarithmic Distribution, Next: Shuffling and Sampling, Prev: The Hypergeometric Distribution, Up: Random Number Distributions
20.37 The Logarithmic Distribution
==================================
-- Function: unsigned int gsl_ran_logarithmic (const gsl_rng * R,
double P)
This function returns a random integer from the logarithmic
distribution. The probability distribution for logarithmic random
variates is,
p(k) = {-1 \over \log(1-p)} {p^k \over k}
for k >= 1.
-- Function: double gsl_ran_logarithmic_pdf (unsigned int K, double P)
This function computes the probability p(k) of obtaining K from a
logarithmic distribution with probability parameter P, using the
formula given above.
File: gsl-ref.info, Node: Shuffling and Sampling, Next: Random Number Distribution Examples, Prev: The Logarithmic Distribution, Up: Random Number Distributions
20.38 Shuffling and Sampling
============================
The following functions allow the shuffling and sampling of a set of
objects. The algorithms rely on a random number generator as a source
of randomness and a poor quality generator can lead to correlations in
the output. In particular it is important to avoid generators with a
short period. For more information see Knuth, v2, 3rd ed, Section
3.4.2, "Random Sampling and Shuffling".
-- Function: void gsl_ran_shuffle (const gsl_rng * R, void * BASE,
size_t N, size_t SIZE)
This function randomly shuffles the order of N objects, each of
size SIZE, stored in the array BASE[0..N-1]. The output of the
random number generator R is used to produce the permutation. The
algorithm generates all possible n! permutations with equal
probability, assuming a perfect source of random numbers.
The following code shows how to shuffle the numbers from 0 to 51,
int i, a[52];
for (i = 0; i < 52; i++)
{
a[i] = i;
}
gsl_ran_shuffle (r, a, 52, sizeof (int));
-- Function: int gsl_ran_choose (const gsl_rng * R, void * DEST,
size_t K, void * SRC, size_t N, size_t SIZE)
This function fills the array DEST[k] with K objects taken
randomly from the N elements of the array SRC[0..N-1]. The
objects are each of size SIZE. The output of the random number
generator R is used to make the selection. The algorithm ensures
all possible samples are equally likely, assuming a perfect source
of randomness.
The objects are sampled _without_ replacement, thus each object can
only appear once in DEST[k]. It is required that K be less than
or equal to `n'. The objects in DEST will be in the same relative
order as those in SRC. You will need to call `gsl_ran_shuffle(r,
dest, n, size)' if you want to randomize the order.
The following code shows how to select a random sample of three
unique numbers from the set 0 to 99,
int i;
double a[3], b[100];
for (i = 0; i < 100; i++)
{
b[i] = (double) i;
}
gsl_ran_choose (r, a, 3, b, 100, sizeof (double));
-- Function: void gsl_ran_sample (const gsl_rng * R, void * DEST,
size_t K, void * SRC, size_t N, size_t SIZE)
This function is like `gsl_ran_choose' but samples K items from
the original array of N items SRC with replacement, so the same
object can appear more than once in the output sequence DEST.
There is no requirement that K be less than N in this case.
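The shuffling algorithm referred to above is the Fisher-Yates method
described by Knuth. A plain-C sketch for an array of ints is shown
below; it uses `rand()' as the randomness source purely for
illustration (GSL draws from the supplied `gsl_rng' instead, and also
avoids the modulo bias that `rand () % n' introduces):

```c
#include <stdlib.h>

/* Fisher-Yates shuffle: for i = n-1 down to 1, swap a[i] with a[j]
   for a uniform j in [0, i].  Given uniform indices, each of the
   n! permutations is produced with equal probability. */
void
shuffle_int (int a[], size_t n)
{
  size_t i;
  for (i = n - 1; i > 0; i--)
    {
      size_t j = (size_t) (rand () % (int) (i + 1));  /* 0 <= j <= i */
      int tmp = a[i];
      a[i] = a[j];
      a[j] = tmp;
    }
}
```

After shuffling, the array still contains exactly the original
elements, only reordered.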
File: gsl-ref.info, Node: Random Number Distribution Examples, Next: Random Number Distribution References and Further Reading, Prev: Shuffling and Sampling, Up: Random Number Distributions
20.39 Examples
==============
The following program demonstrates the use of a random number generator
to produce variates from a distribution. It prints 10 samples from the
Poisson distribution with a mean of 3.
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
int
main (void)
{
const gsl_rng_type * T;
gsl_rng * r;
int i, n = 10;
double mu = 3.0;
/* create a generator chosen by the
environment variable GSL_RNG_TYPE */
gsl_rng_env_setup();
T = gsl_rng_default;
r = gsl_rng_alloc (T);
/* print n random variates chosen from
the poisson distribution with mean
parameter mu */
for (i = 0; i < n; i++)
{
unsigned int k = gsl_ran_poisson (r, mu);
printf (" %u", k);
}
printf ("\n");
gsl_rng_free (r);
return 0;
}
If the library and header files are installed under `/usr/local' (the
default location) then the program can be compiled with these options,
$ gcc -Wall demo.c -lgsl -lgslcblas -lm
Here is the output of the program,
$ ./a.out
2 5 5 2 1 0 3 4 1 1
The variates depend on the seed used by the generator. The seed for the
default generator type `gsl_rng_default' can be changed with the
`GSL_RNG_SEED' environment variable to produce a different stream of
variates,
$ GSL_RNG_SEED=123 ./a.out
GSL_RNG_SEED=123
4 5 6 3 3 1 4 2 5 5
The following program generates a random walk in two dimensions.
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
int
main (void)
{
int i;
double x = 0, y = 0, dx, dy;
const gsl_rng_type * T;
gsl_rng * r;
gsl_rng_env_setup();
T = gsl_rng_default;
r = gsl_rng_alloc (T);
printf ("%g %g\n", x, y);
for (i = 0; i < 10; i++)
{
gsl_ran_dir_2d (r, &dx, &dy);
x += dx; y += dy;
printf ("%g %g\n", x, y);
}
gsl_rng_free (r);
return 0;
}
Here is some output from the program, four 10-step random walks from
the origin (the accompanying plot is not reproduced in this text
version).
The following program computes the upper and lower cumulative
distribution functions for the standard normal distribution at x=2.
#include <stdio.h>
#include <gsl/gsl_cdf.h>
int
main (void)
{
double P, Q;
double x = 2.0;
P = gsl_cdf_ugaussian_P (x);
printf ("prob(x < %f) = %f\n", x, P);
Q = gsl_cdf_ugaussian_Q (x);
printf ("prob(x > %f) = %f\n", x, Q);
x = gsl_cdf_ugaussian_Pinv (P);
printf ("Pinv(%f) = %f\n", P, x);
x = gsl_cdf_ugaussian_Qinv (Q);
printf ("Qinv(%f) = %f\n", Q, x);
return 0;
}
Here is the output of the program,
prob(x < 2.000000) = 0.977250
prob(x > 2.000000) = 0.022750
Pinv(0.977250) = 2.000000
Qinv(0.022750) = 2.000000
File: gsl-ref.info, Node: Random Number Distribution References and Further Reading, Prev: Random Number Distribution Examples, Up: Random Number Distributions
20.40 References and Further Reading
====================================
For an encyclopaedic coverage of the subject readers are advised to
consult the book `Non-Uniform Random Variate Generation' by Luc
Devroye. It covers every imaginable distribution and provides hundreds
of algorithms.
Luc Devroye, `Non-Uniform Random Variate Generation',
Springer-Verlag, ISBN 0-387-96305-7. Available online at
`http://cg.scs.carleton.ca/~luc/rnbookindex.html'.
The subject of random variate generation is also reviewed by Knuth, who
describes algorithms for all the major distributions.
Donald E. Knuth, `The Art of Computer Programming: Seminumerical
Algorithms' (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842.
The Particle Data Group provides a short review of techniques for
generating distributions of random numbers in the "Monte Carlo" section
of its Annual Review of Particle Physics.
`Review of Particle Properties' R.M. Barnett et al., Physical
Review D54, 1 (1996) `http://pdg.lbl.gov/'.
The Review of Particle Physics is available online in postscript and pdf
format.
An overview of methods used to compute cumulative distribution functions
can be found in `Statistical Computing' by W.J. Kennedy and J.E.
Gentle. Another general reference is `Elements of Statistical
Computing' by R.A. Thisted.
William E. Kennedy and James E. Gentle, `Statistical Computing'
(1980), Marcel Dekker, ISBN 0-8247-6898-1.
Ronald A. Thisted, `Elements of Statistical Computing' (1988),
Chapman & Hall, ISBN 0-412-01371-1.
The cumulative distribution functions for the Gaussian distribution are
based on the following papers,
`Rational Chebyshev Approximations Using Linear Equations', W.J.
Cody, W. Fraser, J.F. Hart. Numerische Mathematik 12, 242-251
(1968).
`Rational Chebyshev Approximations for the Error Function', W.J.
Cody. Mathematics of Computation 23, n107, 631-637 (July 1969).
File: gsl-ref.info, Node: Statistics, Next: Histograms, Prev: Random Number Distributions, Up: Top
21 Statistics
*************
This chapter describes the statistical functions in the library. The
basic statistical functions include routines to compute the mean,
variance and standard deviation. More advanced functions allow you to
calculate absolute deviations, skewness, and kurtosis as well as the
median and arbitrary percentiles. The algorithms use recurrence
relations to compute average quantities in a stable way, without large
intermediate values that might overflow.
The functions are available in versions for datasets in the standard
floating-point and integer types. The versions for double precision
floating-point data have the prefix `gsl_stats' and are declared in the
header file `gsl_statistics_double.h'. The versions for integer data
have the prefix `gsl_stats_int' and are declared in the header file
`gsl_statistics_int.h'. All the functions operate on C arrays with a
STRIDE parameter specifying the spacing between elements.
* Menu:
* Mean and standard deviation and variance::
* Absolute deviation::
* Higher moments (skewness and kurtosis)::
* Autocorrelation::
* Covariance::
* Correlation::
* Weighted Samples::
* Maximum and Minimum values::
* Median and Percentiles::
* Example statistical programs::
* Statistics References and Further Reading::
File: gsl-ref.info, Node: Mean and standard deviation and variance, Next: Absolute deviation, Up: Statistics
21.1 Mean, Standard Deviation and Variance
==========================================
-- Function: double gsl_stats_mean (const double DATA[], size_t
STRIDE, size_t N)
This function returns the arithmetic mean of DATA, a dataset of
length N with stride STRIDE. The arithmetic mean, or "sample
mean", is denoted by \Hat\mu and defined as,
\Hat\mu = (1/N) \sum x_i
where x_i are the elements of the dataset DATA. For samples drawn
from a Gaussian distribution the variance of \Hat\mu is \sigma^2 /
N.
-- Function: double gsl_stats_variance (const double DATA[], size_t
STRIDE, size_t N)
This function returns the estimated, or "sample", variance of
DATA, a dataset of length N with stride STRIDE. The estimated
variance is denoted by \Hat\sigma^2 and is defined by,
\Hat\sigma^2 = (1/(N-1)) \sum (x_i - \Hat\mu)^2
where x_i are the elements of the dataset DATA. Note that the
normalization factor of 1/(N-1) results from the derivation of
\Hat\sigma^2 as an unbiased estimator of the population variance
\sigma^2. For samples drawn from a Gaussian distribution the
variance of \Hat\sigma^2 itself is 2 \sigma^4 / N.
This function computes the mean via a call to `gsl_stats_mean'. If
you have already computed the mean then you can pass it directly to
`gsl_stats_variance_m'.
-- Function: double gsl_stats_variance_m (const double DATA[], size_t
STRIDE, size_t N, double MEAN)
This function returns the sample variance of DATA relative to the
given value of MEAN. The function is computed with \Hat\mu
replaced by the value of MEAN that you supply,
\Hat\sigma^2 = (1/(N-1)) \sum (x_i - mean)^2
-- Function: double gsl_stats_sd (const double DATA[], size_t STRIDE,
size_t N)
-- Function: double gsl_stats_sd_m (const double DATA[], size_t
STRIDE, size_t N, double MEAN)
The standard deviation is defined as the square root of the
variance. These functions return the square root of the
corresponding variance functions above.
-- Function: double gsl_stats_tss (const double DATA[], size_t STRIDE,
size_t N)
-- Function: double gsl_stats_tss_m (const double DATA[], size_t
STRIDE, size_t N, double MEAN)
These functions return the total sum of squares (TSS) of DATA about
the mean. For `gsl_stats_tss_m' the user-supplied value of MEAN
is used, and for `gsl_stats_tss' it is computed using
`gsl_stats_mean'.
TSS = \sum (x_i - mean)^2
-- Function: double gsl_stats_variance_with_fixed_mean (const double
DATA[], size_t STRIDE, size_t N, double MEAN)
This function computes an unbiased estimate of the variance of
DATA when the population mean MEAN of the underlying distribution
is known _a priori_. In this case the estimator for the variance
uses the factor 1/N and the sample mean \Hat\mu is replaced by the
known population mean \mu,
\Hat\sigma^2 = (1/N) \sum (x_i - \mu)^2
-- Function: double gsl_stats_sd_with_fixed_mean (const double DATA[],
size_t STRIDE, size_t N, double MEAN)
This function calculates the standard deviation of DATA for a
fixed population mean MEAN. The result is the square root of the
corresponding variance function.
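The mean and variance formulas above can be sketched in a few lines
of C. The mean uses the incremental recurrence \Hat\mu_i =
\Hat\mu_{i-1} + (x_i - \Hat\mu_{i-1})/i, which avoids large
intermediate sums, in the spirit of the stable recurrences mentioned
at the start of this chapter (the names here are hypothetical, not
the GSL internals):

```c
#include <stddef.h>

/* \Hat\mu = (1/N) sum x_i, computed with a stable recurrence */
double
stats_mean (const double data[], size_t stride, size_t n)
{
  double mean = 0.0;
  size_t i;
  for (i = 0; i < n; i++)
    mean += (data[i * stride] - mean) / (i + 1);
  return mean;
}

/* \Hat\sigma^2 = (1/(N-1)) sum (x_i - mean)^2, for a given mean */
double
stats_variance_m (const double data[], size_t stride, size_t n,
                  double mean)
{
  double sum = 0.0;
  size_t i;
  for (i = 0; i < n; i++)
    {
      double d = data[i * stride] - mean;
      sum += d * d;
    }
  return sum / (n - 1);
}
```

For the dataset {1,2,3,4,5} the mean is 3 and the sample variance is
2.5.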
File: gsl-ref.info, Node: Absolute deviation, Next: Higher moments (skewness and kurtosis), Prev: Mean and standard deviation and variance, Up: Statistics
21.2 Absolute deviation
=======================
-- Function: double gsl_stats_absdev (const double DATA[], size_t
STRIDE, size_t N)
This function computes the absolute deviation from the mean of
DATA, a dataset of length N with stride STRIDE. The absolute
deviation from the mean is defined as,
absdev = (1/N) \sum |x_i - \Hat\mu|
where x_i are the elements of the dataset DATA. The absolute
deviation from the mean provides a more robust measure of the
width of a distribution than the variance. This function computes
the mean of DATA via a call to `gsl_stats_mean'.
-- Function: double gsl_stats_absdev_m (const double DATA[], size_t
STRIDE, size_t N, double MEAN)
This function computes the absolute deviation of the dataset DATA
relative to the given value of MEAN,
absdev = (1/N) \sum |x_i - mean|
This function is useful if you have already computed the mean of
DATA (and want to avoid recomputing it), or wish to calculate the
absolute deviation relative to another value (such as zero, or the
median).
File: gsl-ref.info, Node: Higher moments (skewness and kurtosis), Next: Autocorrelation, Prev: Absolute deviation, Up: Statistics
21.3 Higher moments (skewness and kurtosis)
===========================================
-- Function: double gsl_stats_skew (const double DATA[], size_t
STRIDE, size_t N)
This function computes the skewness of DATA, a dataset of length N
with stride STRIDE. The skewness is defined as,
skew = (1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^3
where x_i are the elements of the dataset DATA. The skewness
measures the asymmetry of the tails of a distribution.
The function computes the mean and estimated standard deviation of
DATA via calls to `gsl_stats_mean' and `gsl_stats_sd'.
-- Function: double gsl_stats_skew_m_sd (const double DATA[], size_t
STRIDE, size_t N, double MEAN, double SD)
This function computes the skewness of the dataset DATA using the
given values of the mean MEAN and standard deviation SD,
skew = (1/N) \sum ((x_i - mean)/sd)^3
These functions are useful if you have already computed the mean
and standard deviation of DATA and want to avoid recomputing them.
-- Function: double gsl_stats_kurtosis (const double DATA[], size_t
STRIDE, size_t N)
This function computes the kurtosis of DATA, a dataset of length N
with stride STRIDE. The kurtosis is defined as,
kurtosis = ((1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^4) - 3
The kurtosis measures how sharply peaked a distribution is,
relative to its width. The kurtosis is normalized to zero for a
Gaussian distribution.
-- Function: double gsl_stats_kurtosis_m_sd (const double DATA[],
size_t STRIDE, size_t N, double MEAN, double SD)
This function computes the kurtosis of the dataset DATA using the
given values of the mean MEAN and standard deviation SD,
kurtosis = ((1/N) \sum ((x_i - mean)/sd)^4) - 3
This function is useful if you have already computed the mean and
standard deviation of DATA and want to avoid recomputing them.
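The skewness formula with precomputed mean and standard deviation is
a direct loop (an illustrative sketch with a hypothetical name):

```c
#include <stddef.h>

/* skew = (1/N) sum ((x_i - mean)/sd)^3, given precomputed mean, sd */
double
stats_skew_m_sd (const double data[], size_t stride, size_t n,
                 double mean, double sd)
{
  double skew = 0.0;
  size_t i;
  for (i = 0; i < n; i++)
    {
      double z = (data[i * stride] - mean) / sd;
      skew += z * z * z;
    }
  return skew / n;
}
```

A dataset that is symmetric about its mean, such as {1,2,3,4,5}, has
zero skewness.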
File: gsl-ref.info, Node: Autocorrelation, Next: Covariance, Prev: Higher moments (skewness and kurtosis), Up: Statistics
21.4 Autocorrelation
====================
-- Function: double gsl_stats_lag1_autocorrelation (const double
DATA[], const size_t STRIDE, const size_t N)
This function computes the lag-1 autocorrelation of the dataset
DATA.
a_1 = {\sum_{i = 1}^{n} (x_{i} - \Hat\mu) (x_{i-1} - \Hat\mu)
\over
\sum_{i = 1}^{n} (x_{i} - \Hat\mu) (x_{i} - \Hat\mu)}
-- Function: double gsl_stats_lag1_autocorrelation_m (const double
DATA[], const size_t STRIDE, const size_t N, const double
MEAN)
This function computes the lag-1 autocorrelation of the dataset
DATA using the given value of the mean MEAN.
File: gsl-ref.info, Node: Covariance, Next: Correlation, Prev: Autocorrelation, Up: Statistics
21.5 Covariance
===============
-- Function: double gsl_stats_covariance (const double DATA1[], const
size_t STRIDE1, const double DATA2[], const size_t STRIDE2,
const size_t N)
This function computes the covariance of the datasets DATA1 and
DATA2 which must both be of the same length N.
covar = (1/(n - 1)) \sum_{i = 1}^{n} (x_i - \Hat x) (y_i - \Hat y)
-- Function: double gsl_stats_covariance_m (const double DATA1[],
const size_t STRIDE1, const double DATA2[], const size_t
STRIDE2, const size_t N, const double MEAN1, const double
MEAN2)
This function computes the covariance of the datasets DATA1 and
DATA2 using the given values of the means, MEAN1 and MEAN2. This
is useful if you have already computed the means of DATA1 and
DATA2 and want to avoid recomputing them.
File: gsl-ref.info, Node: Correlation, Next: Weighted Samples, Prev: Covariance, Up: Statistics
21.6 Correlation
================
-- Function: double gsl_stats_correlation (const double DATA1[], const
size_t STRIDE1, const double DATA2[], const size_t STRIDE2,
const size_t N)
This function efficiently computes the Pearson correlation
coefficient between the datasets DATA1 and DATA2 which must both
be of the same length N.
r = cov(x, y) / (\Hat\sigma_x \Hat\sigma_y)
= {1/(n-1) \sum (x_i - \Hat x) (y_i - \Hat y)
\over
\sqrt{1/(n-1) \sum (x_i - \Hat x)^2} \sqrt{1/(n-1) \sum (y_i - \Hat y)^2}
}
File: gsl-ref.info, Node: Weighted Samples, Next: Maximum and Minimum values, Prev: Correlation, Up: Statistics
21.7 Weighted Samples
=====================
The functions described in this section allow the computation of
statistics for weighted samples. The functions accept an array of
samples, x_i, with associated weights, w_i. Each sample x_i is
considered as having been drawn from a Gaussian distribution with
variance \sigma_i^2. The sample weight w_i is defined as the
reciprocal of this variance, w_i = 1/\sigma_i^2. Setting a weight to
zero corresponds to removing a sample from a dataset.
-- Function: double gsl_stats_wmean (const double W[], size_t WSTRIDE,
const double DATA[], size_t STRIDE, size_t N)
This function returns the weighted mean of the dataset DATA with
stride STRIDE and length N, using the set of weights W with stride
WSTRIDE and length N. The weighted mean is defined as,
\Hat\mu = (\sum w_i x_i) / (\sum w_i)
-- Function: double gsl_stats_wvariance (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N)
This function returns the estimated variance of the dataset DATA
with stride STRIDE and length N, using the set of weights W with
stride WSTRIDE and length N. The estimated variance of a weighted
dataset is calculated as,
\Hat\sigma^2 = ((\sum w_i)/((\sum w_i)^2 - \sum (w_i^2)))
\sum w_i (x_i - \Hat\mu)^2
Note that this expression reduces to an unweighted variance with
the familiar 1/(N-1) factor when there are N equal non-zero
weights.
-- Function: double gsl_stats_wvariance_m (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double
WMEAN)
This function returns the estimated variance of the weighted
dataset DATA using the given weighted mean WMEAN.
-- Function: double gsl_stats_wsd (const double W[], size_t WSTRIDE,
const double DATA[], size_t STRIDE, size_t N)
The standard deviation is defined as the square root of the
variance. This function returns the square root of the
corresponding variance function `gsl_stats_wvariance' above.
-- Function: double gsl_stats_wsd_m (const double W[], size_t WSTRIDE,
const double DATA[], size_t STRIDE, size_t N, double WMEAN)
This function returns the square root of the corresponding variance
function `gsl_stats_wvariance_m' above.
-- Function: double gsl_stats_wvariance_with_fixed_mean (const double
W[], size_t WSTRIDE, const double DATA[], size_t STRIDE,
size_t N, const double MEAN)
This function computes an unbiased estimate of the variance of the
weighted dataset DATA when the population mean MEAN of the
underlying distribution is known _a priori_. In this case the
estimator for the variance replaces the sample mean \Hat\mu by the
known population mean \mu,
\Hat\sigma^2 = (\sum w_i (x_i - \mu)^2) / (\sum w_i)
-- Function: double gsl_stats_wsd_with_fixed_mean (const double W[],
size_t WSTRIDE, const double DATA[], size_t STRIDE, size_t N,
const double MEAN)
The standard deviation is defined as the square root of the
variance. This function returns the square root of the
corresponding variance function above.
-- Function: double gsl_stats_wtss (const double W[], const size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N)
-- Function: double gsl_stats_wtss_m (const double W[], const size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double
WMEAN)
These functions return the weighted total sum of squares (TSS) of
DATA about the weighted mean. For `gsl_stats_wtss_m' the
user-supplied value of WMEAN is used, and for `gsl_stats_wtss' it
is computed using `gsl_stats_wmean'.
TSS = \sum w_i (x_i - wmean)^2
-- Function: double gsl_stats_wabsdev (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N)
This function computes the weighted absolute deviation from the
weighted mean of DATA. The absolute deviation from the mean is
defined as,
absdev = (\sum w_i |x_i - \Hat\mu|) / (\sum w_i)
-- Function: double gsl_stats_wabsdev_m (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double
WMEAN)
This function computes the absolute deviation of the weighted
dataset DATA about the given weighted mean WMEAN.
-- Function: double gsl_stats_wskew (const double W[], size_t WSTRIDE,
const double DATA[], size_t STRIDE, size_t N)
This function computes the weighted skewness of the dataset DATA.
skew = (\sum w_i ((x_i - \Hat x)/\Hat \sigma)^3) / (\sum w_i)
-- Function: double gsl_stats_wskew_m_sd (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double
WMEAN, double WSD)
This function computes the weighted skewness of the dataset DATA
using the given values of the weighted mean and weighted standard
deviation, WMEAN and WSD.
-- Function: double gsl_stats_wkurtosis (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N)
This function computes the weighted kurtosis of the dataset DATA.
kurtosis = ((\sum w_i ((x_i - \Hat x)/\Hat \sigma)^4) / (\sum w_i)) - 3
-- Function: double gsl_stats_wkurtosis_m_sd (const double W[], size_t
WSTRIDE, const double DATA[], size_t STRIDE, size_t N, double
WMEAN, double WSD)
This function computes the weighted kurtosis of the dataset DATA
using the given values of the weighted mean and weighted standard
deviation, WMEAN and WSD.
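The weighted mean and variance formulas above can be sketched as
follows (hypothetical names, unit strides, illustration only). Note
how the normalization factor V1/(V1^2 - V2), with V1 = sum w_i and
V2 = sum w_i^2, reduces to 1/(N-1) when all weights are equal:

```c
#include <stddef.h>

/* \Hat\mu = (sum w_i x_i) / (sum w_i) */
double
stats_wmean_sketch (const double w[], const double x[], size_t n)
{
  double sw = 0.0, swx = 0.0;
  size_t i;
  for (i = 0; i < n; i++)
    {
      sw += w[i];
      swx += w[i] * x[i];
    }
  return swx / sw;
}

/* \Hat\sigma^2 = (V1 / (V1^2 - V2)) sum w_i (x_i - \Hat\mu)^2 */
double
stats_wvariance_sketch (const double w[], const double x[], size_t n)
{
  double mu = stats_wmean_sketch (w, x, n);
  double v1 = 0.0, v2 = 0.0, sum = 0.0;
  size_t i;
  for (i = 0; i < n; i++)
    {
      double d = x[i] - mu;
      v1 += w[i];
      v2 += w[i] * w[i];
      sum += w[i] * d * d;
    }
  return v1 / (v1 * v1 - v2) * sum;
}
```

With equal unit weights on {1,2,3,4,5} this recovers the unweighted
sample variance 2.5.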
File: gsl-ref.info, Node: Maximum and Minimum values, Next: Median and Percentiles, Prev: Weighted Samples, Up: Statistics
21.8 Maximum and Minimum values
===============================
The following functions find the maximum and minimum values of a
dataset (or their indices). If the data contains `NaN's then a `NaN'
will be returned, since the maximum or minimum value is undefined. For
functions which return an index, the location of the first `NaN' in the
array is returned.
-- Function: double gsl_stats_max (const double DATA[], size_t STRIDE,
size_t N)
This function returns the maximum value in DATA, a dataset of
length N with stride STRIDE. The maximum value is defined as the
value of the element x_i which satisfies x_i >= x_j for all j.
If you want instead to find the element with the largest absolute
magnitude you will need to apply `fabs' or `abs' to your data
before calling this function.
-- Function: double gsl_stats_min (const double DATA[], size_t STRIDE,
size_t N)
This function returns the minimum value in DATA, a dataset of
length N with stride STRIDE. The minimum value is defined as the
value of the element x_i which satisfies x_i <= x_j for all j.
If you want instead to find the element with the smallest absolute
magnitude you will need to apply `fabs' or `abs' to your data
before calling this function.
-- Function: void gsl_stats_minmax (double * MIN, double * MAX, const
double DATA[], size_t STRIDE, size_t N)
This function finds both the minimum and maximum values MIN, MAX
in DATA in a single pass.
-- Function: size_t gsl_stats_max_index (const double DATA[], size_t
STRIDE, size_t N)
This function returns the index of the maximum value in DATA, a
dataset of length N with stride STRIDE. The maximum value is
defined as the value of the element x_i which satisfies x_i >= x_j
for all j. When there are several equal maximum elements then the
first one is chosen.
-- Function: size_t gsl_stats_min_index (const double DATA[], size_t
STRIDE, size_t N)
This function returns the index of the minimum value in DATA, a
dataset of length N with stride STRIDE. The minimum value is
defined as the value of the element x_i which satisfies x_i <= x_j
for all j. When there are several equal minimum elements then the
first one is chosen.
-- Function: void gsl_stats_minmax_index (size_t * MIN_INDEX, size_t *
MAX_INDEX, const double DATA[], size_t STRIDE, size_t N)
This function returns the indexes MIN_INDEX, MAX_INDEX of the
minimum and maximum values in DATA in a single pass.
File: gsl-ref.info, Node: Median and Percentiles, Next: Example statistical programs, Prev: Maximum and Minimum values, Up: Statistics
21.9 Median and Percentiles
===========================
The median and percentile functions described in this section operate on
sorted data. For convenience we use "quantiles", measured on a scale
of 0 to 1, instead of percentiles (which use a scale of 0 to 100).
-- Function: double gsl_stats_median_from_sorted_data (const double
SORTED_DATA[], size_t STRIDE, size_t N)
This function returns the median value of SORTED_DATA, a dataset
of length N with stride STRIDE. The elements of the array must be
in ascending numerical order. There are no checks to see whether
the data are sorted, so the function `gsl_sort' should always be
used first.
When the dataset has an odd number of elements the median is the
value of element (n-1)/2. When the dataset has an even number of
elements the median is the mean of the two nearest middle values,
elements (n-1)/2 and n/2. Since the algorithm for computing the
median involves interpolation this function always returns a
floating-point number, even for integer data types.
-- Function: double gsl_stats_quantile_from_sorted_data (const double
SORTED_DATA[], size_t STRIDE, size_t N, double F)
This function returns a quantile value of SORTED_DATA, a
double-precision array of length N with stride STRIDE. The
elements of the array must be in ascending numerical order. The
quantile is determined by F, a fraction between 0 and 1. For
example, to compute the value of the 75th percentile F should have
the value 0.75.
There are no checks to see whether the data are sorted, so the
function `gsl_sort' should always be used first.
The quantile is found by interpolation, using the formula
quantile = (1 - \delta) x_i + \delta x_{i+1}
where i is `floor'((n - 1)f) and \delta is (n-1)f - i.
Thus the minimum value of the array (`data[0*stride]') is given by
F equal to zero, the maximum value (`data[(n-1)*stride]') is given
by F equal to one and the median value is given by F equal to 0.5.
Since the algorithm for computing quantiles involves interpolation
this function always returns a floating-point number, even for
integer data types.
File: gsl-ref.info, Node: Example statistical programs, Next: Statistics References and Further Reading, Prev: Median and Percentiles, Up: Statistics
21.10 Examples
==============
Here is a basic example of how to use the statistical functions:
     #include <stdio.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
       double mean, variance, largest, smallest;

       mean     = gsl_stats_mean (data, 1, 5);
       variance = gsl_stats_variance (data, 1, 5);
       largest  = gsl_stats_max (data, 1, 5);
       smallest = gsl_stats_min (data, 1, 5);

       printf ("The dataset is %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       printf ("The sample mean is %g\n", mean);
       printf ("The estimated variance is %g\n", variance);
       printf ("The largest value is %g\n", largest);
       printf ("The smallest value is %g\n", smallest);

       return 0;
     }
The program should produce the following output,
The dataset is 17.2, 18.1, 16.5, 18.3, 12.6
The sample mean is 16.54
The estimated variance is 5.373
The largest value is 18.3
The smallest value is 12.6
Here is an example using sorted data,
     #include <stdio.h>
     #include <gsl/gsl_sort.h>
     #include <gsl/gsl_statistics.h>

     int
     main (void)
     {
       double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
       double median, upperq, lowerq;

       printf ("Original dataset:  %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       gsl_sort (data, 1, 5);

       printf ("Sorted dataset: %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       median = gsl_stats_median_from_sorted_data (data, 1, 5);

       upperq = gsl_stats_quantile_from_sorted_data (data, 1, 5, 0.75);
       lowerq = gsl_stats_quantile_from_sorted_data (data, 1, 5, 0.25);

       printf ("The median is %g\n", median);
       printf ("The upper quartile is %g\n", upperq);
       printf ("The lower quartile is %g\n", lowerq);
       return 0;
     }
This program should produce the following output,
Original dataset: 17.2, 18.1, 16.5, 18.3, 12.6
Sorted dataset: 12.6, 16.5, 17.2, 18.1, 18.3
The median is 17.2
The upper quartile is 18.1
The lower quartile is 16.5
File: gsl-ref.info, Node: Statistics References and Further Reading, Prev: Example statistical programs, Up: Statistics
21.11 References and Further Reading
====================================
The standard reference for almost any topic in statistics is the
multi-volume `Advanced Theory of Statistics' by Kendall and Stuart.
Maurice Kendall, Alan Stuart, and J. Keith Ord. `The Advanced
Theory of Statistics' (multiple volumes) reprinted as `Kendall's
Advanced Theory of Statistics'. Wiley, ISBN 047023380X.
Many statistical concepts can be more easily understood by a Bayesian
approach. The following book by Gelman, Carlin, Stern and Rubin gives a
comprehensive coverage of the subject.
Andrew Gelman, John B. Carlin, Hal S. Stern, Donald B. Rubin.
`Bayesian Data Analysis'. Chapman & Hall, ISBN 0412039915.
For physicists the Particle Data Group provides useful reviews of
Probability and Statistics in the "Mathematical Tools" section of its
Annual Review of Particle Physics.
`Review of Particle Properties' R.M. Barnett et al., Physical
Review D54, 1 (1996)
The Review of Particle Physics is available online at the website
`http://pdg.lbl.gov/'.
File: gsl-ref.info, Node: Histograms, Next: N-tuples, Prev: Statistics, Up: Top
22 Histograms
*************
This chapter describes functions for creating histograms. Histograms
provide a convenient way of summarizing the distribution of a set of
data. A histogram consists of a set of "bins" which count the number of
events falling into a given range of a continuous variable x. In GSL
the bins of a histogram contain floating-point numbers, so they can be
used to record both integer and non-integer distributions. The bins
can use arbitrary sets of ranges (uniformly spaced bins are the
default). Both one and two-dimensional histograms are supported.
Once a histogram has been created it can also be converted into a
probability distribution function. The library provides efficient
routines for selecting random samples from probability distributions.
This can be useful for generating simulations based on real data.
The functions are declared in the header files `gsl_histogram.h' and
`gsl_histogram2d.h'.
* Menu:
* The histogram struct::
* Histogram allocation::
* Copying Histograms::
* Updating and accessing histogram elements::
* Searching histogram ranges::
* Histogram Statistics::
* Histogram Operations::
* Reading and writing histograms::
* Resampling from histograms::
* The histogram probability distribution struct::
* Example programs for histograms::
* Two dimensional histograms::
* The 2D histogram struct::
* 2D Histogram allocation::
* Copying 2D Histograms::
* Updating and accessing 2D histogram elements::
* Searching 2D histogram ranges::
* 2D Histogram Statistics::
* 2D Histogram Operations::
* Reading and writing 2D histograms::
* Resampling from 2D histograms::
* Example programs for 2D histograms::
File: gsl-ref.info, Node: The histogram struct, Next: Histogram allocation, Up: Histograms
22.1 The histogram struct
=========================
A histogram is defined by the following struct,
-- Data Type: gsl_histogram
`size_t n'
This is the number of histogram bins
`double * range'
The ranges of the bins are stored in an array of N+1 elements
pointed to by RANGE.
`double * bin'
The counts for each bin are stored in an array of N elements
pointed to by BIN. The bins are floating-point numbers, so
you can increment them by non-integer values if necessary.
The range for BIN[i] is given by RANGE[i] to RANGE[i+1]. For n bins
there are n+1 entries in the array RANGE. Each bin is inclusive at the
lower end and exclusive at the upper end. Mathematically this means
that the bins are defined by the following inequality,
bin[i] corresponds to range[i] <= x < range[i+1]
Here is a diagram of the correspondence between ranges and bins on the
number-line for x,
          [ bin[0] )[ bin[1] )[ bin[2] )[ bin[3] )[ bin[4] )
       ---|---------|---------|---------|---------|---------|---  x
        r[0]      r[1]      r[2]      r[3]      r[4]      r[5]
In this picture the values of the RANGE array are denoted by r. On the
left-hand side of each bin the square bracket `[' denotes an inclusive
lower bound (r <= x), and the round parentheses `)' on the right-hand
side denote an exclusive upper bound (x < r). Thus any samples which
fall on the upper end of the histogram are excluded. If you want to
include this value for the last bin you will need to add an extra bin
to your histogram.
The `gsl_histogram' struct and its associated functions are defined
in the header file `gsl_histogram.h'.
File: gsl-ref.info, Node: Histogram allocation, Next: Copying Histograms, Prev: The histogram struct, Up: Histograms
22.2 Histogram allocation
=========================
The functions for allocating memory to a histogram follow the style of
`malloc' and `free'. In addition they also perform their own error
checking. If there is insufficient memory available to allocate a
histogram then the functions call the error handler (with an error
number of `GSL_ENOMEM') in addition to returning a null pointer. Thus
if you use the library error handler to abort your program then it
isn't necessary to check every histogram `alloc'.
-- Function: gsl_histogram * gsl_histogram_alloc (size_t N)
This function allocates memory for a histogram with N bins, and
returns a pointer to a newly created `gsl_histogram' struct. If
insufficient memory is available a null pointer is returned and the
error handler is invoked with an error code of `GSL_ENOMEM'. The
bins and ranges are not initialized, and should be prepared using
one of the range-setting functions below in order to make the
histogram ready for use.
-- Function: int gsl_histogram_set_ranges (gsl_histogram * H, const
double RANGE[], size_t SIZE)
This function sets the ranges of the existing histogram H using
the array RANGE of size SIZE. The values of the histogram bins
are reset to zero. The `range' array should contain the desired
bin limits. The ranges can be arbitrary, subject to the
restriction that they are monotonically increasing.
The following example shows how to create a histogram with
logarithmic bins with ranges [1,10), [10,100) and [100,1000).
          gsl_histogram * h = gsl_histogram_alloc (3);

          /* bin[0] covers the range 1 <= x < 10 */
          /* bin[1] covers the range 10 <= x < 100 */
          /* bin[2] covers the range 100 <= x < 1000 */

          double range[4] = { 1.0, 10.0, 100.0, 1000.0 };

          gsl_histogram_set_ranges (h, range, 4);
Note that the size of the RANGE array should be defined to be one
element bigger than the number of bins. The additional element is
required for the upper value of the final bin.
-- Function: int gsl_histogram_set_ranges_uniform (gsl_histogram * H,
double XMIN, double XMAX)
This function sets the ranges of the existing histogram H to cover
the range XMIN to XMAX uniformly. The values of the histogram
bins are reset to zero. The bin ranges are shown in the table
below,
bin[0] corresponds to xmin <= x < xmin + d
bin[1] corresponds to xmin + d <= x < xmin + 2 d
......
bin[n-1] corresponds to xmin + (n-1)d <= x < xmax
where d is the bin spacing, d = (xmax-xmin)/n.
-- Function: void gsl_histogram_free (gsl_histogram * H)
This function frees the histogram H and all of the memory
associated with it.
File: gsl-ref.info, Node: Copying Histograms, Next: Updating and accessing histogram elements, Prev: Histogram allocation, Up: Histograms
22.3 Copying Histograms
=======================
-- Function: int gsl_histogram_memcpy (gsl_histogram * DEST, const
gsl_histogram * SRC)
This function copies the histogram SRC into the pre-existing
histogram DEST, making DEST into an exact copy of SRC. The two
histograms must be of the same size.
-- Function: gsl_histogram * gsl_histogram_clone (const gsl_histogram
* SRC)
This function returns a pointer to a newly created histogram which
is an exact copy of the histogram SRC.
File: gsl-ref.info, Node: Updating and accessing histogram elements, Next: Searching histogram ranges, Prev: Copying Histograms, Up: Histograms
22.4 Updating and accessing histogram elements
==============================================
There are two ways to access histogram bins, either by specifying an x
coordinate or by using the bin-index directly. The functions for
accessing the histogram through x coordinates use a binary search to
identify the bin which covers the appropriate range.
-- Function: int gsl_histogram_increment (gsl_histogram * H, double X)
This function updates the histogram H by adding one (1.0) to the
bin whose range contains the coordinate X.
If X lies in the valid range of the histogram then the function
returns zero to indicate success. If X is less than the lower
limit of the histogram then the function returns `GSL_EDOM', and
none of bins are modified. Similarly, if the value of X is greater
than or equal to the upper limit of the histogram then the function
returns `GSL_EDOM', and none of the bins are modified. The error
handler is not called, however, since it is often necessary to
compute histograms for a small range of a larger dataset, ignoring
the values outside the range of interest.
-- Function: int gsl_histogram_accumulate (gsl_histogram * H, double
X, double WEIGHT)
This function is similar to `gsl_histogram_increment' but increases
the value of the appropriate bin in the histogram H by the
floating-point number WEIGHT.
-- Function: double gsl_histogram_get (const gsl_histogram * H, size_t
I)
This function returns the contents of the I-th bin of the histogram
H. If I lies outside the valid range of indices for the histogram
then the error handler is called with an error code of `GSL_EDOM'
and the function returns 0.
-- Function: int gsl_histogram_get_range (const gsl_histogram * H,
size_t I, double * LOWER, double * UPPER)
This function finds the upper and lower range limits of the I-th
bin of the histogram H. If the index I is valid then the
corresponding range limits are stored in LOWER and UPPER. The
lower limit is inclusive (i.e. events with this coordinate are
included in the bin) and the upper limit is exclusive (i.e. events
with the coordinate of the upper limit are excluded and fall in the
neighboring higher bin, if it exists). The function returns 0 to
indicate success. If I lies outside the valid range of indices for
the histogram then the error handler is called and the function
returns an error code of `GSL_EDOM'.
-- Function: double gsl_histogram_max (const gsl_histogram * H)
-- Function: double gsl_histogram_min (const gsl_histogram * H)
-- Function: size_t gsl_histogram_bins (const gsl_histogram * H)
These functions return the maximum upper and minimum lower range
limits and the number of bins of the histogram H. They provide a
way of determining these values without accessing the
`gsl_histogram' struct directly.
-- Function: void gsl_histogram_reset (gsl_histogram * H)
This function resets all the bins in the histogram H to zero.
File: gsl-ref.info, Node: Searching histogram ranges, Next: Histogram Statistics, Prev: Updating and accessing histogram elements, Up: Histograms
22.5 Searching histogram ranges
===============================
The following functions are used by the access and update routines to
locate the bin which corresponds to a given x coordinate.
-- Function: int gsl_histogram_find (const gsl_histogram * H, double
X, size_t * I)
This function finds and sets the index I to the bin number which
covers the coordinate X in the histogram H. The bin is located
using a binary search. The search includes an optimization for
histograms with uniform range, and will return the correct bin
immediately in this case. If X is found in the range of the
histogram then the function sets the index I and returns
`GSL_SUCCESS'. If X lies outside the valid range of the histogram
then the function returns `GSL_EDOM' and the error handler is
invoked.
File: gsl-ref.info, Node: Histogram Statistics, Next: Histogram Operations, Prev: Searching histogram ranges, Up: Histograms
22.6 Histogram Statistics
=========================
-- Function: double gsl_histogram_max_val (const gsl_histogram * H)
This function returns the maximum value contained in the histogram
bins.
-- Function: size_t gsl_histogram_max_bin (const gsl_histogram * H)
This function returns the index of the bin containing the maximum
value. In the case where several bins contain the same maximum
value the smallest index is returned.
-- Function: double gsl_histogram_min_val (const gsl_histogram * H)
This function returns the minimum value contained in the histogram
bins.
-- Function: size_t gsl_histogram_min_bin (const gsl_histogram * H)
This function returns the index of the bin containing the minimum
value. In the case where several bins contain the same minimum
value the smallest index is returned.
-- Function: double gsl_histogram_mean (const gsl_histogram * H)
This function returns the mean of the histogrammed variable, where
the histogram is regarded as a probability distribution. Negative
bin values are ignored for the purposes of this calculation. The
accuracy of the result is limited by the bin width.
-- Function: double gsl_histogram_sigma (const gsl_histogram * H)
This function returns the standard deviation of the histogrammed
variable, where the histogram is regarded as a probability
distribution. Negative bin values are ignored for the purposes of
this calculation. The accuracy of the result is limited by the bin
width.
-- Function: double gsl_histogram_sum (const gsl_histogram * H)
This function returns the sum of all bin values. Negative bin
values are included in the sum.
File: gsl-ref.info, Node: Histogram Operations, Next: Reading and writing histograms, Prev: Histogram Statistics, Up: Histograms
22.7 Histogram Operations
=========================
-- Function: int gsl_histogram_equal_bins_p (const gsl_histogram * H1,
const gsl_histogram * H2)
This function returns 1 if all of the individual bin ranges of
the two histograms are identical, and 0 otherwise.
-- Function: int gsl_histogram_add (gsl_histogram * H1, const
gsl_histogram * H2)
This function adds the contents of the bins in histogram H2 to the
corresponding bins of histogram H1, i.e. h'_1(i) = h_1(i) +
h_2(i). The two histograms must have identical bin ranges.
-- Function: int gsl_histogram_sub (gsl_histogram * H1, const
gsl_histogram * H2)
This function subtracts the contents of the bins in histogram H2
from the corresponding bins of histogram H1, i.e. h'_1(i) = h_1(i)
- h_2(i). The two histograms must have identical bin ranges.
-- Function: int gsl_histogram_mul (gsl_histogram * H1, const
gsl_histogram * H2)
This function multiplies the contents of the bins of histogram H1
by the contents of the corresponding bins in histogram H2, i.e.
h'_1(i) = h_1(i) * h_2(i). The two histograms must have identical
bin ranges.
-- Function: int gsl_histogram_div (gsl_histogram * H1, const
gsl_histogram * H2)
This function divides the contents of the bins of histogram H1 by
the contents of the corresponding bins in histogram H2, i.e.
h'_1(i) = h_1(i) / h_2(i). The two histograms must have identical
bin ranges.
-- Function: int gsl_histogram_scale (gsl_histogram * H, double SCALE)
This function multiplies the contents of the bins of histogram H
by the constant SCALE, i.e. h'_1(i) = h_1(i) * scale.
-- Function: int gsl_histogram_shift (gsl_histogram * H, double OFFSET)
This function shifts the contents of the bins of histogram H by
the constant OFFSET, i.e. h'_1(i) = h_1(i) + offset.
File: gsl-ref.info, Node: Reading and writing histograms, Next: Resampling from histograms, Prev: Histogram Operations, Up: Histograms
22.8 Reading and writing histograms
===================================
The library provides functions for reading and writing histograms to a
file as binary data or formatted text.
-- Function: int gsl_histogram_fwrite (FILE * STREAM, const
gsl_histogram * H)
This function writes the ranges and bins of the histogram H to the
stream STREAM in binary format. The return value is 0 for success
and `GSL_EFAILED' if there was a problem writing to the file.
Since the data is written in the native binary format it may not
be portable between different architectures.
-- Function: int gsl_histogram_fread (FILE * STREAM, gsl_histogram * H)
This function reads into the histogram H from the open stream
STREAM in binary format. The histogram H must be preallocated
with the correct size since the function uses the number of bins
in H to determine how many bytes to read. The return value is 0
for success and `GSL_EFAILED' if there was a problem reading from
the file. The data is assumed to have been written in the native
binary format on the same architecture.
-- Function: int gsl_histogram_fprintf (FILE * STREAM, const
gsl_histogram * H, const char * RANGE_FORMAT, const char *
BIN_FORMAT)
This function writes the ranges and bins of the histogram H
line-by-line to the stream STREAM using the format specifiers
RANGE_FORMAT and BIN_FORMAT. These should be one of the `%g',
`%e' or `%f' formats for floating point numbers. The function
returns 0 for success and `GSL_EFAILED' if there was a problem
writing to the file. The histogram output is formatted in three
columns, and the columns are separated by spaces, like this,
range[0] range[1] bin[0]
range[1] range[2] bin[1]
range[2] range[3] bin[2]
....
range[n-1] range[n] bin[n-1]
The values of the ranges are formatted using RANGE_FORMAT and the
value of the bins are formatted using BIN_FORMAT. Each line
contains the lower and upper limit of the range of the bins and the
value of the bin itself. Since the upper limit of one bin is the
lower limit of the next there is duplication of these values
between lines but this allows the histogram to be manipulated with
line-oriented tools.
-- Function: int gsl_histogram_fscanf (FILE * STREAM, gsl_histogram *
H)
This function reads formatted data from the stream STREAM into the
histogram H. The data is assumed to be in the three-column format
used by `gsl_histogram_fprintf'. The histogram H must be
preallocated with the correct length since the function uses the
size of H to determine how many numbers to read. The function
returns 0 for success and `GSL_EFAILED' if there was a problem
reading from the file.
File: gsl-ref.info, Node: Resampling from histograms, Next: The histogram probability distribution struct, Prev: Reading and writing histograms, Up: Histograms
22.9 Resampling from histograms
===============================
A histogram made by counting events can be regarded as a measurement of
a probability distribution. Allowing for statistical error, the height
of each bin represents the probability of an event where the value of x
falls in the range of that bin. The probability distribution function
has the one-dimensional form p(x)dx where,
p(x) = n_i / (N w_i)
In this equation n_i is the number of events in the bin which contains
x, w_i is the width of the bin and N is the total number of events.
The distribution of events within each bin is assumed to be uniform.
File: gsl-ref.info, Node: The histogram probability distribution struct, Next: Example programs for histograms, Prev: Resampling from histograms, Up: Histograms
22.10 The histogram probability distribution struct
===================================================
The probability distribution function for a histogram consists of a set
of "bins" which measure the probability of an event falling into a
given range of a continuous variable x. A probability distribution
function is defined by the following struct, which actually stores the
cumulative probability distribution function. This is the natural
quantity for generating samples via the inverse transform method,
because there is a one-to-one mapping between the cumulative
probability distribution and the range [0,1]. It can be shown that by
taking a uniform random number in this range and finding its
corresponding coordinate in the cumulative probability distribution we
obtain samples with the desired probability distribution.
-- Data Type: gsl_histogram_pdf
`size_t n'
This is the number of bins used to approximate the probability
distribution function.
`double * range'
The ranges of the bins are stored in an array of N+1 elements
pointed to by RANGE.
`double * sum'
The cumulative probability for the bins is stored in an array
of N elements pointed to by SUM.
The following functions allow you to create a `gsl_histogram_pdf'
struct which represents this probability distribution and generate
random samples from it.
-- Function: gsl_histogram_pdf * gsl_histogram_pdf_alloc (size_t N)
This function allocates memory for a probability distribution with
N bins and returns a pointer to a newly initialized
`gsl_histogram_pdf' struct. If insufficient memory is available a
null pointer is returned and the error handler is invoked with an
error code of `GSL_ENOMEM'.
-- Function: int gsl_histogram_pdf_init (gsl_histogram_pdf * P, const
gsl_histogram * H)
This function initializes the probability distribution P with the
contents of the histogram H. If any of the bins of H are negative
then the error handler is invoked with an error code of `GSL_EDOM'
because a probability distribution cannot contain negative values.
-- Function: void gsl_histogram_pdf_free (gsl_histogram_pdf * P)
This function frees the probability distribution function P and
all of the memory associated with it.
-- Function: double gsl_histogram_pdf_sample (const gsl_histogram_pdf
* P, double R)
This function uses R, a uniform random number between zero and
one, to compute a single random sample from the probability
distribution P. The algorithm used to compute the sample s is
given by the following formula,
s = range[i] + delta * (range[i+1] - range[i])
where i is the index which satisfies sum[i] <= r < sum[i+1] and
delta is (r - sum[i])/(sum[i+1] - sum[i]).
File: gsl-ref.info, Node: Example programs for histograms, Next: Two dimensional histograms, Prev: The histogram probability distribution struct, Up: Histograms
22.11 Example programs for histograms
=====================================
The following program shows how to make a simple histogram of a column
of numerical data supplied on `stdin'. The program takes three
arguments, specifying the upper and lower bounds of the histogram and
the number of bins. It then reads numbers from `stdin', one line at a
time, and adds them to the histogram. When there is no more data to
read it prints out the accumulated histogram using
`gsl_histogram_fprintf'.
     #include <stdio.h>
     #include <stdlib.h>
     #include <gsl/gsl_histogram.h>

     int
     main (int argc, char **argv)
     {
       double a, b;
       size_t n;

       if (argc != 4)
         {
           printf ("Usage: gsl-histogram xmin xmax n\n"
                   "Computes a histogram of the data "
                   "on stdin using n bins from xmin "
                   "to xmax\n");
           exit (0);
         }

       a = atof (argv[1]);
       b = atof (argv[2]);
       n = atoi (argv[3]);

       {
         double x;
         gsl_histogram * h = gsl_histogram_alloc (n);
         gsl_histogram_set_ranges_uniform (h, a, b);

         while (fscanf (stdin, "%lg", &x) == 1)
           {
             gsl_histogram_increment (h, x);
           }

         gsl_histogram_fprintf (stdout, h, "%g", "%g");
         gsl_histogram_free (h);
       }

       exit (0);
     }
Here is an example of the program in use. We generate 10000 random
samples from a Cauchy distribution with a width of 30 and histogram
them over the range -100 to 100, using 200 bins.
$ gsl-randist 0 10000 cauchy 30
| gsl-histogram -100 100 200 > histogram.dat
A plot of the resulting histogram shows the familiar shape of the
Cauchy distribution and the fluctuations caused by the finite sample
size.
$ awk '{print $1, $3 ; print $2, $3}' histogram.dat
| graph -T X
File: gsl-ref.info, Node: Two dimensional histograms, Next: The 2D histogram struct, Prev: Example programs for histograms, Up: Histograms
22.12 Two dimensional histograms
================================
A two dimensional histogram consists of a set of "bins" which count the
number of events falling in a given area of the (x,y) plane. The
simplest way to use a two dimensional histogram is to record
two-dimensional position information, n(x,y). Another possibility is
to form a "joint distribution" by recording related variables. For
example a detector might record both the position of an event (x) and
the amount of energy it deposited E. These could be histogrammed as
the joint distribution n(x,E).
File: gsl-ref.info, Node: The 2D histogram struct, Next: 2D Histogram allocation, Prev: Two dimensional histograms, Up: Histograms
22.13 The 2D histogram struct
=============================
Two dimensional histograms are defined by the following struct,
-- Data Type: gsl_histogram2d
`size_t nx, ny'
This is the number of histogram bins in the x and y
directions.
`double * xrange'
The ranges of the bins in the x-direction are stored in an
array of NX + 1 elements pointed to by XRANGE.
`double * yrange'
The ranges of the bins in the y-direction are stored in an
array of NY + 1 elements pointed to by YRANGE.
`double * bin'
The counts for each bin are stored in an array pointed to by
BIN. The bins are floating-point numbers, so you can
increment them by non-integer values if necessary. The array
BIN stores the two dimensional array of bins in a single
block of memory according to the mapping `bin(i,j)' = `bin[i
* ny + j]'.
The range for `bin(i,j)' is given by `xrange[i]' to `xrange[i+1]' in
the x-direction and `yrange[j]' to `yrange[j+1]' in the y-direction.
Each bin is inclusive at the lower end and exclusive at the upper end.
Mathematically this means that the bins are defined by the following
inequality,
bin(i,j) corresponds to xrange[i] <= x < xrange[i+1]
and yrange[j] <= y < yrange[j+1]
Note that any samples which fall on the upper sides of the histogram are
excluded. If you want to include these values for the side bins you
will need to add an extra row or column to your histogram.
The `gsl_histogram2d' struct and its associated functions are
defined in the header file `gsl_histogram2d.h'.
File: gsl-ref.info, Node: 2D Histogram allocation, Next: Copying 2D Histograms, Prev: The 2D histogram struct, Up: Histograms
22.14 2D Histogram allocation
=============================
The functions for allocating memory to a 2D histogram follow the style
of `malloc' and `free'. In addition they also perform their own error
checking. If there is insufficient memory available to allocate a
histogram then the functions call the error handler (with an error
number of `GSL_ENOMEM') in addition to returning a null pointer. Thus
if you use the library error handler to abort your program then it
isn't necessary to check every 2D histogram `alloc'.
-- Function: gsl_histogram2d * gsl_histogram2d_alloc (size_t NX,
size_t NY)
This function allocates memory for a two-dimensional histogram with
NX bins in the x direction and NY bins in the y direction. The
function returns a pointer to a newly created `gsl_histogram2d'
struct. If insufficient memory is available a null pointer is
returned and the error handler is invoked with an error code of
`GSL_ENOMEM'. The bins and ranges must be initialized with one of
the functions below before the histogram is ready for use.
-- Function: int gsl_histogram2d_set_ranges (gsl_histogram2d * H,
const double XRANGE[], size_t XSIZE, const double YRANGE[],
size_t YSIZE)
This function sets the ranges of the existing histogram H using
the arrays XRANGE and YRANGE of size XSIZE and YSIZE respectively.
The values of the histogram bins are reset to zero.
-- Function: int gsl_histogram2d_set_ranges_uniform (gsl_histogram2d *
H, double XMIN, double XMAX, double YMIN, double YMAX)
This function sets the ranges of the existing histogram H to cover
the ranges XMIN to XMAX and YMIN to YMAX uniformly. The values of
the histogram bins are reset to zero.
-- Function: void gsl_histogram2d_free (gsl_histogram2d * H)
This function frees the 2D histogram H and all of the memory
associated with it.
File: gsl-ref.info, Node: Copying 2D Histograms, Next: Updating and accessing 2D histogram elements, Prev: 2D Histogram allocation, Up: Histograms
22.15 Copying 2D Histograms
===========================
-- Function: int gsl_histogram2d_memcpy (gsl_histogram2d * DEST, const
gsl_histogram2d * SRC)
This function copies the histogram SRC into the pre-existing
histogram DEST, making DEST into an exact copy of SRC. The two
histograms must be of the same size.
-- Function: gsl_histogram2d * gsl_histogram2d_clone (const
gsl_histogram2d * SRC)
This function returns a pointer to a newly created histogram which
is an exact copy of the histogram SRC.
File: gsl-ref.info, Node: Updating and accessing 2D histogram elements, Next: Searching 2D histogram ranges, Prev: Copying 2D Histograms, Up: Histograms
22.16 Updating and accessing 2D histogram elements
==================================================
You can access the bins of a two-dimensional histogram either by
specifying a pair of (x,y) coordinates or by using the bin indices
(i,j) directly. The functions for accessing the histogram through
(x,y) coordinates use binary searches in the x and y directions to
identify the bin which covers the appropriate range.
-- Function: int gsl_histogram2d_increment (gsl_histogram2d * H,
double X, double Y)
This function updates the histogram H by adding one (1.0) to the
bin whose x and y ranges contain the coordinates (X,Y).
If the point (x,y) lies inside the valid ranges of the histogram
then the function returns zero to indicate success. If (x,y) lies
outside the limits of the histogram then the function returns
`GSL_EDOM', and none of the bins are modified. The error handler
is not called, since it is often necessary to compute histograms
for a small range of a larger dataset, ignoring any coordinates
outside the range of interest.
-- Function: int gsl_histogram2d_accumulate (gsl_histogram2d * H,
double X, double Y, double WEIGHT)
This function is similar to `gsl_histogram2d_increment' but
increases the value of the appropriate bin in the histogram H by
the floating-point number WEIGHT.
-- Function: double gsl_histogram2d_get (const gsl_histogram2d * H,
size_t I, size_t J)
This function returns the contents of the (I,J)-th bin of the
histogram H. If (I,J) lies outside the valid range of indices for
the histogram then the error handler is called with an error code
of `GSL_EDOM' and the function returns 0.
-- Function: int gsl_histogram2d_get_xrange (const gsl_histogram2d *
H, size_t I, double * XLOWER, double * XUPPER)
-- Function: int gsl_histogram2d_get_yrange (const gsl_histogram2d *
H, size_t J, double * YLOWER, double * YUPPER)
These functions find the upper and lower range limits of the I-th
and J-th bins in the x and y directions of the histogram H. The
range limits are stored in XLOWER and XUPPER or YLOWER and YUPPER.
The lower limits are inclusive (i.e. events with these coordinates
are included in the bin) and the upper limits are exclusive (i.e.
events with the value of the upper limit are not included and fall
in the neighboring higher bin, if it exists). The functions
return 0 to indicate success. If I or J lies outside the valid
range of indices for the histogram then the error handler is
called with an error code of `GSL_EDOM'.
-- Function: double gsl_histogram2d_xmax (const gsl_histogram2d * H)
-- Function: double gsl_histogram2d_xmin (const gsl_histogram2d * H)
-- Function: size_t gsl_histogram2d_nx (const gsl_histogram2d * H)
-- Function: double gsl_histogram2d_ymax (const gsl_histogram2d * H)
-- Function: double gsl_histogram2d_ymin (const gsl_histogram2d * H)
-- Function: size_t gsl_histogram2d_ny (const gsl_histogram2d * H)
These functions return the maximum upper and minimum lower range
limits and the number of bins for the x and y directions of the
histogram H. They provide a way of determining these values
without accessing the `gsl_histogram2d' struct directly.
-- Function: void gsl_histogram2d_reset (gsl_histogram2d * H)
This function resets all the bins of the histogram H to zero.
File: gsl-ref.info, Node: Searching 2D histogram ranges, Next: 2D Histogram Statistics, Prev: Updating and accessing 2D histogram elements, Up: Histograms
22.17 Searching 2D histogram ranges
===================================
The following functions are used by the access and update routines to
locate the bin which corresponds to a given (x,y) coordinate.
-- Function: int gsl_histogram2d_find (const gsl_histogram2d * H,
double X, double Y, size_t * I, size_t * J)
This function finds and sets the indices I and J to the bin
which covers the coordinates (X,Y). The bin is located using a
binary search. The search includes an optimization for histograms
with uniform ranges, and will return the correct bin immediately
in this case. If (x,y) is found then the function sets the indices
(I,J) and returns `GSL_SUCCESS'. If (x,y) lies outside the valid
range of the histogram then the function returns `GSL_EDOM' and
the error handler is invoked.
File: gsl-ref.info, Node: 2D Histogram Statistics, Next: 2D Histogram Operations, Prev: Searching 2D histogram ranges, Up: Histograms
22.18 2D Histogram Statistics
=============================
-- Function: double gsl_histogram2d_max_val (const gsl_histogram2d * H)
This function returns the maximum value contained in the histogram
bins.
-- Function: void gsl_histogram2d_max_bin (const gsl_histogram2d * H,
size_t * I, size_t * J)
This function finds the indices of the bin containing the maximum
value in the histogram H and stores the result in (I,J). In the
case where several bins contain the same maximum value the first
bin found is returned.
-- Function: double gsl_histogram2d_min_val (const gsl_histogram2d * H)
This function returns the minimum value contained in the histogram
bins.
-- Function: void gsl_histogram2d_min_bin (const gsl_histogram2d * H,
size_t * I, size_t * J)
This function finds the indices of the bin containing the minimum
value in the histogram H and stores the result in (I,J). In the
case where several bins contain the same minimum value the first
bin found is returned.
-- Function: double gsl_histogram2d_xmean (const gsl_histogram2d * H)
This function returns the mean of the histogrammed x variable,
where the histogram is regarded as a probability distribution.
Negative bin values are ignored for the purposes of this
calculation.
-- Function: double gsl_histogram2d_ymean (const gsl_histogram2d * H)
This function returns the mean of the histogrammed y variable,
where the histogram is regarded as a probability distribution.
Negative bin values are ignored for the purposes of this
calculation.
-- Function: double gsl_histogram2d_xsigma (const gsl_histogram2d * H)
This function returns the standard deviation of the histogrammed x
variable, where the histogram is regarded as a probability
distribution. Negative bin values are ignored for the purposes of
this calculation.
-- Function: double gsl_histogram2d_ysigma (const gsl_histogram2d * H)
This function returns the standard deviation of the histogrammed y
variable, where the histogram is regarded as a probability
distribution. Negative bin values are ignored for the purposes of
this calculation.
-- Function: double gsl_histogram2d_cov (const gsl_histogram2d * H)
This function returns the covariance of the histogrammed x and y
variables, where the histogram is regarded as a probability
distribution. Negative bin values are ignored for the purposes of
this calculation.
-- Function: double gsl_histogram2d_sum (const gsl_histogram2d * H)
This function returns the sum of all bin values. Negative bin
values are included in the sum.
File: gsl-ref.info, Node: 2D Histogram Operations, Next: Reading and writing 2D histograms, Prev: 2D Histogram Statistics, Up: Histograms
22.19 2D Histogram Operations
=============================
-- Function: int gsl_histogram2d_equal_bins_p (const gsl_histogram2d *
H1, const gsl_histogram2d * H2)
This function returns 1 if all the individual bin ranges of the two
histograms are identical, and 0 otherwise.
-- Function: int gsl_histogram2d_add (gsl_histogram2d * H1, const
gsl_histogram2d * H2)
This function adds the contents of the bins in histogram H2 to the
corresponding bins of histogram H1, i.e. h'_1(i,j) = h_1(i,j) +
h_2(i,j). The two histograms must have identical bin ranges.
-- Function: int gsl_histogram2d_sub (gsl_histogram2d * H1, const
gsl_histogram2d * H2)
This function subtracts the contents of the bins in histogram H2
from the corresponding bins of histogram H1, i.e. h'_1(i,j) =
h_1(i,j) - h_2(i,j). The two histograms must have identical bin
ranges.
-- Function: int gsl_histogram2d_mul (gsl_histogram2d * H1, const
gsl_histogram2d * H2)
This function multiplies the contents of the bins of histogram H1
by the contents of the corresponding bins in histogram H2, i.e.
h'_1(i,j) = h_1(i,j) * h_2(i,j). The two histograms must have
identical bin ranges.
-- Function: int gsl_histogram2d_div (gsl_histogram2d * H1, const
gsl_histogram2d * H2)
This function divides the contents of the bins of histogram H1 by
the contents of the corresponding bins in histogram H2, i.e.
h'_1(i,j) = h_1(i,j) / h_2(i,j). The two histograms must have
identical bin ranges.
-- Function: int gsl_histogram2d_scale (gsl_histogram2d * H, double
SCALE)
This function multiplies the contents of the bins of histogram H
by the constant SCALE, i.e. h'_1(i,j) = h_1(i,j) * scale.
-- Function: int gsl_histogram2d_shift (gsl_histogram2d * H, double
OFFSET)
This function shifts the contents of the bins of histogram H by
the constant OFFSET, i.e. h'_1(i,j) = h_1(i,j) + offset.
File: gsl-ref.info, Node: Reading and writing 2D histograms, Next: Resampling from 2D histograms, Prev: 2D Histogram Operations, Up: Histograms
22.20 Reading and writing 2D histograms
=======================================
The library provides functions for reading and writing two-dimensional
histograms to a file as binary data or formatted text.
-- Function: int gsl_histogram2d_fwrite (FILE * STREAM, const
gsl_histogram2d * H)
This function writes the ranges and bins of the histogram H to the
stream STREAM in binary format. The return value is 0 for success
and `GSL_EFAILED' if there was a problem writing to the file.
Since the data is written in the native binary format it may not
be portable between different architectures.
-- Function: int gsl_histogram2d_fread (FILE * STREAM, gsl_histogram2d
* H)
This function reads into the histogram H from the stream STREAM in
binary format. The histogram H must be preallocated with the
correct size since the function uses the number of x and y bins in
H to determine how many bytes to read. The return value is 0 for
success and `GSL_EFAILED' if there was a problem reading from the
file. The data is assumed to have been written in the native
binary format on the same architecture.
-- Function: int gsl_histogram2d_fprintf (FILE * STREAM, const
gsl_histogram2d * H, const char * RANGE_FORMAT, const char *
BIN_FORMAT)
This function writes the ranges and bins of the histogram H
line-by-line to the stream STREAM using the format specifiers
RANGE_FORMAT and BIN_FORMAT. These should be one of the `%g',
`%e' or `%f' formats for floating point numbers. The function
returns 0 for success and `GSL_EFAILED' if there was a problem
writing to the file. The histogram output is formatted in five
columns, and the columns are separated by spaces, like this,
xrange[0] xrange[1] yrange[0] yrange[1] bin(0,0)
xrange[0] xrange[1] yrange[1] yrange[2] bin(0,1)
xrange[0] xrange[1] yrange[2] yrange[3] bin(0,2)
....
xrange[0] xrange[1] yrange[ny-1] yrange[ny] bin(0,ny-1)
xrange[1] xrange[2] yrange[0] yrange[1] bin(1,0)
xrange[1] xrange[2] yrange[1] yrange[2] bin(1,1)
xrange[1] xrange[2] yrange[2] yrange[3] bin(1,2)
....
xrange[1] xrange[2] yrange[ny-1] yrange[ny] bin(1,ny-1)
....
xrange[nx-1] xrange[nx] yrange[0] yrange[1] bin(nx-1,0)
xrange[nx-1] xrange[nx] yrange[1] yrange[2] bin(nx-1,1)
xrange[nx-1] xrange[nx] yrange[2] yrange[3] bin(nx-1,2)
....
xrange[nx-1] xrange[nx] yrange[ny-1] yrange[ny] bin(nx-1,ny-1)
Each line contains the lower and upper limits of the bin and the
contents of the bin. Since the upper limits of each bin are
the lower limits of the neighboring bins there is duplication of
these values but this allows the histogram to be manipulated with
line-oriented tools.
-- Function: int gsl_histogram2d_fscanf (FILE * STREAM,
gsl_histogram2d * H)
This function reads formatted data from the stream STREAM into the
histogram H. The data is assumed to be in the five-column format
used by `gsl_histogram2d_fprintf'. The histogram H must be
preallocated with the correct lengths since the function uses the
sizes of H to determine how many numbers to read. The function
returns 0 for success and `GSL_EFAILED' if there was a problem
reading from the file.
File: gsl-ref.info, Node: Resampling from 2D histograms, Next: Example programs for 2D histograms, Prev: Reading and writing 2D histograms, Up: Histograms
22.21 Resampling from 2D histograms
===================================
As in the one-dimensional case, a two-dimensional histogram made by
counting events can be regarded as a measurement of a probability
distribution. Allowing for statistical error, the height of each bin
represents the probability of an event where (x,y) falls in the range
of that bin. For a two-dimensional histogram the probability
distribution takes the form p(x,y) dx dy where,
p(x,y) = n_{ij}/ (N A_{ij})
In this equation n_{ij} is the number of events in the bin which
contains (x,y), A_{ij} is the area of the bin and N is the total number
of events. The distribution of events within each bin is assumed to be
uniform.
-- Data Type: gsl_histogram2d_pdf
`size_t nx, ny'
This is the number of histogram bins used to approximate the
probability distribution function in the x and y directions.
`double * xrange'
The ranges of the bins in the x-direction are stored in an
array of NX + 1 elements pointed to by XRANGE.
`double * yrange'
The ranges of the bins in the y-direction are stored in an
array of NY + 1 elements pointed to by YRANGE.
`double * sum'
The cumulative probability for the bins is stored in an array
of NX*NY elements pointed to by SUM.
The following functions allow you to create a `gsl_histogram2d_pdf'
struct which represents a two-dimensional probability distribution and
generate random samples from it.
-- Function: gsl_histogram2d_pdf * gsl_histogram2d_pdf_alloc (size_t
NX, size_t NY)
This function allocates memory for a two-dimensional probability
distribution of size NX-by-NY and returns a pointer to a newly
initialized `gsl_histogram2d_pdf' struct. If insufficient memory
is available a null pointer is returned and the error handler is
invoked with an error code of `GSL_ENOMEM'.
-- Function: int gsl_histogram2d_pdf_init (gsl_histogram2d_pdf * P,
const gsl_histogram2d * H)
This function initializes the two-dimensional probability
distribution calculated P from the histogram H. If any of the
bins of H are negative then the error handler is invoked with an
error code of `GSL_EDOM' because a probability distribution cannot
contain negative values.
-- Function: void gsl_histogram2d_pdf_free (gsl_histogram2d_pdf * P)
This function frees the two-dimensional probability distribution
function P and all of the memory associated with it.
-- Function: int gsl_histogram2d_pdf_sample (const gsl_histogram2d_pdf
* P, double R1, double R2, double * X, double * Y)
This function uses two uniform random numbers between zero and one,
R1 and R2, to compute a single random sample from the
two-dimensional probability distribution P.
File: gsl-ref.info, Node: Example programs for 2D histograms, Prev: Resampling from 2D histograms, Up: Histograms
22.22 Example programs for 2D histograms
========================================
This program demonstrates two features of two-dimensional histograms.
First a 10-by-10 two-dimensional histogram is created with x and y
running from 0 to 1. Then a few sample points are added to the
histogram, at (0.3,0.3) with a height of 1, at (0.8,0.1) with a height
of 5 and at (0.7,0.9) with a height of 0.5. This histogram with three
events is used to generate a random sample of 1000 simulated events,
which are printed out.
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_histogram2d.h>
int
main (void)
{
  const gsl_rng_type * T;
  gsl_rng * r;

  gsl_histogram2d * h = gsl_histogram2d_alloc (10, 10);

  gsl_histogram2d_set_ranges_uniform (h,
                                      0.0, 1.0,
                                      0.0, 1.0);

  gsl_histogram2d_accumulate (h, 0.3, 0.3, 1);
  gsl_histogram2d_accumulate (h, 0.8, 0.1, 5);
  gsl_histogram2d_accumulate (h, 0.7, 0.9, 0.5);

  gsl_rng_env_setup ();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  {
    int i;
    gsl_histogram2d_pdf * p
      = gsl_histogram2d_pdf_alloc (h->nx, h->ny);

    gsl_histogram2d_pdf_init (p, h);

    for (i = 0; i < 1000; i++)
      {
        double x, y;
        double u = gsl_rng_uniform (r);
        double v = gsl_rng_uniform (r);

        gsl_histogram2d_pdf_sample (p, u, v, &x, &y);

        printf ("%g %g\n", x, y);
      }

    gsl_histogram2d_pdf_free (p);
  }

  gsl_histogram2d_free (h);
  gsl_rng_free (r);
  return 0;
}
File: gsl-ref.info, Node: N-tuples, Next: Monte Carlo Integration, Prev: Histograms, Up: Top
23 N-tuples
***********
This chapter describes functions for creating and manipulating
"ntuples", sets of values associated with events. The ntuples are
stored in files. Their values can be extracted in any combination and
"booked" in a histogram using a selection function.
The values to be stored are held in a user-defined data structure,
and an ntuple is created associating this data structure with a file.
The values are then written to the file (normally inside a loop) using
the ntuple functions described below.
A histogram can be created from ntuple data by providing a selection
function and a value function. The selection function specifies whether
an event should be included in the subset to be analyzed or not. The
value function computes the entry to be added to the histogram for each
event.
All the ntuple functions are defined in the header file
`gsl_ntuple.h'.
* Menu:
* The ntuple struct::
* Creating ntuples::
* Opening an existing ntuple file::
* Writing ntuples::
* Reading ntuples::
* Closing an ntuple file::
* Histogramming ntuple values::
* Example ntuple programs::
* Ntuple References and Further Reading::
File: gsl-ref.info, Node: The ntuple struct, Next: Creating ntuples, Up: N-tuples
23.1 The ntuple struct
======================
Ntuples are manipulated using the `gsl_ntuple' struct. This struct
contains information on the file where the ntuple data is stored, a
pointer to the current ntuple data row and the size of the user-defined
ntuple data struct.
typedef struct {
  FILE * file;
  void * ntuple_data;
  size_t size;
} gsl_ntuple;
File: gsl-ref.info, Node: Creating ntuples, Next: Opening an existing ntuple file, Prev: The ntuple struct, Up: N-tuples
23.2 Creating ntuples
=====================
-- Function: gsl_ntuple * gsl_ntuple_create (char * FILENAME, void *
NTUPLE_DATA, size_t SIZE)
This function creates a new write-only ntuple file FILENAME for
ntuples of size SIZE and returns a pointer to the newly created
ntuple struct. Any existing file with the same name is truncated
to zero length and overwritten. A pointer to memory for the
current ntuple row NTUPLE_DATA must be supplied--this is used to
copy ntuples in and out of the file.
File: gsl-ref.info, Node: Opening an existing ntuple file, Next: Writing ntuples, Prev: Creating ntuples, Up: N-tuples
23.3 Opening an existing ntuple file
====================================
-- Function: gsl_ntuple * gsl_ntuple_open (char * FILENAME, void *
NTUPLE_DATA, size_t SIZE)
This function opens an existing ntuple file FILENAME for reading
and returns a pointer to a corresponding ntuple struct. The
ntuples in the file must have size SIZE. A pointer to memory for
the current ntuple row NTUPLE_DATA must be supplied--this is used
to copy ntuples in and out of the file.
File: gsl-ref.info, Node: Writing ntuples, Next: Reading ntuples, Prev: Opening an existing ntuple file, Up: N-tuples
23.4 Writing ntuples
====================
-- Function: int gsl_ntuple_write (gsl_ntuple * NTUPLE)
This function writes the current ntuple NTUPLE->NTUPLE_DATA of
size NTUPLE->SIZE to the corresponding file.
-- Function: int gsl_ntuple_bookdata (gsl_ntuple * NTUPLE)
This function is a synonym for `gsl_ntuple_write'.
File: gsl-ref.info, Node: Reading ntuples, Next: Closing an ntuple file, Prev: Writing ntuples, Up: N-tuples
23.5 Reading ntuples
====================
-- Function: int gsl_ntuple_read (gsl_ntuple * NTUPLE)
This function reads the current row of the ntuple file for NTUPLE
and stores the values in NTUPLE->NTUPLE_DATA.
File: gsl-ref.info, Node: Closing an ntuple file, Next: Histogramming ntuple values, Prev: Reading ntuples, Up: N-tuples
23.6 Closing an ntuple file
===========================
-- Function: int gsl_ntuple_close (gsl_ntuple * NTUPLE)
This function closes the ntuple file NTUPLE and frees its
associated allocated memory.
File: gsl-ref.info, Node: Histogramming ntuple values, Next: Example ntuple programs, Prev: Closing an ntuple file, Up: N-tuples
23.7 Histogramming ntuple values
================================
Once an ntuple has been created its contents can be histogrammed in
various ways using the function `gsl_ntuple_project'. Two user-defined
functions must be provided, a function to select events and a function
to compute scalar values. The selection function and the value function
both accept the ntuple row as a first argument and other parameters as
a second argument.
The "selection function" determines which ntuple rows are selected
for histogramming. It is defined by the following struct,
typedef struct {
  int (* function) (void * ntuple_data, void * params);
  void * params;
} gsl_ntuple_select_fn;
The struct component FUNCTION should return a non-zero value for each
ntuple row that is to be included in the histogram.
The "value function" computes scalar values for those ntuple rows
selected by the selection function,
typedef struct {
  double (* function) (void * ntuple_data, void * params);
  void * params;
} gsl_ntuple_value_fn;
In this case the struct component FUNCTION should return the value to
be added to the histogram for the ntuple row.
-- Function: int gsl_ntuple_project (gsl_histogram * H, gsl_ntuple *
NTUPLE, gsl_ntuple_value_fn * VALUE_FUNC,
gsl_ntuple_select_fn * SELECT_FUNC)
This function updates the histogram H from the ntuple NTUPLE using
the functions VALUE_FUNC and SELECT_FUNC. For each ntuple row
where the selection function SELECT_FUNC is non-zero the
corresponding value of that row is computed using the function
VALUE_FUNC and added to the histogram. Those ntuple rows where
SELECT_FUNC returns zero are ignored. New entries are added to
the histogram, so subsequent calls can be used to accumulate
further data in the same histogram.
File: gsl-ref.info, Node: Example ntuple programs, Next: Ntuple References and Further Reading, Prev: Histogramming ntuple values, Up: N-tuples
23.8 Examples
=============
The following example programs demonstrate the use of ntuples in
managing a large dataset. The first program creates a set of 10,000
simulated "events", each with 3 associated values (x,y,z). These are
generated from a Gaussian distribution with unit variance, for
demonstration purposes, and written to the ntuple file `test.dat'.
#include <gsl/gsl_ntuple.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
struct data
{
  double x;
  double y;
  double z;
};

int
main (void)
{
  const gsl_rng_type * T;
  gsl_rng * r;

  struct data ntuple_row;
  int i;

  gsl_ntuple *ntuple
    = gsl_ntuple_create ("test.dat", &ntuple_row,
                         sizeof (ntuple_row));

  gsl_rng_env_setup ();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  for (i = 0; i < 10000; i++)
    {
      ntuple_row.x = gsl_ran_ugaussian (r);
      ntuple_row.y = gsl_ran_ugaussian (r);
      ntuple_row.z = gsl_ran_ugaussian (r);

      gsl_ntuple_write (ntuple);
    }

  gsl_ntuple_close (ntuple);
  gsl_rng_free (r);
  return 0;
}
The next program analyses the ntuple data in the file `test.dat'. The
analysis procedure is to compute the squared-magnitude of each event,
E^2=x^2+y^2+z^2, and select only those which exceed a lower limit of
1.5. The selected events are then histogrammed using their E^2 values.
#include <math.h>
#include <gsl/gsl_ntuple.h>
#include <gsl/gsl_histogram.h>
struct data
{
  double x;
  double y;
  double z;
};

int sel_func (void *ntuple_data, void *params);
double val_func (void *ntuple_data, void *params);

int
main (void)
{
  struct data ntuple_row;

  gsl_ntuple *ntuple
    = gsl_ntuple_open ("test.dat", &ntuple_row,
                       sizeof (ntuple_row));
  double lower = 1.5;

  gsl_ntuple_select_fn S;
  gsl_ntuple_value_fn V;

  gsl_histogram *h = gsl_histogram_alloc (100);
  gsl_histogram_set_ranges_uniform (h, 0.0, 10.0);

  S.function = &sel_func;
  S.params = &lower;

  V.function = &val_func;
  V.params = 0;

  gsl_ntuple_project (h, ntuple, &V, &S);
  gsl_histogram_fprintf (stdout, h, "%f", "%f");
  gsl_histogram_free (h);
  gsl_ntuple_close (ntuple);
  return 0;
}

int
sel_func (void *ntuple_data, void *params)
{
  struct data * data = (struct data *) ntuple_data;
  double x, y, z, E2, scale;
  scale = *(double *) params;

  x = data->x;
  y = data->y;
  z = data->z;

  E2 = x * x + y * y + z * z;

  return E2 > scale;
}

double
val_func (void *ntuple_data, void *params)
{
  struct data * data = (struct data *) ntuple_data;
  double x, y, z;

  x = data->x;
  y = data->y;
  z = data->z;

  return x * x + y * y + z * z;
}
The following plot shows the distribution of the selected events.
Note the cut-off at the lower bound.
File: gsl-ref.info, Node: Ntuple References and Further Reading, Prev: Example ntuple programs, Up: N-tuples
23.9 References and Further Reading
===================================
Further information on the use of ntuples can be found in the
documentation for the CERN packages PAW and HBOOK (available online).
File: gsl-ref.info, Node: Monte Carlo Integration, Next: Simulated Annealing, Prev: N-tuples, Up: Top
24 Monte Carlo Integration
**************************
This chapter describes routines for multidimensional Monte Carlo
integration. These include the traditional Monte Carlo method and
adaptive algorithms such as VEGAS and MISER which use importance
sampling and stratified sampling techniques. Each algorithm computes an
estimate of a multidimensional definite integral of the form,
I = \int_xl^xu dx \int_yl^yu dy ... f(x, y, ...)
over a hypercubic region ((x_l,x_u), (y_l,y_u), ...) using a fixed
number of function calls. The routines also provide a statistical
estimate of the error on the result. This error estimate should be
taken as a guide rather than as a strict error bound--random sampling
of the region may not uncover all the important features of the
function, resulting in an underestimate of the error.
The functions are defined in separate header files for each routine,
`gsl_monte_plain.h', `gsl_monte_miser.h' and `gsl_monte_vegas.h'.
* Menu:
* Monte Carlo Interface::
* PLAIN Monte Carlo::
* MISER::
* VEGAS::
* Monte Carlo Examples::
* Monte Carlo Integration References and Further Reading::
File: gsl-ref.info, Node: Monte Carlo Interface, Next: PLAIN Monte Carlo, Up: Monte Carlo Integration
24.1 Interface
==============
All of the Monte Carlo integration routines use the same general form of
interface. There is an allocator to allocate memory for control
variables and workspace, a routine to initialize those control
variables, the integrator itself, and a function to free the space when
done.
Each integration function requires a random number generator to be
supplied, and returns an estimate of the integral and its standard
deviation. The accuracy of the result is determined by the number of
function calls specified by the user. If a known level of accuracy is
required this can be achieved by calling the integrator several times
and averaging the individual results until the desired accuracy is
obtained.
Random sample points used within the Monte Carlo routines are always
chosen strictly within the integration region, so that endpoint
singularities are automatically avoided.
The function to be integrated has its own datatype, defined in the
header file `gsl_monte.h'.
-- Data Type: gsl_monte_function
This data type defines a general function with parameters for Monte
Carlo integration.
`double (* f) (double * X, size_t DIM, void * PARAMS)'
this function should return the value f(x,params) for the
argument X and parameters PARAMS, where X is an array of size
DIM giving the coordinates of the point where the function is
to be evaluated.
`size_t dim'
the number of dimensions for X.
`void * params'
a pointer to the parameters of the function.
Here is an example for a quadratic function in two dimensions,
f(x,y) = a x^2 + b x y + c y^2
with a = 3, b = 2, c = 1. The following code defines a
`gsl_monte_function' `F' which you could pass to an integrator:
struct my_f_params { double a; double b; double c; };

double
my_f (double x[], size_t dim, void * p)
{
  struct my_f_params * fp = (struct my_f_params *) p;

  if (dim != 2)
    {
      fprintf (stderr, "error: dim != 2");
      abort ();
    }

  return fp->a * x[0] * x[0]
           + fp->b * x[0] * x[1]
             + fp->c * x[1] * x[1];
}

gsl_monte_function F;
struct my_f_params params = { 3.0, 2.0, 1.0 };

F.f = &my_f;
F.dim = 2;
F.params = &params;
The function f(x) can be evaluated using the following macro,
#define GSL_MONTE_FN_EVAL(F,x) (*((F)->f))(x,(F)->dim,(F)->params)
File: gsl-ref.info, Node: PLAIN Monte Carlo, Next: MISER, Prev: Monte Carlo Interface, Up: Monte Carlo Integration
24.2 PLAIN Monte Carlo
======================
The plain Monte Carlo algorithm samples points randomly from the
integration region to estimate the integral and its error. Using this
algorithm the estimate of the integral E(f; N) for N randomly
distributed points x_i is given by,
E(f; N) = V <f> = (V / N) \sum_i^N f(x_i)
where V is the volume of the integration region. The error on this
estimate \sigma(E;N) is calculated from the estimated variance of the
mean,
\sigma^2 (E; N) = (V^2 / N^2) \sum_i^N (f(x_i) - <f>)^2.
For large N this variance decreases asymptotically as \Var(f)/N, where
\Var(f) is the true variance of the function over the integration
region. The error estimate itself should decrease as
\sigma(f)/\sqrt{N}. The familiar law of errors decreasing as
1/\sqrt{N} applies--to reduce the error by a factor of 10 requires a
100-fold increase in the number of sample points.
The functions described in this section are declared in the header
file `gsl_monte_plain.h'.
-- Function: gsl_monte_plain_state * gsl_monte_plain_alloc (size_t DIM)
This function allocates and initializes a workspace for Monte Carlo
integration in DIM dimensions.
-- Function: int gsl_monte_plain_init (gsl_monte_plain_state* S)
This function initializes a previously allocated integration state.
This allows an existing workspace to be reused for different
integrations.
-- Function: int gsl_monte_plain_integrate (gsl_monte_function * F,
const double XL[], const double XU[], size_t DIM, size_t
CALLS, gsl_rng * R, gsl_monte_plain_state * S, double *
RESULT, double * ABSERR)
This routine uses the plain Monte Carlo algorithm to integrate the
function F over the DIM-dimensional hypercubic region defined by
the lower and upper limits in the arrays XL and XU, each of size
DIM. The integration uses a fixed number of function calls CALLS,
and obtains random sampling points using the random number
generator R. A previously allocated workspace S must be supplied.
The result of the integration is returned in RESULT, with an
estimated absolute error ABSERR.
-- Function: void gsl_monte_plain_free (gsl_monte_plain_state * S)
This function frees the memory associated with the integrator state
S.
File: gsl-ref.info, Node: MISER, Next: VEGAS, Prev: PLAIN Monte Carlo, Up: Monte Carlo Integration
24.3 MISER
==========
The MISER algorithm of Press and Farrar is based on recursive
stratified sampling. This technique aims to reduce the overall
integration error by concentrating integration points in the regions of
highest variance.
The idea of stratified sampling begins with the observation that for
two disjoint regions a and b with Monte Carlo estimates of the integral
E_a(f) and E_b(f) and variances \sigma_a^2(f) and \sigma_b^2(f), the
variance \Var(f) of the combined estimate E(f) = (1/2) (E_a(f) + E_b(f))
is given by,
\Var(f) = (\sigma_a^2(f) / 4 N_a) + (\sigma_b^2(f) / 4 N_b).
It can be shown that this variance is minimized by distributing the
points such that,
N_a / (N_a + N_b) = \sigma_a / (\sigma_a + \sigma_b).
Hence the smallest error estimate is obtained by allocating sample
points in proportion to the standard deviation of the function in each
sub-region.
The MISER algorithm proceeds by bisecting the integration region
along one coordinate axis to give two sub-regions at each step. The
direction is chosen by examining all d possible bisections and
selecting the one which will minimize the combined variance of the two
sub-regions. The variance in the sub-regions is estimated by sampling
with a fraction of the total number of points available to the current
step. The same procedure is then repeated recursively for each of the
two half-spaces from the best bisection. The remaining sample points are
allocated to the sub-regions using the formula for N_a and N_b. This
recursive allocation of integration points continues down to a
user-specified depth where each sub-region is integrated using a plain
Monte Carlo estimate. These individual values and their error
estimates are then combined upwards to give an overall result and an
estimate of its error.
The functions described in this section are declared in the header
file `gsl_monte_miser.h'.
-- Function: gsl_monte_miser_state * gsl_monte_miser_alloc (size_t DIM)
This function allocates and initializes a workspace for Monte Carlo
integration in DIM dimensions. The workspace is used to maintain
the state of the integration.
-- Function: int gsl_monte_miser_init (gsl_monte_miser_state* S)
This function initializes a previously allocated integration state.
This allows an existing workspace to be reused for different
integrations.
-- Function: int gsl_monte_miser_integrate (gsl_monte_function * F,
const double XL[], const double XU[], size_t DIM, size_t
CALLS, gsl_rng * R, gsl_monte_miser_state * S, double *
RESULT, double * ABSERR)
This routine uses the MISER Monte Carlo algorithm to integrate the
function F over the DIM-dimensional hypercubic region defined by
the lower and upper limits in the arrays XL and XU, each of size
DIM. The integration uses a fixed number of function calls CALLS,
and obtains random sampling points using the random number
generator R. A previously allocated workspace S must be supplied.
The result of the integration is returned in RESULT, with an
estimated absolute error ABSERR.
-- Function: void gsl_monte_miser_free (gsl_monte_miser_state * S)
This function frees the memory associated with the integrator state
S.
The MISER algorithm has several configurable parameters which can be
changed using the following two functions.(1)
-- Function: void gsl_monte_miser_params_get (const
gsl_monte_miser_state * S, gsl_monte_miser_params * PARAMS)
This function copies the parameters of the integrator state into
the user-supplied PARAMS structure.
-- Function: void gsl_monte_miser_params_set (gsl_monte_miser_state *
S, const gsl_monte_miser_params * PARAMS)
This function sets the integrator parameters based on values
provided in the PARAMS structure.
Typically the values of the parameters are first read using
`gsl_monte_miser_params_get', the necessary changes are made to the
fields of the PARAMS structure, and the values are copied back into the
integrator state using `gsl_monte_miser_params_set'. The functions use
the `gsl_monte_miser_params' structure which contains the following
fields:
-- Variable: double estimate_frac
This parameter specifies the fraction of the currently available
number of function calls which are allocated to estimating the
variance at each recursive step. The default value is 0.1.
-- Variable: size_t min_calls
This parameter specifies the minimum number of function calls
required for each estimate of the variance. If the number of
function calls allocated to the estimate using ESTIMATE_FRAC falls
below MIN_CALLS then MIN_CALLS are used instead. This ensures
that each estimate maintains a reasonable level of accuracy. The
default value of MIN_CALLS is `16 * dim'.
-- Variable: size_t min_calls_per_bisection
This parameter specifies the minimum number of function calls
required to proceed with a bisection step. When a recursive step
has fewer calls available than MIN_CALLS_PER_BISECTION it performs
a plain Monte Carlo estimate of the current sub-region and
terminates its branch of the recursion. The default value of this
parameter is `32 * min_calls'.
-- Variable: double alpha
This parameter controls how the estimated variances for the two
sub-regions of a bisection are combined when allocating points.
With recursive sampling the overall variance should scale better
than 1/N, since the values from the sub-regions will be obtained
using a procedure which explicitly minimizes their variance. To
accommodate this behavior the MISER algorithm allows the total
variance to depend on a scaling parameter \alpha,
\Var(f) = {\sigma_a \over N_a^\alpha} + {\sigma_b \over N_b^\alpha}.
The authors of the original paper describing MISER recommend the
value \alpha = 2 as a good choice, obtained from numerical
experiments, and this is used as the default value in this
implementation.
-- Variable: double dither
This parameter introduces a random fractional variation of size
DITHER into each bisection, which can be used to break the
symmetry of integrands which are concentrated near the exact
center of the hypercubic integration region. The default value of
dither is zero, so no variation is introduced. If needed, a
typical value of DITHER is 0.1.
---------- Footnotes ----------
(1) The previous method of accessing these fields directly through
the `gsl_monte_miser_state' struct is now deprecated.
File: gsl-ref.info, Node: VEGAS, Next: Monte Carlo Examples, Prev: MISER, Up: Monte Carlo Integration
24.4 VEGAS
==========
The VEGAS algorithm of Lepage is based on importance sampling. It
samples points from the probability distribution described by the
function |f|, so that the points are concentrated in the regions that
make the largest contribution to the integral.
In general, if the Monte Carlo integral of f is sampled with points
distributed according to a probability distribution described by the
function g, we obtain an estimate E_g(f; N),
E_g(f; N) = E(f/g; N)
with a corresponding variance,
\Var_g(f; N) = \Var(f/g; N).
If the probability distribution is chosen as g = |f|/I(|f|) then it can
be shown that the variance \Var_g(f; N) vanishes, and the error in the
estimate will be zero. In practice it is not possible to sample from
the exact distribution g for an arbitrary function, so importance
sampling algorithms aim to produce efficient approximations to the
desired distribution.
The VEGAS algorithm approximates the exact distribution by making a
number of passes over the integration region while histogramming the
function f. Each histogram is used to define a sampling distribution
for the next pass. Asymptotically this procedure converges to the
desired distribution. In order to avoid the number of histogram bins
growing like K^d the probability distribution is approximated by a
separable function: g(x_1, x_2, ...) = g_1(x_1) g_2(x_2) ... so that
the number of bins required is only K d. This is equivalent to locating
the peaks of the function from the projections of the integrand onto
the coordinate axes. The efficiency of VEGAS depends on the validity
of this assumption. It is most efficient when the peaks of the
integrand are well-localized. If an integrand can be rewritten in a
form which is approximately separable this will increase the efficiency
of integration with VEGAS.
VEGAS incorporates a number of additional features, and combines both
stratified sampling and importance sampling. The integration region is
divided into a number of "boxes", with each box getting a fixed number
of points (the goal is 2). Each box can then have a fractional number
of bins, but if the ratio of bins-per-box is less than two, VEGAS
switches to a kind of variance reduction (rather than importance
sampling).
-- Function: gsl_monte_vegas_state * gsl_monte_vegas_alloc (size_t DIM)
This function allocates and initializes a workspace for Monte Carlo
integration in DIM dimensions. The workspace is used to maintain
the state of the integration.
-- Function: int gsl_monte_vegas_init (gsl_monte_vegas_state* S)
This function initializes a previously allocated integration state.
This allows an existing workspace to be reused for different
integrations.
-- Function: int gsl_monte_vegas_integrate (gsl_monte_function * F,
double XL[], double XU[], size_t DIM, size_t CALLS, gsl_rng *
R, gsl_monte_vegas_state * S, double * RESULT, double *
ABSERR)
This routine uses the VEGAS Monte Carlo algorithm to integrate the
function F over the DIM-dimensional hypercubic region defined by
the lower and upper limits in the arrays XL and XU, each of size
DIM. The integration uses a fixed number of function calls CALLS,
and obtains random sampling points using the random number
generator R. A previously allocated workspace S must be supplied.
The result of the integration is returned in RESULT, with an
estimated absolute error ABSERR. The result and its error
estimate are based on a weighted average of independent samples.
The chi-squared per degree of freedom for the weighted average is
returned via the state struct component, S->CHISQ, and must be
consistent with 1 for the weighted average to be reliable.
-- Function: void gsl_monte_vegas_free (gsl_monte_vegas_state * S)
This function frees the memory associated with the integrator state
S.
The VEGAS algorithm computes a number of independent estimates of the
integral internally, according to the `iterations' parameter described
below, and returns their weighted average. Random sampling of the
integrand can occasionally produce an estimate where the error is zero,
particularly if the function is constant in some regions. An estimate
with zero error causes the weighted average to break down and must be
handled separately. In the original Fortran implementations of VEGAS
the error estimate is made non-zero by substituting a small value
(typically `1e-30'). The implementation in GSL differs from this and
avoids the use of an arbitrary constant--it either assigns the value a
weight which is the average weight of the preceding estimates or
discards it according to the following procedure,
current estimate has zero error, weighted average has finite error
The current estimate is assigned a weight which is the average
weight of the preceding estimates.
current estimate has finite error, previous estimates had zero error
The previous estimates are discarded and the weighted averaging
procedure begins with the current estimate.
current estimate has zero error, previous estimates had zero error
The estimates are averaged using the arithmetic mean, but no error
is computed.
The convergence of the algorithm can be tested using the overall
chi-squared value of the results, which is available from the following
function:
-- Function: double gsl_monte_vegas_chisq (const gsl_monte_vegas_state
* S)
This function returns the chi-squared per degree of freedom for the
weighted estimate of the integral. The returned value should be
close to 1. A value which differs significantly from 1 indicates
that the values from different iterations are inconsistent. In
this case the weighted error will be under-estimated, and further
iterations of the algorithm are needed to obtain reliable results.
-- Function: void gsl_monte_vegas_runval (const gsl_monte_vegas_state
* S, double * RESULT, double * SIGMA)
This function returns the raw (unaveraged) values of the integral
RESULT and its error SIGMA from the most recent iteration of the
algorithm.
The VEGAS algorithm is highly configurable. Several parameters can
be changed using the following two functions.
-- Function: void gsl_monte_vegas_params_get (const
gsl_monte_vegas_state * S, gsl_monte_vegas_params * PARAMS)
This function copies the parameters of the integrator state into
the user-supplied PARAMS structure.
-- Function: void gsl_monte_vegas_params_set (gsl_monte_vegas_state *
S, const gsl_monte_vegas_params * PARAMS)
This function sets the integrator parameters based on values
provided in the PARAMS structure.
Typically the values of the parameters are first read using
`gsl_monte_vegas_params_get', the necessary changes are made to the
fields of the PARAMS structure, and the values are copied back into the
integrator state using `gsl_monte_vegas_params_set'. The functions use
the `gsl_monte_vegas_params' structure which contains the following
fields:
-- Variable: double alpha
The parameter `alpha' controls the stiffness of the rebinning
algorithm. It is typically set between one and two. A value of
zero prevents rebinning of the grid. The default value is 1.5.
-- Variable: size_t iterations
The number of iterations to perform for each call to the routine.
The default value is 5 iterations.
-- Variable: int stage
Setting this determines the "stage" of the calculation. Normally,
`stage = 0' which begins with a new uniform grid and empty weighted
average. Calling VEGAS with `stage = 1' retains the grid from the
previous run but discards the weighted average, so that one can
"tune" the grid using a relatively small number of points and then
do a large run with `stage = 1' on the optimized grid. Setting
`stage = 2' keeps the grid and the weighted average from the
previous run, but may increase (or decrease) the number of
histogram bins in the grid depending on the number of calls
available. Choosing `stage = 3' enters at the main loop, so that
nothing is changed, and is equivalent to performing additional
iterations in a previous call.
-- Variable: int mode
The possible choices are `GSL_VEGAS_MODE_IMPORTANCE',
`GSL_VEGAS_MODE_STRATIFIED', `GSL_VEGAS_MODE_IMPORTANCE_ONLY'.
This determines whether VEGAS will use importance sampling or
stratified sampling, or whether it can pick on its own. In low
dimensions VEGAS uses strict stratified sampling (more precisely,
stratified sampling is chosen if there are fewer than 2 bins per
box).
-- Variable: int verbose
-- Variable: FILE * ostream
These parameters set the level of information printed by VEGAS. All
information is written to the stream OSTREAM. The default setting
of VERBOSE is `-1', which turns off all output. A VERBOSE value
of `0' prints summary information about the weighted average and
final result, while a value of `1' also displays the grid
coordinates. A value of `2' prints information from the rebinning
procedure for each iteration.
The above fields and the CHISQ value can also be accessed directly
in the `gsl_monte_vegas_state' but such use is deprecated.
File: gsl-ref.info, Node: Monte Carlo Examples, Next: Monte Carlo Integration References and Further Reading, Prev: VEGAS, Up: Monte Carlo Integration
24.5 Examples
=============
The example program below uses the Monte Carlo routines to estimate the
value of the following 3-dimensional integral from the theory of random
walks,
I = \int_{-pi}^{+pi} {dk_x/(2 pi)}
\int_{-pi}^{+pi} {dk_y/(2 pi)}
\int_{-pi}^{+pi} {dk_z/(2 pi)}
1 / (1 - cos(k_x)cos(k_y)cos(k_z)).
The analytic value of this integral can be shown to be I =
\Gamma(1/4)^4/(4 \pi^3) = 1.393203929685676859.... The integral gives
the mean time spent at the origin by a random walk on a body-centered
cubic lattice in three dimensions.
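The closed form can be checked numerically with the C99 `tgamma' function; the function name `analytic_value' is introduced here for illustration.

```c
#include <math.h>

/* Evaluate the analytic value I = Gamma(1/4)^4 / (4 pi^3)
   of the random-walk integral. */
double
analytic_value (void)
{
  double g = tgamma (0.25);
  double pi = acos (-1.0);
  return (g * g * g * g) / (4.0 * pi * pi * pi);
}
```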
For simplicity we will compute the integral over the region (0,0,0)
to (\pi,\pi,\pi) and multiply by 8 to obtain the full result. The
integral is slowly varying in the middle of the region but has
integrable singularities at the corners (0,0,0), (0,\pi,\pi),
(\pi,0,\pi) and (\pi,\pi,0). The Monte Carlo routines only select
points which are strictly within the integration region and so no
special measures are needed to avoid these singularities.
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_monte.h>
#include <gsl/gsl_monte_plain.h>
#include <gsl/gsl_monte_miser.h>
#include <gsl/gsl_monte_vegas.h>
/* Computation of the integral,
I = int (dx dy dz)/(2pi)^3 1/(1-cos(x)cos(y)cos(z))
over (-pi,-pi,-pi) to (+pi, +pi, +pi). The exact answer
is Gamma(1/4)^4/(4 pi^3). This example is taken from
C.Itzykson, J.M.Drouffe, "Statistical Field Theory -
Volume 1", Section 1.1, p21, which cites the original
paper M.L.Glasser, I.J.Zucker, Proc.Natl.Acad.Sci.USA 74
1800 (1977) */
/* For simplicity we compute the integral over the region
(0,0,0) -> (pi,pi,pi) and multiply by 8 */
double exact = 1.3932039296856768591842462603255;
double
g (double *k, size_t dim, void *params)
{
double A = 1.0 / (M_PI * M_PI * M_PI);
return A / (1.0 - cos (k[0]) * cos (k[1]) * cos (k[2]));
}
void
display_results (char *title, double result, double error)
{
printf ("%s ==================\n", title);
printf ("result = % .6f\n", result);
printf ("sigma = % .6f\n", error);
printf ("exact = % .6f\n", exact);
printf ("error = % .6f = %.2g sigma\n", result - exact,
fabs (result - exact) / error);
}
int
main (void)
{
double res, err;
double xl[3] = { 0, 0, 0 };
double xu[3] = { M_PI, M_PI, M_PI };
const gsl_rng_type *T;
gsl_rng *r;
gsl_monte_function G = { &g, 3, 0 };
size_t calls = 500000;
gsl_rng_env_setup ();
T = gsl_rng_default;
r = gsl_rng_alloc (T);
{
gsl_monte_plain_state *s = gsl_monte_plain_alloc (3);
gsl_monte_plain_integrate (&G, xl, xu, 3, calls, r, s,
&res, &err);
gsl_monte_plain_free (s);
display_results ("plain", res, err);
}
{
gsl_monte_miser_state *s = gsl_monte_miser_alloc (3);
gsl_monte_miser_integrate (&G, xl, xu, 3, calls, r, s,
&res, &err);
gsl_monte_miser_free (s);
display_results ("miser", res, err);
}
{
gsl_monte_vegas_state *s = gsl_monte_vegas_alloc (3);
gsl_monte_vegas_integrate (&G, xl, xu, 3, 10000, r, s,
&res, &err);
display_results ("vegas warm-up", res, err);
printf ("converging...\n");
do
{
gsl_monte_vegas_integrate (&G, xl, xu, 3, calls/5, r, s,
&res, &err);
printf ("result = % .6f sigma = % .6f "
"chisq/dof = %.1f\n", res, err, gsl_monte_vegas_chisq (s));
}
while (fabs (gsl_monte_vegas_chisq (s) - 1.0) > 0.5);
display_results ("vegas final", res, err);
gsl_monte_vegas_free (s);
}
gsl_rng_free (r);
return 0;
}
With 500,000 function calls the plain Monte Carlo algorithm achieves a
fractional error of 1%. The estimated error `sigma' is roughly
consistent with the actual error; the computed result differs from the
true result by about 1.4 standard deviations,
plain ==================
result = 1.412209
sigma = 0.013436
exact = 1.393204
error = 0.019005 = 1.4 sigma
The MISER algorithm reduces the error by a factor of four, and also
correctly estimates the error,
miser ==================
result = 1.391322
sigma = 0.003461
exact = 1.393204
error = -0.001882 = 0.54 sigma
In the case of the VEGAS algorithm the program uses an initial warm-up
run of 10,000 function calls to prepare, or "warm up", the grid. This
is followed by a main run with five iterations of 100,000 function
calls. The chi-squared per degree of freedom for the five iterations are
checked for consistency with 1, and the run is repeated if the results
have not converged. In this case the estimates are consistent on the
first pass.
vegas warm-up ==================
result = 1.392673
sigma = 0.003410
exact = 1.393204
error = -0.000531 = 0.16 sigma
converging...
result = 1.393281 sigma = 0.000362 chisq/dof = 1.5
vegas final ==================
result = 1.393281
sigma = 0.000362
exact = 1.393204
error = 0.000077 = 0.21 sigma
If the value of `chisq' had differed significantly from 1 it would
indicate inconsistent results, with a correspondingly underestimated
error. The final estimate from VEGAS (using a similar number of
function calls) is significantly more accurate than the other two
algorithms.
File: gsl-ref.info, Node: Monte Carlo Integration References and Further Reading, Prev: Monte Carlo Examples, Up: Monte Carlo Integration
24.6 References and Further Reading
===================================
The MISER algorithm is described in the following article by Press and
Farrar,
W.H. Press, G.R. Farrar, `Recursive Stratified Sampling for
Multidimensional Monte Carlo Integration', Computers in Physics,
v4 (1990), pp190-195.
The VEGAS algorithm is described in the following papers,
G.P. Lepage, `A New Algorithm for Adaptive Multidimensional
Integration', Journal of Computational Physics 27, 192-203, (1978)
G.P. Lepage, `VEGAS: An Adaptive Multi-dimensional Integration
Program', Cornell preprint CLNS 80-447, March 1980
File: gsl-ref.info, Node: Simulated Annealing, Next: Ordinary Differential Equations, Prev: Monte Carlo Integration, Up: Top
25 Simulated Annealing
**********************
Stochastic search techniques are used when the structure of a space is
not well understood or is not smooth, so that techniques like Newton's
method (which requires calculating Jacobian derivative matrices) cannot
be used. In particular, these techniques are frequently used to solve
combinatorial optimization problems, such as the traveling salesman
problem.
The goal is to find a point in the space at which a real valued
"energy function" (or "cost function") is minimized. Simulated
annealing is a minimization technique which has given good results in
avoiding local minima; it is based on the idea of taking a random walk
through the space at successively lower temperatures, where the
probability of taking a step is given by a Boltzmann distribution.
The functions described in this chapter are declared in the header
file `gsl_siman.h'.
* Menu:
* Simulated Annealing algorithm::
* Simulated Annealing functions::
* Examples with Simulated Annealing::
* Simulated Annealing References and Further Reading::
File: gsl-ref.info, Node: Simulated Annealing algorithm, Next: Simulated Annealing functions, Up: Simulated Annealing
25.1 Simulated Annealing algorithm
==================================
The simulated annealing algorithm takes random walks through the problem
space, looking for points with low energies; in these random walks, the
probability of taking a step is determined by the Boltzmann
distribution,
p = e^{-(E_{i+1} - E_i)/(kT)}
if E_{i+1} > E_i, and p = 1 when E_{i+1} <= E_i.
In other words, a step will always occur if the new energy is lower.
If the new energy is higher, the transition can still occur, and its
likelihood grows with the temperature T and falls off exponentially
with the energy difference E_{i+1} - E_i.
The temperature T is initially set to a high value, and a random
walk is carried out at that temperature. Then the temperature is
lowered very slightly according to a "cooling schedule", for example: T
-> T/mu_T where \mu_T is slightly greater than 1.
The slight probability of taking a step that gives higher energy is
what allows simulated annealing to frequently get out of local minima.
File: gsl-ref.info, Node: Simulated Annealing functions, Next: Examples with Simulated Annealing, Prev: Simulated Annealing algorithm, Up: Simulated Annealing
25.2 Simulated Annealing functions
==================================
-- Function: void gsl_siman_solve (const gsl_rng * R, void * X0_P,
gsl_siman_Efunc_t EF, gsl_siman_step_t TAKE_STEP,
gsl_siman_metric_t DISTANCE, gsl_siman_print_t
PRINT_POSITION, gsl_siman_copy_t COPYFUNC,
gsl_siman_copy_construct_t COPY_CONSTRUCTOR,
gsl_siman_destroy_t DESTRUCTOR, size_t ELEMENT_SIZE,
gsl_siman_params_t PARAMS)
This function performs a simulated annealing search through a given
space. The space is specified by providing the functions EF and
DISTANCE. The simulated annealing steps are generated using the
random number generator R and the function TAKE_STEP.
The starting configuration of the system should be given by X0_P.
The routine offers two modes for updating configurations, a
fixed-size mode and a variable-size mode. In the fixed-size mode
the configuration is stored as a single block of memory of size
ELEMENT_SIZE. Copies of this configuration are created, copied
and destroyed internally using the standard library functions
`malloc', `memcpy' and `free'. The function pointers COPYFUNC,
COPY_CONSTRUCTOR and DESTRUCTOR should be null pointers in
fixed-size mode. In the variable-size mode the functions
COPYFUNC, COPY_CONSTRUCTOR and DESTRUCTOR are used to create, copy
and destroy configurations internally. The variable ELEMENT_SIZE
should be zero in the variable-size mode.
The PARAMS structure (described below) controls the run by
providing the temperature schedule and other tunable parameters to
the algorithm.
On exit the best result achieved during the search is placed in
`*X0_P'. If the annealing process has been successful this should
be a good approximation to the optimal point in the space.
If the function pointer PRINT_POSITION is not null, a debugging
log will be printed to `stdout' with the following columns:
#-iter #-evals temperature position energy best_energy
and the output of the function PRINT_POSITION itself. If
PRINT_POSITION is null then no information is printed.
The simulated annealing routines require several user-specified
functions to define the configuration space and energy function. The
prototypes for these functions are given below.
-- Data Type: gsl_siman_Efunc_t
This function type should return the energy of a configuration XP.
double (*gsl_siman_Efunc_t) (void *xp)
-- Data Type: gsl_siman_step_t
This function type should modify the configuration XP using a
random step taken from the generator R, up to a maximum distance of
STEP_SIZE.
void (*gsl_siman_step_t) (const gsl_rng *r, void *xp,
double step_size)
-- Data Type: gsl_siman_metric_t
This function type should return the distance between two
configurations XP and YP.
double (*gsl_siman_metric_t) (void *xp, void *yp)
-- Data Type: gsl_siman_print_t
This function type should print the contents of the configuration
XP.
void (*gsl_siman_print_t) (void *xp)
-- Data Type: gsl_siman_copy_t
This function type should copy the configuration SOURCE into DEST.
void (*gsl_siman_copy_t) (void *source, void *dest)
-- Data Type: gsl_siman_copy_construct_t
This function type should create a new copy of the configuration
XP.
void * (*gsl_siman_copy_construct_t) (void *xp)
-- Data Type: gsl_siman_destroy_t
This function type should destroy the configuration XP, freeing its
memory.
void (*gsl_siman_destroy_t) (void *xp)
-- Data Type: gsl_siman_params_t
These are the parameters that control a run of `gsl_siman_solve'.
This structure contains all the information needed to control the
search, beyond the energy function, the step function and the
initial guess.
`int n_tries'
The number of points to try for each step.
`int iters_fixed_T'
The number of iterations at each temperature.
`double step_size'
The maximum step size in the random walk.
`double k, t_initial, mu_t, t_min'
The parameters of the Boltzmann distribution and cooling
schedule.
File: gsl-ref.info, Node: Examples with Simulated Annealing, Next: Simulated Annealing References and Further Reading, Prev: Simulated Annealing functions, Up: Simulated Annealing
25.3 Examples
=============
The simulated annealing package is clumsy, and it has to be because it
is written in C, for C callers, and tries to be polymorphic at the same
time. But here we provide some examples which can be pasted into your
application with little change and should make things easier.
* Menu:
* Trivial example::
* Traveling Salesman Problem::
File: gsl-ref.info, Node: Trivial example, Next: Traveling Salesman Problem, Up: Examples with Simulated Annealing
25.3.1 Trivial example
----------------------
The first example, in one dimensional Cartesian space, sets up an energy
function which is a damped sine wave; this has many local minima, but
only one global minimum, somewhere between 1.0 and 1.5. The initial
guess given is 15.5, which is several local minima away from the global
minimum.
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <gsl/gsl_siman.h>
/* set up parameters for this simulated annealing run */
/* how many points do we try before stepping */
#define N_TRIES 200
/* how many iterations for each T? */
#define ITERS_FIXED_T 1000
/* max step size in random walk */
#define STEP_SIZE 1.0
/* Boltzmann constant */
#define K 1.0
/* initial temperature */
#define T_INITIAL 0.008
/* damping factor for temperature */
#define MU_T 1.003
#define T_MIN 2.0e-6
gsl_siman_params_t params
= {N_TRIES, ITERS_FIXED_T, STEP_SIZE,
K, T_INITIAL, MU_T, T_MIN};
/* now some functions to test in one dimension */
double E1(void *xp)
{
double x = * ((double *) xp);
return exp(-pow((x-1.0),2.0))*sin(8*x);
}
double M1(void *xp, void *yp)
{
double x = *((double *) xp);
double y = *((double *) yp);
return fabs(x - y);
}
void S1(const gsl_rng * r, void *xp, double step_size)
{
double old_x = *((double *) xp);
double new_x;
double u = gsl_rng_uniform(r);
new_x = u * 2 * step_size - step_size + old_x;
memcpy(xp, &new_x, sizeof(new_x));
}
void P1(void *xp)
{
printf ("%12g", *((double *) xp));
}
int
main(int argc, char *argv[])
{
const gsl_rng_type * T;
gsl_rng * r;
double x_initial = 15.5;
gsl_rng_env_setup();
T = gsl_rng_default;
r = gsl_rng_alloc(T);
gsl_siman_solve(r, &x_initial, E1, S1, M1, P1,
NULL, NULL, NULL,
sizeof(double), params);
gsl_rng_free (r);
return 0;
}
Here are a couple of plots that are generated by running
`siman_test' in the following way:
$ ./siman_test | awk '!/^#/ {print $1, $4}'
| graph -y 1.34 1.4 -W0 -X generation -Y position
| plot -Tps > siman-test.eps
$ ./siman_test | awk '!/^#/ {print $1, $5}'
| graph -y -0.88 -0.83 -W0 -X generation -Y energy
| plot -Tps > siman-energy.eps
File: gsl-ref.info, Node: Traveling Salesman Problem, Prev: Trivial example, Up: Examples with Simulated Annealing
25.3.2 Traveling Salesman Problem
---------------------------------
The TSP ("Traveling Salesman Problem") is the classic combinatorial
optimization problem. I have provided a very simple version of it,
based on the coordinates of twelve cities in the southwestern United
States. This should maybe be called the "Flying Salesman Problem",
since I am using the great-circle distance between cities, rather than
the driving distance. Also: I assume the earth is a sphere, so I don't
use geoid distances.
The `gsl_siman_solve' routine finds a route which is 3490.62
Kilometers long; this is confirmed by an exhaustive search of all
possible routes with the same initial city.
The full code can be found in `siman/siman_tsp.c', but I include
here some plots generated in the following way:
$ ./siman_tsp > tsp.output
$ grep -v "^#" tsp.output
| awk '{print $1, $NF}'
| graph -y 3300 6500 -W0 -X generation -Y distance
-L "TSP - 12 southwest cities"
| plot -Tps > 12-cities.eps
$ grep initial_city_coord tsp.output
| awk '{print $2, $3}'
| graph -X "longitude (- means west)" -Y "latitude"
-L "TSP - initial-order" -f 0.03 -S 1 0.1
| plot -Tps > initial-route.eps
$ grep final_city_coord tsp.output
| awk '{print $2, $3}'
| graph -X "longitude (- means west)" -Y "latitude"
-L "TSP - final-order" -f 0.03 -S 1 0.1
| plot -Tps > final-route.eps
This is the output showing the initial order of the cities; longitude is
negative, since it is west and I want the plot to look like a map.
# initial coordinates of cities (longitude and latitude)
###initial_city_coord: -105.95 35.68 Santa Fe
###initial_city_coord: -112.07 33.54 Phoenix
###initial_city_coord: -106.62 35.12 Albuquerque
###initial_city_coord: -103.2 34.41 Clovis
###initial_city_coord: -107.87 37.29 Durango
###initial_city_coord: -96.77 32.79 Dallas
###initial_city_coord: -105.92 35.77 Tesuque
###initial_city_coord: -107.84 35.15 Grants
###initial_city_coord: -106.28 35.89 Los Alamos
###initial_city_coord: -106.76 32.34 Las Cruces
###initial_city_coord: -108.58 37.35 Cortez
###initial_city_coord: -108.74 35.52 Gallup
###initial_city_coord: -105.95 35.68 Santa Fe
The optimal route turns out to be:
# final coordinates of cities (longitude and latitude)
###final_city_coord: -105.95 35.68 Santa Fe
###final_city_coord: -103.2 34.41 Clovis
###final_city_coord: -96.77 32.79 Dallas
###final_city_coord: -106.76 32.34 Las Cruces
###final_city_coord: -112.07 33.54 Phoenix
###final_city_coord: -108.74 35.52 Gallup
###final_city_coord: -108.58 37.35 Cortez
###final_city_coord: -107.87 37.29 Durango
###final_city_coord: -107.84 35.15 Grants
###final_city_coord: -106.62 35.12 Albuquerque
###final_city_coord: -106.28 35.89 Los Alamos
###final_city_coord: -105.92 35.77 Tesuque
###final_city_coord: -105.95 35.68 Santa Fe
Here's a plot of the cost function (energy) versus generation (point in
the calculation at which a new temperature is set) for this problem:
File: gsl-ref.info, Node: Simulated Annealing References and Further Reading, Prev: Examples with Simulated Annealing, Up: Simulated Annealing
25.4 References and Further Reading
===================================
Further information is available in the following book,
`Modern Heuristic Techniques for Combinatorial Problems', Colin R.
Reeves (ed.), McGraw-Hill, 1995 (ISBN 0-07-709239-2).
File: gsl-ref.info, Node: Ordinary Differential Equations, Next: Interpolation, Prev: Simulated Annealing, Up: Top
26 Ordinary Differential Equations
**********************************
This chapter describes functions for solving ordinary differential
equation (ODE) initial value problems. The library provides a variety
of low-level methods, such as Runge-Kutta and Bulirsch-Stoer routines,
and higher-level components for adaptive step-size control. The
components can be combined by the user to achieve the desired solution,
with full access to any intermediate steps. A driver object can be used
as a high level wrapper for easy use of low level functions.
These functions are declared in the header file `gsl_odeiv2.h'.
This is a new interface in version 1.15 and uses the prefix
`gsl_odeiv2' for all functions. It is recommended over the previous
`gsl_odeiv' implementation defined in `gsl_odeiv.h'. The old interface
has been retained under the original name for backwards compatibility.
* Menu:
* Defining the ODE System::
* Stepping Functions::
* Adaptive Step-size Control::
* Evolution::
* Driver::
* ODE Example programs::
* ODE References and Further Reading::
File: gsl-ref.info, Node: Defining the ODE System, Next: Stepping Functions, Up: Ordinary Differential Equations
26.1 Defining the ODE System
============================
The routines solve the general n-dimensional first-order system,
dy_i(t)/dt = f_i(t, y_1(t), ..., y_n(t))
for i = 1, \dots, n. The stepping functions rely on the vector of
derivatives f_i and the Jacobian matrix, J_{ij} = df_i(t,y(t)) / dy_j.
A system of equations is defined using the `gsl_odeiv2_system' datatype.
-- Data Type: gsl_odeiv2_system
This data type defines a general ODE system with arbitrary
parameters.
`int (* function) (double t, const double y[], double dydt[], void * params)'
This function should store the vector elements
f_i(t,y,params) in the array DYDT, for arguments (T,Y) and
parameters PARAMS.
The function should return `GSL_SUCCESS' if the calculation
was completed successfully. Any other return value indicates
an error. A special return value `GSL_EBADFUNC' causes
`gsl_odeiv2' routines to immediately stop and return. The
user must call an appropriate reset function (e.g.
`gsl_odeiv2_driver_reset' or `gsl_odeiv2_step_reset') before
continuing. Use return values distinct from standard GSL
error codes to distinguish your function as the source of the
error.
`int (* jacobian) (double t, const double y[], double * dfdy, double dfdt[], void * params);'
This function should store the vector of time-derivative elements
df_i(t,y,params)/dt in the array DFDT and the Jacobian matrix
J_{ij} in the array
DFDY, regarded as a row-ordered matrix `J(i,j) = dfdy[i *
dimension + j]' where `dimension' is the dimension of the
system.
Not all of the stepper algorithms of `gsl_odeiv2' make use of
the Jacobian matrix, so it may not be necessary to provide
this function (the `jacobian' element of the struct can be
replaced by a null pointer for those algorithms).
The function should return `GSL_SUCCESS' if the calculation
was completed successfully. Any other return value indicates
an error. A special return value `GSL_EBADFUNC' causes
`gsl_odeiv2' routines to immediately stop and return. The
user must call an appropriate reset function (e.g.
`gsl_odeiv2_driver_reset' or `gsl_odeiv2_step_reset') before
continuing. Use return values distinct from standard GSL
error codes to distinguish your function as the source of the
error.
`size_t dimension;'
This is the dimension of the system of equations.
`void * params'
This is a pointer to the arbitrary parameters of the system.
File: gsl-ref.info, Node: Stepping Functions, Next: Adaptive Step-size Control, Prev: Defining the ODE System, Up: Ordinary Differential Equations
26.2 Stepping Functions
=======================
The lowest level components are the "stepping functions" which advance
a solution from time t to t+h for a fixed step-size h and estimate the
resulting local error.
-- Function: gsl_odeiv2_step * gsl_odeiv2_step_alloc (const
gsl_odeiv2_step_type * T, size_t DIM)
This function returns a pointer to a newly allocated instance of a
stepping function of type T for a system of DIM dimensions. Please
note that if you use a stepper method that requires access to a
driver object, it is advisable to use a driver allocation method,
which automatically allocates a stepper, too.
-- Function: int gsl_odeiv2_step_reset (gsl_odeiv2_step * S)
This function resets the stepping function S. It should be used
whenever the next use of S will not be a continuation of a
previous step.
-- Function: void gsl_odeiv2_step_free (gsl_odeiv2_step * S)
This function frees all the memory associated with the stepping
function S.
-- Function: const char * gsl_odeiv2_step_name (const gsl_odeiv2_step
* S)
This function returns a pointer to the name of the stepping
function. For example,
printf ("step method is '%s'\n",
gsl_odeiv2_step_name (s));
would print something like `step method is 'rkf45''.
-- Function: unsigned int gsl_odeiv2_step_order (const gsl_odeiv2_step
* S)
This function returns the order of the stepping function on the
previous step. The order can vary if the stepping function itself
is adaptive.
-- Function: int gsl_odeiv2_step_set_driver (gsl_odeiv2_step * S,
const gsl_odeiv2_driver * D)
This function sets a pointer to the driver object D for stepper S,
to allow the stepper to access the control (and evolve) objects
through the driver object. This is a requirement for some steppers,
in order to obtain the desired error level for the internal
iteration of the stepper. Allocation of a driver object calls this
function automatically.
-- Function: int gsl_odeiv2_step_apply (gsl_odeiv2_step * S, double T,
double H, double Y[], double YERR[], const double DYDT_IN[],
double DYDT_OUT[], const gsl_odeiv2_system * SYS)
This function applies the stepping function S to the system of
equations defined by SYS, using the step-size H to advance the
system from time T and state Y to time T+H. The new state of the
system is stored in Y on output, with an estimate of the absolute
error in each component stored in YERR. If the argument DYDT_IN
is not null it should point to an array containing the derivatives
for the system at time T on input. This is optional as the
derivatives will be computed internally if they are not provided,
but allows the reuse of existing derivative information. On
output the new derivatives of the system at time T+H will be
stored in DYDT_OUT if it is not null.
The stepping function returns `GSL_FAILURE' if it is unable to
compute the requested step. Also, if the user-supplied functions
defined in the system SYS return a status other than `GSL_SUCCESS'
the step will be aborted. In that case, the elements of Y will be
restored to their pre-step values and the error code from the
user-supplied function will be returned. Failure may be due to a
singularity in the system or a step-size H which is too large. In that case
the step should be attempted again with a smaller step-size, e.g.
H/2.
If the driver object is not appropriately set via
`gsl_odeiv2_step_set_driver' for those steppers that need it, the
stepping function returns `GSL_EFAULT'. If the user-supplied
functions defined in the system SYS return `GSL_EBADFUNC', the
function returns immediately with the same return code. In this
case the user must call `gsl_odeiv2_step_reset' before calling
this function again.
The following algorithms are available,
-- Step Type: gsl_odeiv2_step_rk2
Explicit embedded Runge-Kutta (2, 3) method.
-- Step Type: gsl_odeiv2_step_rk4
Explicit 4th order (classical) Runge-Kutta. Error estimation is
carried out by the step doubling method. For more efficient
estimate of the error, use the embedded methods described below.
-- Step Type: gsl_odeiv2_step_rkf45
Explicit embedded Runge-Kutta-Fehlberg (4, 5) method. This method
is a good general-purpose integrator.
-- Step Type: gsl_odeiv2_step_rkck
Explicit embedded Runge-Kutta Cash-Karp (4, 5) method.
-- Step Type: gsl_odeiv2_step_rk8pd
Explicit embedded Runge-Kutta Prince-Dormand (8, 9) method.
-- Step Type: gsl_odeiv2_step_rk1imp
Implicit Gaussian first order Runge-Kutta. Also known as the
implicit Euler or backward Euler method. Error estimation is carried out by
the step doubling method. This algorithm requires the Jacobian and
access to the driver object via `gsl_odeiv2_step_set_driver'.
-- Step Type: gsl_odeiv2_step_rk2imp
Implicit Gaussian second order Runge-Kutta. Also known as implicit
mid-point rule. Error estimation is carried out by the step
doubling method. This stepper requires the Jacobian and access to
the driver object via `gsl_odeiv2_step_set_driver'.
-- Step Type: gsl_odeiv2_step_rk4imp
Implicit Gaussian 4th order Runge-Kutta. Error estimation is
carried out by the step doubling method. This algorithm requires
the Jacobian and access to the driver object via
`gsl_odeiv2_step_set_driver'.
-- Step Type: gsl_odeiv2_step_bsimp
Implicit Bulirsch-Stoer method of Bader and Deuflhard. The method
is generally suitable for stiff problems. This stepper requires the
Jacobian.
-- Step Type: gsl_odeiv2_step_msadams
A variable-coefficient linear multistep Adams method in Nordsieck
form. This stepper uses explicit Adams-Bashforth (predictor) and
implicit Adams-Moulton (corrector) methods in P(EC)^m functional
iteration mode. Method order varies dynamically between 1 and 12.
This stepper requires access to the driver object via
`gsl_odeiv2_step_set_driver'.
-- Step Type: gsl_odeiv2_step_msbdf
A variable-coefficient linear multistep backward differentiation
formula (BDF) method in Nordsieck form. This stepper uses the
explicit BDF formula as predictor and implicit BDF formula as
corrector. A modified Newton iteration method is used to solve the
system of non-linear equations. Method order varies dynamically
between 1 and 5. The method is generally suitable for stiff
problems. This stepper requires the Jacobian and access to the
driver object via `gsl_odeiv2_step_set_driver'.
File: gsl-ref.info, Node: Adaptive Step-size Control, Next: Evolution, Prev: Stepping Functions, Up: Ordinary Differential Equations
26.3 Adaptive Step-size Control
===============================
The control function examines the proposed change to the solution
produced by a stepping function and attempts to determine the optimal
step-size for a user-specified level of error.
-- Function: gsl_odeiv2_control * gsl_odeiv2_control_standard_new
(double EPS_ABS, double EPS_REL, double A_Y, double A_DYDT)
The standard control object is a four parameter heuristic based on
absolute and relative errors EPS_ABS and EPS_REL, and scaling
factors A_Y and A_DYDT for the system state y(t) and derivatives
y'(t) respectively.
The step-size adjustment procedure for this method begins by
computing the desired error level D_i for each component,
D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y\prime_i|)
and comparing it with the observed error E_i = |yerr_i|. If the
observed error E exceeds the desired error level D by more than
10% for any component then the method reduces the step-size by an
appropriate factor,
h_new = h_old * S * (E/D)^(-1/q)
where q is the consistency order of the method (e.g. q=4 for 4(5)
embedded RK), and S is a safety factor of 0.9. The ratio E/D is
taken to be the maximum of the ratios E_i/D_i.
If the observed error E is less than 50% of the desired error
level D for the maximum ratio E_i/D_i then the algorithm takes the
opportunity to increase the step-size to bring the error in line
with the desired level,
h_new = h_old * S * (E/D)^(-1/(q+1))
This encompasses all the standard error scaling methods. To avoid
uncontrolled changes in the stepsize, the overall scaling factor is
limited to the range 1/5 to 5.
-- Function: gsl_odeiv2_control * gsl_odeiv2_control_y_new (double
EPS_ABS, double EPS_REL)
This function creates a new control object which will keep the
local error on each step within an absolute error of EPS_ABS and
relative error of EPS_REL with respect to the solution y_i(t).
This is equivalent to the standard control object with A_Y=1 and
A_DYDT=0.
-- Function: gsl_odeiv2_control * gsl_odeiv2_control_yp_new (double
EPS_ABS, double EPS_REL)
This function creates a new control object which will keep the
local error on each step within an absolute error of EPS_ABS and
relative error of EPS_REL with respect to the derivatives of the
solution y'_i(t). This is equivalent to the standard control
object with A_Y=0 and A_DYDT=1.
-- Function: gsl_odeiv2_control * gsl_odeiv2_control_scaled_new
(double EPS_ABS, double EPS_REL, double A_Y, double A_DYDT,
const double SCALE_ABS[], size_t DIM)
This function creates a new control object which uses the same
algorithm as `gsl_odeiv2_control_standard_new' but with an
absolute error which is scaled for each component by the array
SCALE_ABS. The formula for D_i for this control object is,
D_i = eps_abs * s_i + eps_rel * (a_y |y_i| + a_dydt h |y\prime_i|)
where s_i is the i-th component of the array SCALE_ABS. The same
error control heuristic is used by the Matlab ODE suite.
-- Function: gsl_odeiv2_control * gsl_odeiv2_control_alloc (const
gsl_odeiv2_control_type * T)
This function returns a pointer to a newly allocated instance of a
control function of type T. This function is only needed for
defining new types of control functions. For most purposes the
standard control functions described above should be sufficient.
-- Function: int gsl_odeiv2_control_init (gsl_odeiv2_control * C,
double EPS_ABS, double EPS_REL, double A_Y, double A_DYDT)
This function initializes the control function C with the
parameters EPS_ABS (absolute error), EPS_REL (relative error), A_Y
(scaling factor for y) and A_DYDT (scaling factor for derivatives).
-- Function: void gsl_odeiv2_control_free (gsl_odeiv2_control * C)
This function frees all the memory associated with the control
function C.
-- Function: int gsl_odeiv2_control_hadjust (gsl_odeiv2_control * C,
gsl_odeiv2_step * S, const double Y[], const double YERR[],
const double DYDT[], double * H)
This function adjusts the step-size H using the control function
C, and the current values of Y, YERR and DYDT. The stepping
function STEP is also needed to determine the order of the method.
If the error in the y-values YERR is found to be too large then
the step-size H is reduced and the function returns
`GSL_ODEIV_HADJ_DEC'. If the error is sufficiently small then H
may be increased and `GSL_ODEIV_HADJ_INC' is returned. The
function returns `GSL_ODEIV_HADJ_NIL' if the step-size is
unchanged. The goal of the function is to estimate the largest
step-size which satisfies the user-specified accuracy requirements
for the current point.
-- Function: const char * gsl_odeiv2_control_name (const
gsl_odeiv2_control * C)
This function returns a pointer to the name of the control
function. For example,
printf ("control method is '%s'\n",
gsl_odeiv2_control_name (c));
would print something like `control method is 'standard''.
-- Function: int gsl_odeiv2_control_errlevel (gsl_odeiv2_control * C,
const double Y, const double DYDT, const double H, const
size_t IND, double * ERRLEV)
This function calculates the desired error level of the IND-th
component and stores it in ERRLEV. It requires the value (Y) of the
component, the value of its derivative (DYDT), and the current step
size H.
-- Function: int gsl_odeiv2_control_set_driver (gsl_odeiv2_control *
C, const gsl_odeiv2_driver * D)
This function sets a pointer to the driver object D for the control
object C.
File: gsl-ref.info, Node: Evolution, Next: Driver, Prev: Adaptive Step-size Control, Up: Ordinary Differential Equations
26.4 Evolution
==============
The evolution function combines the results of a stepping function and
control function to reliably advance the solution forward one step
using an acceptable step-size.
-- Function: gsl_odeiv2_evolve * gsl_odeiv2_evolve_alloc (size_t DIM)
This function returns a pointer to a newly allocated instance of an
evolution function for a system of DIM dimensions.
-- Function: int gsl_odeiv2_evolve_apply (gsl_odeiv2_evolve * E,
gsl_odeiv2_control * CON, gsl_odeiv2_step * STEP, const
gsl_odeiv2_system * SYS, double * T, double T1, double * H,
double Y[])
This function advances the system (E, SYS) from time T and
position Y using the stepping function STEP. The new time and
position are stored in T and Y on output.
The initial step-size is taken as H. The control function CON is
applied to check whether the local error estimated by the stepping
function STEP using step-size H exceeds the required error
tolerance. If the error is too high, the step is retried by
calling STEP with a decreased step-size. This process is continued
until an acceptable step-size is found. An estimate of the local
error for the step can be obtained from the components of the
array `E->yerr[]'.
If the user-supplied functions defined in the system SYS return
`GSL_EBADFUNC', the function returns immediately with the same
return code. In this case the user must call
`gsl_odeiv2_step_reset' and `gsl_odeiv2_evolve_reset' before
calling this function again.
Otherwise, if the user-supplied functions defined in the system
SYS or the stepping function STEP return a status other than
`GSL_SUCCESS', the step is retried with a decreased step-size. If
the step-size decreases below machine precision, a status of
`GSL_FAILURE' is returned if the user functions returned
`GSL_SUCCESS'. Otherwise the value returned by the user function is
returned. If no acceptable step can be made, T and Y will be
restored to their pre-step values and H contains the final
attempted step-size.
If the step is successful the function returns a suggested
step-size for the next step in H. The maximum time T1 is guaranteed
not to be exceeded by the time-step. On the final time-step the
value of T will be set to T1 exactly.
-- Function: int gsl_odeiv2_evolve_apply_fixed_step (gsl_odeiv2_evolve
* E, gsl_odeiv2_control * CON, gsl_odeiv2_step * STEP, const
gsl_odeiv2_system * SYS, double * T, const double H, double
Y[])
This function advances the ODE-system (E, SYS, CON) from time T
and position Y using the stepping function STEP by a specified
step size H. If the local error estimated by the stepping function
exceeds the desired error level, the step is not taken and the
function returns `GSL_FAILURE'. Otherwise the value returned by the
user function is returned.
-- Function: int gsl_odeiv2_evolve_reset (gsl_odeiv2_evolve * E)
This function resets the evolution function E. It should be used
whenever the next use of E will not be a continuation of a
previous step.
-- Function: void gsl_odeiv2_evolve_free (gsl_odeiv2_evolve * E)
This function frees all the memory associated with the evolution
function E.
-- Function: int gsl_odeiv2_evolve_set_driver (gsl_odeiv2_evolve * E,
const gsl_odeiv2_driver * D)
This function sets a pointer to the driver object D for the evolve
object E.
If a system has discontinuous changes in the derivatives at known
points, it is advisable to evolve the system between each discontinuity
in sequence. For example, if a step-change in an external driving
force occurs at times t_a, t_b and t_c then evolution should be carried
out over the ranges (t_0,t_a), (t_a,t_b), (t_b,t_c), and (t_c,t_1)
separately and not directly over the range (t_0,t_1).
File: gsl-ref.info, Node: Driver, Next: ODE Example programs, Prev: Evolution, Up: Ordinary Differential Equations
26.5 Driver
===========
The driver object is a high level wrapper that combines the evolution,
control and stepper objects for easy use.
-- Function: gsl_odeiv2_driver * gsl_odeiv2_driver_alloc_y_new (const
gsl_odeiv2_system * SYS, const gsl_odeiv2_step_type * T,
const double HSTART, const double EPSABS, const double EPSREL)
-- Function: gsl_odeiv2_driver * gsl_odeiv2_driver_alloc_yp_new (const
gsl_odeiv2_system * SYS, const gsl_odeiv2_step_type * T,
const double HSTART, const double EPSABS, const double EPSREL)
-- Function: gsl_odeiv2_driver * gsl_odeiv2_driver_alloc_standard_new
(const gsl_odeiv2_system * SYS, const gsl_odeiv2_step_type *
T, const double HSTART, const double EPSABS, const double
EPSREL, const double A_Y, const double A_DYDT)
-- Function: gsl_odeiv2_driver * gsl_odeiv2_driver_alloc_scaled_new
(const gsl_odeiv2_system * SYS, const gsl_odeiv2_step_type *
T, const double HSTART, const double EPSABS, const double
EPSREL, const double A_Y, const double A_DYDT, const double
SCALE_ABS[])
These functions return a pointer to a newly allocated instance of a
driver object. The functions automatically allocate and initialise
the evolve, control and stepper objects for ODE system SYS using
stepper type T. The initial step size is given in HSTART. The rest
of the arguments follow the syntax and semantics of the control
functions with same name (`gsl_odeiv2_control_*_new').
-- Function: int gsl_odeiv2_driver_set_hmin (gsl_odeiv2_driver * d,
const double hmin)
This function sets the minimum allowed step size HMIN for driver
D. The default value is 0.
-- Function: int gsl_odeiv2_driver_set_hmax (gsl_odeiv2_driver * d,
const double hmax)
This function sets the maximum allowed step size HMAX for driver
D. The default value is `GSL_DBL_MAX'.
-- Function: int gsl_odeiv2_driver_set_nmax (gsl_odeiv2_driver * d,
const unsigned long int nmax)
This function sets the maximum allowed number of steps NMAX for
driver D. The default value of 0 sets no limit on the number of steps.
-- Function: int gsl_odeiv2_driver_apply (gsl_odeiv2_driver * D,
double * T, const double T1, double Y[])
This function evolves the driver system D from T to T1. Initially
the vector Y should contain the values of the dependent variables at
point T. If the function is unable to complete the calculation, an
error code from `gsl_odeiv2_evolve_apply' is returned, and T and Y
contain the values from the last successful step.
If the maximum number of steps is reached, a value of `GSL_EMAXITER'
is returned. If the step size drops below the minimum value, the
function returns with `GSL_ENOPROG'. If the user-supplied functions
defined in the system SYS return `GSL_EBADFUNC', the function
returns immediately with the same return code. In this case the
user must call `gsl_odeiv2_driver_reset' before calling this
function again.
-- Function: int gsl_odeiv2_driver_apply_fixed_step (gsl_odeiv2_driver
* D, double * T, const double H, const unsigned long int N,
double Y[])
This function evolves the driver system D from T with N steps of
size H. If the function is unable to complete the calculation, an
error code from `gsl_odeiv2_evolve_apply_fixed_step' is returned,
and T and Y contain the values from the last successful step.
-- Function: int gsl_odeiv2_driver_reset (gsl_odeiv2_driver * d)
This function resets the evolution and stepper objects.
-- Function: int gsl_odeiv2_driver_free (gsl_odeiv2_driver * d)
This function frees the driver object, and the related evolution,
stepper and control objects.
File: gsl-ref.info, Node: ODE Example programs, Next: ODE References and Further Reading, Prev: Driver, Up: Ordinary Differential Equations
26.6 Examples
=============
The following program solves the second-order nonlinear Van der Pol
oscillator equation,
x\prime\prime(t) + \mu x\prime(t) (x(t)^2 - 1) + x(t) = 0
This can be converted into a first order system suitable for use with
the routines described in this chapter by introducing a separate
variable for the velocity, y = x'(t),
x\prime = y
y\prime = -x + \mu y (1-x^2)
The program begins by defining functions for these derivatives and
their Jacobian. The main function uses driver level functions to solve
the problem. The program evolves the solution from (y, y\prime) = (1,
0) at t=0 to t=100. The step-size h is automatically adjusted by the
controller to maintain an absolute accuracy of 10^{-6} in the function
values Y. The loop in the example prints the solution at the points
t_i = 1, 2, \dots, 100.
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_odeiv2.h>
int
func (double t, const double y[], double f[],
void *params)
{
double mu = *(double *)params;
f[0] = y[1];
f[1] = -y[0] - mu*y[1]*(y[0]*y[0] - 1);
return GSL_SUCCESS;
}
int
jac (double t, const double y[], double *dfdy,
double dfdt[], void *params)
{
double mu = *(double *)params;
gsl_matrix_view dfdy_mat
= gsl_matrix_view_array (dfdy, 2, 2);
gsl_matrix * m = &dfdy_mat.matrix;
gsl_matrix_set (m, 0, 0, 0.0);
gsl_matrix_set (m, 0, 1, 1.0);
gsl_matrix_set (m, 1, 0, -2.0*mu*y[0]*y[1] - 1.0);
gsl_matrix_set (m, 1, 1, -mu*(y[0]*y[0] - 1.0));
dfdt[0] = 0.0;
dfdt[1] = 0.0;
return GSL_SUCCESS;
}
int
main (void)
{
double mu = 10;
gsl_odeiv2_system sys = {func, jac, 2, &mu};
gsl_odeiv2_driver * d =
gsl_odeiv2_driver_alloc_y_new (&sys, gsl_odeiv2_step_rk8pd,
1e-6, 1e-6, 0.0);
int i;
double t = 0.0, t1 = 100.0;
double y[2] = { 1.0, 0.0 };
for (i = 1; i <= 100; i++)
{
double ti = i * t1 / 100.0;
int status = gsl_odeiv2_driver_apply (d, &t, ti, y);
if (status != GSL_SUCCESS)
{
printf ("error, return value=%d\n", status);
break;
}
printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
}
gsl_odeiv2_driver_free (d);
return 0;
}
The user can work with the lower level functions directly, as in the
following example. In this case an intermediate result is printed after
each successful step instead of equidistant time points.
int
main (void)
{
const gsl_odeiv2_step_type * T
= gsl_odeiv2_step_rk8pd;
gsl_odeiv2_step * s
= gsl_odeiv2_step_alloc (T, 2);
gsl_odeiv2_control * c
= gsl_odeiv2_control_y_new (1e-6, 0.0);
gsl_odeiv2_evolve * e
= gsl_odeiv2_evolve_alloc (2);
double mu = 10;
gsl_odeiv2_system sys = {func, jac, 2, &mu};
double t = 0.0, t1 = 100.0;
double h = 1e-6;
double y[2] = { 1.0, 0.0 };
while (t < t1)
{
int status = gsl_odeiv2_evolve_apply (e, c, s,
&sys,
&t, t1,
&h, y);
if (status != GSL_SUCCESS)
break;
printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
}
gsl_odeiv2_evolve_free (e);
gsl_odeiv2_control_free (c);
gsl_odeiv2_step_free (s);
return 0;
}
For functions with multiple parameters, the appropriate information can
be passed in through the PARAMS argument of the `gsl_odeiv2_system'
definition (MU in this example) by using a pointer to a struct.
It is also possible to work with a non-adaptive integrator, using only
the stepping function itself or the fixed-step functions
`gsl_odeiv2_driver_apply_fixed_step' and
`gsl_odeiv2_evolve_apply_fixed_step'. The following program uses the
driver-level function with the fourth-order Runge-Kutta stepping
function and a fixed stepsize of 0.001.
int
main (void)
{
double mu = 10;
gsl_odeiv2_system sys = { func, jac, 2, &mu };
gsl_odeiv2_driver *d =
gsl_odeiv2_driver_alloc_y_new (&sys, gsl_odeiv2_step_rk4,
1e-3, 1e-8, 1e-8);
double t = 0.0;
double y[2] = { 1.0, 0.0 };
int i, s;
for (i = 0; i < 100; i++)
{
s = gsl_odeiv2_driver_apply_fixed_step (d, &t, 1e-3, 1000, y);
if (s != GSL_SUCCESS)
{
printf ("error: driver returned %d\n", s);
break;
}
printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
}
gsl_odeiv2_driver_free (d);
return s;
}
File: gsl-ref.info, Node: ODE References and Further Reading, Prev: ODE Example programs, Up: Ordinary Differential Equations
26.7 References and Further Reading
===================================
Ascher, U.M., Petzold, L.R., `Computer Methods for Ordinary
Differential and Differential-Algebraic Equations', SIAM,
Philadelphia, 1998.
Hairer, E., Norsett, S. P., Wanner, G., `Solving Ordinary
Differential Equations I: Nonstiff Problems', Springer, Berlin,
1993.
Hairer, E., Wanner, G., `Solving Ordinary Differential Equations
II: Stiff and Differential-Algebraic Problems', Springer, Berlin,
1996.
Many of the basic Runge-Kutta formulas can be found in the Handbook
of Mathematical Functions,
Abramowitz & Stegun (eds.), `Handbook of Mathematical Functions',
Section 25.5.
The implicit Bulirsch-Stoer algorithm `bsimp' is described in the
following paper,
G. Bader and P. Deuflhard, "A Semi-Implicit Mid-Point Rule for
Stiff Systems of Ordinary Differential Equations.", Numer. Math.
41, 373-398, 1983.
The Adams and BDF multistep methods `msadams' and `msbdf' are based on
the following articles,
G. D. Byrne and A. C. Hindmarsh, "A Polyalgorithm for the
Numerical Solution of Ordinary Differential Equations.", ACM
Trans. Math. Software, 1, 71-96, 1975.
P. N. Brown, G. D. Byrne and A. C. Hindmarsh, "VODE: A
Variable-coefficient ODE Solver.", SIAM J. Sci. Stat. Comput. 10,
1038-1051, 1989.
A. C. Hindmarsh, P. N. Brown, K. E. Grant, S. L. Lee, R. Serban,
D. E. Shumaker and C. S. Woodward, "SUNDIALS: Suite of Nonlinear
and Differential/Algebraic Equation Solvers.", ACM Trans. Math.
Software 31, 363-396, 2005.
File: gsl-ref.info, Node: Interpolation, Next: Numerical Differentiation, Prev: Ordinary Differential Equations, Up: Top
27 Interpolation
****************
This chapter describes functions for performing interpolation. The
library provides a variety of interpolation methods, including cubic
splines and Akima splines. The interpolation types are interchangeable,
allowing different methods to be used without recompiling.
Interpolations can be defined for both normal and periodic boundary
conditions. Additional functions are available for computing
derivatives and integrals of interpolating functions.
These interpolation methods produce curves that pass through each
datapoint. To interpolate noisy data with a smoothing curve see *note
Basis Splines::.
The functions described in this section are declared in the header
files `gsl_interp.h' and `gsl_spline.h'.
* Menu:
* Introduction to Interpolation::
* Interpolation Functions::
* Interpolation Types::
* Index Look-up and Acceleration::
* Evaluation of Interpolating Functions::
* Higher-level Interface::
* Interpolation Example programs::
* Interpolation References and Further Reading::
File: gsl-ref.info, Node: Introduction to Interpolation, Next: Interpolation Functions, Up: Interpolation
27.1 Introduction
=================
Given a set of data points (x_1, y_1) \dots (x_n, y_n) the routines
described in this section compute a continuous interpolating function
y(x) such that y(x_i) = y_i. The interpolation is piecewise smooth,
and its behavior at the end-points is determined by the type of
interpolation used.
File: gsl-ref.info, Node: Interpolation Functions, Next: Interpolation Types, Prev: Introduction to Interpolation, Up: Interpolation
27.2 Interpolation Functions
============================
The interpolation function for a given dataset is stored in a
`gsl_interp' object. These are created by the following functions.
-- Function: gsl_interp * gsl_interp_alloc (const gsl_interp_type * T,
size_t SIZE)
This function returns a pointer to a newly allocated interpolation
object of type T for SIZE data-points.
-- Function: int gsl_interp_init (gsl_interp * INTERP, const double
XA[], const double YA[], size_t SIZE)
This function initializes the interpolation object INTERP for the
data (XA,YA) where XA and YA are arrays of size SIZE. The
interpolation object (`gsl_interp') does not save the data arrays
XA and YA and only stores the static state computed from the data.
The XA data array is always assumed to be strictly ordered, with
increasing x values; the behavior for other arrangements is not
defined.
-- Function: void gsl_interp_free (gsl_interp * INTERP)
This function frees the interpolation object INTERP.
File: gsl-ref.info, Node: Interpolation Types, Next: Index Look-up and Acceleration, Prev: Interpolation Functions, Up: Interpolation
27.3 Interpolation Types
========================
The interpolation library provides six interpolation types:
-- Interpolation Type: gsl_interp_linear
Linear interpolation. This interpolation method does not require
any additional memory.
-- Interpolation Type: gsl_interp_polynomial
Polynomial interpolation. This method should only be used for
interpolating small numbers of points because polynomial
interpolation introduces large oscillations, even for well-behaved
datasets. The number of terms in the interpolating polynomial is
equal to the number of points.
-- Interpolation Type: gsl_interp_cspline
Cubic spline with natural boundary conditions. The resulting
curve is piecewise cubic on each interval, with matching first and
second derivatives at the supplied data-points. The second
derivative is chosen to be zero at the first point and last point.
-- Interpolation Type: gsl_interp_cspline_periodic
Cubic spline with periodic boundary conditions. The resulting
curve is piecewise cubic on each interval, with matching first and
second derivatives at the supplied data-points. The derivatives
at the first and last points are also matched. Note that the last
point in the data must have the same y-value as the first point,
otherwise the resulting periodic interpolation will have a
discontinuity at the boundary.
-- Interpolation Type: gsl_interp_akima
Non-rounded Akima spline with natural boundary conditions. This
method uses the non-rounded corner algorithm of Wodicka.
-- Interpolation Type: gsl_interp_akima_periodic
Non-rounded Akima spline with periodic boundary conditions. This
method uses the non-rounded corner algorithm of Wodicka.
The following related functions are available:
-- Function: const char * gsl_interp_name (const gsl_interp * INTERP)
This function returns the name of the interpolation type used by
INTERP. For example,
printf ("interp uses '%s' interpolation.\n",
gsl_interp_name (interp));
would print something like,
interp uses 'cspline' interpolation.
-- Function: unsigned int gsl_interp_min_size (const gsl_interp *
INTERP)
-- Function: unsigned int gsl_interp_type_min_size (const
gsl_interp_type * T)
These functions return the minimum number of points required by the
interpolation object INTERP or interpolation type T. For example,
Akima spline interpolation requires a minimum of 5 points.
File: gsl-ref.info, Node: Index Look-up and Acceleration, Next: Evaluation of Interpolating Functions, Prev: Interpolation Types, Up: Interpolation
27.4 Index Look-up and Acceleration
===================================
The state of searches can be stored in a `gsl_interp_accel' object,
which is a kind of iterator for interpolation lookups. It caches the
previous value of an index lookup. When the subsequent interpolation
point falls in the same interval its index value can be returned
immediately.
-- Function: size_t gsl_interp_bsearch (const double X_ARRAY[], double
X, size_t INDEX_LO, size_t INDEX_HI)
This function returns the index i of the array X_ARRAY such that
`x_array[i] <= x < x_array[i+1]'. The index is searched for in
the range [INDEX_LO,INDEX_HI]. An inline version of this function
is used when `HAVE_INLINE' is defined.
-- Function: gsl_interp_accel * gsl_interp_accel_alloc (void)
This function returns a pointer to an accelerator object, which is
a kind of iterator for interpolation lookups. It tracks the state
of lookups, thus allowing for application of various acceleration
strategies.
-- Function: size_t gsl_interp_accel_find (gsl_interp_accel * A, const
double X_ARRAY[], size_t SIZE, double X)
This function performs a lookup action on the data array X_ARRAY
of size SIZE, using the given accelerator A. This is how lookups
are performed during evaluation of an interpolation. The function
returns an index i such that `x_array[i] <= x < x_array[i+1]'. An
inline version of this function is used when `HAVE_INLINE' is
defined.
-- Function: int gsl_interp_accel_reset (gsl_interp_accel * ACC);
This function reinitializes the accelerator object ACC. It should
be used when the cached information is no longer applicable--for
example, when switching to a new dataset.
-- Function: void gsl_interp_accel_free (gsl_interp_accel* ACC)
This function frees the accelerator object ACC.
File: gsl-ref.info, Node: Evaluation of Interpolating Functions, Next: Higher-level Interface, Prev: Index Look-up and Acceleration, Up: Interpolation
27.5 Evaluation of Interpolating Functions
==========================================
-- Function: double gsl_interp_eval (const gsl_interp * INTERP, const
double XA[], const double YA[], double X, gsl_interp_accel *
ACC)
-- Function: int gsl_interp_eval_e (const gsl_interp * INTERP, const
double XA[], const double YA[], double X, gsl_interp_accel *
ACC, double * Y)
These functions return the interpolated value of Y for a given
point X, using the interpolation object INTERP, data arrays XA and
YA and the accelerator ACC. When X is outside the range of XA,
the error code `GSL_EDOM' is returned with a value of `GSL_NAN' for
Y.
-- Function: double gsl_interp_eval_deriv (const gsl_interp * INTERP,
const double XA[], const double YA[], double X,
gsl_interp_accel * ACC)
-- Function: int gsl_interp_eval_deriv_e (const gsl_interp * INTERP,
const double XA[], const double YA[], double X,
gsl_interp_accel * ACC, double * D)
These functions return the derivative D of an interpolated
function for a given point X, using the interpolation object
INTERP, data arrays XA and YA and the accelerator ACC.
-- Function: double gsl_interp_eval_deriv2 (const gsl_interp * INTERP,
const double XA[], const double YA[], double X,
gsl_interp_accel * ACC)
-- Function: int gsl_interp_eval_deriv2_e (const gsl_interp * INTERP,
const double XA[], const double YA[], double X,
gsl_interp_accel * ACC, double * D2)
These functions return the second derivative D2 of an interpolated
function for a given point X, using the interpolation object
INTERP, data arrays XA and YA and the accelerator ACC.
-- Function: double gsl_interp_eval_integ (const gsl_interp * INTERP,
const double XA[], const double YA[], double A, double B,
gsl_interp_accel * ACC)
-- Function: int gsl_interp_eval_integ_e (const gsl_interp * INTERP,
const double XA[], const double YA[], double A, double B,
gsl_interp_accel * ACC, double * RESULT)
These functions return the numerical integral RESULT of an
interpolated function over the range [A, B], using the
interpolation object INTERP, data arrays XA and YA and the
accelerator ACC.
File: gsl-ref.info, Node: Higher-level Interface, Next: Interpolation Example programs, Prev: Evaluation of Interpolating Functions, Up: Interpolation
27.6 Higher-level Interface
===========================
The functions described in the previous sections require the user to
supply pointers to the x and y arrays on each call. The following
functions are equivalent to the corresponding `gsl_interp' functions
but maintain a copy of this data in the `gsl_spline' object. This
removes the need to pass both XA and YA as arguments on each
evaluation. These functions are defined in the header file
`gsl_spline.h'.
-- Function: gsl_spline * gsl_spline_alloc (const gsl_interp_type * T,
size_t SIZE)
-- Function: int gsl_spline_init (gsl_spline * SPLINE, const double
XA[], const double YA[], size_t SIZE)
-- Function: void gsl_spline_free (gsl_spline * SPLINE)
-- Function: const char * gsl_spline_name (const gsl_spline * SPLINE)
-- Function: unsigned int gsl_spline_min_size (const gsl_spline *
SPLINE)
-- Function: double gsl_spline_eval (const gsl_spline * SPLINE, double
X, gsl_interp_accel * ACC)
-- Function: int gsl_spline_eval_e (const gsl_spline * SPLINE, double
X, gsl_interp_accel * ACC, double * Y)
-- Function: double gsl_spline_eval_deriv (const gsl_spline * SPLINE,
double X, gsl_interp_accel * ACC)
-- Function: int gsl_spline_eval_deriv_e (const gsl_spline * SPLINE,
double X, gsl_interp_accel * ACC, double * D)
-- Function: double gsl_spline_eval_deriv2 (const gsl_spline * SPLINE,
double X, gsl_interp_accel * ACC)
-- Function: int gsl_spline_eval_deriv2_e (const gsl_spline * SPLINE,
double X, gsl_interp_accel * ACC, double * D2)
-- Function: double gsl_spline_eval_integ (const gsl_spline * SPLINE,
double A, double B, gsl_interp_accel * ACC)
-- Function: int gsl_spline_eval_integ_e (const gsl_spline * SPLINE,
double A, double B, gsl_interp_accel * ACC, double * RESULT)
File: gsl-ref.info, Node: Interpolation Example programs, Next: Interpolation References and Further Reading, Prev: Higher-level Interface, Up: Interpolation
27.7 Examples
=============
The following program demonstrates the use of the interpolation and
spline functions. It computes a cubic spline interpolation of the
10-point dataset (x_i, y_i) where x_i = i + \sin(i)/2 and y_i = i +
\cos(i^2) for i = 0 \dots 9.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>

int
main (void)
{
  int i;
  double xi, yi, x[10], y[10];

  printf ("#m=0,S=2\n");

  for (i = 0; i < 10; i++)
    {
      x[i] = i + 0.5 * sin (i);
      y[i] = i + cos (i * i);
      printf ("%g %g\n", x[i], y[i]);
    }

  printf ("#m=1,S=0\n");

  {
    gsl_interp_accel *acc
      = gsl_interp_accel_alloc ();
    gsl_spline *spline
      = gsl_spline_alloc (gsl_interp_cspline, 10);

    gsl_spline_init (spline, x, y, 10);

    for (xi = x[0]; xi < x[9]; xi += 0.01)
      {
        yi = gsl_spline_eval (spline, xi, acc);
        printf ("%g %g\n", xi, yi);
      }
    gsl_spline_free (spline);
    gsl_interp_accel_free (acc);
  }
  return 0;
}
The output is designed to be used with the GNU plotutils `graph'
program,
$ ./a.out > interp.dat
$ graph -T ps < interp.dat > interp.ps
The result shows a smooth interpolation of the original points. The
interpolation method can be changed simply by varying the first
argument of `gsl_spline_alloc'.
The next program demonstrates a periodic cubic spline with 4 data
points. Note that the first and last points must be supplied with the
same y-value for a periodic spline.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>

int
main (void)
{
  int N = 4;
  double x[4] = {0.00, 0.10, 0.27, 0.30};
  double y[4] = {0.15, 0.70, -0.10, 0.15};
  /* Note: y[0] == y[3] for periodic data */

  gsl_interp_accel *acc = gsl_interp_accel_alloc ();
  const gsl_interp_type *t = gsl_interp_cspline_periodic;
  gsl_spline *spline = gsl_spline_alloc (t, N);

  int i; double xi, yi;

  printf ("#m=0,S=5\n");
  for (i = 0; i < N; i++)
    {
      printf ("%g %g\n", x[i], y[i]);
    }

  printf ("#m=1,S=0\n");
  gsl_spline_init (spline, x, y, N);

  for (i = 0; i <= 100; i++)
    {
      xi = (1 - i / 100.0) * x[0] + (i / 100.0) * x[N-1];
      yi = gsl_spline_eval (spline, xi, acc);
      printf ("%g %g\n", xi, yi);
    }

  gsl_spline_free (spline);
  gsl_interp_accel_free (acc);
  return 0;
}
The output can be plotted with GNU `graph'.
$ ./a.out > interp.dat
$ graph -T ps < interp.dat > interp.ps
The result shows a periodic interpolation of the original points. The
slope of the fitted curve is the same at the beginning and end of the
data, as is the second derivative.
File: gsl-ref.info, Node: Interpolation References and Further Reading, Prev: Interpolation Example programs, Up: Interpolation
27.8 References and Further Reading
===================================
Descriptions of the interpolation algorithms and further references can
be found in the following books:
C.W. Ueberhuber, `Numerical Computation (Volume 1), Chapter 9
"Interpolation"', Springer (1997), ISBN 3-540-62058-3.
D.M. Young, R.T. Gregory, `A Survey of Numerical Mathematics
(Volume 1), Chapter 6.8', Dover (1988), ISBN 0-486-65691-8.
File: gsl-ref.info, Node: Numerical Differentiation, Next: Chebyshev Approximations, Prev: Interpolation, Up: Top
28 Numerical Differentiation
****************************
The functions described in this chapter compute numerical derivatives by
finite differencing. An adaptive algorithm is used to find the best
choice of finite difference and to estimate the error in the derivative.
These functions are declared in the header file `gsl_deriv.h'.
* Menu:
* Numerical Differentiation functions::
* Numerical Differentiation Examples::
* Numerical Differentiation References::
File: gsl-ref.info, Node: Numerical Differentiation functions, Next: Numerical Differentiation Examples, Up: Numerical Differentiation
28.1 Functions
==============
-- Function: int gsl_deriv_central (const gsl_function * F, double X,
double H, double * RESULT, double * ABSERR)
This function computes the numerical derivative of the function F
at the point X using an adaptive central difference algorithm with
a step-size of H. The derivative is returned in RESULT and an
estimate of its absolute error is returned in ABSERR.
The initial value of H is used to estimate an optimal step-size,
based on the scaling of the truncation error and round-off error
in the derivative calculation. The derivative is computed using a
5-point rule for equally spaced abscissae at x-h, x-h/2, x, x+h/2,
x+h, with an error estimate taken from the difference between the
5-point rule and the corresponding 3-point rule x-h, x, x+h. Note
that the value of the function at x does not contribute to the
derivative calculation, so only 4 points are actually used.
-- Function: int gsl_deriv_forward (const gsl_function * F, double X,
double H, double * RESULT, double * ABSERR)
This function computes the numerical derivative of the function F
at the point X using an adaptive forward difference algorithm with
a step-size of H. The function is evaluated only at points greater
than X, and never at X itself. The derivative is returned in
RESULT and an estimate of its absolute error is returned in
ABSERR. This function should be used if f(x) has a discontinuity
at X, or is undefined for values less than X.
The initial value of H is used to estimate an optimal step-size,
based on the scaling of the truncation error and round-off error
in the derivative calculation. The derivative at x is computed
using an "open" 4-point rule for equally spaced abscissae at x+h/4,
x+h/2, x+3h/4, x+h, with an error estimate taken from the
difference between the 4-point rule and the corresponding 2-point
rule x+h/2, x+h.
-- Function: int gsl_deriv_backward (const gsl_function * F, double X,
double H, double * RESULT, double * ABSERR)
This function computes the numerical derivative of the function F
at the point X using an adaptive backward difference algorithm
with a step-size of H. The function is evaluated only at points
less than X, and never at X itself. The derivative is returned in
RESULT and an estimate of its absolute error is returned in
ABSERR. This function should be used if f(x) has a discontinuity
at X, or is undefined for values greater than X.
This function is equivalent to calling `gsl_deriv_forward' with a
negative step-size.
File: gsl-ref.info, Node: Numerical Differentiation Examples, Next: Numerical Differentiation References, Prev: Numerical Differentiation functions, Up: Numerical Differentiation
28.2 Examples
=============
The following code estimates the derivative of the function f(x) =
x^{3/2} at x=2 and at x=0. The function f(x) is undefined for x<0 so
the derivative at x=0 is computed using `gsl_deriv_forward'.
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_deriv.h>
double f (double x, void * params)
{
return pow (x, 1.5);
}
int
main (void)
{
gsl_function F;
double result, abserr;
F.function = &f;
F.params = 0;
printf ("f(x) = x^(3/2)\n");
gsl_deriv_central (&F, 2.0, 1e-8, &result, &abserr);
printf ("x = 2.0\n");
printf ("f'(x) = %.10f +/- %.10f\n", result, abserr);
printf ("exact = %.10f\n\n", 1.5 * sqrt(2.0));
gsl_deriv_forward (&F, 0.0, 1e-8, &result, &abserr);
printf ("x = 0.0\n");
printf ("f'(x) = %.10f +/- %.10f\n", result, abserr);
printf ("exact = %.10f\n", 0.0);
return 0;
}
Here is the output of the program,
$ ./a.out
f(x) = x^(3/2)
x = 2.0
f'(x) = 2.1213203120 +/- 0.0000004064
exact = 2.1213203436
x = 0.0
f'(x) = 0.0000000160 +/- 0.0000000339
exact = 0.0000000000
File: gsl-ref.info, Node: Numerical Differentiation References, Prev: Numerical Differentiation Examples, Up: Numerical Differentiation
28.3 References and Further Reading
===================================
The algorithms used by these functions are described in the following
sources:
Abramowitz and Stegun, `Handbook of Mathematical Functions',
Section 25.3.4, and Table 25.5 (Coefficients for Differentiation).
S.D. Conte and Carl de Boor, `Elementary Numerical Analysis: An
Algorithmic Approach', McGraw-Hill, 1972.
File: gsl-ref.info, Node: Chebyshev Approximations, Next: Series Acceleration, Prev: Numerical Differentiation, Up: Top
29 Chebyshev Approximations
***************************
This chapter describes routines for computing Chebyshev approximations
to univariate functions. A Chebyshev approximation is a truncation of
the series f(x) = \sum c_n T_n(x), where the Chebyshev polynomials
T_n(x) = \cos(n \arccos x) provide an orthogonal basis of polynomials
on the interval [-1,1] with the weight function 1 / \sqrt{1-x^2}. The
first few Chebyshev polynomials are, T_0(x) = 1, T_1(x) = x, T_2(x) = 2
x^2 - 1. For further information see Abramowitz & Stegun, Chapter 22.
The functions described in this chapter are declared in the header
file `gsl_chebyshev.h'.
* Menu:
* Chebyshev Definitions::
* Creation and Calculation of Chebyshev Series::
* Auxiliary Functions for Chebyshev Series::
* Chebyshev Series Evaluation::
* Derivatives and Integrals::
* Chebyshev Approximation Examples::
* Chebyshev Approximation References and Further Reading::
File: gsl-ref.info, Node: Chebyshev Definitions, Next: Creation and Calculation of Chebyshev Series, Up: Chebyshev Approximations
29.1 Definitions
================
A Chebyshev series is stored using the following structure,
typedef struct
{
  double * c;   /* coefficients c[0] .. c[order] */
  int order;    /* order of expansion            */
  double a;     /* lower interval point          */
  double b;     /* upper interval point          */
  ...
} gsl_cheb_series;
The approximation is made over the range [a,b] using ORDER+1 terms,
including the coefficient c[0]. The series is computed using the
following convention,
f(x) = (c_0 / 2) + \sum_{n=1} c_n T_n(x)
which is needed when accessing the coefficients directly.
File: gsl-ref.info, Node: Creation and Calculation of Chebyshev Series, Next: Auxiliary Functions for Chebyshev Series, Prev: Chebyshev Definitions, Up: Chebyshev Approximations
29.2 Creation and Calculation of Chebyshev Series
=================================================
-- Function: gsl_cheb_series * gsl_cheb_alloc (const size_t N)
This function allocates space for a Chebyshev series of order N
and returns a pointer to a new `gsl_cheb_series' struct.
-- Function: void gsl_cheb_free (gsl_cheb_series * CS)
This function frees a previously allocated Chebyshev series CS.
-- Function: int gsl_cheb_init (gsl_cheb_series * CS, const
gsl_function * F, const double A, const double B)
This function computes the Chebyshev approximation CS for the
function F over the range (a,b) to the previously specified order.
The computation of the Chebyshev approximation is an O(n^2)
process, and requires n function evaluations.
File: gsl-ref.info, Node: Auxiliary Functions for Chebyshev Series, Next: Chebyshev Series Evaluation, Prev: Creation and Calculation of Chebyshev Series, Up: Chebyshev Approximations
29.3 Auxiliary Functions
========================
The following functions provide information about an existing Chebyshev
series.
-- Function: size_t gsl_cheb_order (const gsl_cheb_series * CS)
This function returns the order of Chebyshev series CS.
-- Function: size_t gsl_cheb_size (const gsl_cheb_series * CS)
-- Function: double * gsl_cheb_coeffs (const gsl_cheb_series * CS)
These functions return the size of the Chebyshev coefficient array
`c[]' and a pointer to its location in memory for the Chebyshev
series CS.
File: gsl-ref.info, Node: Chebyshev Series Evaluation, Next: Derivatives and Integrals, Prev: Auxiliary Functions for Chebyshev Series, Up: Chebyshev Approximations
29.4 Chebyshev Series Evaluation
================================
-- Function: double gsl_cheb_eval (const gsl_cheb_series * CS, double
X)
This function evaluates the Chebyshev series CS at a given point X.
-- Function: int gsl_cheb_eval_err (const gsl_cheb_series * CS, const
double X, double * RESULT, double * ABSERR)
This function computes the Chebyshev series CS at a given point X,
estimating both the series RESULT and its absolute error ABSERR.
The error estimate is made from the first neglected term in the
series.
-- Function: double gsl_cheb_eval_n (const gsl_cheb_series * CS,
size_t ORDER, double X)
This function evaluates the Chebyshev series CS at a given point
X, to (at most) the given order ORDER.
-- Function: int gsl_cheb_eval_n_err (const gsl_cheb_series * CS,
const size_t ORDER, const double X, double * RESULT, double *
ABSERR)
This function evaluates a Chebyshev series CS at a given point X,
estimating both the series RESULT and its absolute error ABSERR,
to (at most) the given order ORDER. The error estimate is made
from the first neglected term in the series.
File: gsl-ref.info, Node: Derivatives and Integrals, Next: Chebyshev Approximation Examples, Prev: Chebyshev Series Evaluation, Up: Chebyshev Approximations
29.5 Derivatives and Integrals
==============================
The following functions allow a Chebyshev series to be differentiated or
integrated, producing a new Chebyshev series. Note that the error
estimate produced by evaluating the derivative series will be
underestimated due to the contribution of higher order terms being
neglected.
-- Function: int gsl_cheb_calc_deriv (gsl_cheb_series * DERIV, const
gsl_cheb_series * CS)
This function computes the derivative of the series CS, storing
the derivative coefficients in the previously allocated DERIV.
The two series CS and DERIV must have been allocated with the same
order.
-- Function: int gsl_cheb_calc_integ (gsl_cheb_series * INTEG, const
gsl_cheb_series * CS)
This function computes the integral of the series CS, storing the
integral coefficients in the previously allocated INTEG. The two
series CS and INTEG must have been allocated with the same order.
The lower limit of the integration is taken to be the left hand
end of the range A.
File: gsl-ref.info, Node: Chebyshev Approximation Examples, Next: Chebyshev Approximation References and Further Reading, Prev: Derivatives and Integrals, Up: Chebyshev Approximations
29.6 Examples
=============
The following example program computes Chebyshev approximations to a
step function. This is an extremely difficult approximation to make,
due to the discontinuity, and was chosen as an example where
approximation error is visible. For smooth functions the Chebyshev
approximation converges extremely rapidly and errors would not be
visible.
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_chebyshev.h>
double
f (double x, void *p)
{
if (x < 0.5)
return 0.25;
else
return 0.75;
}
int
main (void)
{
int i, n = 10000;
gsl_cheb_series *cs = gsl_cheb_alloc (40);
gsl_function F;
F.function = f;
F.params = 0;
gsl_cheb_init (cs, &F, 0.0, 1.0);
for (i = 0; i < n; i++)
{
double x = i / (double)n;
double r10 = gsl_cheb_eval_n (cs, 10, x);
double r40 = gsl_cheb_eval (cs, x);
printf ("%g %g %g %g\n",
x, GSL_FN_EVAL (&F, x), r10, r40);
}
gsl_cheb_free (cs);
return 0;
}
The output from the program gives the original function, the 10th-order
approximation and the 40th-order approximation, all sampled at
intervals of 0.0001 in x.
File: gsl-ref.info, Node: Chebyshev Approximation References and Further Reading, Prev: Chebyshev Approximation Examples, Up: Chebyshev Approximations
29.7 References and Further Reading
===================================
The following paper describes the use of Chebyshev series,
R. Broucke, "Ten Subroutines for the Manipulation of Chebyshev
Series [C1] (Algorithm 446)". `Communications of the ACM' 16(4),
254-256 (1973)
File: gsl-ref.info, Node: Series Acceleration, Next: Wavelet Transforms, Prev: Chebyshev Approximations, Up: Top
30 Series Acceleration
**********************
The functions described in this chapter accelerate the convergence of a
series using the Levin u-transform. This method takes a small number of
terms from the start of a series and uses a systematic approximation to
compute an extrapolated value and an estimate of its error. The
u-transform works for both convergent and divergent series, including
asymptotic series.
These functions are declared in the header file `gsl_sum.h'.
* Menu:
* Acceleration functions::
* Acceleration functions without error estimation::
* Example of accelerating a series::
* Series Acceleration References::
File: gsl-ref.info, Node: Acceleration functions, Next: Acceleration functions without error estimation, Up: Series Acceleration
30.1 Acceleration functions
===========================
The following functions compute the full Levin u-transform of a series
with its error estimate. The error estimate is computed by propagating
rounding errors from each term through to the final extrapolation.
These functions are intended for summing analytic series where each
term is known to high accuracy, and the rounding errors are assumed to
originate from finite precision. They are taken to be relative errors of
order `GSL_DBL_EPSILON' for each term.
The calculation of the error in the extrapolated value is an O(N^2)
process, which is expensive in time and memory. A faster but less
reliable method which estimates the error from the convergence of the
extrapolated value is described in the next section. For the method
described here a full table of intermediate values and derivatives
through to O(N) must be computed and stored, but this does give a
reliable error estimate.
-- Function: gsl_sum_levin_u_workspace * gsl_sum_levin_u_alloc (size_t
N)
This function allocates a workspace for a Levin u-transform of N
terms. The size of the workspace is O(2n^2 + 3n).
-- Function: void gsl_sum_levin_u_free (gsl_sum_levin_u_workspace * W)
This function frees the memory associated with the workspace W.
-- Function: int gsl_sum_levin_u_accel (const double * ARRAY, size_t
ARRAY_SIZE, gsl_sum_levin_u_workspace * W, double *
SUM_ACCEL, double * ABSERR)
This function takes the terms of a series in ARRAY of size
ARRAY_SIZE and computes the extrapolated limit of the series using
a Levin u-transform. Additional working space must be provided in
W. The extrapolated sum is stored in SUM_ACCEL, with an estimate
of the absolute error stored in ABSERR. The actual term-by-term
sum is returned in `w->sum_plain'. The algorithm calculates the
truncation error (the difference between two successive
extrapolations) and round-off error (propagated from the individual
terms) to choose an optimal number of terms for the extrapolation.
All the terms of the series passed in through ARRAY should be
non-zero.
File: gsl-ref.info, Node: Acceleration functions without error estimation, Next: Example of accelerating a series, Prev: Acceleration functions, Up: Series Acceleration
30.2 Acceleration functions without error estimation
====================================================
The functions described in this section compute the Levin u-transform of
series and attempt to estimate the error from the "truncation error" in
the extrapolation, the difference between the final two approximations.
Using this method avoids the need to compute an intermediate table of
derivatives because the error is estimated from the behavior of the
extrapolated value itself. Consequently this algorithm is an O(N)
process and only requires O(N) terms of storage. If the series
converges sufficiently fast then this procedure can be acceptable. It
is appropriate to use this method when there is a need to compute many
extrapolations of series with similar convergence properties at
high-speed. For example, when numerically integrating a function
defined by a parameterized series where the parameter varies only
slightly. A reliable error estimate should be computed first using the
full algorithm described above in order to verify the consistency of the
results.
-- Function: gsl_sum_levin_utrunc_workspace *
gsl_sum_levin_utrunc_alloc (size_t N)
This function allocates a workspace for a Levin u-transform of N
terms, without error estimation. The size of the workspace is
O(3n).
-- Function: void gsl_sum_levin_utrunc_free
(gsl_sum_levin_utrunc_workspace * W)
This function frees the memory associated with the workspace W.
-- Function: int gsl_sum_levin_utrunc_accel (const double * ARRAY,
size_t ARRAY_SIZE, gsl_sum_levin_utrunc_workspace * W, double
* SUM_ACCEL, double * ABSERR_TRUNC)
This function takes the terms of a series in ARRAY of size
ARRAY_SIZE and computes the extrapolated limit of the series using
a Levin u-transform. Additional working space must be provided in
W. The extrapolated sum is stored in SUM_ACCEL. The actual
term-by-term sum is returned in `w->sum_plain'. The algorithm
terminates when the difference between two successive
extrapolations reaches a minimum or is sufficiently small. The
difference between these two values is used as an estimate of the
error and is stored in ABSERR_TRUNC. To improve the reliability
of the algorithm the extrapolated values are replaced by moving
averages when calculating the truncation error, smoothing out any
fluctuations.
File: gsl-ref.info, Node: Example of accelerating a series, Next: Series Acceleration References, Prev: Acceleration functions without error estimation, Up: Series Acceleration
30.3 Examples
=============
The following code calculates an estimate of \zeta(2) = \pi^2 / 6 using
the series,
\zeta(2) = 1 + 1/2^2 + 1/3^2 + 1/4^2 + ...
After N terms the error in the sum is O(1/N), making direct summation
of the series converge slowly.
#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_sum.h>
#define N 20
int
main (void)
{
double t[N];
double sum_accel, err;
double sum = 0;
int n;
gsl_sum_levin_u_workspace * w
= gsl_sum_levin_u_alloc (N);
const double zeta_2 = M_PI * M_PI / 6.0;
/* terms for zeta(2) = \sum_{n=1}^{\infty} 1/n^2 */
for (n = 0; n < N; n++)
{
double np1 = n + 1.0;
t[n] = 1.0 / (np1 * np1);
sum += t[n];
}
gsl_sum_levin_u_accel (t, N, w, &sum_accel, &err);
printf ("term-by-term sum = % .16f using %d terms\n",
sum, N);
printf ("term-by-term sum = % .16f using %d terms\n",
w->sum_plain, w->terms_used);
printf ("exact value = % .16f\n", zeta_2);
printf ("accelerated sum = % .16f using %d terms\n",
sum_accel, w->terms_used);
printf ("estimated error = % .16f\n", err);
printf ("actual error = % .16f\n",
sum_accel - zeta_2);
gsl_sum_levin_u_free (w);
return 0;
}
The output below shows that the Levin u-transform is able to obtain an
estimate of the sum to 1 part in 10^10 using the first 13 terms of
the series. The error estimate returned by the function is also
accurate, giving the correct number of significant digits.
$ ./a.out
term-by-term sum = 1.5961632439130233 using 20 terms
term-by-term sum = 1.5759958390005426 using 13 terms
exact value = 1.6449340668482264
accelerated sum = 1.6449340668166479 using 13 terms
estimated error = 0.0000000000508580
actual error = -0.0000000000315785
Note that a direct summation of this series would require 10^10 terms
to achieve the same precision as the accelerated sum does in 13 terms.
File: gsl-ref.info, Node: Series Acceleration References, Prev: Example of accelerating a series, Up: Series Acceleration
30.4 References and Further Reading
===================================
The algorithms used by these functions are described in the following
papers,
T. Fessler, W.F. Ford, D.A. Smith, HURRY: An acceleration
algorithm for scalar sequences and series `ACM Transactions on
Mathematical Software', 9(3):346-354, 1983. and Algorithm 602
9(3):355-357, 1983.
The theory of the u-transform was presented by Levin,
D. Levin, Development of Non-Linear Transformations for Improving
Convergence of Sequences, `Intern. J. Computer Math.' B3:371-388,
1973.
A review paper on the Levin Transform is available online,
Herbert H. H. Homeier, Scalar Levin-Type Sequence Transformations,
`http://arxiv.org/abs/math/0005209'.
File: gsl-ref.info, Node: Wavelet Transforms, Next: Discrete Hankel Transforms, Prev: Series Acceleration, Up: Top
31 Wavelet Transforms
*********************
This chapter describes functions for performing Discrete Wavelet
Transforms (DWTs). The library includes wavelets for real data in both
one and two dimensions. The wavelet functions are declared in the
header files `gsl_wavelet.h' and `gsl_wavelet2d.h'.
* Menu:
* DWT Definitions::
* DWT Initialization::
* DWT Transform Functions::
* DWT Examples::
* DWT References::
File: gsl-ref.info, Node: DWT Definitions, Next: DWT Initialization, Up: Wavelet Transforms
31.1 Definitions
================
The continuous wavelet transform and its inverse are defined by the
relations,
w(s,\tau) = \int f(t) * \psi^*_{s,\tau}(t) dt
and,
f(t) = \int \int_{-\infty}^\infty w(s, \tau) * \psi_{s,\tau}(t) d\tau ds
where the basis functions \psi_{s,\tau} are obtained by scaling and
translation from a single function, referred to as the "mother wavelet".
The discrete version of the wavelet transform acts on equally-spaced
samples, with fixed scaling and translation steps (s, \tau). The
frequency and time axes are sampled "dyadically" on scales of 2^j
through a level parameter j. The resulting family of functions
{\psi_{j,n}} constitutes an orthonormal basis for square-integrable
signals.
The discrete wavelet transform is an O(N) algorithm, and is also
referred to as the "fast wavelet transform".
File: gsl-ref.info, Node: DWT Initialization, Next: DWT Transform Functions, Prev: DWT Definitions, Up: Wavelet Transforms
31.2 Initialization
===================
The `gsl_wavelet' structure contains the filter coefficients defining
the wavelet and any associated offset parameters.
-- Function: gsl_wavelet * gsl_wavelet_alloc (const gsl_wavelet_type *
T, size_t K)
This function allocates and initializes a wavelet object of type
T. The parameter K selects the specific member of the wavelet
family. A null pointer is returned if insufficient memory is
available or if an unsupported member is selected.
The following wavelet types are implemented:
-- Wavelet: gsl_wavelet_daubechies
-- Wavelet: gsl_wavelet_daubechies_centered
This is the Daubechies wavelet family of maximum phase with k/2
vanishing moments. The implemented wavelets are k=4, 6, ..., 20,
with K even.
-- Wavelet: gsl_wavelet_haar
-- Wavelet: gsl_wavelet_haar_centered
This is the Haar wavelet. The only valid choice of k for the Haar
wavelet is k=2.
-- Wavelet: gsl_wavelet_bspline
-- Wavelet: gsl_wavelet_bspline_centered
This is the biorthogonal B-spline wavelet family of order (i,j).
The implemented values of k = 100*i + j are 103, 105, 202, 204,
206, 208, 301, 303, 305, 307, 309.
The centered forms of the wavelets align the coefficients of the various
sub-bands on edges. Thus the resulting visualization of the
coefficients of the wavelet transform in the phase plane is easier to
understand.
-- Function: const char * gsl_wavelet_name (const gsl_wavelet * W)
This function returns a pointer to the name of the wavelet family
for W.
-- Function: void gsl_wavelet_free (gsl_wavelet * W)
This function frees the wavelet object W.
The `gsl_wavelet_workspace' structure contains scratch space of the
same size as the input data and is used to hold intermediate results
during the transform.
-- Function: gsl_wavelet_workspace * gsl_wavelet_workspace_alloc
(size_t N)
This function allocates a workspace for the discrete wavelet
transform. To perform a one-dimensional transform on N elements,
a workspace of size N must be provided. For two-dimensional
transforms of N-by-N matrices it is sufficient to allocate a
workspace of size N, since the transform operates on individual
rows and columns. A null pointer is returned if insufficient
memory is available.
-- Function: void gsl_wavelet_workspace_free (gsl_wavelet_workspace *
WORK)
This function frees the allocated workspace WORK.
File: gsl-ref.info, Node: DWT Transform Functions, Next: DWT Examples, Prev: DWT Initialization, Up: Wavelet Transforms
31.3 Transform Functions
========================
This section describes the actual functions performing the discrete
wavelet transform. Note that the transforms use periodic boundary
conditions. If the signal is not periodic in the sample length then
spurious coefficients will appear at the beginning and end of each level
of the transform.
* Menu:
* DWT in one dimension::
* DWT in two dimension::
File: gsl-ref.info, Node: DWT in one dimension, Next: DWT in two dimension, Up: DWT Transform Functions
31.3.1 Wavelet transforms in one dimension
------------------------------------------
-- Function: int gsl_wavelet_transform (const gsl_wavelet * W, double
* DATA, size_t STRIDE, size_t N, gsl_wavelet_direction DIR,
gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet_transform_forward (const gsl_wavelet * W,
double * DATA, size_t STRIDE, size_t N, gsl_wavelet_workspace
* WORK)
-- Function: int gsl_wavelet_transform_inverse (const gsl_wavelet * W,
double * DATA, size_t STRIDE, size_t N, gsl_wavelet_workspace
* WORK)
These functions compute in-place forward and inverse discrete
wavelet transforms of length N with stride STRIDE on the array
DATA. The length of the transform N is restricted to powers of
two. For the `transform' version of the function the argument DIR
can be either `forward' (+1) or `backward' (-1). A workspace WORK
of length N must be provided.
For the forward transform, the elements of the original array are
replaced by the discrete wavelet transform f_i -> w_{j,k} in a
packed triangular storage layout, where J is the index of the level
j = 0 ... J-1 and K is the index of the coefficient within each
level, k = 0 ... (2^j)-1. The total number of levels is J =
\log_2(n). The output data has the following form,
(s_{-1,0}, d_{0,0}, d_{1,0}, d_{1,1}, d_{2,0}, ...,
d_{j,k}, ..., d_{J-1,2^{J-1}-1})
where the first element is the smoothing coefficient s_{-1,0},
followed by the detail coefficients d_{j,k} for each level j. The
backward transform inverts these coefficients to obtain the
original data.
These functions return a status of `GSL_SUCCESS' upon successful
completion. `GSL_EINVAL' is returned if N is not an integer power
of 2 or if insufficient workspace is provided.
File: gsl-ref.info, Node: DWT in two dimension, Prev: DWT in one dimension, Up: DWT Transform Functions
31.3.2 Wavelet transforms in two dimensions
-------------------------------------------
The library provides functions to perform two-dimensional discrete
wavelet transforms on square matrices. The matrix dimensions must be an
integer power of two. There are two possible orderings of the rows and
columns in the two-dimensional wavelet transform, referred to as the
"standard" and "non-standard" forms.
The "standard" transform performs a complete discrete wavelet
transform on the rows of the matrix, followed by a separate complete
discrete wavelet transform on the columns of the resulting
row-transformed matrix. This procedure uses the same ordering as a
two-dimensional Fourier transform.
The "non-standard" transform is performed in interleaved passes on
the rows and columns of the matrix for each level of the transform. The
first level of the transform is applied to the matrix rows, and then to
the matrix columns. This procedure is then repeated across the rows and
columns of the data for the subsequent levels of the transform, until
the full discrete wavelet transform is complete. The non-standard form
of the discrete wavelet transform is typically used in image analysis.
The functions described in this section are declared in the header
file `gsl_wavelet2d.h'.
-- Function: int gsl_wavelet2d_transform (const gsl_wavelet * W,
double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
gsl_wavelet_direction DIR, gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_transform_forward (const gsl_wavelet *
W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_transform_inverse (const gsl_wavelet *
W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
gsl_wavelet_workspace * WORK)
These functions compute two-dimensional in-place forward and
inverse discrete wavelet transforms in standard form on the array
DATA stored in row-major form with dimensions SIZE1 and SIZE2 and
physical row length TDA. The dimensions must be equal (square
matrix) and are restricted to powers of two. For the `transform'
version of the function the argument DIR can be either `forward'
(+1) or `backward' (-1). A workspace WORK of the appropriate size
must be provided. On exit, the appropriate elements of the array
DATA are replaced by their two-dimensional wavelet transform.
The functions return a status of `GSL_SUCCESS' upon successful
completion. `GSL_EINVAL' is returned if SIZE1 and SIZE2 are not
equal and integer powers of 2, or if insufficient workspace is
provided.
-- Function: int gsl_wavelet2d_transform_matrix (const gsl_wavelet *
W, gsl_matrix * M, gsl_wavelet_direction DIR,
gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_transform_matrix_forward (const
gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_transform_matrix_inverse (const
gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
These functions compute the two-dimensional in-place wavelet
transform on the matrix M.
-- Function: int gsl_wavelet2d_nstransform (const gsl_wavelet * W,
double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
gsl_wavelet_direction DIR, gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_nstransform_forward (const gsl_wavelet
* W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_nstransform_inverse (const gsl_wavelet
* W, double * DATA, size_t TDA, size_t SIZE1, size_t SIZE2,
gsl_wavelet_workspace * WORK)
These functions compute the two-dimensional wavelet transform in
non-standard form.
-- Function: int gsl_wavelet2d_nstransform_matrix (const gsl_wavelet *
W, gsl_matrix * M, gsl_wavelet_direction DIR,
gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_nstransform_matrix_forward (const
gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
-- Function: int gsl_wavelet2d_nstransform_matrix_inverse (const
gsl_wavelet * W, gsl_matrix * M, gsl_wavelet_workspace * WORK)
These functions compute the non-standard form of the
two-dimensional in-place wavelet transform on the matrix M.
File: gsl-ref.info, Node: DWT Examples, Next: DWT References, Prev: DWT Transform Functions, Up: Wavelet Transforms
31.4 Examples
=============
The following program demonstrates the use of the one-dimensional
wavelet transform functions. It computes an approximation to an input
signal (of length 256) using the 20 largest components of the wavelet
transform, while setting the others to zero.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <gsl/gsl_sort.h>
#include <gsl/gsl_wavelet.h>
int
main (int argc, char **argv)
{
int i, n = 256, nc = 20;
double *data = malloc (n * sizeof (double));
double *abscoeff = malloc (n * sizeof (double));
size_t *p = malloc (n * sizeof (size_t));
FILE * f;
gsl_wavelet *w;
gsl_wavelet_workspace *work;
w = gsl_wavelet_alloc (gsl_wavelet_daubechies, 4);
work = gsl_wavelet_workspace_alloc (n);
f = fopen (argv[1], "r");
for (i = 0; i < n; i++)
{
fscanf (f, "%lg", &data[i]);
}
fclose (f);
gsl_wavelet_transform_forward (w, data, 1, n, work);
for (i = 0; i < n; i++)
{
abscoeff[i] = fabs (data[i]);
}
gsl_sort_index (p, abscoeff, 1, n);
for (i = 0; (i + nc) < n; i++)
data[p[i]] = 0;
gsl_wavelet_transform_inverse (w, data, 1, n, work);
for (i = 0; i < n; i++)
{
printf ("%g\n", data[i]);
}
gsl_wavelet_free (w);
gsl_wavelet_workspace_free (work);
free (data);
free (abscoeff);
free (p);
return 0;
}
The output can be used with the GNU plotutils `graph' program,
$ ./a.out ecg.dat > dwt.dat
$ graph -T ps -x 0 256 32 -h 0.3 -a dwt.dat > dwt.ps
File: gsl-ref.info, Node: DWT References, Prev: DWT Examples, Up: Wavelet Transforms
31.5 References and Further Reading
===================================
The mathematical background to wavelet transforms is covered in the
original lectures by Daubechies,
Ingrid Daubechies. Ten Lectures on Wavelets. `CBMS-NSF Regional
Conference Series in Applied Mathematics' (1992), SIAM, ISBN
0898712742.
An easy to read introduction to the subject with an emphasis on the
application of the wavelet transform in various branches of science is,
Paul S. Addison. `The Illustrated Wavelet Transform Handbook'.
Institute of Physics Publishing (2002), ISBN 0750306920.
For extensive coverage of signal analysis by wavelets, wavelet packets
and local cosine bases see,
S. G. Mallat. `A wavelet tour of signal processing' (Second
edition). Academic Press (1999), ISBN 012466606X.
The concept of multiresolution analysis underlying the wavelet transform
is described in,
S. G. Mallat. Multiresolution Approximations and Wavelet
Orthonormal Bases of L^2(R). `Transactions of the American
Mathematical Society', 315(1), 1989, 69-87.
S. G. Mallat. A Theory for Multiresolution Signal
Decomposition--The Wavelet Representation. `IEEE Transactions on
Pattern Analysis and Machine Intelligence', 11, 1989, 674-693.
The coefficients for the individual wavelet families implemented by the
library can be found in the following papers,
I. Daubechies. Orthonormal Bases of Compactly Supported Wavelets.
`Communications on Pure and Applied Mathematics', 41 (1988)
909-996.
A. Cohen, I. Daubechies, and J.-C. Feauveau. Biorthogonal Bases
of Compactly Supported Wavelets. `Communications on Pure and
Applied Mathematics', 45 (1992) 485-560.
The PhysioNet archive of physiological datasets can be found online at
`http://www.physionet.org/' and is described in the following paper,
Goldberger et al. PhysioBank, PhysioToolkit, and PhysioNet:
Components of a New Research Resource for Complex Physiologic
Signals. `Circulation' 101(23):e215-e220 2000.
File: gsl-ref.info, Node: Discrete Hankel Transforms, Next: One dimensional Root-Finding, Prev: Wavelet Transforms, Up: Top
32 Discrete Hankel Transforms
*****************************
This chapter describes functions for performing Discrete Hankel
Transforms (DHTs). The functions are declared in the header file
`gsl_dht.h'.
* Menu:
* Discrete Hankel Transform Definition::
* Discrete Hankel Transform Functions::
* Discrete Hankel Transform References::
File: gsl-ref.info, Node: Discrete Hankel Transform Definition, Next: Discrete Hankel Transform Functions, Up: Discrete Hankel Transforms
32.1 Definitions
================
The discrete Hankel transform acts on a vector of sampled data, where
the samples are assumed to have been taken at points related to the
zeroes of a Bessel function of fixed order; compare this to the case of
the discrete Fourier transform, where samples are taken at points
related to the zeroes of the sine or cosine function.
Specifically, let f(t) be a function on the unit interval and
j_(\nu,m) the m-th zero of the Bessel function J_\nu(x). Then the
finite \nu-Hankel transform of f(t) is defined to be the set of numbers
g_m given by,
g_m = \int_0^1 t dt J_\nu(j_(\nu,m)t) f(t),
so that,
f(t) = \sum_{m=1}^\infty (2 J_\nu(j_(\nu,m)t) / J_(\nu+1)(j_(\nu,m))^2) g_m.
Suppose that f is band-limited in the sense that g_m=0 for m > M. Then
we have the following fundamental sampling theorem.
g_m = (2 / j_(\nu,M)^2)
\sum_{k=1}^{M-1} f(j_(\nu,k)/j_(\nu,M))
(J_\nu(j_(\nu,m) j_(\nu,k) / j_(\nu,M)) / J_(\nu+1)(j_(\nu,k))^2).
It is this discrete expression which defines the discrete Hankel
transform. The kernel in the summation above defines the matrix of the
\nu-Hankel transform of size M-1. The coefficients of this matrix,
being dependent on \nu and M, must be precomputed and stored; the
`gsl_dht' object encapsulates this data. The allocation function
`gsl_dht_alloc' returns a `gsl_dht' object which must be properly
initialized with `gsl_dht_init' before it can be used to perform
transforms on data sample vectors, for fixed \nu and M, using the
`gsl_dht_apply' function. The implementation allows a scaling of the
fundamental interval, for convenience, so that one can assume the
function is defined on the interval [0,X], rather than the unit
interval.
Notice that by assumption f(t) vanishes at the endpoints of the
interval, consistent with the inversion formula and the sampling
formula given above. Therefore, this transform corresponds to an
orthogonal expansion in eigenfunctions of the Dirichlet problem for the
Bessel differential equation.
File: gsl-ref.info, Node: Discrete Hankel Transform Functions, Next: Discrete Hankel Transform References, Prev: Discrete Hankel Transform Definition, Up: Discrete Hankel Transforms
32.2 Functions
==============
-- Function: gsl_dht * gsl_dht_alloc (size_t SIZE)
This function allocates a Discrete Hankel transform object of size
SIZE.
-- Function: int gsl_dht_init (gsl_dht * T, double NU, double XMAX)
This function initializes the transform T for the given values of
NU and XMAX.
-- Function: gsl_dht * gsl_dht_new (size_t SIZE, double NU, double
XMAX)
This function allocates a Discrete Hankel transform object of size
SIZE and initializes it for the given values of NU and XMAX.
-- Function: void gsl_dht_free (gsl_dht * T)
This function frees the transform T.
-- Function: int gsl_dht_apply (const gsl_dht * T, double * F_IN,
double * F_OUT)
This function applies the transform T to the array F_IN whose size
is equal to the size of the transform. The result is stored in
the array F_OUT which must be of the same length.
Applying this function to its output gives the original data
multiplied by (1/j_(\nu,M))^2, up to numerical errors.
-- Function: double gsl_dht_x_sample (const gsl_dht * T, int N)
This function returns the value of the N-th sample point in the
unit interval, (j_{\nu,n+1}/j_{\nu,M}) X. These are the points
where the function f(t) is assumed to be sampled.
-- Function: double gsl_dht_k_sample (const gsl_dht * T, int N)
This function returns the value of the N-th sample point in
"k-space", j_{\nu,n+1}/X.
File: gsl-ref.info, Node: Discrete Hankel Transform References, Prev: Discrete Hankel Transform Functions, Up: Discrete Hankel Transforms
32.3 References and Further Reading
===================================
The algorithms used by these functions are described in the following
papers,
H. Fisk Johnson, Comp. Phys. Comm. 43, 181 (1987).
D. Lemoine, J. Chem. Phys. 101, 3936 (1994).
File: gsl-ref.info, Node: One dimensional Root-Finding, Next: One dimensional Minimization, Prev: Discrete Hankel Transforms, Up: Top
33 One dimensional Root-Finding
*******************************
This chapter describes routines for finding roots of arbitrary
one-dimensional functions. The library provides low level components
for a variety of iterative solvers and convergence tests. These can be
combined by the user to achieve the desired solution, with full access
to the intermediate steps of the iteration. Each class of methods uses
the same framework, so that you can switch between solvers at runtime
without needing to recompile your program. Each instance of a solver
keeps track of its own state, allowing the solvers to be used in
multi-threaded programs.
The header file `gsl_roots.h' contains prototypes for the root
finding functions and related declarations.
* Menu:
* Root Finding Overview::
* Root Finding Caveats::
* Initializing the Solver::
* Providing the function to solve::
* Search Bounds and Guesses::
* Root Finding Iteration::
* Search Stopping Parameters::
* Root Bracketing Algorithms::
* Root Finding Algorithms using Derivatives::
* Root Finding Examples::
* Root Finding References and Further Reading::
File: gsl-ref.info, Node: Root Finding Overview, Next: Root Finding Caveats, Up: One dimensional Root-Finding
33.1 Overview
=============
One-dimensional root finding algorithms can be divided into two classes,
"root bracketing" and "root polishing". Algorithms which proceed by
bracketing a root are guaranteed to converge. Bracketing algorithms
begin with a bounded region known to contain a root. The size of this
bounded region is reduced, iteratively, until it encloses the root to a
desired tolerance. This provides a rigorous error estimate for the
location of the root.
The technique of "root polishing" attempts to improve an initial
guess to the root. These algorithms converge only if started "close
enough" to a root, and sacrifice a rigorous error bound for speed. By
approximating the behavior of a function in the vicinity of a root they
attempt to find a higher order improvement of an initial guess. When
the behavior of the function is compatible with the algorithm and a good
initial guess is available a polishing algorithm can provide rapid
convergence.
In GSL both types of algorithm are available in similar frameworks.
The user provides a high-level driver for the algorithms, and the
library provides the individual functions necessary for each of the
steps. There are three main phases of the iteration. The steps are,
* initialize solver state, S, for algorithm T
* update S using the iteration T
* test S for convergence, and repeat iteration if necessary
The state for bracketing solvers is held in a `gsl_root_fsolver'
struct. The updating procedure uses only function evaluations (not
derivatives). The state for root polishing solvers is held in a
`gsl_root_fdfsolver' struct. The updates require both the function and
its derivative (hence the name `fdf') to be supplied by the user.
File: gsl-ref.info, Node: Root Finding Caveats, Next: Initializing the Solver, Prev: Root Finding Overview, Up: One dimensional Root-Finding
33.2 Caveats
============
Note that root finding functions can only search for one root at a time.
When there are several roots in the search area, the first root to be
found will be returned; however it is difficult to predict which of the
roots this will be. _In most cases, no error will be reported if you
try to find a root in an area where there is more than one._
Care must be taken when a function may have a multiple root (such as
f(x) = (x-x_0)^2 or f(x) = (x-x_0)^3). It is not possible to use
root-bracketing algorithms on even-multiplicity roots. For these
algorithms the initial interval must contain a zero-crossing, where the
function is negative at one end of the interval and positive at the
other end. Roots with even-multiplicity do not cross zero, but only
touch it instantaneously. Algorithms based on root bracketing will
still work for odd-multiplicity roots (e.g. cubic, quintic, ...). Root
polishing algorithms generally work with higher multiplicity roots, but
at a reduced rate of convergence. In these cases the "Steffenson
algorithm" can be used to accelerate the convergence of multiple roots.
While it is not absolutely required that f have a root within the
search region, numerical root finding functions should not be used
haphazardly to check for the _existence_ of roots. There are better
ways to do this. Because it is easy to create situations where
numerical root finders can fail, it is a bad idea to throw a root
finder at a function you do not know much about. In general it is best
to examine the function visually by plotting before searching for a
root.
File: gsl-ref.info, Node: Initializing the Solver, Next: Providing the function to solve, Prev: Root Finding Caveats, Up: One dimensional Root-Finding
33.3 Initializing the Solver
============================
-- Function: gsl_root_fsolver * gsl_root_fsolver_alloc (const
gsl_root_fsolver_type * T)
This function returns a pointer to a newly allocated instance of a
solver of type T. For example, the following code creates an
instance of a bisection solver,
const gsl_root_fsolver_type * T
= gsl_root_fsolver_bisection;
gsl_root_fsolver * s
= gsl_root_fsolver_alloc (T);
If there is insufficient memory to create the solver then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: gsl_root_fdfsolver * gsl_root_fdfsolver_alloc (const
gsl_root_fdfsolver_type * T)
This function returns a pointer to a newly allocated instance of a
derivative-based solver of type T. For example, the following
code creates an instance of a Newton-Raphson solver,
const gsl_root_fdfsolver_type * T
= gsl_root_fdfsolver_newton;
gsl_root_fdfsolver * s
= gsl_root_fdfsolver_alloc (T);
If there is insufficient memory to create the solver then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: int gsl_root_fsolver_set (gsl_root_fsolver * S,
gsl_function * F, double X_LOWER, double X_UPPER)
This function initializes, or reinitializes, an existing solver S
to use the function F and the initial search interval [X_LOWER,
X_UPPER].
-- Function: int gsl_root_fdfsolver_set (gsl_root_fdfsolver * S,
gsl_function_fdf * FDF, double ROOT)
This function initializes, or reinitializes, an existing solver S
to use the function and derivative FDF and the initial guess ROOT.
-- Function: void gsl_root_fsolver_free (gsl_root_fsolver * S)
-- Function: void gsl_root_fdfsolver_free (gsl_root_fdfsolver * S)
These functions free all the memory associated with the solver S.
-- Function: const char * gsl_root_fsolver_name (const
gsl_root_fsolver * S)
-- Function: const char * gsl_root_fdfsolver_name (const
gsl_root_fdfsolver * S)
These functions return a pointer to the name of the solver. For
example,
printf ("s is a '%s' solver\n",
gsl_root_fsolver_name (s));
would print something like `s is a 'bisection' solver'.
File: gsl-ref.info, Node: Providing the function to solve, Next: Search Bounds and Guesses, Prev: Initializing the Solver, Up: One dimensional Root-Finding
33.4 Providing the function to solve
====================================
You must provide a continuous function of one variable for the root
finders to operate on, and, sometimes, its first derivative. In order
to allow for general parameters the functions are defined by the
following data types:
-- Data Type: gsl_function
This data type defines a general function with parameters.
`double (* function) (double X, void * PARAMS)'
this function should return the value f(x,params) for
argument X and parameters PARAMS
`void * params'
a pointer to the parameters of the function
Here is an example for the general quadratic function,
f(x) = a x^2 + b x + c
with a = 3, b = 2, c = 1. The following code defines a `gsl_function'
`F' which you could pass to a root finder as a function pointer:
struct my_f_params { double a; double b; double c; };
double
my_f (double x, void * p) {
struct my_f_params * params
= (struct my_f_params *)p;
double a = (params->a);
double b = (params->b);
double c = (params->c);
return (a * x + b) * x + c;
}
gsl_function F;
struct my_f_params params = { 3.0, 2.0, 1.0 };
F.function = &my_f;
F.params = &params;
The function f(x) can be evaluated using the macro `GSL_FN_EVAL(&F,x)'
defined in `gsl_math.h'.
-- Data Type: gsl_function_fdf
This data type defines a general function with parameters and its
first derivative.
`double (* f) (double X, void * PARAMS)'
this function should return the value of f(x,params) for
argument X and parameters PARAMS
`double (* df) (double X, void * PARAMS)'
this function should return the value of the derivative of F
with respect to X, f'(x,params), for argument X and
parameters PARAMS
`void (* fdf) (double X, void * PARAMS, double * F, double * DF)'
this function should set the values of the function F to
f(x,params) and its derivative DF to f'(x,params) for
argument X and parameters PARAMS. This function provides an
optimization of the separate functions for f(x) and f'(x)--it
is always faster to compute the function and its derivative
at the same time.
`void * params'
a pointer to the parameters of the function
Here is an example where f(x) = \exp(2x):
double
my_f (double x, void * params)
{
return exp (2 * x);
}
double
my_df (double x, void * params)
{
return 2 * exp (2 * x);
}
void
my_fdf (double x, void * params,
double * f, double * df)
{
double t = exp (2 * x);
*f = t;
*df = 2 * t; /* uses existing value */
}
gsl_function_fdf FDF;
FDF.f = &my_f;
FDF.df = &my_df;
FDF.fdf = &my_fdf;
FDF.params = 0;
The function f(x) can be evaluated using the macro
`GSL_FN_FDF_EVAL_F(&FDF,x)' and the derivative f'(x) can be evaluated
using the macro `GSL_FN_FDF_EVAL_DF(&FDF,x)'. Both the function y =
f(x) and its derivative dy = f'(x) can be evaluated at the same time
using the macro `GSL_FN_FDF_EVAL_F_DF(&FDF,x,y,dy)'. The macro stores
f(x) in its Y argument and f'(x) in its DY argument--both of these
should be pointers to `double'.
File: gsl-ref.info, Node: Search Bounds and Guesses, Next: Root Finding Iteration, Prev: Providing the function to solve, Up: One dimensional Root-Finding
33.5 Search Bounds and Guesses
==============================
You provide either search bounds or an initial guess; this section
explains how search bounds and guesses work and how function arguments
control them.
A guess is simply an x value which is iterated until it is within
the desired precision of a root. It takes the form of a `double'.
Search bounds are the endpoints of an interval which is iterated
until the length of the interval is smaller than the requested
precision. The interval is defined by two values, the lower limit and
the upper limit. Whether the endpoints are intended to be included in
the interval or not depends on the context in which the interval is
used.
File: gsl-ref.info, Node: Root Finding Iteration, Next: Search Stopping Parameters, Prev: Search Bounds and Guesses, Up: One dimensional Root-Finding
33.6 Iteration
==============
The following functions drive the iteration of each algorithm. Each
function performs one iteration to update the state of any solver of the
corresponding type. The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.
-- Function: int gsl_root_fsolver_iterate (gsl_root_fsolver * S)
-- Function: int gsl_root_fdfsolver_iterate (gsl_root_fdfsolver * S)
These functions perform a single iteration of the solver S. If the
iteration encounters an unexpected problem then an error code will
be returned,
`GSL_EBADFUNC'
the iteration encountered a singular point where the function
or its derivative evaluated to `Inf' or `NaN'.
`GSL_EZERODIV'
the derivative of the function vanished at the iteration
point, preventing the algorithm from continuing without a
division by zero.
The solver maintains a current best estimate of the root at all
times. The bracketing solvers also keep track of the current best
interval bounding the root. This information can be accessed with the
following auxiliary functions,
-- Function: double gsl_root_fsolver_root (const gsl_root_fsolver * S)
-- Function: double gsl_root_fdfsolver_root (const gsl_root_fdfsolver
* S)
These functions return the current estimate of the root for the
solver S.
-- Function: double gsl_root_fsolver_x_lower (const gsl_root_fsolver *
S)
-- Function: double gsl_root_fsolver_x_upper (const gsl_root_fsolver *
S)
These functions return the current bracketing interval for the
solver S.
File: gsl-ref.info, Node: Search Stopping Parameters, Next: Root Bracketing Algorithms, Prev: Root Finding Iteration, Up: One dimensional Root-Finding
33.7 Search Stopping Parameters
===============================
A root finding procedure should stop when one of the following
conditions is true:
* A root has been found to within the user-specified precision.
* A user-specified maximum number of iterations has been reached.
* An error has occurred.
The handling of these conditions is under user control. The functions
below allow the user to test the precision of the current result in
several standard ways.
-- Function: int gsl_root_test_interval (double X_LOWER, double
X_UPPER, double EPSABS, double EPSREL)
This function tests for the convergence of the interval [X_LOWER,
X_UPPER] with absolute error EPSABS and relative error EPSREL.
The test returns `GSL_SUCCESS' if the following condition is
achieved,
|a - b| < epsabs + epsrel min(|a|,|b|)
when the interval x = [a,b] does not include the origin. If the
interval includes the origin then \min(|a|,|b|) is replaced by
zero (which is the minimum value of |x| over the interval). This
ensures that the relative error is accurately estimated for roots
close to the origin.
This condition on the interval also implies that any estimate of
the root r in the interval satisfies the same condition with
respect to the true root r^*,
|r - r^*| < epsabs + epsrel r^*
assuming that the true root r^* is contained within the interval.
-- Function: int gsl_root_test_delta (double X1, double X0, double
EPSABS, double EPSREL)
This function tests for the convergence of the sequence ..., X0,
X1 with absolute error EPSABS and relative error EPSREL. The test
returns `GSL_SUCCESS' if the following condition is achieved,
|x_1 - x_0| < epsabs + epsrel |x_1|
and returns `GSL_CONTINUE' otherwise.
-- Function: int gsl_root_test_residual (double F, double EPSABS)
This function tests the residual value F against the absolute
error bound EPSABS. The test returns `GSL_SUCCESS' if the
following condition is achieved,
|f| < epsabs
and returns `GSL_CONTINUE' otherwise. This criterion is suitable
for situations where the precise location of the root, x, is
unimportant provided a value can be found where the residual,
|f(x)|, is small enough.
File: gsl-ref.info, Node: Root Bracketing Algorithms, Next: Root Finding Algorithms using Derivatives, Prev: Search Stopping Parameters, Up: One dimensional Root-Finding
33.8 Root Bracketing Algorithms
===============================
The root bracketing algorithms described in this section require an
initial interval which is guaranteed to contain a root--if a and b are
the endpoints of the interval then f(a) must differ in sign from f(b).
This ensures that the function crosses zero at least once in the
interval.  If a valid initial interval is used then these algorithms
cannot fail, provided the function is well-behaved.
Note that a bracketing algorithm cannot find roots of even degree,
since these do not cross the x-axis.
-- Solver: gsl_root_fsolver_bisection
The "bisection algorithm" is the simplest method of bracketing the
roots of a function. It is the slowest algorithm provided by the
library, with linear convergence.
On each iteration, the interval is bisected and the value of the
function at the midpoint is calculated. The sign of this value is
used to determine which half of the interval does not contain a
root. That half is discarded to give a new, smaller interval
containing the root. This procedure can be continued indefinitely
until the interval is sufficiently small.
At any time the current estimate of the root is taken as the
midpoint of the interval.
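The procedure above can be sketched in self-contained C (the names `bisect' and `quad_fn' are ours, not the GSL implementation),

```c
#include <math.h>

/* Repeated bisection of [a, b]; assumes f(a) and f(b) differ in
   sign.  Each step discards the half without a sign change. */
static double
bisect (double (*f) (double), double a, double b, int iters)
{
  for (int i = 0; i < iters; i++)
    {
      double m = 0.5 * (a + b);        /* midpoint = root estimate */
      if (f (a) * f (m) <= 0.0)
        b = m;                         /* sign change in [a, m] */
      else
        a = m;                         /* sign change in [m, b] */
    }
  return 0.5 * (a + b);
}

static double quad_fn (double x) { return x * x - 5.0; }
```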
-- Solver: gsl_root_fsolver_falsepos
The "false position algorithm" is a method of finding roots based
on linear interpolation. Its convergence is linear, but it is
usually faster than bisection.
On each iteration a line is drawn between the endpoints (a,f(a))
and (b,f(b)) and the point where this line crosses the x-axis
taken as a "midpoint". The value of the function at this point is
calculated and its sign is used to determine which side of the
interval does not contain a root. That side is discarded to give a
new, smaller interval containing the root. This procedure can be
continued indefinitely until the interval is sufficiently small.
The best estimate of the root is taken from the linear
interpolation of the interval on the current iteration.
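The interpolation step can be sketched as follows (the names `falsepos' and `fp_fn' are ours, not the GSL implementation),

```c
#include <math.h>

/* False position: replace one endpoint of [a, b] with the x-axis
   crossing of the chord through (a, f(a)) and (b, f(b)).  Assumes
   f(a) and f(b) differ in sign. */
static double
falsepos (double (*f) (double), double a, double b, int iters)
{
  double c = a;
  for (int i = 0; i < iters; i++)
    {
      double fa = f (a), fb = f (b);
      c = b - fb * (b - a) / (fb - fa);  /* chord crosses axis here */
      if (fa * f (c) <= 0.0)
        b = c;                           /* root lies in [a, c] */
      else
        a = c;                           /* root lies in [c, b] */
    }
  return c;                              /* current root estimate */
}

static double fp_fn (double x) { return x * x - 5.0; }
```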
-- Solver: gsl_root_fsolver_brent
The "Brent-Dekker method" (referred to here as "Brent's method")
combines an interpolation strategy with the bisection algorithm.
This produces a fast algorithm which is still robust.
On each iteration Brent's method approximates the function using an
interpolating curve. On the first iteration this is a linear
interpolation of the two endpoints. For subsequent iterations the
algorithm uses an inverse quadratic fit to the last three points,
for higher accuracy. The intercept of the interpolating curve
with the x-axis is taken as a guess for the root. If it lies
within the bounds of the current interval then the interpolating
point is accepted, and used to generate a smaller interval. If
the interpolating point is not accepted then the algorithm falls
back to an ordinary bisection step.
The best estimate of the root is taken from the most recent
interpolation or bisection.
File: gsl-ref.info, Node: Root Finding Algorithms using Derivatives, Next: Root Finding Examples, Prev: Root Bracketing Algorithms, Up: One dimensional Root-Finding
33.9 Root Finding Algorithms using Derivatives
==============================================
The root polishing algorithms described in this section require an
initial guess for the location of the root. There is no absolute
guarantee of convergence--the function must be suitable for this
technique and the initial guess must be sufficiently close to the root
for it to work. When these conditions are satisfied then convergence is
quadratic.
These algorithms make use of both the function and its derivative.
-- Derivative Solver: gsl_root_fdfsolver_newton
Newton's Method is the standard root-polishing algorithm. The
algorithm begins with an initial guess for the location of the
root. On each iteration, a line tangent to the function f is
drawn at that position. The point where this line crosses the
x-axis becomes the new guess. The iteration is defined by the
following sequence,
x_{i+1} = x_i - f(x_i)/f'(x_i)
Newton's method converges quadratically for single roots, and
linearly for multiple roots.
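As a sketch, the iteration above applied to f(x) = x^2 - 5 (whose derivative is 2x) converges to \sqrt 5 in a handful of steps; the function name is ours, for illustration only,

```c
#include <math.h>

/* Newton's iteration x_{i+1} = x_i - f(x_i)/f'(x_i) for
   f(x) = x^2 - 5, with derivative f'(x) = 2x. */
static double
newton_sqrt5 (double x, int iters)
{
  for (int i = 0; i < iters; i++)
    x = x - (x * x - 5.0) / (2.0 * x);
  return x;
}
```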
-- Derivative Solver: gsl_root_fdfsolver_secant
The "secant method" is a simplified version of Newton's method
which does not require the computation of the derivative on every
step.
On its first iteration the algorithm begins with Newton's method,
using the derivative to compute a first step,
x_1 = x_0 - f(x_0)/f'(x_0)
Subsequent iterations avoid the evaluation of the derivative by
replacing it with a numerical estimate, the slope of the line
through the previous two points,
x_{i+1} = x_i - f(x_i) / f'_{est}  where
 f'_{est} = (f(x_i) - f(x_{i-1})) / (x_i - x_{i-1})
When the derivative does not change significantly in the vicinity
of the root the secant method gives a useful saving.
Asymptotically the secant method is faster than Newton's method
whenever the cost of evaluating the derivative is more than 0.44
times the cost of evaluating the function itself. As with all
methods of computing a numerical derivative the estimate can
suffer from cancellation errors if the separation of the points
becomes too small.
On single roots, the method has a convergence of order (1 + \sqrt
5)/2 (approximately 1.62). It converges linearly for multiple
roots.
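A sketch of the iteration (names ours, not the GSL implementation) applied to f(x) = x^2 - 5, replacing the derivative by the slope through the two most recent points,

```c
#include <math.h>

/* Secant iteration for f(x) = x^2 - 5.  The derivative is
   approximated by the slope through the previous two points. */
static double
secant_sqrt5 (double x0, double x1, int iters)
{
  double f0 = x0 * x0 - 5.0;
  double f1 = x1 * x1 - 5.0;
  for (int i = 0; i < iters; i++)
    {
      double x2;
      if (f1 == f0)
        break;                     /* slope estimate vanished: stop */
      x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
      x0 = x1;  f0 = f1;
      x1 = x2;  f1 = x2 * x2 - 5.0;
    }
  return x1;
}
```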
-- Derivative Solver: gsl_root_fdfsolver_steffenson
The "Steffenson Method"(1) provides the fastest convergence of all
the routines. It combines the basic Newton algorithm with an
Aitken "delta-squared" acceleration. If the Newton iterates are
x_i then the acceleration procedure generates a new sequence R_i,
R_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_{i})
which converges faster than the original sequence under reasonable
conditions. The new sequence requires three terms before it can
produce its first value so the method returns accelerated values
on the second and subsequent iterations. On the first iteration
it returns the ordinary Newton estimate. The Newton iterate is
also returned if the denominator of the acceleration term ever
becomes zero.
As with all acceleration procedures this method can become
unstable if the function is not well-behaved.
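The acceleration formula can be illustrated on a linearly convergent geometric sequence x_n = 2 + 0.5^n, where it recovers the limit exactly from three consecutive terms; this sketch (the name `aitken' is ours) also shows the fallback when the denominator vanishes,

```c
#include <math.h>

/* Aitken delta-squared acceleration
   R_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_i),
   returning the plain iterate when the denominator vanishes. */
static double
aitken (double x0, double x1, double x2)
{
  double denom = x2 - 2.0 * x1 + x0;
  if (denom == 0.0)
    return x2;                /* fall back to the ordinary iterate */
  return x0 - (x1 - x0) * (x1 - x0) / denom;
}
```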
---------- Footnotes ----------
(1) J.F. Steffensen (1873-1961). The spelling used in the name of
the function is slightly incorrect, but has been preserved to avoid
incompatibility.
File: gsl-ref.info, Node: Root Finding Examples, Next: Root Finding References and Further Reading, Prev: Root Finding Algorithms using Derivatives, Up: One dimensional Root-Finding
33.10 Examples
==============
For any root finding algorithm we need to prepare the function to be
solved. For this example we will use the general quadratic equation
described earlier. We first need a header file (`demo_fn.h') to define
the function parameters,
struct quadratic_params
{
double a, b, c;
};
double quadratic (double x, void *params);
double quadratic_deriv (double x, void *params);
void quadratic_fdf (double x, void *params,
double *y, double *dy);
We place the function definitions in a separate file (`demo_fn.c'),
double
quadratic (double x, void *params)
{
struct quadratic_params *p
= (struct quadratic_params *) params;
double a = p->a;
double b = p->b;
double c = p->c;
return (a * x + b) * x + c;
}
double
quadratic_deriv (double x, void *params)
{
struct quadratic_params *p
= (struct quadratic_params *) params;
double a = p->a;
double b = p->b;
double c = p->c;
return 2.0 * a * x + b;
}
void
quadratic_fdf (double x, void *params,
double *y, double *dy)
{
struct quadratic_params *p
= (struct quadratic_params *) params;
double a = p->a;
double b = p->b;
double c = p->c;
*y = (a * x + b) * x + c;
*dy = 2.0 * a * x + b;
}
The first program uses the function solver `gsl_root_fsolver_brent' for
Brent's method and the general quadratic defined above to solve the
following equation,
x^2 - 5 = 0
with solution x = \sqrt 5 = 2.236068...
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_roots.h>
#include "demo_fn.h"
#include "demo_fn.c"
int
main (void)
{
int status;
int iter = 0, max_iter = 100;
const gsl_root_fsolver_type *T;
gsl_root_fsolver *s;
double r = 0, r_expected = sqrt (5.0);
double x_lo = 0.0, x_hi = 5.0;
gsl_function F;
struct quadratic_params params = {1.0, 0.0, -5.0};
F.function = &quadratic;
F.params = &params;
T = gsl_root_fsolver_brent;
s = gsl_root_fsolver_alloc (T);
gsl_root_fsolver_set (s, &F, x_lo, x_hi);
printf ("using %s method\n",
gsl_root_fsolver_name (s));
printf ("%5s [%9s, %9s] %9s %10s %9s\n",
"iter", "lower", "upper", "root",
"err", "err(est)");
do
{
iter++;
status = gsl_root_fsolver_iterate (s);
r = gsl_root_fsolver_root (s);
x_lo = gsl_root_fsolver_x_lower (s);
x_hi = gsl_root_fsolver_x_upper (s);
status = gsl_root_test_interval (x_lo, x_hi,
0, 0.001);
if (status == GSL_SUCCESS)
printf ("Converged:\n");
printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n",
iter, x_lo, x_hi,
r, r - r_expected,
x_hi - x_lo);
}
while (status == GSL_CONTINUE && iter < max_iter);
gsl_root_fsolver_free (s);
return status;
}
Here are the results of the iterations,
$ ./a.out
using brent method
iter [ lower, upper] root err err(est)
1 [1.0000000, 5.0000000] 1.0000000 -1.2360680 4.0000000
2 [1.0000000, 3.0000000] 3.0000000 +0.7639320 2.0000000
3 [2.0000000, 3.0000000] 2.0000000 -0.2360680 1.0000000
4 [2.2000000, 3.0000000] 2.2000000 -0.0360680 0.8000000
5 [2.2000000, 2.2366300] 2.2366300 +0.0005621 0.0366300
Converged:
6 [2.2360634, 2.2366300] 2.2360634 -0.0000046 0.0005666
If the program is modified to use the bisection solver instead of
Brent's method, by changing `gsl_root_fsolver_brent' to
`gsl_root_fsolver_bisection' the slower convergence of the Bisection
method can be observed,
$ ./a.out
using bisection method
iter [ lower, upper] root err err(est)
1 [0.0000000, 2.5000000] 1.2500000 -0.9860680 2.5000000
2 [1.2500000, 2.5000000] 1.8750000 -0.3610680 1.2500000
3 [1.8750000, 2.5000000] 2.1875000 -0.0485680 0.6250000
4 [2.1875000, 2.5000000] 2.3437500 +0.1076820 0.3125000
5 [2.1875000, 2.3437500] 2.2656250 +0.0295570 0.1562500
6 [2.1875000, 2.2656250] 2.2265625 -0.0095055 0.0781250
7 [2.2265625, 2.2656250] 2.2460938 +0.0100258 0.0390625
8 [2.2265625, 2.2460938] 2.2363281 +0.0002601 0.0195312
9 [2.2265625, 2.2363281] 2.2314453 -0.0046227 0.0097656
10 [2.2314453, 2.2363281] 2.2338867 -0.0021813 0.0048828
11 [2.2338867, 2.2363281] 2.2351074 -0.0009606 0.0024414
Converged:
12 [2.2351074, 2.2363281] 2.2357178 -0.0003502 0.0012207
The next program solves the same function using a derivative solver
instead.
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_roots.h>
#include "demo_fn.h"
#include "demo_fn.c"
int
main (void)
{
int status;
int iter = 0, max_iter = 100;
const gsl_root_fdfsolver_type *T;
gsl_root_fdfsolver *s;
double x0, x = 5.0, r_expected = sqrt (5.0);
gsl_function_fdf FDF;
struct quadratic_params params = {1.0, 0.0, -5.0};
FDF.f = &quadratic;
FDF.df = &quadratic_deriv;
FDF.fdf = &quadratic_fdf;
FDF.params = &params;
T = gsl_root_fdfsolver_newton;
s = gsl_root_fdfsolver_alloc (T);
gsl_root_fdfsolver_set (s, &FDF, x);
printf ("using %s method\n",
gsl_root_fdfsolver_name (s));
printf ("%-5s %10s %10s %10s\n",
"iter", "root", "err", "err(est)");
do
{
iter++;
status = gsl_root_fdfsolver_iterate (s);
x0 = x;
x = gsl_root_fdfsolver_root (s);
status = gsl_root_test_delta (x, x0, 0, 1e-3);
if (status == GSL_SUCCESS)
printf ("Converged:\n");
printf ("%5d %10.7f %+10.7f %10.7f\n",
iter, x, x - r_expected, x - x0);
}
while (status == GSL_CONTINUE && iter < max_iter);
gsl_root_fdfsolver_free (s);
return status;
}
Here are the results for Newton's method,
$ ./a.out
using newton method
iter root err err(est)
1 3.0000000 +0.7639320 -2.0000000
2 2.3333333 +0.0972654 -0.6666667
3 2.2380952 +0.0020273 -0.0952381
Converged:
4 2.2360689 +0.0000009 -0.0020263
Note that the error can be estimated more accurately by taking the
difference between the current iterate and next iterate rather than the
previous iterate. The other derivative solvers can be investigated by
changing `gsl_root_fdfsolver_newton' to `gsl_root_fdfsolver_secant' or
`gsl_root_fdfsolver_steffenson'.
File: gsl-ref.info, Node: Root Finding References and Further Reading, Prev: Root Finding Examples, Up: One dimensional Root-Finding
33.11 References and Further Reading
====================================
For information on the Brent-Dekker algorithm see the following two
papers,
R. P. Brent, "An algorithm with guaranteed convergence for finding
a zero of a function", `Computer Journal', 14 (1971) 422-425
J. C. P. Bus and T. J. Dekker, "Two Efficient Algorithms with
Guaranteed Convergence for Finding a Zero of a Function", `ACM
Transactions on Mathematical Software', Vol. 1 No. 4 (1975) 330-345
File: gsl-ref.info, Node: One dimensional Minimization, Next: Multidimensional Root-Finding, Prev: One dimensional Root-Finding, Up: Top
34 One dimensional Minimization
*******************************
This chapter describes routines for finding minima of arbitrary
one-dimensional functions. The library provides low level components
for a variety of iterative minimizers and convergence tests. These can
be combined by the user to achieve the desired solution, with full
access to the intermediate steps of the algorithms. Each class of
methods uses the same framework, so that you can switch between
minimizers at runtime without needing to recompile your program. Each
instance of a minimizer keeps track of its own state, allowing the
minimizers to be used in multi-threaded programs.
The header file `gsl_min.h' contains prototypes for the minimization
functions and related declarations. To use the minimization algorithms
to find the maximum of a function simply invert its sign.
* Menu:
* Minimization Overview::
* Minimization Caveats::
* Initializing the Minimizer::
* Providing the function to minimize::
* Minimization Iteration::
* Minimization Stopping Parameters::
* Minimization Algorithms::
* Minimization Examples::
* Minimization References and Further Reading::
File: gsl-ref.info, Node: Minimization Overview, Next: Minimization Caveats, Up: One dimensional Minimization
34.1 Overview
=============
The minimization algorithms begin with a bounded region known to contain
a minimum. The region is described by a lower bound a and an upper
bound b, with an estimate of the location of the minimum x.
The value of the function at x must be less than the value of the
function at the ends of the interval,
f(a) > f(x) < f(b)
This condition guarantees that a minimum is contained somewhere within
the interval. On each iteration a new point x' is selected using one
of the available algorithms. If the new point is a better estimate of
the minimum, i.e. where f(x') < f(x), then the current estimate of the
minimum x is updated. The new point also allows the size of the
bounded interval to be reduced, by choosing the most compact set of
points which satisfies the constraint f(a) > f(x) < f(b). The interval
is reduced until it encloses the true minimum to a desired tolerance.
This provides a best estimate of the location of the minimum and a
rigorous error estimate.
Several bracketing algorithms are available within a single
framework. The user provides a high-level driver for the algorithm,
and the library provides the individual functions necessary for each of
the steps. There are three main phases of the iteration. The steps
are,
* initialize minimizer state, S, for algorithm T
* update S using the iteration T
* test S for convergence, and repeat iteration if necessary
The state for the minimizers is held in a `gsl_min_fminimizer' struct.
The updating procedure uses only function evaluations (not derivatives).
File: gsl-ref.info, Node: Minimization Caveats, Next: Initializing the Minimizer, Prev: Minimization Overview, Up: One dimensional Minimization
34.2 Caveats
============
Note that minimization functions can only search for one minimum at a
time. When there are several minima in the search area, the first
minimum to be found will be returned; however it is difficult to predict
which of the minima this will be. _In most cases, no error will be
reported if you try to find a minimum in an area where there is more
than one._
With all minimization algorithms it can be difficult to determine the
location of the minimum to full numerical precision. The behavior of
the function in the region of the minimum x^* can be approximated by a
Taylor expansion,
y = f(x^*) + (1/2) f''(x^*) (x - x^*)^2
and the second term of this expansion can be lost when added to the
first term at finite precision. This magnifies the error in locating
x^*, making it proportional to \sqrt \epsilon (where \epsilon is the
relative accuracy of the floating point numbers). For functions with
higher order minima, such as x^4, the magnification of the error is
correspondingly worse. The best that can be achieved is to converge to
the limit of numerical accuracy in the function values, rather than the
location of the minimum itself.
File: gsl-ref.info, Node: Initializing the Minimizer, Next: Providing the function to minimize, Prev: Minimization Caveats, Up: One dimensional Minimization
34.3 Initializing the Minimizer
===============================
-- Function: gsl_min_fminimizer * gsl_min_fminimizer_alloc (const
gsl_min_fminimizer_type * T)
This function returns a pointer to a newly allocated instance of a
minimizer of type T. For example, the following code creates an
instance of a golden section minimizer,
const gsl_min_fminimizer_type * T
= gsl_min_fminimizer_goldensection;
gsl_min_fminimizer * s
= gsl_min_fminimizer_alloc (T);
If there is insufficient memory to create the minimizer then the
function returns a null pointer and the error handler is invoked
with an error code of `GSL_ENOMEM'.
-- Function: int gsl_min_fminimizer_set (gsl_min_fminimizer * S,
gsl_function * F, double X_MINIMUM, double X_LOWER, double
X_UPPER)
This function sets, or resets, an existing minimizer S to use the
function F and the initial search interval [X_LOWER, X_UPPER],
with a guess for the location of the minimum X_MINIMUM.
If the interval given does not contain a minimum, then the function
returns an error code of `GSL_EINVAL'.
-- Function: int gsl_min_fminimizer_set_with_values
(gsl_min_fminimizer * S, gsl_function * F, double X_MINIMUM,
double F_MINIMUM, double X_LOWER, double F_LOWER, double
X_UPPER, double F_UPPER)
This function is equivalent to `gsl_min_fminimizer_set' but uses
the values F_MINIMUM, F_LOWER and F_UPPER instead of computing
`f(x_minimum)', `f(x_lower)' and `f(x_upper)'.
-- Function: void gsl_min_fminimizer_free (gsl_min_fminimizer * S)
This function frees all the memory associated with the minimizer S.
-- Function: const char * gsl_min_fminimizer_name (const
gsl_min_fminimizer * S)
This function returns a pointer to the name of the minimizer. For
example,
printf ("s is a '%s' minimizer\n",
gsl_min_fminimizer_name (s));
would print something like `s is a 'brent' minimizer'.
File: gsl-ref.info, Node: Providing the function to minimize, Next: Minimization Iteration, Prev: Initializing the Minimizer, Up: One dimensional Minimization
34.4 Providing the function to minimize
=======================================
You must provide a continuous function of one variable for the
minimizers to operate on. In order to allow for general parameters the
functions are defined by a `gsl_function' data type (*note Providing
the function to solve::).
File: gsl-ref.info, Node: Minimization Iteration, Next: Minimization Stopping Parameters, Prev: Providing the function to minimize, Up: One dimensional Minimization
34.5 Iteration
==============
The following functions drive the iteration of each algorithm. Each
function performs one iteration to update the state of any minimizer of
the corresponding type. The same functions work for all minimizers so
that different methods can be substituted at runtime without
modifications to the code.
-- Function: int gsl_min_fminimizer_iterate (gsl_min_fminimizer * S)
This function performs a single iteration of the minimizer S. If
the iteration encounters an unexpected problem then an error code
will be returned,
`GSL_EBADFUNC'
the iteration encountered a singular point where the function
evaluated to `Inf' or `NaN'.
`GSL_FAILURE'
the algorithm could not improve the current best
approximation or bounding interval.
The minimizer maintains a current best estimate of the position of
the minimum at all times, and the current interval bounding the minimum.
This information can be accessed with the following auxiliary functions,
-- Function: double gsl_min_fminimizer_x_minimum (const
gsl_min_fminimizer * S)
This function returns the current estimate of the position of the
minimum for the minimizer S.
-- Function: double gsl_min_fminimizer_x_upper (const
gsl_min_fminimizer * S)
-- Function: double gsl_min_fminimizer_x_lower (const
gsl_min_fminimizer * S)
These functions return the current upper and lower bound of the
interval for the minimizer S.
-- Function: double gsl_min_fminimizer_f_minimum (const
gsl_min_fminimizer * S)
-- Function: double gsl_min_fminimizer_f_upper (const
gsl_min_fminimizer * S)
-- Function: double gsl_min_fminimizer_f_lower (const
gsl_min_fminimizer * S)
These functions return the value of the function at the current
estimate of the minimum and at the upper and lower bounds of the
interval for the minimizer S.
File: gsl-ref.info, Node: Minimization Stopping Parameters, Next: Minimization Algorithms, Prev: Minimization Iteration, Up: One dimensional Minimization
34.6 Stopping Parameters
========================
A minimization procedure should stop when one of the following
conditions is true:
* A minimum has been found to within the user-specified precision.
* A user-specified maximum number of iterations has been reached.
* An error has occurred.
The handling of these conditions is under user control. The function
below allows the user to test the precision of the current result.
-- Function: int gsl_min_test_interval (double X_LOWER, double
X_UPPER, double EPSABS, double EPSREL)
This function tests for the convergence of the interval [X_LOWER,
X_UPPER] with absolute error EPSABS and relative error EPSREL.
The test returns `GSL_SUCCESS' if the following condition is
achieved,
|a - b| < epsabs + epsrel min(|a|,|b|)
when the interval x = [a,b] does not include the origin. If the
interval includes the origin then \min(|a|,|b|) is replaced by
zero (which is the minimum value of |x| over the interval). This
ensures that the relative error is accurately estimated for minima
close to the origin.
This condition on the interval also implies that any estimate of
the minimum x_m in the interval satisfies the same condition with
respect to the true minimum x_m^*,
|x_m - x_m^*| < epsabs + epsrel x_m^*
assuming that the true minimum x_m^* is contained within the
interval.
File: gsl-ref.info, Node: Minimization Algorithms, Next: Minimization Examples, Prev: Minimization Stopping Parameters, Up: One dimensional Minimization
34.7 Minimization Algorithms
============================
The minimization algorithms described in this section require an initial
interval which is guaranteed to contain a minimum--if a and b are the
endpoints of the interval and x is an estimate of the minimum then f(a)
> f(x) < f(b). This ensures that the function has at least one minimum
somewhere in the interval. If a valid initial interval is used then
these algorithms cannot fail, provided the function is well-behaved.
-- Minimizer: gsl_min_fminimizer_goldensection
The "golden section algorithm" is the simplest method of bracketing
the minimum of a function. It is the slowest algorithm provided
by the library, with linear convergence.
On each iteration, the algorithm first compares the subintervals
from the endpoints to the current minimum. The larger subinterval
is divided in a golden section (using the famous ratio (3-\sqrt
5)/2 = 0.3819660...) and the value of the function at this new
point is calculated. The new value is used with the constraint
f(a') > f(x') < f(b') to select a new interval containing the
minimum, by discarding the least useful point. This procedure can
be continued indefinitely until the interval is sufficiently
small. Choosing the golden section as the bisection ratio can be
shown to provide the fastest convergence for this type of
algorithm.
-- Minimizer: gsl_min_fminimizer_brent
The "Brent minimization algorithm" combines a parabolic
interpolation with the golden section algorithm. This produces a
fast algorithm which is still robust.
The outline of the algorithm can be summarized as follows: on each
iteration Brent's method approximates the function using an
interpolating parabola through three existing points. The minimum
of the parabola is taken as a guess for the minimum. If it lies
within the bounds of the current interval then the interpolating
point is accepted, and used to generate a smaller interval. If
the interpolating point is not accepted then the algorithm falls
back to an ordinary golden section step. The full details of
Brent's method include some additional checks to improve
convergence.
-- Minimizer: gsl_min_fminimizer_quad_golden
This is a variant of Brent's algorithm which uses the safeguarded
step-length algorithm of Gill and Murray.
File: gsl-ref.info, Node: Minimization Examples, Next: Minimization References and Further Reading, Prev: Minimization Algorithms, Up: One dimensional Minimization
34.8 Examples
=============
The following program uses the Brent algorithm to find the minimum of
the function f(x) = \cos(x) + 1, which occurs at x = \pi. The starting
interval is (0,6), with an initial guess for the minimum of 2.
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_min.h>
double fn1 (double x, void * params)
{
return cos(x) + 1.0;
}
int
main (void)
{
int status;
int iter = 0, max_iter = 100;
const gsl_min_fminimizer_type *T;
gsl_min_fminimizer *s;
double m = 2.0, m_expected = M_PI;
double a = 0.0, b = 6.0;
gsl_function F;
F.function = &fn1;
F.params = 0;
T = gsl_min_fminimizer_brent;
s = gsl_min_fminimizer_alloc (T);
gsl_min_fminimizer_set (s, &F, m, a, b);
printf ("using %s method\n",
gsl_min_fminimizer_name (s));
printf ("%5s [%9s, %9s] %9s %10s %9s\n",
"iter", "lower", "upper", "min",
"err", "err(est)");
printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n",
iter, a, b,
m, m - m_expected, b - a);
do
{
iter++;
status = gsl_min_fminimizer_iterate (s);
m = gsl_min_fminimizer_x_minimum (s);
a = gsl_min_fminimizer_x_lower (s);
b = gsl_min_fminimizer_x_upper (s);
status
= gsl_min_test_interval (a, b, 0.001, 0.0);
if (status == GSL_SUCCESS)
printf ("Converged:\n");
printf ("%5d [%.7f, %.7f] "
"%.7f %+.7f %.7f\n",
iter, a, b,
m, m - m_expected, b - a);
}
while (status == GSL_CONTINUE && iter < max_iter);
gsl_min_fminimizer_free (s);
return status;
}
Here are the results of the minimization procedure.
$ ./a.out
0 [0.0000000, 6.0000000] 2.0000000 -1.1415927 6.0000000
1 [2.0000000, 6.0000000] 3.2758640 +0.1342713 4.0000000
2 [2.0000000, 3.2831929] 3.2758640 +0.1342713 1.2831929
3 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
4 [2.8689068, 3.2831929] 3.2758640 +0.1342713 0.4142862
5 [2.8689068, 3.2758640] 3.1460585 +0.0044658 0.4069572
6 [3.1346075, 3.2758640] 3.1460585 +0.0044658 0.1412565
7 [3.1346075, 3.1874620] 3.1460585 +0.0044658 0.0528545
8 [3.1346075, 3.1460585] 3.1460585 +0.0044658 0.0114510
9 [3.1346075, 3.1460585] 3.1424060 +0.0008133 0.0114510
10 [3.1346075, 3.1424060] 3.1415885 -0.0000041 0.0077985
Converged:
11 [3.1415885, 3.1424060] 3.1415927 -0.0000000 0.0008175
File: gsl-ref.info, Node: Minimization References and Further Reading, Prev: Minimization Examples, Up: One dimensional Minimization
34.9 References and Further Reading
===================================
Further information on Brent's algorithm is available in the following
book,
Richard Brent, `Algorithms for minimization without derivatives',
Prentice-Hall (1973), republished by Dover in paperback (2002),
ISBN 0-486-41998-3.
File: gsl-ref.info, Node: Multidimensional Root-Finding, Next: Multidimensional Minimization, Prev: One dimensional Minimization, Up: Top
35 Multidimensional Root-Finding
********************************
This chapter describes functions for multidimensional root-finding
(solving nonlinear systems with n equations in n unknowns). The
library provides low level components for a variety of iterative
solvers and convergence tests. These can be combined by the user to
achieve the desired solution, with full access to the intermediate
steps of the iteration. Each class of methods uses the same framework,
so that you can switch between solvers at runtime without needing to
recompile your program. Each instance of a solver keeps track of its
own state, allowing the solvers to be used in multi-threaded programs.
The solvers are based on the original Fortran library MINPACK.
The header file `gsl_multiroots.h' contains prototypes for the
multidimensional root finding functions and related declarations.
* Menu:
* Overview of Multidimensional Root Finding::
* Initializing the Multidimensional Solver::
* Providing the multidimensional system of equations to solve::
* Iteration of the multidimensional solver::
* Search Stopping Parameters for the multidimensional solver::
* Algorithms using Derivatives::
* Algorithms without Derivatives::
* Example programs for Multidimensional Root finding::
* References and Further Reading for Multidimensional Root Finding::
File: gsl-ref.info, Node: Overview of Multidimensional Root Finding, Next: Initializing the Multidimensional Solver, Up: Multidimensional Root-Finding
35.1 Overview
=============
The problem of multidimensional root finding requires the simultaneous
solution of n equations, f_i, in n variables, x_i,
f_i (x_1, ..., x_n) = 0 for i = 1 ... n.
In general there are no bracketing methods available for n dimensional
systems, and no way of knowing whether any solutions exist. All
algorithms proceed from an initial guess using a variant of the Newton
iteration,
x -> x' = x - J^{-1} f(x)
where x, f are vector quantities and J is the Jacobian matrix J_{ij} =
d f_i / d x_j. Additional strategies can be used to enlarge the region
of convergence. These include requiring a decrease in the norm |f| on
each step proposed by Newton's method, or taking steepest-descent steps
in the direction of the negative gradient of |f|.
Several root-finding algorithms are available within a single
framework. The user provides a high-level driver for the algorithms,
and the library provides the individual functions necessary for each of
the steps. There are three main phases of the iteration. The steps
are,
* initialize solver state, S, for algorithm T
* update S using the iteration T
* test S for convergence, and repeat iteration if necessary
The evaluation of the Jacobian matrix can be problematic, either because
programming the derivatives is intractable or because computation of the
n^2 terms of the matrix becomes too expensive. For these reasons the
algorithms provided by the library are divided into two classes
according to whether the derivatives are available or not.
The state for solvers with an analytic Jacobian matrix is held in a
`gsl_multiroot_fdfsolver' struct. The updating procedure requires both
the function and its derivatives to be supplied by the user.
The state for solvers which do not use an analytic Jacobian matrix is
held in a `gsl_multiroot_fsolver' struct. The updating procedure uses
only function evaluations (not derivatives). The algorithms estimate
the matrix J or J^{-1} by approximate methods.
This is gsl-ref.info, produced by makeinfo version 4.13 from
gsl-ref.texi.
INFO-DIR-SECTION Software libraries
START-INFO-DIR-ENTRY
* gsl-ref: (gsl-ref). GNU Scientific Library - Reference
END-INFO-DIR-ENTRY
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2005, 2006, 2007, 2008, 2009, 2010, 2011 The GSL Team.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License" and "Free Software
Needs Free Documentation", the Front-Cover text being "A GNU Manual",
and with the Back-Cover Text being (a) (see below). A copy of the
license is included in the section entitled "GNU Free Documentation
License".
(a) The Back-Cover Text is: "You have the freedom to copy and modify
this GNU Manual."
File: gsl-ref.info, Node: Combination Examples, Next: Combination References and Further Reading, Prev: Reading and writing combinations, Up: Combinations
10.7 Examples
=============
The example program below prints all subsets of the set {0,1,2,3}
ordered by size. Subsets of the same size are ordered
lexicographically.
#include <stdio.h>
#include <gsl/gsl_combination.h>
int
main (void)
{
  gsl_combination * c;
  size_t i;
  printf ("All subsets of {0,1,2,3} by size:\n") ;
  for (i = 0; i <= 4; i++)
    {
      c = gsl_combination_calloc (4, i);
      do
        {
          printf ("{");
          gsl_combination_fprintf (stdout, c, " %u");
          printf (" }\n");
        }
      while (gsl_combination_next (c) == GSL_SUCCESS);
      gsl_combination_free (c);
    }
  return 0;
}
Here is the output from the program,
$ ./a.out
All subsets of {0,1,2,3} by size:
{ }
{ 0 }
{ 1 }
{ 2 }
{ 3 }
{ 0 1 }
{ 0 2 }
{ 0 3 }
{ 1 2 }
{ 1 3 }
{ 2 3 }
{ 0 1 2 }
{ 0 1 3 }
{ 0 2 3 }
{ 1 2 3 }
{ 0 1 2 3 }
All 16 subsets are generated, and the subsets of each size are sorted
lexicographically.
File: gsl-ref.info, Node: Combination References and Further Reading, Prev: Combination Examples, Up: Combinations
10.8 References and Further Reading
===================================
Further information on combinations can be found in,
Donald L. Kreher, Douglas R. Stinson, `Combinatorial Algorithms:
Generation, Enumeration and Search', 1998, CRC Press LLC, ISBN
084933988X
File: gsl-ref.info, Node: Multisets, Next: Sorting, Prev: Combinations, Up: Top
11 Multisets
************
This chapter describes functions for creating and manipulating
multisets. A multiset c is represented by an array of k integers in the
range 0 to n-1, where each value c_i may occur more than once. The
multiset c corresponds to indices of k elements chosen from an n
element vector with replacement. In mathematical terms, n is the
cardinality of the multiset while k is the maximum multiplicity of any
value. Multisets are useful, for example, when iterating over the
indices of a k-th order symmetric tensor in n-space.
The functions described in this chapter are defined in the header
file `gsl_multiset.h'.
* Menu:
* The Multiset struct::
* Multiset allocation::
* Accessing multiset elements::
* Multiset properties::
* Multiset functions::
* Reading and writing multisets::
* Multiset Examples::
File: gsl-ref.info, Node: The Multiset struct, Next: Multiset allocation, Up: Multisets
11.1 The Multiset struct
========================
A multiset is defined by a structure containing three components, the
values of n and k, and a pointer to the multiset array. The elements
of the multiset array are all of type `size_t', and are stored in
increasing order. The `gsl_multiset' structure looks like this,
typedef struct
{
size_t n;
size_t k;
size_t *data;
} gsl_multiset;
File: gsl-ref.info, Node: Multiset allocation, Next: Accessing multiset elements, Prev: The Multiset struct, Up: Multisets
11.2 Multiset allocation
========================
-- Function: gsl_multiset * gsl_multiset_alloc (size_t N, size_t K)
This function allocates memory for a new multiset with parameters
N, K. The multiset is not initialized and its elements are
undefined. Use the function `gsl_multiset_calloc' if you want to
create a multiset which is initialized to the lexicographically
first multiset element. A null pointer is returned if insufficient
memory is available to create the multiset.
-- Function: gsl_multiset * gsl_multiset_calloc (size_t N, size_t K)
This function allocates memory for a new multiset with parameters
N, K and initializes it to the lexicographically first multiset
element. A null pointer is returned if insufficient memory is
available to create the multiset.
-- Function: void gsl_multiset_init_first (gsl_multiset * C)
This function initializes the multiset C to the lexicographically
first multiset element, i.e. 0 repeated k times.
-- Function: void gsl_multiset_init_last (gsl_multiset * C)
This function initializes the multiset C to the lexicographically
last multiset element, i.e. n-1 repeated k times.
-- Function: void gsl_multiset_free (gsl_multiset * C)
This function frees all the memory used by the multiset C.
-- Function: int gsl_multiset_memcpy (gsl_multiset * DEST, const
gsl_multiset * SRC)
This function copies the elements of the multiset SRC into the
multiset DEST. The two multisets must have the same size.
File: gsl-ref.info, Node: Accessing multiset elements, Next: Multiset properties, Prev: Multiset allocation, Up: Multisets
11.3 Accessing multiset elements
================================
The following function can be used to access the elements of a multiset.
-- Function: size_t gsl_multiset_get (const gsl_multiset * C, const
size_t I)
This function returns the value of the I-th element of the
multiset C. If I lies outside the allowed range of 0 to K-1 then
the error handler is invoked and 0 is returned. An inline version
of this function is used when `HAVE_INLINE' is defined.
File: gsl-ref.info, Node: Multiset properties, Next: Multiset functions, Prev: Accessing multiset elements, Up: Multisets
11.4 Multiset properties
========================
-- Function: size_t gsl_multiset_n (const gsl_multiset * C)
This function returns the range (n) of the multiset C.
-- Function: size_t gsl_multiset_k (const gsl_multiset * C)
This function returns the number of elements (k) in the multiset C.
-- Function: size_t * gsl_multiset_data (const gsl_multiset * C)
This function returns a pointer to the array of elements in the
multiset C.
-- Function: int gsl_multiset_valid (gsl_multiset * C)
This function checks that the multiset C is valid. The K elements
should lie in the range 0 to N-1, with each value occurring in
nondecreasing order.
File: gsl-ref.info, Node: Multiset functions, Next: Reading and writing multisets, Prev: Multiset properties, Up: Multisets
11.5 Multiset functions
=======================
-- Function: int gsl_multiset_next (gsl_multiset * C)
This function advances the multiset C to the next multiset element
in lexicographic order and returns `GSL_SUCCESS'. If no further
multiset elements are available it returns `GSL_FAILURE' and
leaves C unmodified. Starting with the first multiset and
repeatedly applying this function will iterate through all
possible multisets of a given order.
-- Function: int gsl_multiset_prev (gsl_multiset * C)
This function steps backwards from the multiset C to the previous
multiset element in lexicographic order, returning `GSL_SUCCESS'.
If no previous multiset is available it returns `GSL_FAILURE' and
leaves C unmodified.
File: gsl-ref.info, Node: Reading and writing multisets, Next: Multiset Examples, Prev: Multiset functions, Up: Multisets
11.6 Reading and writing multisets
==================================
The library provides functions for reading and writing multisets to a
file as binary data or formatted text.
-- Function: int gsl_multiset_fwrite (FILE * STREAM, const
gsl_multiset * C)
This function writes the elements of the multiset C to the stream
STREAM in binary format. The function returns `GSL_EFAILED' if
there was a problem writing to the file. Since the data is
written in the native binary format it may not be portable between
different architectures.
-- Function: int gsl_multiset_fread (FILE * STREAM, gsl_multiset * C)
This function reads elements from the open stream STREAM into the
multiset C in binary format. The multiset C must be preallocated
with correct values of n and k since the function uses the size of
C to determine how many bytes to read. The function returns
`GSL_EFAILED' if there was a problem reading from the file. The
data is assumed to have been written in the native binary format
on the same architecture.
-- Function: int gsl_multiset_fprintf (FILE * STREAM, const
gsl_multiset * C, const char * FORMAT)
This function writes the elements of the multiset C line-by-line
to the stream STREAM using the format specifier FORMAT, which
should be suitable for a type of SIZE_T. In ISO C99 the type
modifier `z' represents `size_t', so `"%zu\n"' is a suitable
format.(1) The function returns `GSL_EFAILED' if there was a
problem writing to the file.
-- Function: int gsl_multiset_fscanf (FILE * STREAM, gsl_multiset * C)
This function reads formatted data from the stream STREAM into the
multiset C. The multiset C must be preallocated with correct
values of n and k since the function uses the size of C to
determine how many numbers to read. The function returns
`GSL_EFAILED' if there was a problem reading from the file.
---------- Footnotes ----------
(1) In versions of the GNU C library prior to the ISO C99 standard,
the type modifier `Z' was used instead.
File: gsl-ref.info, Node: Multiset Examples, Prev: Reading and writing multisets, Up: Multisets
11.7 Examples
=============
The example program below prints all multisets of elements taken from
the set {0,1,2,3}, ordered by size.  Multisets of the same size are
ordered lexicographically.
#include <stdio.h>
#include <gsl/gsl_multiset.h>
int
main (void)
{
gsl_multiset * c;
size_t i;
printf ("All multisets of {0,1,2,3} by size:\n");
for (i = 0; i <= 4; i++)
{
c = gsl_multiset_calloc (4, i);
do
{
printf ("{");
gsl_multiset_fprintf (stdout, c, " %zu");
printf (" }\n");
}
while (gsl_multiset_next (c) == GSL_SUCCESS);
gsl_multiset_free (c);
}
return 0;
}
Here is the output from the program,
$ ./a.out
All multisets of {0,1,2,3} by size:
{ }
{ 0 }
{ 1 }
{ 2 }
{ 3 }
{ 0 0 }
{ 0 1 }
{ 0 2 }
{ 0 3 }
{ 1 1 }
{ 1 2 }
{ 1 3 }
{ 2 2 }
{ 2 3 }
{ 3 3 }
{ 0 0 0 }
{ 0 0 1 }
{ 0 0 2 }
{ 0 0 3 }
{ 0 1 1 }
{ 0 1 2 }
{ 0 1 3 }
{ 0 2 2 }
{ 0 2 3 }
{ 0 3 3 }
{ 1 1 1 }
{ 1 1 2 }
{ 1 1 3 }
{ 1 2 2 }
{ 1 2 3 }
{ 1 3 3 }
{ 2 2 2 }
{ 2 2 3 }
{ 2 3 3 }
{ 3 3 3 }
{ 0 0 0 0 }
{ 0 0 0 1 }
{ 0 0 0 2 }
{ 0 0 0 3 }
{ 0 0 1 1 }
{ 0 0 1 2 }
{ 0 0 1 3 }
{ 0 0 2 2 }
{ 0 0 2 3 }
{ 0 0 3 3 }
{ 0 1 1 1 }
{ 0 1 1 2 }
{ 0 1 1 3 }
{ 0 1 2 2 }
{ 0 1 2 3 }
{ 0 1 3 3 }
{ 0 2 2 2 }
{ 0 2 2 3 }
{ 0 2 3 3 }
{ 0 3 3 3 }
{ 1 1 1 1 }
{ 1 1 1 2 }
{ 1 1 1 3 }
{ 1 1 2 2 }
{ 1 1 2 3 }
{ 1 1 3 3 }
{ 1 2 2 2 }
{ 1 2 2 3 }
{ 1 2 3 3 }
{ 1 3 3 3 }
{ 2 2 2 2 }
{ 2 2 2 3 }
{ 2 2 3 3 }
{ 2 3 3 3 }
{ 3 3 3 3 }
All 70 multisets are generated and sorted lexicographically.
File: gsl-ref.info, Node: Sorting, Next: BLAS Support, Prev: Multisets, Up: Top
12 Sorting
**********
This chapter describes functions for sorting data, both directly and
indirectly (using an index). All the functions use the "heapsort"
algorithm. Heapsort is an O(N \log N) algorithm which operates
in-place and does not require any additional storage. It also provides
consistent performance, the running time for its worst-case (ordered
data) being not significantly longer than the average and best cases.
Note that the heapsort algorithm does not preserve the relative ordering
of equal elements--it is an "unstable" sort. However the resulting
order of equal elements will be consistent across different platforms
when using these functions.
* Menu:
* Sorting objects::
* Sorting vectors::
* Selecting the k smallest or largest elements::
* Computing the rank::
* Sorting Examples::
* Sorting References and Further Reading::
File: gsl-ref.info, Node: Sorting objects, Next: Sorting vectors, Up: Sorting
12.1 Sorting objects
====================
The following function provides a simple alternative to the standard
library function `qsort'. It is intended for systems lacking `qsort',
not as a replacement for it. The function `qsort' should be used
whenever possible, as it will be faster and can provide stable ordering
of equal elements. Documentation for `qsort' is available in the `GNU
C Library Reference Manual'.
The functions described in this section are defined in the header
file `gsl_heapsort.h'.
-- Function: void gsl_heapsort (void * ARRAY, size_t COUNT, size_t
SIZE, gsl_comparison_fn_t COMPARE)
This function sorts the COUNT elements of the array ARRAY, each of
size SIZE, into ascending order using the comparison function
COMPARE. The type of the comparison function is defined by,
int (*gsl_comparison_fn_t) (const void * a,
const void * b)
A comparison function should return a negative integer if the first
argument is less than the second argument, `0' if the two arguments
are equal and a positive integer if the first argument is greater
than the second argument.
For example, the following function can be used to sort doubles
into ascending numerical order.
int
compare_doubles (const double * a,
const double * b)
{
if (*a > *b)
return 1;
else if (*a < *b)
return -1;
else
return 0;
}
The appropriate function call to perform the sort is,
gsl_heapsort (array, count, sizeof(double),
compare_doubles);
Note that unlike `qsort' the heapsort algorithm cannot be made into
a stable sort by pointer arithmetic. The trick of comparing
pointers for equal elements in the comparison function does not
work for the heapsort algorithm. The heapsort algorithm performs
an internal rearrangement of the data which destroys its initial
ordering.
-- Function: int gsl_heapsort_index (size_t * P, const void * ARRAY,
size_t COUNT, size_t SIZE, gsl_comparison_fn_t COMPARE)
This function indirectly sorts the COUNT elements of the array
ARRAY, each of size SIZE, into ascending order using the
comparison function COMPARE. The resulting permutation is stored
in P, an array of length COUNT. The elements of P give the index of
the array element which would have been stored in that position if
the array had been sorted in place. The first element of P gives
the index of the least element in ARRAY, and the last element of P
gives the index of the greatest element in ARRAY. The array
itself is not changed.
File: gsl-ref.info, Node: Sorting vectors, Next: Selecting the k smallest or largest elements, Prev: Sorting objects, Up: Sorting
12.2 Sorting vectors
====================
The following functions will sort the elements of an array or vector,
either directly or indirectly. They are defined for all real and
integer types using the normal suffix rules. For example, the `float'
versions of the array functions are `gsl_sort_float' and
`gsl_sort_float_index'. The corresponding vector functions are
`gsl_sort_vector_float' and `gsl_sort_vector_float_index'. The
prototypes are available in the header files `gsl_sort_float.h'
`gsl_sort_vector_float.h'. The complete set of prototypes can be
included using the header files `gsl_sort.h' and `gsl_sort_vector.h'.
There are no functions for sorting complex arrays or vectors, since
the ordering of complex numbers is not uniquely defined. To sort a
complex vector by magnitude compute a real vector containing the
magnitudes of the complex elements, and sort this vector indirectly.
The resulting index gives the appropriate ordering of the original
complex vector.
-- Function: void gsl_sort (double * DATA, size_t STRIDE, size_t N)
This function sorts the N elements of the array DATA with stride
STRIDE into ascending numerical order.
-- Function: void gsl_sort_vector (gsl_vector * V)
This function sorts the elements of the vector V into ascending
numerical order.
-- Function: void gsl_sort_index (size_t * P, const double * DATA,
size_t STRIDE, size_t N)
This function indirectly sorts the N elements of the array DATA
with stride STRIDE into ascending order, storing the resulting
permutation in P. The array P must be allocated with a sufficient
length to store the N elements of the permutation. The elements
of P give the index of the array element which would have been
stored in that position if the array had been sorted in place.
The array DATA is not changed.
-- Function: int gsl_sort_vector_index (gsl_permutation * P, const
gsl_vector * V)
This function indirectly sorts the elements of the vector V into
ascending order, storing the resulting permutation in P. The
elements of P give the index of the vector element which would
have been stored in that position if the vector had been sorted in
place. The first element of P gives the index of the least element
in V, and the last element of P gives the index of the greatest
element in V. The vector V is not changed.
File: gsl-ref.info, Node: Selecting the k smallest or largest elements, Next: Computing the rank, Prev: Sorting vectors, Up: Sorting
12.3 Selecting the k smallest or largest elements
=================================================
The functions described in this section select the k smallest or
largest elements of a data set of size N. The routines use an O(kN)
direct insertion algorithm which is suited to subsets that are small
compared with the total size of the dataset. For example, the routines
are useful for selecting the 10 largest values from one million data
points, but not for selecting the largest 100,000 values. If the
subset is a significant part of the total dataset it may be faster to
sort all the elements of the dataset directly with an O(N \log N)
algorithm and obtain the smallest or largest values that way.
-- Function: int gsl_sort_smallest (double * DEST, size_t K, const
double * SRC, size_t STRIDE, size_t N)
This function copies the K smallest elements of the array SRC, of
size N and stride STRIDE, in ascending numerical order into the
array DEST. The size K of the subset must be less than or equal
to N. The data SRC is not modified by this operation.
-- Function: int gsl_sort_largest (double * DEST, size_t K, const
double * SRC, size_t STRIDE, size_t N)
This function copies the K largest elements of the array SRC, of
size N and stride STRIDE, in descending numerical order into the
array DEST. K must be less than or equal to N. The data SRC is not
modified by this operation.
-- Function: int gsl_sort_vector_smallest (double * DEST, size_t K,
const gsl_vector * V)
-- Function: int gsl_sort_vector_largest (double * DEST, size_t K,
const gsl_vector * V)
These functions copy the K smallest or largest elements of the
vector V into the array DEST. K must be less than or equal to the
length of the vector V.
The following functions find the indices of the k smallest or
largest elements of a dataset,
-- Function: int gsl_sort_smallest_index (size_t * P, size_t K, const
double * SRC, size_t STRIDE, size_t N)
This function stores the indices of the K smallest elements of the
array SRC, of size N and stride STRIDE, in the array P. The
indices are chosen so that the corresponding data is in ascending
numerical order. K must be less than or equal to N. The data SRC
is not modified by this operation.
-- Function: int gsl_sort_largest_index (size_t * P, size_t K, const
double * SRC, size_t STRIDE, size_t N)
This function stores the indices of the K largest elements of the
array SRC, of size N and stride STRIDE, in the array P. The
indices are chosen so that the corresponding data is in descending
numerical order. K must be less than or equal to N. The data SRC
is not modified by this operation.
-- Function: int gsl_sort_vector_smallest_index (size_t * P, size_t K,
const gsl_vector * V)
-- Function: int gsl_sort_vector_largest_index (size_t * P, size_t K,
const gsl_vector * V)
These functions store the indices of the K smallest or largest
elements of the vector V in the array P. K must be less than or
equal to the length of the vector V.
File: gsl-ref.info, Node: Computing the rank, Next: Sorting Examples, Prev: Selecting the k smallest or largest elements, Up: Sorting
12.4 Computing the rank
=======================
The "rank" of an element is its order in the sorted data. The rank is
the inverse of the index permutation, P. It can be computed using the
following algorithm,
for (i = 0; i < p->size; i++)
{
size_t pi = p->data[i];
rank->data[pi] = i;
}
It can also be computed directly using the function
`gsl_permutation_inverse(rank,p)'.
The following function will print the rank of each element of the
vector V,
void
print_rank (gsl_vector * v)
{
size_t i;
size_t n = v->size;
gsl_permutation * perm = gsl_permutation_alloc(n);
gsl_permutation * rank = gsl_permutation_alloc(n);
gsl_sort_vector_index (perm, v);
gsl_permutation_inverse (rank, perm);
for (i = 0; i < n; i++)
{
double vi = gsl_vector_get(v, i);
printf ("element = %zu, value = %g, rank = %zu\n",
i, vi, rank->data[i]);
}
gsl_permutation_free (perm);
gsl_permutation_free (rank);
}
File: gsl-ref.info, Node: Sorting Examples, Next: Sorting References and Further Reading, Prev: Computing the rank, Up: Sorting
12.5 Examples
=============
The following example shows how to use the permutation P to print the
elements of the vector V in ascending order,
gsl_sort_vector_index (p, v);
for (i = 0; i < v->size; i++)
{
double vpi = gsl_vector_get (v, p->data[i]);
printf ("order = %zu, value = %g\n", i, vpi);
}
The next example uses the function `gsl_sort_smallest' to select the 5
smallest numbers from 100000 uniform random variates stored in an array,
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_sort_double.h>
int
main (void)
{
const gsl_rng_type * T;
gsl_rng * r;
size_t i, k = 5, N = 100000;
double * x = malloc (N * sizeof(double));
double * small = malloc (k * sizeof(double));
gsl_rng_env_setup();
T = gsl_rng_default;
r = gsl_rng_alloc (T);
for (i = 0; i < N; i++)
{
x[i] = gsl_rng_uniform(r);
}
gsl_sort_smallest (small, k, x, 1, N);
printf ("%zu smallest values from %zu\n", k, N);
for (i = 0; i < k; i++)
{
printf ("%zu: %.18f\n", i, small[i]);
}
free (x);
free (small);
gsl_rng_free (r);
return 0;
}
The output lists the 5 smallest values, in ascending order,
$ ./a.out
5 smallest values from 100000
0: 0.000003489200025797
1: 0.000008199829608202
2: 0.000008953968062997
3: 0.000010712770745158
4: 0.000033531803637743
File: gsl-ref.info, Node: Sorting References and Further Reading, Prev: Sorting Examples, Up: Sorting
12.6 References and Further Reading
===================================
The subject of sorting is covered extensively in Knuth's `Sorting and
Searching',
Donald E. Knuth, `The Art of Computer Programming: Sorting and
Searching' (Vol 3, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896850.
The Heapsort algorithm is described in the following book,
Robert Sedgewick, `Algorithms in C', Addison-Wesley, ISBN
0201514257.
File: gsl-ref.info, Node: BLAS Support, Next: Linear Algebra, Prev: Sorting, Up: Top
13 BLAS Support
***************
The Basic Linear Algebra Subprograms (BLAS) define a set of fundamental
operations on vectors and matrices which can be used to create optimized
higher-level linear algebra functionality.
The library provides a low-level layer which corresponds directly to
the C-language BLAS standard, referred to here as "CBLAS", and a
higher-level interface for operations on GSL vectors and matrices.
Users who are interested in simple operations on GSL vector and matrix
objects should use the high-level layer described in this chapter. The
functions are declared in the file `gsl_blas.h' and should satisfy the
needs of most users.
Note that GSL matrices are implemented using dense-storage so the
interface only includes the corresponding dense-storage BLAS functions.
The full BLAS functionality for band-format and packed-format matrices
is available through the low-level CBLAS interface. Similarly, GSL
vectors are restricted to positive strides, whereas the low-level CBLAS
interface supports negative strides as specified in the BLAS
standard.(1)
The interface for the `gsl_cblas' layer is specified in the file
`gsl_cblas.h'. This interface corresponds to the BLAS Technical
Forum's standard for the C interface to legacy BLAS implementations.
Users who have access to other conforming CBLAS implementations can use
these in place of the version provided by the library. Note that users
who have only a Fortran BLAS library can use a CBLAS conformant wrapper
to convert it into a CBLAS library. A reference CBLAS wrapper for
legacy Fortran implementations exists as part of the CBLAS standard and
can be obtained from Netlib. The complete set of CBLAS functions is
listed in an appendix (*note GSL CBLAS Library::).
There are three levels of BLAS operations,
Level 1
Vector operations, e.g. y = \alpha x + y
Level 2
Matrix-vector operations, e.g. y = \alpha A x + \beta y
Level 3
Matrix-matrix operations, e.g. C = \alpha A B + C
Each routine has a name which specifies the operation, the type of
matrices involved and their precisions. Some of the most common
operations and their names are given below,
DOT
scalar product, x^T y
AXPY
vector sum, \alpha x + y
MV
matrix-vector product, A x
SV
matrix-vector solve, inv(A) x
MM
matrix-matrix product, A B
SM
matrix-matrix solve, inv(A) B
The types of matrices are,
GE
general
GB
general band
SY
symmetric
SB
symmetric band
SP
symmetric packed
HE
hermitian
HB
hermitian band
HP
hermitian packed
TR
triangular
TB
triangular band
TP
triangular packed
Each operation is defined for four precisions,
S
single real
D
double real
C
single complex
Z
double complex
Thus, for example, the name SGEMM stands for "single-precision general
matrix-matrix multiply" and ZGEMM stands for "double-precision complex
matrix-matrix multiply".
Note that the vector and matrix arguments to BLAS functions must not
be aliased, as the results are undefined when the underlying arrays
overlap (*note Aliasing of arrays::).
* Menu:
* GSL BLAS Interface::
* BLAS Examples::
* BLAS References and Further Reading::
---------- Footnotes ----------
(1) In the low-level CBLAS interface, a negative stride accesses the
vector elements in reverse order, i.e. the i-th element is given by
(N-i)*|incx| for incx < 0.
File: gsl-ref.info, Node: GSL BLAS Interface, Next: BLAS Examples, Up: BLAS Support
13.1 GSL BLAS Interface
=======================
GSL provides dense vector and matrix objects, based on the relevant
built-in types. The library provides an interface to the BLAS
operations which apply to these objects. The interface to this
functionality is given in the file `gsl_blas.h'.
* Menu:
* Level 1 GSL BLAS Interface::
* Level 2 GSL BLAS Interface::
* Level 3 GSL BLAS Interface::
File: gsl-ref.info, Node: Level 1 GSL BLAS Interface, Next: Level 2 GSL BLAS Interface, Up: GSL BLAS Interface
13.1.1 Level 1
--------------
-- Function: int gsl_blas_sdsdot (float ALPHA, const gsl_vector_float
* X, const gsl_vector_float * Y, float * RESULT)
This function computes the sum \alpha + x^T y for the vectors X
and Y, returning the result in RESULT.
-- Function: int gsl_blas_sdot (const gsl_vector_float * X, const
gsl_vector_float * Y, float * RESULT)
-- Function: int gsl_blas_dsdot (const gsl_vector_float * X, const
gsl_vector_float * Y, double * RESULT)
-- Function: int gsl_blas_ddot (const gsl_vector * X, const gsl_vector
* Y, double * RESULT)
These functions compute the scalar product x^T y for the vectors X
and Y, returning the result in RESULT.
-- Function: int gsl_blas_cdotu (const gsl_vector_complex_float * X,
const gsl_vector_complex_float * Y, gsl_complex_float * DOTU)
-- Function: int gsl_blas_zdotu (const gsl_vector_complex * X, const
gsl_vector_complex * Y, gsl_complex * DOTU)
These functions compute the complex scalar product x^T y for the
vectors X and Y, returning the result in DOTU.
-- Function: int gsl_blas_cdotc (const gsl_vector_complex_float * X,
const gsl_vector_complex_float * Y, gsl_complex_float * DOTC)
-- Function: int gsl_blas_zdotc (const gsl_vector_complex * X, const
gsl_vector_complex * Y, gsl_complex * DOTC)
These functions compute the complex conjugate scalar product x^H y
for the vectors X and Y, returning the result in DOTC.
-- Function: float gsl_blas_snrm2 (const gsl_vector_float * X)
-- Function: double gsl_blas_dnrm2 (const gsl_vector * X)
These functions compute the Euclidean norm ||x||_2 = \sqrt {\sum
x_i^2} of the vector X.
-- Function: float gsl_blas_scnrm2 (const gsl_vector_complex_float * X)
-- Function: double gsl_blas_dznrm2 (const gsl_vector_complex * X)
These functions compute the Euclidean norm of the complex vector X,
||x||_2 = \sqrt {\sum (\Re(x_i)^2 + \Im(x_i)^2)}.
-- Function: float gsl_blas_sasum (const gsl_vector_float * X)
-- Function: double gsl_blas_dasum (const gsl_vector * X)
These functions compute the absolute sum \sum |x_i| of the
elements of the vector X.
-- Function: float gsl_blas_scasum (const gsl_vector_complex_float * X)
-- Function: double gsl_blas_dzasum (const gsl_vector_complex * X)
These functions compute the sum of the magnitudes of the real and
imaginary parts of the complex vector X, \sum |\Re(x_i)| +
|\Im(x_i)|.
-- Function: CBLAS_INDEX_t gsl_blas_isamax (const gsl_vector_float * X)
-- Function: CBLAS_INDEX_t gsl_blas_idamax (const gsl_vector * X)
-- Function: CBLAS_INDEX_t gsl_blas_icamax (const
gsl_vector_complex_float * X)
-- Function: CBLAS_INDEX_t gsl_blas_izamax (const gsl_vector_complex *
X)
These functions return the index of the largest element of the
vector X. The largest element is determined by its absolute
magnitude for real vectors and by the sum of the magnitudes of the
real and imaginary parts |\Re(x_i)| + |\Im(x_i)| for complex
vectors. If the largest value occurs several times then the index
of the first occurrence is returned.
-- Function: int gsl_blas_sswap (gsl_vector_float * X,
gsl_vector_float * Y)
-- Function: int gsl_blas_dswap (gsl_vector * X, gsl_vector * Y)
-- Function: int gsl_blas_cswap (gsl_vector_complex_float * X,
gsl_vector_complex_float * Y)
-- Function: int gsl_blas_zswap (gsl_vector_complex * X,
gsl_vector_complex * Y)
These functions exchange the elements of the vectors X and Y.
-- Function: int gsl_blas_scopy (const gsl_vector_float * X,
gsl_vector_float * Y)
-- Function: int gsl_blas_dcopy (const gsl_vector * X, gsl_vector * Y)
-- Function: int gsl_blas_ccopy (const gsl_vector_complex_float * X,
gsl_vector_complex_float * Y)
-- Function: int gsl_blas_zcopy (const gsl_vector_complex * X,
gsl_vector_complex * Y)
These functions copy the elements of the vector X into the vector
Y.
-- Function: int gsl_blas_saxpy (float ALPHA, const gsl_vector_float *
X, gsl_vector_float * Y)
-- Function: int gsl_blas_daxpy (double ALPHA, const gsl_vector * X,
gsl_vector * Y)
-- Function: int gsl_blas_caxpy (const gsl_complex_float ALPHA, const
gsl_vector_complex_float * X, gsl_vector_complex_float * Y)
-- Function: int gsl_blas_zaxpy (const gsl_complex ALPHA, const
gsl_vector_complex * X, gsl_vector_complex * Y)
These functions compute the sum y = \alpha x + y for the vectors X
and Y.
-- Function: void gsl_blas_sscal (float ALPHA, gsl_vector_float * X)
-- Function: void gsl_blas_dscal (double ALPHA, gsl_vector * X)
-- Function: void gsl_blas_cscal (const gsl_complex_float ALPHA,
gsl_vector_complex_float * X)
-- Function: void gsl_blas_zscal (const gsl_complex ALPHA,
gsl_vector_complex * X)
-- Function: void gsl_blas_csscal (float ALPHA,
gsl_vector_complex_float * X)
-- Function: void gsl_blas_zdscal (double ALPHA, gsl_vector_complex *
X)
These functions rescale the vector X by the multiplicative factor
ALPHA.
-- Function: int gsl_blas_srotg (float A[], float B[], float C[],
float S[])
-- Function: int gsl_blas_drotg (double A[], double B[], double C[],
double S[])
These functions compute a Givens rotation (c,s) which zeroes the
vector (a,b),
[ c s ] [ a ] = [ r ]
[ -s c ] [ b ] [ 0 ]
The variables A and B are overwritten by the routine.
-- Function: int gsl_blas_srot (gsl_vector_float * X, gsl_vector_float
* Y, float C, float S)
-- Function: int gsl_blas_drot (gsl_vector * X, gsl_vector * Y, const
double C, const double S)
These functions apply a Givens rotation (x', y') = (c x + s y, -s
x + c y) to the vectors X, Y.
-- Function: int gsl_blas_srotmg (float D1[], float D2[], float B1[],
float B2, float P[])
-- Function: int gsl_blas_drotmg (double D1[], double D2[], double
B1[], double B2, double P[])
These functions compute a modified Givens transformation. The
modified Givens transformation is defined in the original Level-1
BLAS specification, given in the references.
-- Function: int gsl_blas_srotm (gsl_vector_float * X,
gsl_vector_float * Y, const float P[])
-- Function: int gsl_blas_drotm (gsl_vector * X, gsl_vector * Y, const
double P[])
These functions apply a modified Givens transformation.
File: gsl-ref.info, Node: Level 2 GSL BLAS Interface, Next: Level 3 GSL BLAS Interface, Prev: Level 1 GSL BLAS Interface, Up: GSL BLAS Interface
13.1.2 Level 2
--------------
-- Function: int gsl_blas_sgemv (CBLAS_TRANSPOSE_t TRANSA, float
ALPHA, const gsl_matrix_float * A, const gsl_vector_float *
X, float BETA, gsl_vector_float * Y)
-- Function: int gsl_blas_dgemv (CBLAS_TRANSPOSE_t TRANSA, double
ALPHA, const gsl_matrix * A, const gsl_vector * X, double
BETA, gsl_vector * Y)
-- Function: int gsl_blas_cgemv (CBLAS_TRANSPOSE_t TRANSA, const
gsl_complex_float ALPHA, const gsl_matrix_complex_float * A,
const gsl_vector_complex_float * X, const gsl_complex_float
BETA, gsl_vector_complex_float * Y)
-- Function: int gsl_blas_zgemv (CBLAS_TRANSPOSE_t TRANSA, const
gsl_complex ALPHA, const gsl_matrix_complex * A, const
gsl_vector_complex * X, const gsl_complex BETA,
gsl_vector_complex * Y)
These functions compute the matrix-vector product and sum y =
\alpha op(A) x + \beta y, where op(A) = A, A^T, A^H for TRANSA =
`CblasNoTrans', `CblasTrans', `CblasConjTrans'.
-- Function: int gsl_blas_strmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_float * A,
gsl_vector_float * X)
-- Function: int gsl_blas_dtrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix * A, gsl_vector *
X)
-- Function: int gsl_blas_ctrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex_float *
A, gsl_vector_complex_float * X)
-- Function: int gsl_blas_ztrmv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex * A,
gsl_vector_complex * X)
These functions compute the matrix-vector product x = op(A) x for
the triangular matrix A, where op(A) = A, A^T, A^H for TRANSA =
`CblasNoTrans', `CblasTrans', `CblasConjTrans'. When UPLO is
`CblasUpper' then the upper triangle of A is used, and when UPLO
is `CblasLower' then the lower triangle of A is used. If DIAG is
`CblasNonUnit' then the diagonal of the matrix is used, but if
DIAG is `CblasUnit' then the diagonal elements of the matrix A are
taken as unity and are not referenced.
-- Function: int gsl_blas_strsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_float * A,
gsl_vector_float * X)
-- Function: int gsl_blas_dtrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix * A, gsl_vector *
X)
-- Function: int gsl_blas_ctrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex_float *
A, gsl_vector_complex_float * X)
-- Function: int gsl_blas_ztrsv (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANSA, CBLAS_DIAG_t DIAG, const gsl_matrix_complex * A,
gsl_vector_complex * X)
These functions compute inv(op(A)) x for X, where op(A) = A, A^T,
A^H for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'.
When UPLO is `CblasUpper' then the upper triangle of A is used,
and when UPLO is `CblasLower' then the lower triangle of A is
used. If DIAG is `CblasNonUnit' then the diagonal of the matrix
is used, but if DIAG is `CblasUnit' then the diagonal elements of
the matrix A are taken as unity and are not referenced.
-- Function: int gsl_blas_ssymv (CBLAS_UPLO_t UPLO, float ALPHA, const
gsl_matrix_float * A, const gsl_vector_float * X, float BETA,
gsl_vector_float * Y)
-- Function: int gsl_blas_dsymv (CBLAS_UPLO_t UPLO, double ALPHA,
const gsl_matrix * A, const gsl_vector * X, double BETA,
gsl_vector * Y)
These functions compute the matrix-vector product and sum y =
\alpha A x + \beta y for the symmetric matrix A. Since the matrix
A is symmetric only its upper half or lower half need to be
stored. When UPLO is `CblasUpper' then the upper triangle and
diagonal of A are used, and when UPLO is `CblasLower' then the
lower triangle and diagonal of A are used.
-- Function: int gsl_blas_chemv (CBLAS_UPLO_t UPLO, const
gsl_complex_float ALPHA, const gsl_matrix_complex_float * A,
const gsl_vector_complex_float * X, const gsl_complex_float
BETA, gsl_vector_complex_float * Y)
-- Function: int gsl_blas_zhemv (CBLAS_UPLO_t UPLO, const gsl_complex
ALPHA, const gsl_matrix_complex * A, const gsl_vector_complex
* X, const gsl_complex BETA, gsl_vector_complex * Y)
These functions compute the matrix-vector product and sum y =
\alpha A x + \beta y for the hermitian matrix A. Since the matrix
A is hermitian only its upper half or lower half need to be
stored. When UPLO is `CblasUpper' then the upper triangle and
diagonal of A are used, and when UPLO is `CblasLower' then the
lower triangle and diagonal of A are used. The imaginary elements
of the diagonal are automatically assumed to be zero and are not
referenced.
-- Function: int gsl_blas_sger (float ALPHA, const gsl_vector_float *
X, const gsl_vector_float * Y, gsl_matrix_float * A)
-- Function: int gsl_blas_dger (double ALPHA, const gsl_vector * X,
const gsl_vector * Y, gsl_matrix * A)
-- Function: int gsl_blas_cgeru (const gsl_complex_float ALPHA, const
gsl_vector_complex_float * X, const gsl_vector_complex_float
* Y, gsl_matrix_complex_float * A)
-- Function: int gsl_blas_zgeru (const gsl_complex ALPHA, const
gsl_vector_complex * X, const gsl_vector_complex * Y,
gsl_matrix_complex * A)
These functions compute the rank-1 update A = \alpha x y^T + A of
the matrix A.
-- Function: int gsl_blas_cgerc (const gsl_complex_float ALPHA, const
gsl_vector_complex_float * X, const gsl_vector_complex_float
* Y, gsl_matrix_complex_float * A)
-- Function: int gsl_blas_zgerc (const gsl_complex ALPHA, const
gsl_vector_complex * X, const gsl_vector_complex * Y,
gsl_matrix_complex * A)
These functions compute the conjugate rank-1 update A = \alpha x
y^H + A of the matrix A.
-- Function: int gsl_blas_ssyr (CBLAS_UPLO_t UPLO, float ALPHA, const
gsl_vector_float * X, gsl_matrix_float * A)
-- Function: int gsl_blas_dsyr (CBLAS_UPLO_t UPLO, double ALPHA, const
gsl_vector * X, gsl_matrix * A)
These functions compute the symmetric rank-1 update A = \alpha x
x^T + A of the symmetric matrix A. Since the matrix A is
symmetric only its upper half or lower half need to be stored.
When UPLO is `CblasUpper' then the upper triangle and diagonal of
A are used, and when UPLO is `CblasLower' then the lower triangle
and diagonal of A are used.
-- Function: int gsl_blas_cher (CBLAS_UPLO_t UPLO, float ALPHA, const
gsl_vector_complex_float * X, gsl_matrix_complex_float * A)
-- Function: int gsl_blas_zher (CBLAS_UPLO_t UPLO, double ALPHA, const
gsl_vector_complex * X, gsl_matrix_complex * A)
These functions compute the hermitian rank-1 update A = \alpha x
x^H + A of the hermitian matrix A. Since the matrix A is
hermitian only its upper half or lower half need to be stored.
When UPLO is `CblasUpper' then the upper triangle and diagonal of
A are used, and when UPLO is `CblasLower' then the lower triangle
and diagonal of A are used. The imaginary elements of the
diagonal are automatically set to zero.
-- Function: int gsl_blas_ssyr2 (CBLAS_UPLO_t UPLO, float ALPHA, const
gsl_vector_float * X, const gsl_vector_float * Y,
gsl_matrix_float * A)
-- Function: int gsl_blas_dsyr2 (CBLAS_UPLO_t UPLO, double ALPHA,
const gsl_vector * X, const gsl_vector * Y, gsl_matrix * A)
These functions compute the symmetric rank-2 update A = \alpha x
y^T + \alpha y x^T + A of the symmetric matrix A. Since the
matrix A is symmetric only its upper half or lower half need to be
stored. When UPLO is `CblasUpper' then the upper triangle and
diagonal of A are used, and when UPLO is `CblasLower' then the
lower triangle and diagonal of A are used.
-- Function: int gsl_blas_cher2 (CBLAS_UPLO_t UPLO, const
gsl_complex_float ALPHA, const gsl_vector_complex_float * X,
const gsl_vector_complex_float * Y, gsl_matrix_complex_float
* A)
-- Function: int gsl_blas_zher2 (CBLAS_UPLO_t UPLO, const gsl_complex
ALPHA, const gsl_vector_complex * X, const gsl_vector_complex
* Y, gsl_matrix_complex * A)
These functions compute the hermitian rank-2 update A = \alpha x
y^H + \alpha^* y x^H + A of the hermitian matrix A. Since the
matrix A is hermitian only its upper half or lower half need to be
stored. When UPLO is `CblasUpper' then the upper triangle and
diagonal of A are used, and when UPLO is `CblasLower' then the
lower triangle and diagonal of A are used. The imaginary elements
of the diagonal are automatically set to zero.
File: gsl-ref.info, Node: Level 3 GSL BLAS Interface, Prev: Level 2 GSL BLAS Interface, Up: GSL BLAS Interface
13.1.3 Level 3
--------------
-- Function: int gsl_blas_sgemm (CBLAS_TRANSPOSE_t TRANSA,
CBLAS_TRANSPOSE_t TRANSB, float ALPHA, const gsl_matrix_float
* A, const gsl_matrix_float * B, float BETA, gsl_matrix_float
* C)
-- Function: int gsl_blas_dgemm (CBLAS_TRANSPOSE_t TRANSA,
CBLAS_TRANSPOSE_t TRANSB, double ALPHA, const gsl_matrix * A,
const gsl_matrix * B, double BETA, gsl_matrix * C)
-- Function: int gsl_blas_cgemm (CBLAS_TRANSPOSE_t TRANSA,
CBLAS_TRANSPOSE_t TRANSB, const gsl_complex_float ALPHA,
const gsl_matrix_complex_float * A, const
gsl_matrix_complex_float * B, const gsl_complex_float BETA,
gsl_matrix_complex_float * C)
-- Function: int gsl_blas_zgemm (CBLAS_TRANSPOSE_t TRANSA,
CBLAS_TRANSPOSE_t TRANSB, const gsl_complex ALPHA, const
gsl_matrix_complex * A, const gsl_matrix_complex * B, const
gsl_complex BETA, gsl_matrix_complex * C)
These functions compute the matrix-matrix product and sum C =
\alpha op(A) op(B) + \beta C where op(A) = A, A^T, A^H for TRANSA
= `CblasNoTrans', `CblasTrans', `CblasConjTrans' and similarly for
the parameter TRANSB.
-- Function: int gsl_blas_ssymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
float ALPHA, const gsl_matrix_float * A, const
gsl_matrix_float * B, float BETA, gsl_matrix_float * C)
-- Function: int gsl_blas_dsymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
double ALPHA, const gsl_matrix * A, const gsl_matrix * B,
double BETA, gsl_matrix * C)
-- Function: int gsl_blas_csymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
const gsl_complex_float ALPHA, const gsl_matrix_complex_float
* A, const gsl_matrix_complex_float * B, const
gsl_complex_float BETA, gsl_matrix_complex_float * C)
-- Function: int gsl_blas_zsymm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
const gsl_complex ALPHA, const gsl_matrix_complex * A, const
gsl_matrix_complex * B, const gsl_complex BETA,
gsl_matrix_complex * C)
These functions compute the matrix-matrix product and sum C =
\alpha A B + \beta C for SIDE is `CblasLeft' and C = \alpha B A +
\beta C for SIDE is `CblasRight', where the matrix A is symmetric.
When UPLO is `CblasUpper' then the upper triangle and diagonal of
A are used, and when UPLO is `CblasLower' then the lower triangle
and diagonal of A are used.
-- Function: int gsl_blas_chemm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
const gsl_complex_float ALPHA, const gsl_matrix_complex_float
* A, const gsl_matrix_complex_float * B, const
gsl_complex_float BETA, gsl_matrix_complex_float * C)
-- Function: int gsl_blas_zhemm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
const gsl_complex ALPHA, const gsl_matrix_complex * A, const
gsl_matrix_complex * B, const gsl_complex BETA,
gsl_matrix_complex * C)
These functions compute the matrix-matrix product and sum C =
\alpha A B + \beta C for SIDE is `CblasLeft' and C = \alpha B A +
\beta C for SIDE is `CblasRight', where the matrix A is hermitian.
When UPLO is `CblasUpper' then the upper triangle and diagonal of
A are used, and when UPLO is `CblasLower' then the lower triangle
and diagonal of A are used. The imaginary elements of the
diagonal are automatically set to zero.
-- Function: int gsl_blas_strmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, float ALPHA,
const gsl_matrix_float * A, gsl_matrix_float * B)
-- Function: int gsl_blas_dtrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, double ALPHA,
const gsl_matrix * A, gsl_matrix * B)
-- Function: int gsl_blas_ctrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const
gsl_complex_float ALPHA, const gsl_matrix_complex_float * A,
gsl_matrix_complex_float * B)
-- Function: int gsl_blas_ztrmm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const
gsl_complex ALPHA, const gsl_matrix_complex * A,
gsl_matrix_complex * B)
These functions compute the matrix-matrix product B = \alpha op(A)
B for SIDE is `CblasLeft' and B = \alpha B op(A) for SIDE is
`CblasRight'. The matrix A is triangular and op(A) = A, A^T, A^H
for TRANSA = `CblasNoTrans', `CblasTrans', `CblasConjTrans'. When
UPLO is `CblasUpper' then the upper triangle of A is used, and
when UPLO is `CblasLower' then the lower triangle of A is used.
If DIAG is `CblasNonUnit' then the diagonal of A is used, but if
DIAG is `CblasUnit' then the diagonal elements of the matrix A are
taken as unity and are not referenced.
-- Function: int gsl_blas_strsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, float ALPHA,
const gsl_matrix_float * A, gsl_matrix_float * B)
-- Function: int gsl_blas_dtrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, double ALPHA,
const gsl_matrix * A, gsl_matrix * B)
-- Function: int gsl_blas_ctrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const
gsl_complex_float ALPHA, const gsl_matrix_complex_float * A,
gsl_matrix_complex_float * B)
-- Function: int gsl_blas_ztrsm (CBLAS_SIDE_t SIDE, CBLAS_UPLO_t UPLO,
CBLAS_TRANSPOSE_t TRANSA, CBLAS_DIAG_t DIAG, const
gsl_complex ALPHA, const gsl_matrix_complex * A,
gsl_matrix_complex * B)
     These functions compute the inverse-matrix matrix product B =
     \alpha op(inv(A)) B for SIDE is `CblasLeft' and B = \alpha B
op(inv(A)) for SIDE is `CblasRight'. The matrix A is triangular
and op(A) = A, A^T, A^H for TRANSA = `CblasNoTrans', `CblasTrans',
`CblasConjTrans'. When UPLO is `CblasUpper' then the upper
triangle of A is used, and when UPLO is `CblasLower' then the
lower triangle of A is used. If DIAG is `CblasNonUnit' then the
diagonal of A is used, but if DIAG is `CblasUnit' then the
diagonal elements of the matrix A are taken as unity and are not
referenced.
-- Function: int gsl_blas_ssyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, float ALPHA, const gsl_matrix_float * A, float BETA,
gsl_matrix_float * C)
-- Function: int gsl_blas_dsyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, double ALPHA, const gsl_matrix * A, double BETA,
gsl_matrix * C)
-- Function: int gsl_blas_csyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, const gsl_complex_float ALPHA, const
gsl_matrix_complex_float * A, const gsl_complex_float BETA,
gsl_matrix_complex_float * C)
-- Function: int gsl_blas_zsyrk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A,
const gsl_complex BETA, gsl_matrix_complex * C)
These functions compute a rank-k update of the symmetric matrix C,
C = \alpha A A^T + \beta C when TRANS is `CblasNoTrans' and C =
\alpha A^T A + \beta C when TRANS is `CblasTrans'. Since the
matrix C is symmetric only its upper half or lower half need to be
stored. When UPLO is `CblasUpper' then the upper triangle and
diagonal of C are used, and when UPLO is `CblasLower' then the
lower triangle and diagonal of C are used.
-- Function: int gsl_blas_cherk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, float ALPHA, const gsl_matrix_complex_float * A, float
BETA, gsl_matrix_complex_float * C)
-- Function: int gsl_blas_zherk (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, double ALPHA, const gsl_matrix_complex * A, double
BETA, gsl_matrix_complex * C)
These functions compute a rank-k update of the hermitian matrix C,
C = \alpha A A^H + \beta C when TRANS is `CblasNoTrans' and C =
\alpha A^H A + \beta C when TRANS is `CblasConjTrans'. Since the
matrix C is hermitian only its upper half or lower half need to be
stored. When UPLO is `CblasUpper' then the upper triangle and
diagonal of C are used, and when UPLO is `CblasLower' then the
lower triangle and diagonal of C are used. The imaginary elements
of the diagonal are automatically set to zero.
-- Function: int gsl_blas_ssyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, float ALPHA, const gsl_matrix_float * A, const
gsl_matrix_float * B, float BETA, gsl_matrix_float * C)
-- Function: int gsl_blas_dsyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, double ALPHA, const gsl_matrix * A, const gsl_matrix *
B, double BETA, gsl_matrix * C)
-- Function: int gsl_blas_csyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, const gsl_complex_float ALPHA, const
gsl_matrix_complex_float * A, const gsl_matrix_complex_float
* B, const gsl_complex_float BETA, gsl_matrix_complex_float *
C)
-- Function: int gsl_blas_zsyr2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A,
const gsl_matrix_complex * B, const gsl_complex BETA,
gsl_matrix_complex * C)
These functions compute a rank-2k update of the symmetric matrix C,
C = \alpha A B^T + \alpha B A^T + \beta C when TRANS is
`CblasNoTrans' and C = \alpha A^T B + \alpha B^T A + \beta C when
TRANS is `CblasTrans'. Since the matrix C is symmetric only its
upper half or lower half need to be stored. When UPLO is
`CblasUpper' then the upper triangle and diagonal of C are used,
and when UPLO is `CblasLower' then the lower triangle and diagonal
of C are used.
-- Function: int gsl_blas_cher2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, const gsl_complex_float ALPHA, const
gsl_matrix_complex_float * A, const gsl_matrix_complex_float
* B, float BETA, gsl_matrix_complex_float * C)
-- Function: int gsl_blas_zher2k (CBLAS_UPLO_t UPLO, CBLAS_TRANSPOSE_t
TRANS, const gsl_complex ALPHA, const gsl_matrix_complex * A,
const gsl_matrix_complex * B, double BETA, gsl_matrix_complex
* C)
These functions compute a rank-2k update of the hermitian matrix C,
C = \alpha A B^H + \alpha^* B A^H + \beta C when TRANS is
`CblasNoTrans' and C = \alpha A^H B + \alpha^* B^H A + \beta C when
TRANS is `CblasConjTrans'. Since the matrix C is hermitian only
its upper half or lower half need to be stored. When UPLO is
`CblasUpper' then the upper triangle and diagonal of C are used,
and when UPLO is `CblasLower' then the lower triangle and diagonal
of C are used. The imaginary elements of the diagonal are
automatically set to zero.
File: gsl-ref.info, Node: BLAS Examples, Next: BLAS References and Further Reading, Prev: GSL BLAS Interface, Up: BLAS Support
13.2 Examples
=============
The following program computes the product of two matrices using the
Level-3 BLAS function DGEMM,
[ 0.11 0.12 0.13 ] [ 1011 1012 ] [ 367.76 368.12 ]
[ 0.21 0.22 0.23 ] [ 1021 1022 ] = [ 674.06 674.72 ]
[ 1031 1032 ]
The matrices are stored in row major order, according to the C
convention for arrays.
     #include <stdio.h>
     #include <gsl/gsl_blas.h>
int
main (void)
{
double a[] = { 0.11, 0.12, 0.13,
0.21, 0.22, 0.23 };
double b[] = { 1011, 1012,
1021, 1022,
1031, 1032 };
double c[] = { 0.00, 0.00,
0.00, 0.00 };
gsl_matrix_view A = gsl_matrix_view_array(a, 2, 3);
gsl_matrix_view B = gsl_matrix_view_array(b, 3, 2);
gsl_matrix_view C = gsl_matrix_view_array(c, 2, 2);
/* Compute C = A B */
gsl_blas_dgemm (CblasNoTrans, CblasNoTrans,
1.0, &A.matrix, &B.matrix,
0.0, &C.matrix);
printf ("[ %g, %g\n", c[0], c[1]);
printf (" %g, %g ]\n", c[2], c[3]);
return 0;
}
Here is the output from the program,
$ ./a.out
[ 367.76, 368.12
674.06, 674.72 ]
File: gsl-ref.info, Node: BLAS References and Further Reading, Prev: BLAS Examples, Up: BLAS Support
13.3 References and Further Reading
===================================
Information on the BLAS standards, including both the legacy and
updated interface standards, is available online from the BLAS Homepage
and BLAS Technical Forum web-site.
`BLAS Homepage'
`http://www.netlib.org/blas/'
`BLAS Technical Forum'
`http://www.netlib.org/blas/blast-forum/'
The following papers contain the specifications for Level 1, Level 2 and
Level 3 BLAS.
C. Lawson, R. Hanson, D. Kincaid, F. Krogh, "Basic Linear Algebra
Subprograms for Fortran Usage", `ACM Transactions on Mathematical
Software', Vol. 5 (1979), Pages 308-325.
J.J. Dongarra, J. DuCroz, S. Hammarling, R. Hanson, "An Extended
Set of Fortran Basic Linear Algebra Subprograms", `ACM
Transactions on Mathematical Software', Vol. 14, No. 1 (1988),
Pages 1-32.
J.J. Dongarra, I. Duff, J. DuCroz, S. Hammarling, "A Set of Level
3 Basic Linear Algebra Subprograms", `ACM Transactions on
Mathematical Software', Vol. 16 (1990), Pages 1-28.
Postscript versions of the latter two papers are available from
`http://www.netlib.org/blas/'. A CBLAS wrapper for Fortran BLAS
libraries is available from the same location.
File: gsl-ref.info, Node: Linear Algebra, Next: Eigensystems, Prev: BLAS Support, Up: Top
14 Linear Algebra
*****************
This chapter describes functions for solving linear systems. The
library provides linear algebra operations which operate directly on
the `gsl_vector' and `gsl_matrix' objects. These routines use the
standard algorithms from Golub & Van Loan's `Matrix Computations' with
Level-1 and Level-2 BLAS calls for efficiency.
The functions described in this chapter are declared in the header
file `gsl_linalg.h'.
* Menu:
* LU Decomposition::
* QR Decomposition::
* QR Decomposition with Column Pivoting::
* Singular Value Decomposition::
* Cholesky Decomposition::
* Tridiagonal Decomposition of Real Symmetric Matrices::
* Tri