2007 ConceptsOfApproximationTheory
- (de Boor et al., 2007) ⇒ Carl de Boor, Allan Pinkus, and Vilmos Totik. (2007). “Concepts of Approximation Theory.” In: Surveys in Approximation Theory Web Site, July 21, 2007.
Subject Headings: Numerical Function Approximation Theory, Glossary.
Notes
Cited By
Quotes
- approximand is the element to be approximated.
- approximant is the element that is doing the approximating.
- approximation spaces consist of functions with prescribed rate of approximation. E.g., if [math]\displaystyle{ E_n(f) }[/math] is the error of best approximation of f by polynomials of degree at most [math]\displaystyle{ n }[/math], [math]\displaystyle{ α }[/math] > 0 and 0 < q ≤ ∞, then the collection of functions f for which ...
- approximation with constraint: An additional requirement (like monotonicity, convexity, interpolation) has to be satisfied by the approximation process.
- best approximant to f from M in a metric space X (with metric ρ) containing M is any element g ∈ M for which ρ(f, g) is minimal, i.e., [math]\displaystyle{ \rho(f, g) = \inf_{h \in M} \rho(f, h) =: \operatorname{dist}(f, M) }[/math]. Also called best approximation. (A worked example follows this list.)
- breakpoint or break of a spline or piecewise polynomial f is a place across which f or one of its derivatives has a jump. The (order of) smoothness of f at a breakpoint is the smallest m for which [math]\displaystyle{ D^m f }[/math] has a jump across it.
- capacity: Let K be a set in a metric space and, for ε > 0, let [math]\displaystyle{ M_\varepsilon(K) }[/math] be the maximum number of points in K with mutual distances > ε. Then, [math]\displaystyle{ \log_2 M_\varepsilon(K) }[/math] is called the ε-capacity of K.
- central difference: Same as symmetric difference.
- Chebyshev polynomials: They are defined for x ∈ [−1, 1] as ... (the standard formula is recalled after this list).
- Chebyshev space is the same as unicity space. Sometimes, the term “Chebyshev space” is used for the linear span of a Chebyshev system.
- completely monotone: A real-valued infinitely differentiable function f on [a, b] is said to be completely monotone on [a, b] if ...
- collocation is another word for interpolation at given sites. Correspondingly, a collocation matrix is a matrix of the form [math]\displaystyle{ (f_j(\tau_i) : i, j = 1, \ldots, n) }[/math], with [math]\displaystyle{ \tau_1, \ldots, \tau_n }[/math] a sequence of points and [math]\displaystyle{ f_1, \ldots, f_n }[/math] a sequence of functions. (A small computational sketch follows this list.)
- distance of the element a, in the metric space X with metric ρ, from the set B in X is the number [math]\displaystyle{ \operatorname{dist}(a, B) := \inf_{b \in B} \rho(a, b) }[/math].
- The distance of the set A in the metric space X from the set B in X is the number [math]\displaystyle{ \operatorname{dist}(A, B) := \sup_{a \in A} \operatorname{dist}(a, B) }[/math].
- The corresponding Hausdorff distance, between A and B, is the number [math]\displaystyle{ \rho_H(A, B) := \max(\operatorname{dist}(A, B), \operatorname{dist}(B, A)) }[/math]. It provides a metric for the set of compact subsets of X. (A numerical example follows this list.)
- entropy: Let K be a compact subset of a metric space X and ε > 0. Let [math]\displaystyle{ M_\varepsilon(K) }[/math] be the minimum number of subsets of X in an ε-covering for K, i.e., in a collection of subsets, each of diameter ≤ 2ε, whose union covers K. Then, [math]\displaystyle{ \log_2 M_\varepsilon(K) }[/math] is called the ε-entropy of K (also metric entropy, to distinguish it from the probabilistic entropy). ...
- equimeasurable functions have the same distribution functions.
- existence set is any subset M of a metric space X such that to each element of X there exists a best approximant from M.
- Fourier series: Let f be a 2π-periodic function. Its Fourier series in complex form is ... (the standard display is recalled after this list).
- fundamental polynomials is a term used for the basic polynomials of Lagrange interpolation.
- generalized polynomials are functions of the form ...
- Gram matrix is the matrix underlying the Gram determinant. More generally, any matrix of the form [math]\displaystyle{ (\lambda_i f_j) }[/math], with [math]\displaystyle{ \lambda := (\lambda_1, \ldots, \lambda_m) }[/math] a sequence of linear functionals on some vector space and [math]\displaystyle{ f := (f_1, \ldots, f_n) }[/math] a sequence of elements of that vector space, is called the Gram matrix (for the sequences λ and f).
- Hausdorff distance of two sets: see distance.
- Markov function is of the form [math]\displaystyle{ z \mapsto \int \frac{d\mu(t)}{z - t} }[/math], where μ is a compactly supported measure on the real line. See also Cauchy transform.
- Markov inequality: If p is a real algebraic polynomial of degree at most n and |p(x)| ≤ 1 for all x ∈ [−1, 1], then ... (the classical bound is recalled after this list).
- minimax approximation is best approximation with respect to the uniform norm.
- moments of a measure μ are the numbers [math]\displaystyle{ \int x^n \, d\mu(x) }[/math], [math]\displaystyle{ n }[/math] = 0, 1, . . . .
- order of a polynomial: A polynomial of order k is any polynomial of degree less than k. The collection of all (univariate) polynomials of order k is a vector space of dimension k. (The collection of all polynomials of degree k is not even a vector space.)
- polynomial form: One of several ways of writing a polynomial. Most are in terms of some basis, like the Newton form, the (local) power form, the Lagrange form, the Bernstein-Bézier form. In the univariate case, there is at least one other useful form, the root form.
- polynomials are functions that can be written in power form
- [math]\displaystyle{ p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0, }[/math]
- where the [math]\displaystyle{ a_i }[/math]’s are real or complex numbers and x is a real or complex variable. If [math]\displaystyle{ a_n \ne 0 }[/math], then n is called the degree of the polynomial, [math]\displaystyle{ a_n }[/math] its leading coefficient, and [math]\displaystyle{ a_n x^n }[/math] its leading term.
- positive operators map nonnegative functions to nonnegative functions.
- radial basis function is a function in [math]\displaystyle{ \R^d }[/math] of the form : [math]\displaystyle{ x \longmapsto g(\parallel \mathbf{x} - \mathbf{a} \parallel) }[/math] where [math]\displaystyle{ g }[/math] is a univariate function, [math]\displaystyle{ \mathbf{a} }[/math] is a point in [math]\displaystyle{ \R^d }[/math], and [math]\displaystyle{ \parallel \cdot \parallel }[/math] denotes the Euclidean norm in [math]\displaystyle{ \R^d }[/math].
- ridge function is a function in [math]\displaystyle{ \R^d }[/math] of the form [math]\displaystyle{ f(x) = g(x \cdot a) }[/math], where g is a univariate function, a is a nonzero vector in [math]\displaystyle{ \R^d }[/math], and [math]\displaystyle{ x \cdot a = x_1 a_1 + \cdots + x_d a_d }[/math] is the inner product. (Both this and the radial basis function are illustrated in the sketch after this list.)
- splines: A (univariate) spline of order k with knot sequence [math]\displaystyle{ t = (t_i) }[/math] is any weighted sum of the corresponding B-splines [math]\displaystyle{ N(\cdot \mid t_i, \ldots, t_{i+k}) }[/math]. Each knot [math]\displaystyle{ t_i }[/math] may be a breakpoint for such a spline f, with the smoothness of f at [math]\displaystyle{ t_i }[/math] no smaller than the order k minus the multiplicity with which the number [math]\displaystyle{ t_i }[/math] occurs in t. This multiplicity is sometimes called the defect of the spline at [math]\displaystyle{ t_i }[/math]. (The B-spline recurrence is recalled after this list.)
- wavelet is a function ψ on (−∞,∞) such that the system
- [math]\displaystyle{ \psi(2^k x - j), \qquad k, j = 0, \pm 1, \pm 2, \ldots }[/math]   (3)
- forms a basis in L2(−∞,∞). It is called an orthogonal wavelet if (3) constitutes an orthogonal basis in L2(−∞,∞). (The Haar function, recalled after this list, is the classical example.)
- weight usually means a nonnegative function.
- weighted approximation means that a weight is used in the norm.
- zero counting measure is the measure that puts mass k at every zero of multiplicity k.
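As a worked illustration of a best approximant in the minimax sense (an example chosen here, not taken from the glossary): on X = C[−1, 1] with the uniform norm, the best approximant to f(x) = x² from the space M of polynomials of degree at most 1 is the constant p(x) = 1/2, and

[math]\displaystyle{ \operatorname{dist}(f, M) = \max_{x \in [-1,1]} \left| x^2 - \tfrac{1}{2} \right| = \tfrac{1}{2}, }[/math]

since [math]\displaystyle{ x^2 - \tfrac{1}{2} = \tfrac{1}{2} T_2(x) }[/math] equioscillates between ±1/2 at the three points −1, 0, 1, which is the alternation pattern characterizing the best uniform approximant from a two-dimensional Chebyshev space.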
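For reference (the defining display is elided in the quote above), the Chebyshev polynomials are standardly given on [−1, 1] by

[math]\displaystyle{ T_n(x) = \cos(n \arccos x), \qquad n = 0, 1, 2, \ldots, }[/math]

equivalently by the recurrence [math]\displaystyle{ T_0(x) = 1, \; T_1(x) = x, \; T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x) }[/math]; each [math]\displaystyle{ T_n }[/math] is a polynomial of degree n with |T_n(x)| ≤ 1 on [−1, 1].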
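A minimal computational sketch of a collocation matrix (not from the source; the sites, the monomial basis, and the data y = e^τ are illustrative assumptions). With point evaluations [math]\displaystyle{ \lambda_i : f \mapsto f(\tau_i) }[/math] as the linear functionals, the same matrix is also a Gram matrix in the sense defined above.

```python
import numpy as np

# Illustrative sites tau_1, ..., tau_n and monomial basis functions x**0, x**1, ...;
# both are assumptions made for this sketch, not data from the glossary.
tau = np.array([0.0, 0.5, 1.0])
basis = [lambda x, j=j: x**j for j in range(len(tau))]

# Collocation matrix (f_j(tau_i)): row i evaluates every basis function at the site tau_i.
A = np.array([[f(t) for f in basis] for t in tau])

# Interpolate y_i = exp(tau_i): solve A c = y for the coefficient vector c.
y = np.exp(tau)
c = np.linalg.solve(A, y)

# The interpolant p(x) = sum_j c_j f_j(x) reproduces the data at the sites.
p = lambda x: sum(cj * f(x) for cj, f in zip(c, basis))
print([p(t) for t in tau])  # agrees with y up to rounding
```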
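A small numerical illustration (chosen here) of why dist(A, B) is not symmetric while the Hausdorff distance is: take A = {0} and B = {0, 3} on the real line. Then

[math]\displaystyle{ \operatorname{dist}(A, B) = 0, \qquad \operatorname{dist}(B, A) = 3, \qquad \rho_H(A, B) = \max(0, 3) = 3. }[/math]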
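The standard complex form of the Fourier series (recalled here; the display is elided in the quote) is

[math]\displaystyle{ f(x) \sim \sum_{k=-\infty}^{\infty} c_k e^{ikx}, \qquad c_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t)\, e^{-ikt}\, dt. }[/math]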
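The classical bound elided in the Markov inequality entry above is, for reference,

[math]\displaystyle{ |p'(x)| \le n^2 \qquad \text{for all } x \in [-1, 1], }[/math]

with equality at x = ±1 for the Chebyshev polynomial [math]\displaystyle{ T_n }[/math].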
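A short sketch contrasting the two notions (the choices of g, the vector a, and the dimension d = 2 are assumptions made for illustration): a radial basis function depends on x only through the distance ‖x − a‖, while a ridge function depends on x only through the inner product x · a.

```python
import numpy as np

a = np.array([1.0, -2.0])  # a point / direction in R^d, here d = 2 (illustrative)

def radial(x, g=lambda r: np.exp(-r**2)):
    # radial basis function: g applied to the Euclidean distance from x to a
    return g(np.linalg.norm(x - a))

def ridge(x, g=np.sin):
    # ridge function: g applied to the inner product x . a;
    # it is constant on every hyperplane x . a = const
    return g(np.dot(x, a))

x = np.array([0.5, 0.5])
print(radial(x), ridge(x))
```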
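One common way to generate the B-splines [math]\displaystyle{ N(\cdot \mid t_i, \ldots, t_{i+k}) }[/math] appearing in the spline entry (recalled here for reference, in the order-k convention) is the Cox–de Boor recurrence: with [math]\displaystyle{ N_{i,1} }[/math] the characteristic function of [math]\displaystyle{ [t_i, t_{i+1}) }[/math],

[math]\displaystyle{ N_{i,k}(x) = \frac{x - t_i}{t_{i+k-1} - t_i}\, N_{i,k-1}(x) + \frac{t_{i+k} - x}{t_{i+k} - t_{i+1}}\, N_{i+1,k-1}(x), }[/math]

where any term with a zero denominator is taken to be zero; [math]\displaystyle{ N_{i,k} }[/math] is a piecewise polynomial of degree k − 1 supported on [math]\displaystyle{ [t_i, t_{i+k}] }[/math].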
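The classical example of an orthogonal wavelet (recalled here, not part of the quoted text) is the Haar function

[math]\displaystyle{ \psi(x) = \begin{cases} 1, & 0 \le x \lt \tfrac{1}{2}, \\ -1, & \tfrac{1}{2} \le x \lt 1, \\ 0, & \text{otherwise;} \end{cases} }[/math]

the system ψ(2^k x − j), k, j = 0, ±1, ±2, …, is orthogonal in L2(−∞,∞), and after normalization by [math]\displaystyle{ 2^{k/2} }[/math] it becomes an orthonormal basis of L2(−∞,∞).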