18.335J | Spring 2019 | Graduate

Introduction to Numerical Methods

Week 13

Lecture 35: Chebyshev Approximation

Summary

Explained the connection of Clenshaw-Curtis quadrature and cosine series to Chebyshev polynomials. This leads into the general topic of Chebyshev approximation, and how we can approximate any smooth function on a finite interval by a polynomial, with exponential accuracy in the degree of the polynomial for analytic functions, as long as we interpolate via Chebyshev points.
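
A minimal sketch of this in plain Julia (the test function and the degrees tried are illustrative choices): interpolate at the Chebyshev points \(x_j=\cos(j\pi/n)\) and watch the maximum error fall roughly geometrically with the degree \(n\).

    # Plain-Julia sketch: interpolate a smooth function at the Chebyshev points
    # x_j = cos(jπ/n) and watch the maximum error decay with the degree n.
    # The test function f and the degrees tried below are illustrative choices.

    f(x) = exp(x) * sin(5x)                  # a smooth (analytic) test function on [-1, 1]

    # Chebyshev coefficients a_0..a_n from samples at x_j = cos(jπ/n)
    # (a direct O(n^2) type-I DCT; in practice one would use an FFT-based DCT)
    function chebcoeffs(f, n)
        x  = [cos(j*π/n) for j in 0:n]       # Chebyshev (extrema) points
        fx = f.(x)
        a  = zeros(n+1)
        for k in 0:n
            s = 0.5*(fx[1] + fx[end]*cos(k*π))   # endpoint terms j = 0, n get weight 1/2
            for j in 1:n-1
                s += fx[j+1] * cos(k*j*π/n)
            end
            a[k+1] = 2*s/n
        end
        a[1] /= 2; a[end] /= 2               # standard halving of the first/last coefficients
        return a
    end

    # Evaluate the interpolant p(x) = Σ_k a_k T_k(x), using T_k(x) = cos(k*acos(x))
    chebeval(a, x) = sum(a[k+1]*cos(k*acos(x)) for k in 0:length(a)-1)

    # Maximum error on a fine grid drops roughly geometrically with n:
    xs = range(-1, 1, length=1001)
    for n in (4, 8, 16, 32)
        a = chebcoeffs(f, n)
        println("degree ", n, ": max error ≈ ",
                maximum(abs(f(x) - chebeval(a, x)) for x in xs))
    end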

Using Chebyshev approximation, explained how lots of problems can be solved by first approximating a nasty function via a polynomial, at which point one can just use easy methods for polynomials. Showed examples of root finding, minimization, integration, and solving ODEs via the ApproxFun package for Julia, which implements a modern version of these ideas (following in the footsteps of the pioneering Chebfun package for MATLAB).
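
A hedged sketch of this workflow with ApproxFun (the integrand, interval, and specific calls below are illustrative, and exact API details can differ between package versions):

    using ApproxFun    # Chebyshev-based function approximation in Julia

    f = Fun(x -> exp(x) * cos(10x), -1..1)   # build a Chebyshev approximation of f on [-1, 1]

    roots(f)           # root finding: all roots of f in [-1, 1] via its polynomial approximation
    sum(f)             # integration: ∫_{-1}^{1} f(x) dx from the Chebyshev coefficients
    crit = roots(f')   # minimization reduces to roots of the derivative f'
    minimum(f.(crit))  # smallest critical value (compare endpoint values too for a global minimum)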


Lecture 36: Integration with Weight Functions, and Gaussian Quadrature

Summary

Discussed integration with weight functions: \(I = \int w(x)\,f(x)\,dx \approx I_n = \sum_i w_i f(x_i)\). Even if the “weight” \(w(x)\) is “nasty” (discontinuous, singular, highly oscillatory…), we can do some (possibly expensive) precomputations of the \(w_i\) and \(x_i\) once and re-use them for integrating many “nice” (smooth) functions \(f(x)\) with only a few quadrature points.
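
For example, a minimal sketch (with hypothetical, already-precomputed nodes x and weights w): once the rule is known, each new integrand costs only a handful of function evaluations.

    # Once nodes x and weights w for ∫ w(x) f(x) dx have been precomputed (by whatever
    # possibly expensive method), integrating each new smooth f is just a weighted sum:
    weighted_quad(f, x, w) = sum(w .* f.(x))

    # Re-use the same (hypothetical) precomputed rule for many integrands:
    # weighted_quad(sin, x, w)
    # weighted_quad(t -> exp(-t^2), x, w)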

One approach to this is weighted Clenshaw-Curtis quadrature: expand \(f(x)\) in Chebyshev polynomials, i.e. \(f(\cos\theta)\) in a cosine series, as before via a trapezoidal rule (a DCT), and then compute the integrals \(\int_0^\pi w(\cos\theta)\cos(k\theta)\sin(\theta)\,d\theta\) in whatever way you can … possibly by a very slowly converging method, but you only need to compute these weight integrals once and can then re-use them for many different \(f(x)\).
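
A plain-Julia sketch of this weighted Clenshaw-Curtis idea, assuming the chebcoeffs routine from the earlier sketch and an illustrative discontinuous weight (the crude moment computation is intentionally simple and slow, since it is only done once):

    # Moments M_k = ∫_0^π w(cos θ) cos(kθ) sin(θ) dθ = ∫_{-1}^{1} w(x) T_k(x) dx,
    # precomputed once by a crude (slowly converging) midpoint rule in θ.
    w(x) = x > 0 ? 1.0 : 0.1                 # an illustrative discontinuous weight on [-1, 1]

    function chebmoments(w, n; m = 10^5)
        θ = ((1:m) .- 0.5) .* (π/m)          # midpoint nodes in θ
        return [(π/m) * sum(w(cos(t)) * cos(k*t) * sin(t) for t in θ) for k in 0:n]
    end

    # Integrating a new smooth f then needs only its first few Chebyshev coefficients
    # (chebcoeffs as in the earlier sketch): ∫ w f ≈ Σ_k a_k M_k.
    weighted_cc(f, M) = sum(chebcoeffs(f, length(M)-1) .* M)

    M = chebmoments(w, 20)                   # one-time precomputation of the weight integrals
    weighted_cc(exp, M)                      # ≈ ∫_{-1}^{1} w(x) eˣ dx; re-use M for many f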

An alternative approach (for real \(w \ge 0\), and \(w > 0\) almost everywhere) is Gaussian quadrature: fit \(f(x)\) to a polynomial of degree \(n-1\) by evaluating at \(n\) points consisting of the roots of an orthogonal polynomial \(q_{n+1}\) of degree \(n\), where the polynomials \(\{q_1,q_2,\ldots\}\) are formed via Gram-Schmidt orthogonalization of the basis \(\{1,x,x^2,\ldots\}\) with the inner product \((p,q)=\int w(x)p(x)q(x)\,dx\). There is a beautiful proof that this integrates polynomials exactly up to degree \(2n-1\); it takes \(w(x)\) into account analytically, and by a more complicated analysis it converges exponentially for analytic \(f\). Moreover, it turns out that finding \(\{q_1,q_2,\ldots\}\) is equivalent to performing the Lanczos iteration with the real-symmetric linear operator \(x\), leading to a three-term recurrence. We already saw one such three-term recurrence for the Chebyshev polynomials \(T_n(x)\), corresponding to the weight \(w(x)=1/\sqrt{1-x^2}\), and other weight functions give rise to other orthogonal polynomials with their own three-term recurrences: \(w=1\) gives the Legendre polynomials, but there are also Gegenbauer polynomials, Laguerre polynomials for \(w=e^{-x}\) on \([0,\infty)\), Hermite polynomials for \(w=e^{-x^2}\) on \((-\infty,+\infty)\), and others. Moreover, the eigenvalues of the Lanczos tridiagonal matrix turn out to be precisely the roots of \(q_{n+1}\) that we want, while the eigenvectors give the weights, and for a long time this surprising(!) \(O(n^2)\) “Golub-Welsch” method was the method of choice for finding Gaussian quadrature nodes and weights.
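
As a concrete sketch of the Golub-Welsch idea in Julia for the simplest case \(w=1\) (Gauss-Legendre quadrature; the degree and test integrand are illustrative):

    using LinearAlgebra

    # Golub–Welsch sketch for Gauss–Legendre quadrature (weight w = 1 on [-1, 1]).
    # The Legendre three-term recurrence gives a symmetric tridiagonal "Jacobi" matrix
    # with zero diagonal and off-diagonals β_k = k/√(4k²-1); its eigenvalues are the
    # quadrature nodes, and the squared first components of the normalized eigenvectors,
    # scaled by ∫ w dx = 2, are the weights.
    function gauss_legendre(n)
        β = [k / sqrt(4k^2 - 1) for k in 1:n-1]
        E = eigen(SymTridiagonal(zeros(n), β))
        return E.values, 2 .* E.vectors[1, :].^2     # nodes, weights
    end

    x, w = gauss_legendre(10)
    sum(w .* exp.(x))     # ≈ e - 1/e; exact for polynomial integrands up to degree 2n-1 = 19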


Lecture 37: Adaptive and Multidimensional Quadrature

Summary

Discussed the importance of nested quadrature rules for a posteriori error estimation (Clenshaw-Curtis, Gauss-Kronrod, Gauss-Patterson) and adaptive quadrature. Discussed p-adaptive vs. h-adaptive schemes. Talked about multidimensional integration and the curse of dimensionality.
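
A small illustration of the h-adaptive idea in Julia (using adaptive Simpson with a nested two-half-interval error estimate, rather than the Gauss-Kronrod/Gauss-Patterson rules named above; the tolerance and test integrand are illustrative):

    # h-adaptive quadrature sketch: on each interval, compare a coarse Simpson estimate
    # with the nested estimate from the two half-intervals; subdivide only where the
    # a posteriori error estimate exceeds the tolerance.
    function adaptive_simpson(f, a, b; tol = 1e-10, depth = 50)
        m = (a + b) / 2
        coarse = (b - a)/6  * (f(a) + 4*f(m) + f(b))                                # Simpson on [a, b]
        fine   = (b - a)/12 * (f(a) + 4*f((a+m)/2) + 2*f(m) + 4*f((m+b)/2) + f(b))  # two halves
        if abs(fine - coarse) < 15*tol || depth == 0     # nested rules give the error estimate
            return fine + (fine - coarse)/15             # cheap Richardson-style correction
        end
        return adaptive_simpson(f, a, m; tol = tol/2, depth = depth-1) +
               adaptive_simpson(f, m, b; tol = tol/2, depth = depth-1)
    end

    adaptive_simpson(x -> sqrt(abs(x - 0.3)), 0, 1)      # kink at x = 0.3: subdivision
                                                         # concentrates there automatically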
