18.335J | Spring 2019 | Graduate

Introduction to Numerical Methods

Week 1

Lecture 1: Course Overview, Newton’s Method for Root-Finding

Summary

Brief overview of the huge field of numerical methods and outline of the small portion that this course will cover. Key new concerns in numerical analysis, which don’t appear in more abstract mathematics, are (i) performance (traditionally, arithmetic counts, but now memory access often dominates) and (ii) accuracy (both floating-point roundoff errors and also convergence of intrinsic approximations in the algorithms).

As a starting example, we considered the convergence of Newton’s method (as applied to square roots); see the handout and Julia notebook below.
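For concreteness, here is a minimal Julia sketch of that iteration (our illustration, not the course notebook): applying Newton's method to f(x) = x² − a, the update x − f(x)/f′(x) simplifies to (x + a/x)/2, and the number of correct digits roughly doubles on each step.

```julia
# Newton's method for √a, i.e. a root of f(x) = x^2 - a:
# the Newton update x - f(x)/f'(x) simplifies to (x + a/x)/2.
function newton_sqrt(a; x0 = a, steps = 6)
    x = x0
    for n in 1:steps
        x = (x + a / x) / 2
        # the error is roughly squared on each iteration
        println("step $n: x = $x, error = $(abs(x - sqrt(a)))")
    end
    return x
end

newton_sqrt(2.0)  # converges to 1.4142135623730951 ≈ √2
```

Starting from x₀ = 2, the error shrinks roughly as 9·10⁻², 2·10⁻³, 2·10⁻⁶, 2·10⁻¹² on successive steps, the hallmark of quadratic convergence.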

Further Reading

  • Newton’s method from Wikipedia is a reasonable starting point; googling “Newton’s method” will turn up many more references.
  • Beware that the terminology for the rate of convergence of an iteration (linear, quadratic, etc.) is somewhat different from the terminology for the accuracy of discretization schemes (first-order, second-order, etc.); see e.g. the linked Wikipedia article and the formulas below.
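To make the distinction concrete (this gloss is ours, not from the article): quadratic convergence of an iteration means the error is roughly squared at each step, whereas a second-order discretization has an error that shrinks as the square of the step size:

```latex
e_{n+1} \le C\, e_n^2     % quadratic convergence of an iteration:
                          % correct digits roughly double per step
\text{error} = O(h^2)     % second-order discretization: halving the
                          % step size h cuts the error by about 4x
```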

Lecture 2: Floating-Point Arithmetic

Summary

The basic issue is that, for computer arithmetic to be fast, it has to be done in hardware, operating on numbers stored in a fixed, finite number of digits (bits). As a consequence, only a finite subset of the real numbers can be represented, and the question becomes which subset to store, how arithmetic on this subset is defined, and how to analyze the errors compared to theoretical exact arithmetic on real numbers.

In floating-point arithmetic, we store both an integer coefficient and an exponent in some base: essentially, scientific notation. This allows large dynamic range and fixed relative accuracy: if fl(x) is the closest floating-point number to any real x, then |fl(x) − x| ≤ ε|x|, where ε is the machine precision. This makes error analysis much easier and makes algorithms mostly insensitive to overall scaling or units, but has the disadvantage that it requires specialized floating-point hardware to be fast. Nowadays, essentially all general-purpose computers, and even small devices like cell phones, have floating-point units.

Went through some simple examples in Julia (see notebook below), illustrating basic syntax and a few interesting tidbits, in particular the accuracy of summation algorithms, which we will investigate in more detail later.
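As a hedged sketch of one such tidbit (ours, not the actual course notebook): naive left-to-right summation accumulates roundoff error, while Julia's built-in sum uses pairwise summation and is typically far more accurate.

```julia
# Compare naive left-to-right summation with Julia's built-in sum
# (which uses pairwise summation) on 10^7 random single-precision values.
function naive_sum(x)
    s = zero(eltype(x))
    for v in x
        s += v          # each addition rounds; errors can accumulate
    end
    return s
end

x = rand(Float32, 10^7)
exact = sum(Float64.(x))     # higher-precision reference value
abs(naive_sum(x) - exact)    # naive: error grows with the length of x
abs(sum(x) - exact)          # pairwise: typically orders of magnitude smaller
```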

Overview of floating-point representations, focusing on the IEEE 754 standard (see also the handout from the previous lecture). The key point is that the nearest floating-point number to x, denoted fl(x), has uniform relative precision (as long as |x| and 1/|x| are below some maximum, ≈ 10^308 for double precision): |fl(x) − x| ≤ ε_machine|x|, where ε_machine is the relative “machine precision” (about 10^−16 for double precision). There are also a few special values: ±Inf (e.g. for overflow), NaN, and ±0 (e.g. for underflow).
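A few one-liners (ours, not from the course materials) that exhibit these properties at the Julia REPL:

```julia
eps(Float64)           # machine precision ε_machine ≈ 2.22e-16
floatmax(Float64)      # largest finite double ≈ 1.80e308
nextfloat(1.0) - 1.0   # spacing of floats near 1.0 equals eps
2.0^1024               # Inf: overflow past floatmax
0.0 / 0.0              # NaN: result of an undefined operation
-1.0e-400              # -0.0: underflow to a signed zero
```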

Julia Tutorial

We introduced the Julia programming language, which we will use this term.
