| Day | Topic |
|---|---|
| Mon, Jan 12 | Floating point arithmetic |
| Wed, Jan 14 | Relative error, significant digits |
| Fri, Jan 16 | Taylor's theorem |
We talked about how computers store floating point numbers. Most modern programming languages store floating point numbers using the IEEE 754 standard.
In the IEEE 754 standard, a 64-bit floating point number has the form
$$(-1)^s \times (1.b_{51}b_{50}\cdots b_1 b_0)_2 \times 2^{e - 1023},$$
where $s$ is the sign bit, $e$ is the 11-bit exponent (stored with a bias of 1023), and $b_{51}\cdots b_0$ are the 52 stored bits of the mantissa (the leading 1 is implicit, so there are 53 significant bits in total).
We talked about how to convert between binary
numbers and decimal numbers.
This system works very well, but it leads to weird outcomes like
0.1 + 0.1 + 0.1 = 0.30000000000000004.
When you convert 0.1 to binary, you get an infinitely repeating binary expansion: $(0.1)_{10} = (0.0\overline{0011})_2 = (0.000110011001100\ldots)_2$. So any finite binary representation of 0.1 will have rounding errors.
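Here is a minimal Python sketch of this: it prints the exact value that actually gets stored for 0.1, the 64-bit pattern behind it, and the familiar sum that comes out slightly wrong.

```python
import struct
from decimal import Decimal

# The value actually stored for 0.1 is the nearest 64-bit float,
# which is slightly larger than 1/10:
print(Decimal(0.1))            # 0.100000000000000005551115123...

# The raw 64-bit pattern (sign, exponent, mantissa) behind that value:
print(struct.pack('>d', 0.1).hex())   # 3fb999999999999a

# Three rounded copies of "0.1" add up to something slightly off from 0.3:
print(0.1 + 0.1 + 0.1)         # 0.30000000000000004
print(0.1 + 0.1 + 0.1 == 0.3)  # False
```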
We did the following examples in class:

1. Convert $(10110)_2$ to decimal. (https://youtu.be/a2FpnU9Mm3E)
2. Convert 35 to binary. (https://youtu.be/gGiEu7QTi68)
3. What is the 64-bit string that represents the number 35 in the IEEE standard?
4. What are the largest and smallest 64-bit floating point numbers that can be stored?
5. In Python, compute `2.0**1024` and `2**1024`. Why do you get different results? (There is a short sketch after this list.)
6. In Python, compare `2.0**1024` with `2.0**(-1024)` and `2.0**(-1070)`. What do you notice?
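For the last two exercises, a small sketch like the following shows the overflow and underflow behavior (the exact printed values may vary slightly, but the behavior should not):

```python
import sys

print(sys.float_info.max)      # largest finite 64-bit float, about 1.8e308

# 2.0**1024 is float exponentiation; the result exceeds the largest
# 64-bit float, so Python raises an OverflowError.
try:
    print(2.0**1024)
except OverflowError as err:
    print("overflow:", err)

# 2**1024 is exact integer arithmetic; Python integers have no fixed
# size, so this works fine (the result has 309 digits).
print(len(str(2**1024)), "digits")

# Very small powers of 2 underflow gradually, passing through "subnormal"
# numbers with reduced precision before finally reaching 0.0.
print(2.0**(-1024))   # a subnormal number: tiny but still nonzero
print(2.0**(-1070))   # even closer to zero, with only a few bits of precision left
print(2.0**(-1100))   # too small even for a subnormal, so it underflows to 0.0
```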
Today we talked about significant digits. Here is a quick video on how these work.
Rules for Significant Digits.
Next we defined absolute and relative error:
Definition. Let $\hat{x}$ be an approximation of a real number $x$. The absolute error is $|x - \hat{x}|$, and the relative error is $\dfrac{|x - \hat{x}|}{|x|}$.
The number of significant digits is roughly the negative base-10 logarithm of the relative error, so you can think of significant digits as a measure of relative error.
Intuitively, addition and subtraction "play nice" with absolute error, while multiplication and division "play nice" with relative error. This can lead to problems:
Catastrophic cancellation. When you subtract two numbers of roughly the same size, the relative error can get much worse. For example, both 53.76 and 53.74 have 4 significant digits, but their difference $53.76 - 53.74 = 0.02$ has only 1 significant digit.
Useless precision. If you add two numbers with very different magnitudes, then having a very low relative error in the smaller one will not be useful.
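To make the catastrophic cancellation example concrete, here is a small Python sketch; the "true" values below are made up just for this illustration.

```python
# Hypothetical true values (made up for illustration), and the same values
# rounded to 4 significant digits.
x_true, y_true = 53.7617, 53.7421
x_approx, y_approx = 53.76, 53.74

def rel_err(exact, approx):
    """Relative error |exact - approx| / |exact|."""
    return abs(exact - approx) / abs(exact)

print(rel_err(x_true, x_approx))                      # roughly 3e-5 (about 4-5 significant digits)
print(rel_err(y_true, y_approx))                      # roughly 4e-5 (about 4-5 significant digits)
print(rel_err(x_true - y_true, x_approx - y_approx))  # roughly 2e-2 (only 1-2 significant digits)
```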
We did these examples in class.
Rounding Error. The worst-case relative error from rounding a number to $n$ significant digits in base $b$ is $\tfrac{1}{2} b^{1-n}$. Since 64-bit floating point numbers have 53 significant binary digits, they typically have a relative error of up to $\tfrac{1}{2}\cdot 2^{-52} = 2^{-53} \approx 1.1 \times 10^{-16}$. This quantity is known as machine epsilon.
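Here is a quick check of this in Python. Note that `sys.float_info.epsilon` reports $2^{-52}$ (the gap between 1 and the next representable float), which is twice the worst-case rounding error $2^{-53}$ described above; different references attach the name "machine epsilon" to either quantity.

```python
import sys

print(sys.float_info.epsilon)   # 2**-52, about 2.22e-16: gap between 1.0 and the next float
print(2.0**-53)                 # about 1.11e-16: worst-case relative rounding error

# Adding 2**-53 to 1.0 rounds back down to 1.0 (the tie is broken toward the
# even mantissa), while adding 2**-52 is enough to change the result:
print(1.0 + 2.0**-53 == 1.0)    # True
print(1.0 + 2.0**-52 == 1.0)    # False
```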
You can sometimes re-write algorithms on a computer to avoid issues with floating point numbers such as overflow/underflow and catastrophic cancellation.
1. Consider the function . Use Python to compute .
2. The exact answer to the previous question is (accurate to 22 decimal places). Use this to find the relative error in your previous calculation.
3. A better way to compute is to use a trick that avoids the catastrophic cancellation: Use this new formula to compute . What is the relative error now? (A hypothetical example of this kind of rewrite is sketched below.)
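The function from class is not written out above, so as a stand-in, here is a sketch with a hypothetical function that has the same problem: for small $x$, computing $\sqrt{x^2+1} - 1$ directly loses essentially all significant digits, while the algebraically equivalent formula $x^2/(\sqrt{x^2+1} + 1)$ does not.

```python
import math

x = 1e-8

# Direct formula: sqrt(1 + 1e-16) rounds to exactly 1.0, so the subtraction
# cancels catastrophically and returns 0.0.
direct = math.sqrt(x**2 + 1) - 1
print(direct)

# Equivalent formula obtained by multiplying and dividing by the conjugate
# sqrt(x**2 + 1) + 1; no subtraction of nearly equal numbers occurs.
stable = x**2 / (math.sqrt(x**2 + 1) + 1)
print(stable)          # about 5e-17, correct to full precision here

# Relative error of the direct formula, using the stable value as "exact":
print(abs(direct - stable) / abs(stable))   # 1.0, i.e. 100% relative error
```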
Stirling's formula is a famous approximation for the factorial function. We compared Stirling's formula against the exact factorial using the following code.
```python
import math

n = 100
print(float(math.factorial(n)))   # the exact value of n!, converted to a float
f = lambda n: math.sqrt(2 * math.pi * n) * n ** n / math.exp(n)
print(f(n))                       # Stirling's approximation of n!
```

Our formula worked well until $n$ got large, and then we got an overflow error. The problem was that $n^n$ got too big to convert to a floating point number. But you can prevent the overflow error by adjusting the formula slightly:
```python
n = 143
f = lambda n: math.sqrt(2 * math.pi * n) * (n / math.e) ** n
print(f(n))   # no overflow at this n, since (n / math.e) ** n stays in range
```

Today we reviewed Taylor series. We recalled several important Maclaurin series (which are Taylor series with center $0$).
We graphed Maclaurin polynomials for two of these functions on Desmos to see how they converge with different radii of convergence.
We also used a Maclaurin series to approximate an integral.
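The specific integral is not written out above; as an illustration of the technique (with an integral chosen here just for the example, not necessarily the one from class), suppose we want $\int_0^1 e^{-x^2}\,dx$. Substituting $-x^2$ into the Maclaurin series for $e^x$ and integrating term by term gives a rapidly converging sum:

```python
from math import factorial

# Integrating the Maclaurin series of exp(-x**2) term by term over [0, 1]:
#   integral_0^1 exp(-x**2) dx  =  sum over k >= 0 of (-1)**k / (k! * (2k + 1))
def integral_approx(num_terms):
    return sum((-1)**k / (factorial(k) * (2*k + 1)) for k in range(num_terms))

for terms in (2, 4, 6, 10):
    print(terms, integral_approx(terms))
# By 10 terms the partial sums have settled down to about 0.7468241...
```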
Then we did the following workshop in class.
| Day | Topic |
|---|---|
| Mon, Jan 19 | Martin Luther King Day - no class |
| Wed, Jan 21 | Taylor's theorem - cont'd |
| Fri, Jan 23 | Babylonian algorithm for square roots |
Today we reviewed some theorems that we will need throughout the course. The first is probably the most important theorem in numerical analysis since it lets us estimate error when using Taylor series approximations.
Taylor's Theorem. Let $f$ be a function that has $n+1$ derivatives in the interval between $a$ and $x$. Then there exists a $c$ strictly inside the interval from $a$ to $x$ such that
$$f(x) = T_n(x) + \frac{f^{(n+1)}(c)}{(n+1)!}(x - a)^{n+1},$$
where $T_n(x)$ is the $n$th degree Taylor polynomial for $f$ centered at $a$.
A special case of Taylor's theorem is when $n = 0$. Then you get the Mean Value Theorem (MVT):
Mean Value Theorem. Let $f$ be a function that is differentiable in the interval between $a$ and $b$. Then there exists a $c$ strictly inside the interval from $a$ to $b$ such that
$$f(b) = f(a) + f'(c)(b - a).$$
We did this example:
Then we started this workshop
Last time we started this workshop about using Taylor's remainder formula and the triangle inequality to find upper bounds for functions. Today we revisited that workshop, but first we talked about the following.
Taylor's Error Formula. Let $f$ be a function that has $n+1$ derivatives in the interval between $a$ and $x$. Then
$$|f(x) - T_n(x)| \le \frac{M}{(n+1)!}\,|x - a|^{n+1},$$
where $M$ is the maximum value of $|f^{(n+1)}(c)|$ with $c$ between $a$ and $x$.
This error formula gives a way to estimate the worst case (absolute) error when you use a Taylor polynomial approximation.
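Here is a quick numerical check of the error formula on a made-up example (not one from class): approximate $e^{0.5}$ with the degree-2 Maclaurin polynomial $T_2(x) = 1 + x + x^2/2$, and compare the actual error to the bound with $M = e^{0.5}$, the maximum of the third derivative $e^c$ for $c$ between $0$ and $0.5$.

```python
import math

x = 0.5
t2 = 1 + x + x**2 / 2            # degree-2 Maclaurin polynomial for e**x

actual_error = abs(math.exp(x) - t2)

# Bound from Taylor's error formula with n = 2, a = 0, and M = e**0.5:
#   M / (n+1)! * |x - a|**(n+1)
bound = math.exp(0.5) / math.factorial(3) * abs(x)**3

print(actual_error)   # about 0.024
print(bound)          # about 0.034, so the bound indeed holds
```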
After that we talked about the triangle inequality.
Triangle Inequality. For any numbers $a$ and $b$ (real or complex),
$$|a + b| \le |a| + |b|.$$
We talked about how you can use the triangle inequality to find upper bounds for functions. We also talked about tight upper bounds versus upper bounds that are not tight. We did this example.
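The example from class is not reproduced here, but a typical use of the idea (an example made up for illustration) looks like this: if $|x| \le 2$, then by the triangle inequality
$$|x^3 - 5x + 1| \le |x|^3 + 5|x| + 1 \le 2^3 + 5 \cdot 2 + 1 = 19.$$
This is a valid upper bound but not a tight one, since the actual maximum of $|x^3 - 5x + 1|$ on $[-2, 2]$ is quite a bit smaller; a tight bound would be that maximum itself.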
| Day | Topic |
|---|---|
| Mon, Jan 26 | Bisection method |
| Wed, Jan 28 | Newton's method |
| Fri, Jan 30 | Rates of convergence |
| Day | Topic |
|---|---|
| Mon, Feb 2 | Secant method |
| Wed, Feb 4 | Fixed point iteration |
| Fri, Feb 6 | Newton's method with complex numbers |
| Day | Topic |
|---|---|
| Mon, Feb 9 | Solving nonlinear systems |
| Wed, Feb 11 | Systems of linear equations |
| Fri, Feb 13 | LU decomposition |
| Day | Topic |
|---|---|
| Mon, Feb 16 | LU decomposition with pivoting |
| Wed, Feb 18 | Review |
| Fri, Feb 20 | Midterm 1 |
| Day | Topic |
|---|---|
| Mon, Feb 23 | Matrix norms and conditioning |
| Wed, Feb 25 | Inner-products and orthogonality |
| Fri, Feb 27 | Unitary and Hermitian matrices |
| Day | Topic |
|---|---|
| Mon, Mar 2 | Gram-Schmidt algorithm |
| Wed, Mar 4 | QR decomposition |
| Fri, Mar 6 | Orthogonal projections |
| Day | Topic |
|---|---|
| Mon, Mar 16 | Least squares problems |
| Wed, Mar 18 | Least squares problems - cont'd |
| Fri, Mar 20 | Orthogonal functions |
| Day | Topic |
|---|---|
| Mon, Mar 23 | Continuous least squares |
| Wed, Mar 25 | Fourier series |
| Fri, Mar 27 | Fourier series - cont'd |
| Day | Topic |
|---|---|
| Mon, Mar 30 | Numerical integration |
| Wed, Apr 1 | Newton-Cotes methods |
| Fri, Apr 3 | Error in Newton-Cotes methods |
| Day | Topic |
|---|---|
| Mon, Apr 6 | Numerical differentiation |
| Wed, Apr 8 | Review |
| Fri, Apr 10 | Midterm 2 |
| Day | Topic |
|---|---|
| Mon, Apr 13 | Eigenvectors and eigenvalues |
| Wed, Apr 15 | Power iteration |
| Fri, Apr 17 | Schur triangular decomposition |
| Day | Topic |
|---|---|
| Mon, Apr 20 | QR algorithm |
| Wed, Apr 22 | Singular value decomposition |
| Fri, Apr 24 | Applications of the SVD |
| Mon, Apr 27 | Last day, recap & review |