Numerical Analysis Notes

Math 342 - Spring 2026


Week 1 Notes

Day Topic
Mon, Jan 12 Floating point arithmetic
Wed, Jan 14 Relative error, significant digits
Fri, Jan 16 Taylor's theorem

Mon, Jan 12

We talked about how computers store floating point numbers. Most modern programming languages store floating point numbers using the IEEE 754 standard.

In the IEEE 754 standard, a 64-bit floating point number has the form

๐ฏ๐š๐ฅ๐ฎ๐ž=(โˆ’1)๐ฌ๐ข๐ ๐งร—2(๐ž๐ฑ๐ฉ๐จ๐ง๐ž๐ง๐ญโˆ’1023)ร—(1+๐Ÿ๐ซ๐š๐œ๐ญ๐ข๐จ๐ง)\mathbf{value} = (-1)^\mathbf{sign} \, \times \, 2^{(\mathbf{exponent} - 1023)} \, \times \, (1 + \mathbf{fraction})

where the sign is a 1-bit field, the exponent is an 11-bit field, and the fraction is a 52-bit field.

We talked about how to convert between binary numbers and decimal numbers. This system works very well, but it leads to weird outcomes like 0.1 + 0.1 + 0.1 = 0.30000000000000004.

When you convert \tfrac{1}{10} = 0.1 to binary, you get an infinitely repeating binary expansion: \tfrac{1}{10} = (0.000110011001100\ldots)_2. So any finite binary representation of \tfrac{1}{10} will have rounding errors.
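You can see the stored value directly in Python:

print(0.1 + 0.1 + 0.1)         # 0.30000000000000004
print(0.1 + 0.1 + 0.1 == 0.3)  # False
print(f"{0.1:.20f}")           # shows the rounding error in the stored value of 0.1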

We did the following examples in class:

  1. Convert (10110)_2 to decimal. (https://youtu.be/a2FpnU9Mm3E)

  2. Convert 35 to binary. (https://youtu.be/gGiEu7QTi68)

  3. What is the 64-bit string that represents the number 35 in the IEEE standard? (See the code sketch after this list.)

  4. What are the largest and smallest 64-bit floating point numbers that can be stored?

  5. In Python, compute 2.0**1024 and 2**1024. Why do you get different results?

  6. In Python, compare 2.0**1024 with 2.0**(-1024) and 2.0**(-1070). What do you notice?
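For example 3 above, here is one way to extract the 64-bit string in Python (a sketch using the standard struct module; '>d' packs a float as 8 big-endian bytes). Since 35 = (100011)_2 = (1.00011)_2 \times 2^5, the exponent field stores 1023 + 5 = 1028.

import struct

bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', 35.0))
print(bits[0], bits[1:12], bits[12:])   # sign, exponent, and fraction fields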

Wed, Jan 14

Today we talked about significant digits. Here is a quick video on how these work.

Rules for Significant Digits.

  1. Addition/Subtraction. The last common digit that is significant for both numbers is the last significant digit of the answer.
  2. Multiplication/Division. Result has significant digits equal to the minimum number of significant digits of the two inputs.

Next we defined absolute and relative error:

Definition. Let x^* be an approximation of a real number x. The absolute error is |x - x^*| and the relative error is \dfrac{|x - x^*|}{|x|} (assuming x \ne 0).

The negative base-10 logarithm of the relative error is approximately the number of significant digits, so you can think of significant digits as a measure of relative error.

Intuitively, addition & subtraction "play nice" with absolute error while multiplication and division "play nice" with relative error. This can lead to problems:

  1. Catastrophic cancellation. When you subtract two numbers of roughly the same size, the relative error can get much worse. For example, both 53.76 and 53.74 have 4 significant digits, but 53.76 - 53.74 = 0.02 only has 1 significant digit.

  2. Useless precision. If you add two numbers with very different magnitudes, then having a very low relative error in the smaller one will not be useful.

We did these examples in class.

  1. \pi = 3.141592.... What are the absolute and relative errors if you round \pi to 3.14?

Rounding Error. The worst case relative error from rounding to k significant digits is \mathbf{relative~error} \le \begin{cases} 5 \times 10^{-k} & (\text{decimal}) \\ 2^{-k} & (\text{binary}). \end{cases} Since 64-bit floating point numbers have 53 significant binary digits, they typically have a relative error of up to 2^{-53} \approx 1.11 \times 10^{-16}. This quantity is known as machine epsilon.
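You can check this in Python. Note that sys.float_info.epsilon is 2^{-52} (the gap between 1.0 and the next larger float), which is twice the worst-case rounding error 2^{-53}:

import sys

print(sys.float_info.epsilon)   # 2.220446049250313e-16
print(1.0 + 2**-53 == 1.0)      # True: an error this small is lost to rounding at 1.0
print(1.0 + 2**-52 == 1.0)      # False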

You can sometimes re-write algorithms on a computer to avoid issues with floating point numbers such as overflow/underflow and catastrophic cancellation.

  1. Consider the function f(x) = \dfrac{1 - \cos x}{\sin x}. Use Python to compute f(10^{-7}).

  2. The exact answer to the previous question is 0.00000005 = 5 \times 10^{-8} (accurate to 22 decimal places). Use this to find the relative error in your previous calculation.

  3. A better way to compute f(x) is to use a trick to avoid the catastrophic cancellation: f(x) = \dfrac{1-\cos x}{\sin x} = \dfrac{1 - \cos x}{\sin x} \cdot \left( \dfrac{1+ \cos x}{1+\cos x} \right) = \dfrac{\sin x}{1 + \cos x}. Use this new formula to compute f(10^{-7}). What is the relative error now? (See the code sketch after this list.)
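Here is a sketch of examples 1-3 in Python:

import math

x = 1e-7
f1 = (1 - math.cos(x)) / math.sin(x)   # catastrophic cancellation in the numerator
f2 = math.sin(x) / (1 + math.cos(x))   # algebraically equivalent, but stable
exact = 5e-8
print(f1, abs(f1 - exact) / exact)     # relative error around 1e-3
print(f2, abs(f2 - exact) / exact)     # relative error near machine epsilon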

Stirling's formula is a famous approximation for the factorial function: n! \approx \sqrt{2 \pi n} \, \frac{n^n}{e^n}. We tested Stirling's formula using the following code.

import math
n = 100
print(float(math.factorial(n)))
f = lambda n: math.sqrt(2 * math.pi * n) * n ** n / math.exp(n)
print(f(n))

Our formula worked well until n = 143, when we got an overflow error. The problem was that n^n got too big to convert to a floating point number. But you can prevent the overflow error by adjusting the formula slightly:

n = 143
f = lambda n: math.sqrt(2 * math.pi * n) * (n / math.e) ** n
print(f(n))
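Another fix (not something we did in class) is to work in log space. Python's math.lgamma computes \ln(n!) = \ln \Gamma(n+1) without overflowing, so you can compare it with the logarithm of Stirling's formula for any n:

import math

n = 1000
print(math.lgamma(n + 1))                                        # ln(n!)
print(0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1))   # ln of Stirling's approximation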

Fri, Jan 16

Today we reviewed Taylor series. We recalled the following important Maclaurin series (which are Taylor series with center c = 0): e^x = \sum_{n=0}^\infty \frac{x^n}{n!}, \quad \sin x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}, \quad \cos x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!}, \quad \frac{1}{1-x} = \sum_{n=0}^\infty x^n.

We graphed Maclaurin polynomials for \cos x and \dfrac{1}{1-x} on Desmos to see how they converge with different radii of convergence.

We also used the Maclaurin series for \dfrac{\sin x}{x} to approximate the integral

\displaystyle \int_{-\pi}^\pi \frac{\sin x}{x} \, dx.
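Integrating the Maclaurin series term by term gives \int_{-\pi}^{\pi} \frac{\sin x}{x}\,dx = 2 \sum_{k=0}^{\infty} \frac{(-1)^k \pi^{2k+1}}{(2k+1)(2k+1)!}, which is easy to sum in Python (a quick sketch):

import math

total = 0.0
for k in range(8):
    total += 2 * (-1)**k * math.pi**(2*k + 1) / ((2*k + 1) * math.factorial(2*k + 1))
    print(k, total)   # the partial sums converge quickly to about 3.7038749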

Then we did the following workshop in class.


Week 2 Notes

Day Topic
Mon, Jan 19 Martin Luther King day - no class
Wed, Jan 21 Taylor's theorem - con'd
Fri, Jan 23 Bounding error

Wed, Jan 21

Today we reviewed some theorems that we will need throughout the course. The first is probably the most important theorem in numerical analysis since it lets us estimate error when using Taylor series approximations.

Taylor's Theorem. Let f be a function that has (n+1) derivatives in the interval between x and c. Then there exists a z strictly inside the interval from x to c such that f(x) - P_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!} (x-c)^{n+1} where P_n is the nth degree Taylor polynomial for f centered at c.

A special case of Taylor's theorem is when n = 0. Then you get the Mean Value Theorem (MVT):

Mean Value Theorem. Let f be a function that is differentiable in the interval between a and b. Then there exists a c strictly inside the interval from a to b such that f'(c) = \frac{f(b) - f(a)}{b-a}.

We did this example:

  1. Use Taylor's theorem to estimate the error in using the 0th and 2nd degree Maclaurin polynomials to estimate \cos(0.03) and \cos(0.6).
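Here is a quick numerical check of the 2nd degree case (for \cos x, P_2(x) = 1 - x^2/2, and we can take M = 1 since |f'''(z)| = |\sin z| \le 1):

import math

for x in (0.03, 0.6):
    p2 = 1 - x**2 / 2
    bound = abs(x)**3 / math.factorial(3)   # Taylor bound with M = 1
    print(x, abs(math.cos(x) - p2), bound)  # the actual error stays below the bound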

Then we started this workshop.

Fri, Jan 23

Last time we started this workshop about using Taylor's remainder formula and the triangle inequality to find upper bounds for functions. Today we revisited that workshop, but first we talked about the following.

Taylor's Error Formula. Let f be a function that has (n+1) derivatives in the interval between x and c. Then |f(x) - P_n(x)| \le \frac{M \cdot |x-c|^{n+1}}{(n+1)!} where M is the maximum value of |f^{(n+1)}(z)| with z between x and c.

This error formula gives a way to estimate the worst case (absolute) error when you use a Taylor polynomial approximation.

  1. The 3rd degree Taylor polynomial for \cos x centered at c = \pi is P_3(x) = -1 + \tfrac{1}{2}(x-\pi)^2 (the coefficient on the 3rd degree term is zero). What is the worst case absolute error using this polynomial to estimate \cos 3?
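For reference, f^{(4)}(z) = \cos z when f(x) = \cos x, so we can take M = 1 and the bound works out to |\cos 3 - P_3(3)| \le \dfrac{|3 - \pi|^4}{4!} \approx \dfrac{(0.1416)^4}{24} \approx 1.7 \times 10^{-5}.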

After that we talked about the triangle inequality.

Triangle Inequality. For any numbers a and b (real or complex), |a+b| \le |a| + |b|.

We talked about how you can use the triangle inequality to find upper bounds for functions. We also talked about tight upper bounds versus upper bounds that are not tight. We did this example.

  1. Use the triangle inequality to find an upper bound for |x^2 + 3x \cos x|.

Week 3 Notes

Day Topic
Mon, Jan 26 No class (snow day)
Wed, Jan 28 Bisection method
Fri, Jan 30 Newton's method

Wed, Jan 28

We talked about how to find the roots of a function. Recall that a root (AKA a zero) of a function f(x)f(x) is an xx-value where the function hits the xx-axis. We introduced an algorithm called the bisection method for finding roots of a continuous function that changes sign from positive to negative or negative to positive on an interval [a,b][a, b]. We wrote the following code to implement this algorithm.

def bisection(f, a, b, N):
    """
    Applies the bisection method recursively up to N times to estimate a root 
    of a continuous function f on an interval [a,b]. 
    """
    m = (a + b) / 2
    if N == 0 or f(m) == 0:
        return m
    if (f(a) > 0 and f(m) > 0) or (f(a) < 0 and f(m) < 0):
        return bisection(f, m, b, N - 1)
    else:
        return bisection(f, a, m, N - 1)
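For instance, using the function above:

import math

print(bisection(lambda x: math.tan(x) - 1, 0, 1, 30))   # ~0.7853981634, which is pi/4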

We tested this algorithm on the function f(x) = \tan x - 1, which has a root at \tfrac{\pi}{4}.

One feature of the bisection method is that we can easily find the worst case absolute error in our approximation of a root. Every time we repeat the algorithm, we cut the interval in half, so the error bound is reduced by a factor of 2: \text{Absolute error} \le \frac{(b-a)}{2^{N+1}}. We saw that it takes about 10 iterations to gain 3 more decimal places of accuracy (because 2^{10} \approx 10^3).

Fri, Jan 30

Today we covered Newton's method. This is probably the most important method for finding roots of differentiable functions. The formula is x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}. The idea is to start with a guess x_0 for a root and then repeatedly improve the guess by following the tangent line at x_n until it hits the x-axis.

  1. Use Newton's method to find roots of \tan x - 1.

  2. How can you use Newton's method to find e? Hint: use f(x) = \ln x - 1. (A code sketch follows this list.)
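Here is a sketch of example 2 in Python:

import math

x = 2.0   # initial guess
for _ in range(5):
    x = x - (math.log(x) - 1) / (1 / x)   # f(x) = ln(x) - 1, f'(x) = 1/x
    print(x)                              # converges to e = 2.718281828...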

Theorem. Let f \in C^2[a,b] and suppose that f has a root r \in (a,b). Suppose that there are constants L, M > 0 such that |f'(x)| \ge L and |f''(x)| \le M for all x \in [a,b]. Then |x_{n+1} - r| \le \frac{M}{2L} |x_n - r|^2 whenever x_n \in [a,b].

Proof. Start with the first degree Taylor polynomial (centered at x_n) for f(r), including the remainder term, and the Newton's method iteration formula:

f(r) = f(x_n) + f'(x_n)(r-x_n) + \frac{1}{2} f''(z)(r-x_n)^2 = 0, \quad \text{and} \quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \;\Rightarrow\; f'(x_n)(x_{n+1} - x_n) + f(x_n) = 0.

Subtract these two formulas to get a formula that relates (r - x_{n+1}) with (r - x_n):

f'(x_n)(r - x_{n+1}) + \frac{1}{2} f''(z)(r - x_n)^2 = 0.

Use this to get an upper bound on |r - x_{n+1}|. □

Corollary. Let C = \dfrac{2L}{M}. As long as the Newton's method iterates x_n stay in [a,b], the absolute error after n steps will satisfy \left| \frac{x_n - r}{C} \right| \le \left| \frac{x_0 - r}{C} \right|^{2^n}.

This corollary explains why, if you start with a good guess in Newton's method, the number of correct decimal places tends to double with each iteration!
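You can watch the digits double with a quick experiment (here Newton's method for f(x) = x^2 - 2, whose root is \sqrt{2}):

import math

x = 1.0
for n in range(1, 6):
    x = x - (x**2 - 2) / (2 * x)
    print(n, abs(x - math.sqrt(2)))   # the error is roughly squared at each step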


Week 4 Notes

Day Topic
Mon, Feb 2 Secant method
Wed, Feb 4 Fixed point iteration
Fri, Feb 6 Newton's method with complex numbers

Mon, Feb 2

The secant method is a variation of Newton's method that uses secant lines instead of tangent lines. The advantage of the secant method is that it doesn't require calculating a derivative. The disadvantage is that it is a little slower to converge than Newton's method, but it is still much faster than the bisection method. Here is the formula:

x_{n+1} = x_n - \frac{f(x_n)\,(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}.

  1. Solve the equation 2^x = 10 using the secant method. What would make good initial guesses x_0 and x_1?

     # This code will do one step of the secant method.
     f = lambda x: 2 ** x - 10
     a, b = 3, 3.5
     a, b = b, b - f(b) * (b - a) / (f(b) - f(a)); print(b)
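Repeating that last line converges very quickly. Here is a sketch of the full loop (same f, a, b as above; we stop after 8 steps because once the iterates reach machine precision, f(b) - f(a) would eventually be zero):

f = lambda x: 2 ** x - 10
a, b = 3, 3.5
for _ in range(8):
    a, b = b, b - f(b) * (b - a) / (f(b) - f(a))
    print(b)   # converges to log2(10) = 3.3219280948873626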

Definition. A sequence x_n converges with order \alpha if it converges to r and there are constants C > 0 and \alpha \ge 1 such that \lim_{n \rightarrow \infty} \frac{|x_{n+1} - r|}{|x_n - r|^\alpha} = C.

Convergence of order \alpha = 1 is called linear convergence and convergence of order \alpha = 2 is called quadratic convergence. Newton's method converges quadratically. It turns out that the secant method converges with order \alpha = \frac{1+\sqrt{5}}{2} \approx 1.618, which is the golden ratio!

Wed, Feb 4

Newton's method is a special case of a method known as fixed point iteration. A fixed point of a function f(x) is a number p such that f(p) = p. A real-valued function f(x) has a fixed point on an interval [a,b] if and only if its graph intersects the line y = x on that interval.

  1. Show that \cos x has a fixed point in [0, \tfrac{\pi}{2}].

  2. Explain why f(x) = e^x has no fixed points.

  3. Does \sin x have any fixed points?

A fixed point p is attracting if the recursive sequence defined by x_{n+1} = f(x_n) converges to p for all x_0 sufficiently close to p. It is repelling if points close to p get pushed away from p when you apply the function f. You can draw a picture of these fixed point iterates by drawing a cobweb diagram.

To make a cobweb diagram, start at the point (x_0, x_0) on the line y = x and repeat these steps: draw a vertical segment to the graph of y = f(x), which lands at height f(x_n) = x_{n+1}, and then draw a horizontal segment back to the line y = x, which lands at the point (x_{n+1}, x_{n+1}). We used cobweb diagrams on these examples:

  1. Show that the fixed point of \cos x is attracting by repeatedly iterating.

  2. Show that g(x) = 1 - 2x - x^5 has a fixed point, but it is not attracting.
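A quick check of example 1 in Python:

import math

x = 1.0
for _ in range(30):
    x = math.cos(x)   # iterate the map x -> cos(x)
print(x)   # ~0.739085, the attracting fixed point of cos x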

Theorem. If f has a fixed point p and

  1. |f'(p)| < 1, then p is attracting,
  2. |f'(p)| > 1, then p is repelling,
  3. |f'(p)| = 1, then we get no information.

Proof. Write down the first degree Taylor approximation for f centered at p. Use it with x = x_n and f(x_n) = x_{n+1} to show that if M is an upper bound for |f''(z)| (near p), then |x_{n+1} - p| \le |x_n - p| \left( |f'(p)| + \frac{M}{2}|x_n - p| \right). Then as long as x_n is close enough to p, the factor inside the parentheses is less than 1, which means that |x_{n+1} - p| < |x_n - p|, i.e., x_{n+1} is closer to p than x_n for every n. □

Theorem. If f is differentiable at a fixed point p and 0 < |f'(p)| < 1, then for any point x_0 sufficiently close to p, the fixed point iterates x_n defined by x_{n+1} = f(x_n) converge to p with linear order. If f'(p) = 0, then the iterates converge to p with order \alpha, where f^{(\alpha)}(p) is the first nonzero derivative of f at p.

  1. For any function f \in C^2[a,b] with a root r in (a,b), let g(x) = x - \dfrac{f(x)}{f'(x)}. Show that r is a fixed point of g and show that g'(r) = 0.

Fri, Feb 6

In any root finding method, we have two ways of measuring the error when we compare an approximation x_n with the actual root. The forward error is the distance between the root r and the approximation x_n on the x-axis. Since we usually don't know the true value of r, it is hard to estimate the forward error. The backward error is just |f(x_n)|, which is usually easy to calculate.

  1. If x_n is really close to r, then how could you use f'(r) to estimate the forward error if you know the backward error?

We used the idea of forward and backward error to automate Newton's method. The idea is to include an optional tolerance argument. We stop iterating when the backward error is smaller than the tolerance.

def newton(f, Df, x, tolerance = 10 ** (-15)):
    """
    Applies Newton's method to a function f with derivative Df and initial guess x.
    Stops when |f(x)| < tolerance.
    """
    step = 0
    while abs(f(x)) >= tolerance:
        x = x - f(x) / Df(x)
        step = step + 1
    print("Converged in", step, "steps.")
    return x
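For example, to re-find the root of \tan x - 1 using the function above:

import math

r = newton(lambda x: math.tan(x) - 1, lambda x: 1 / math.cos(x)**2, 1.0)
print(r, math.pi / 4)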

We finished with a cool fact about Newton's method: it also works to find complex roots if you use complex arithmetic. We did a quick review of complex numbers.

Euler's formula.

e^{i\theta} = \cos \theta + i \sin \theta.

We used the cmath library in Python to do the following experiments with Newtonโ€™s method.

  1. Cube roots of unity. Use Newton's method to find all 3 roots of x^3 - 1.

  2. Euler's identity. Use Newton's method to solve e^x + 1 = 0.
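Here is a sketch of both experiments. Complex arithmetic works directly in Python, and the cmath library supplies the complex exponential:

z = complex(-1, 1)   # a complex initial guess
for _ in range(20):
    z = z - (z**3 - 1) / (3 * z**2)
print(z)   # converges to one of the three cube roots of unity

import cmath

w = complex(1, 2)
for _ in range(20):
    w = w - (cmath.exp(w) + 1) / cmath.exp(w)
print(w)   # ~ pi * i, recovering Euler's identity e^(i pi) = -1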

We finished by talking about Newton fractals. When you use Newton's method on a function with more than one root in the complex plane, the set of points that converge to each root (called the basins of attraction) form fractal patterns.

[Figure: basins of attraction for the roots of x^3 - 1.]

Week 5 Notes

Day Topic
Mon, Feb 9 Solving nonlinear systems
Wed, Feb 11 Systems of linear equations
Fri, Feb 13 LU decomposition

Mon, Feb 9

Today we talked about how to solve systems of nonlinear equations with Newton's method. As a motivating example, we talked about how this could be used in navigation, for example in the LORAN system.

To solve a vector-valued system \mathbf{F}(\mathbf{x}) = \mathbf{0}, you can iterate the formula \mathbf{x}_{n+1} = \mathbf{x}_n - \mathbf{J}(\mathbf{x}_n)^{-1} \mathbf{F}(\mathbf{x}_n) where \mathbf{J}(\mathbf{x}) is the Jacobian matrix \mathbf{J}(\mathbf{x}) = \begin{bmatrix} \frac{\partial F_1}{\partial x_1} & \ldots & \frac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_n}{\partial x_1} & \ldots & \frac{\partial F_n}{\partial x_n} \end{bmatrix}.

To program this formula in Python, we'll need to load the numpy library. We wrote the following code to solve this system:

\begin{align*} xy + y^2 &= 1 \\ x^2 - y^2 &= 1 \end{align*}

import numpy as np

F = lambda x: np.array([
    x[0] * x[1] + x[1]**2 - 1, 
    x[0]**2 - x[1]**2 - 1
])
J = lambda x: np.array([
    [   x[1], x[0] + 2*x[1] ], 
    [ 2*x[0],       -2*x[1] ]
])

x = np.array([1,1])
for i in range(10): 
    x = x - np.linalg.inv(J(x)) @ F(x) # Use @ not * for matrix multiplication.
    print(x)
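A side note: in practice it is faster and more accurate to solve the linear system \mathbf{J}(\mathbf{x})\boldsymbol{\delta} = \mathbf{F}(\mathbf{x}) at each step instead of forming the inverse. With the same F and J as above, the loop becomes:

x = np.array([1.0, 1.0])
for i in range(10):
    x = x - np.linalg.solve(J(x), F(x))   # same update without computing an inverse
    print(x)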

Wed, Feb 11

Today we talked about systems of linear equations and linear algebra.

  1. Suppose you have a jar full of pennies, nickels, dimes, and quarters. There are 80 coins in the jar, and the total value of the coins is $10.00. If there are twice as many dimes as quarters, then how many of each type of coin are in the jar?

We can represent this question as a system of linear equations: p + n + d + q = 80, \quad p + 5n + 10d + 25q = 1000, \quad d = 2q, where p, n, d, q are the numbers of pennies, nickels, dimes, and quarters respectively. It is convenient to use matrices to simplify these equations: \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 5 & 10 & 25 \\ 0 & 0 & 1 & -2 \end{pmatrix} \begin{pmatrix} p \\ n \\ d \\ q \end{pmatrix} = \begin{pmatrix} 80 \\ 1000 \\ 0 \end{pmatrix}. Here we have a matrix equation of the form A\mathbf{x} = \mathbf{b} where A \in \mathbb{R}^{3 \times 4}, \mathbf{x} \in \mathbb{R}^4 is the unknown vector, and \mathbf{b} \in \mathbb{R}^3. Then you can solve the problem by row-reducing the augmented matrix

\left( \begin{array}{cccc|c} 1 & 1 & 1 & 1 & 80 \\ 1 & 5 & 10 & 25 & 1000 \\ 0 & 0 & 1 & -2 & 0 \end{array}\right)

which can be put into echelon form

\left( \begin{array}{cccc|c} 1 & 1 & 1 & 1 & 80 \\ 0 & 4 & 9 & 24 & 920 \\ 0 & 0 & 1 & -2 & 0 \end{array}\right).

The variables p, n, and d are pivot variables, and the last variable q is a free variable. Once you pick values for the free variable(s), you can solve for the pivot variables one at a time using back substitution. We did this using Python:

q = 20
d = 2*q
n = (920 - 9*d - 24*q) / 4
p = 80 - n - d - q
print(p, n, d, q)

# Only q = 20 makes sense, otherwise either p or n will be negative.

After that we reviewed some more concepts and terminology from linear algebra. The most important thing to understand is that you can think of matrix-vector multiplication as a linear combination of the columns of the matrix:

A๐ฑ=(1111151025001โˆ’2)(pndq)=(110)p+(150)n+(1101)d+(125โˆ’2)qโŸA๐ฑ is a linear combination of the columns of A.A \mathbf{x} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 5 & 10 & 25 \\ 0 & 0 & 1 & -2 \end{pmatrix} \, \begin{pmatrix} p \\ n \\ d \\ q \end{pmatrix} = \underbrace{\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}p + \begin{pmatrix} 1 \\ 5 \\ 0 \end{pmatrix}n + \begin{pmatrix} 1 \\ 10 \\ 1 \end{pmatrix}d + \begin{pmatrix} 1 \\ 25 \\ -2 \end{pmatrix}q}_{A \mathbf{x} \text{ is a linear combination of the columns of } A}.

For any matrix A \in \mathbb{R}^{m \times n}: the column space of A is the span of the columns of A, the null space of A is the set of all solutions of A\mathbf{x} = \mathbf{0}, the rank of A is the dimension of the column space, and the nullity of A is the dimension of the null space.

A matrix equation A\mathbf{x} = \mathbf{b} has a solution if and only if \mathbf{b} is in the column space of A. If \mathbf{b} is in the column space, then there will be either one unique solution if there are no free variables (i.e., the nullity of A is zero) or infinitely many solutions if there are free variables.

If A \in \mathbb{R}^{n \times n} (i.e., A is a square matrix) and the rank of A is n, then A is invertible, which means that there is a matrix A^{-1} such that A A^{-1} = A^{-1} A = I where I is the identity matrix I = \begin{pmatrix} 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{pmatrix}.
You can use row-reduction to find the inverse of an invertible matrix by row reducing the augmented matrix \left[ \begin{array}{c|c} A & I \end{array} \right] until you get \left[ \begin{array}{c|c} I & A^{-1} \end{array} \right]. We did this example in class:

  1. Find the inverse of A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}.

Here is another example that we did not do in class:

  1. Use row-reduction to find the inverse of A = \begin{pmatrix} 1 & 3 \\ 2 & 5 \end{pmatrix}. (https://youtu.be/cJg2AuSFdjw)

Fri, Feb 13

Today we talked about LU decomposition, which we defined as follows. The LU decomposition of a matrix A \in \mathbb{R}^{m \times n} is a pair of matrices L \in \mathbb{R}^{m \times m} and U \in \mathbb{R}^{m \times n} such that A = LU, where U is in echelon form and L is a lower triangular matrix with 1's on the main diagonal, 0's above the main diagonal, and below-diagonal entries L_{ij} (in row i, column j) equal to the multiple of row j that you subtracted from row i as you row reduced A to U.

  1. Compute the LU decomposition of A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 2 & 2 & 5 & 3 \\ -1 & -1 & 14 & 4 \end{pmatrix}. (A check appears after this list.)

  2. Use the LU decomposition to solve Ax = \begin{pmatrix} 2 \\ 6 \\ 8 \end{pmatrix}.
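For reference, here is one way to check an answer to example 1 with NumPy (using the multipliers 2, -1, and 5 from the row reduction):

import numpy as np

L = np.array([[ 1, 0, 0],
              [ 2, 1, 0],
              [-1, 5, 1]])
U = np.array([[1, 1, 1, 1],
              [0, 0, 3, 1],
              [0, 0, 0, 0]])
A = np.array([[1, 1, 1, 1], [2, 2, 5, 3], [-1, -1, 14, 4]])
print(np.array_equal(L @ U, A))   # True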

We also did this workshop.

We finished with one more example.

  1. For what real numbers a and b does the matrix \begin{pmatrix} 1 & 0 & 1 \\ a & a & a \\ b & b & a \end{pmatrix} have an LU decomposition? (https://youtu.be/-eA2D_rIcNA)

Here's another LU decomposition example if you want more practice.

  1. Decompose A = \begin{pmatrix} 2 & 4 & 3 & 5 \\ -4 & -7 & -5 & -8 \\ 6 & 8 & 2 & 9 \\ 4 & 9 & -2 & 14 \end{pmatrix}. (https://youtu.be/BFYFkn-eOQk)

Week 6 Notes

Day Topic
Mon, Feb 16 Matrix norms and conditioning
Wed, Feb 18 Review
Fri, Feb 20 Midterm 1

Mon, Feb 16

  1. Let A = \begin{pmatrix} 1 & 1 \\ 1 & 1.001 \end{pmatrix}. Let \mathbf{y} = \begin{pmatrix} 2 \\ 2 \end{pmatrix} and \mathbf{z} = \begin{pmatrix} 2 \\ 2.001 \end{pmatrix}. Use A^{-1} = \begin{pmatrix} 1001 & -1000 \\ -1000 & 1000 \end{pmatrix} to solve A\mathbf{x} = \mathbf{y} and A\mathbf{x} = \mathbf{z}.

Notice that even though \mathbf{y} and \mathbf{z} are very close, the two solutions are not close at all. When the solutions of a linear system A\mathbf{x} = \mathbf{b} are very sensitive to small changes in \mathbf{b}, we say that the matrix A is ill-conditioned.
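You can reproduce the example with NumPy:

import numpy as np

A = np.array([[1, 1], [1, 1.001]])
print(np.linalg.solve(A, [2, 2]))       # approximately [2, 0]
print(np.linalg.solve(A, [2, 2.001]))   # approximately [1, 1]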

Norms of Vectors

Definition. A norm is a function \|\cdot\| from a vector space V to [0, \infty) with the following properties:

  1. Definiteness. \|x\| = 0 if and only if x = 0.
  2. Homogeneity. \|cx\| = |c| \, \|x\| for all x \in V and c \in \mathbb{R}.
  3. Triangle Inequality. \|x+y\| \le \|x\| + \|y\| for all x, y \in V.

Intuitively a norm measures the length of a vector. But there are different norms and they measure length in different ways. The three most important norms on the vector space \mathbb{R}^n are:

  1. The 2-norm (also known as the Euclidean norm) is the most commonly used, and it is exactly the formula for the length of a vector using the Pythagorean theorem: \|x\|_2 = \sqrt{x_1^2 + x_2^2 + \ldots + x_n^2}.

  2. The 1-norm (also known as the Manhattan norm) is \|x\|_1 = |x_1| + |x_2| + \ldots + |x_n|. This is the distance you would get if you had to navigate a city where the streets are arranged in a rectangular grid and you can't take diagonal paths.

  3. The \infty-norm (also known as the maximum norm) is \|x\|_\infty = \max\{|x_1|, |x_2|, \ldots, |x_n|\}.

These are all special cases of p-norms, which have the form \|x\|_p = \sqrt[p]{|x_1|^p + |x_2|^p + \ldots + |x_n|^p}.

We used Desmos to graph the set of vectors in \mathbb{R}^2 with p-norm equal to one, so we could see how those norms change as p varies between 1 and \infty.
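In NumPy, np.linalg.norm computes all three vector norms:

import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, 1))        # 7.0, the 1-norm
print(np.linalg.norm(x))           # 5.0, the 2-norm (the default)
print(np.linalg.norm(x, np.inf))   # 4.0, the infinity-norm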

Norms of Matrices

The set of all matrices in \mathbb{R}^{m \times n} is a vector space, so it makes sense to talk about the norm of a matrix. There are many ways to define norms for matrices, but the most important for us are the induced norms (also known as operator norms). For a matrix A \in \mathbb{R}^{m \times n}, the induced p-norm is \|A\|_p = \max \{\|Ax\|_p : x \in \mathbb{R}^n, \|x\|_p = 1\}.
Two important special cases are

  1. When p = 2, the induced norm \|A\|_2 is the square root of the largest eigenvalue of A^T A.
  2. When p = \infty, the induced norm \|A\|_\infty is the largest 1-norm of the rows of A.

Here is a quick exercise:

  1. Find \|A\|_\infty for the matrix A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 2 & 2 & 5 & 3 \\ -1 & -1 & 14 & 4 \end{pmatrix}.
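A quick check with NumPy (for a matrix, np.linalg.norm(A, np.inf) is the largest absolute row sum):

import numpy as np

A = np.array([[1, 1, 1, 1], [2, 2, 5, 3], [-1, -1, 14, 4]])
print(np.linalg.norm(A, np.inf))   # 20.0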

Condition Number

For an invertible matrix A \in \mathbb{R}^{n \times n}, the condition number of A is \kappa(A) = \|A\| \, \|A^{-1}\|. You can use any induced norm to define \kappa(A), but our default will be the induced \infty-norm since it is the only one that is easy to calculate by hand.

  1. Find the condition number \kappa(A) for the matrix A = \begin{pmatrix} 1 & 1 \\ 1 & 1.001 \end{pmatrix} with A^{-1} = \begin{pmatrix} 1001 & -1000 \\ -1000 & 1000 \end{pmatrix}.
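NumPy can also compute condition numbers directly:

import numpy as np

A = np.array([[1, 1], [1, 1.001]])
print(np.linalg.cond(A, np.inf))   # about 4004, so A is ill-conditioned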

If the condition number is large, then the matrix is ill-conditioned. When we try to solve an ill-conditioned linear system, small errors in either A or \mathbf{b} could become large errors in our calculated solution.

Just like with root finding, we can talk about forward and backward error when we try to solve a linear system A\mathbf{x} = \mathbf{b}. The table below defines the absolute and relative forward and backward errors. In the table, \mathbf{x} is the exact solution to the system A\mathbf{x} = \mathbf{b}, and \mathbf{x}_a is an approximation of \mathbf{x}.

         Forward Error                                            Backward Error
Absolute \|\mathbf{x}_a - \mathbf{x}\|                            \|A\mathbf{x}_a - \mathbf{b}\|
Relative \dfrac{\|\mathbf{x}_a - \mathbf{x}\|}{\|\mathbf{x}\|}    \dfrac{\|A\mathbf{x}_a - \mathbf{b}\|}{\|\mathbf{b}\|}

The following result shows how the condition number lets us estimate the relative forward error using the relative backward error.

Theorem. If A is an invertible matrix with condition number \kappa(A), and \mathbf{b} \ne 0, then \dfrac{\|\mathbf{x}_a - \mathbf{x}\|}{\|\mathbf{x}\|} \le \kappa(A) \, \dfrac{\|A\mathbf{x}_a - \mathbf{b}\|}{\|\mathbf{b}\|}.

Proof. By the definition of the induced norm, \dfrac{\|\mathbf{x}_a - \mathbf{x}\|}{\|\mathbf{x}\|} = \dfrac{\|A^{-1}(A\mathbf{x}_a - \mathbf{b})\|}{\|\mathbf{x}\|} \le \|A^{-1}\| \, \dfrac{\|A\mathbf{x}_a - \mathbf{b}\|}{\|\mathbf{x}\|}. Since \mathbf{b} = A\mathbf{x}, \|\mathbf{b}\| \le \|A\| \, \|\mathbf{x}\|, so \|A^{-1}\| \, \dfrac{\|A\mathbf{x}_a - \mathbf{b}\|}{\|\mathbf{x}\|} \le \|A^{-1}\| \, \|A\| \, \dfrac{\|A\mathbf{x}_a - \mathbf{b}\|}{\|\mathbf{b}\|} = \kappa(A) \, \dfrac{\|A\mathbf{x}_a - \mathbf{b}\|}{\|\mathbf{b}\|}, which proves the statement. □

The following is a consequence of this theorem.

Rule of thumb. If the entries of A and \mathbf{b} are both accurate to n significant digits and the condition number of A is \kappa(A) = 10^k, then the solution of the linear system A\mathbf{x} = \mathbf{b} will be accurate to n - k significant digits.


Week 7 Notes

Day Topic
Mon, Feb 23 LU decomposition with pivoting
Wed, Feb 25 Inner-products and orthogonality
Fri, Feb 27 Orthogonal sets & matrices

Mon, Feb 23

Consider the matrix B = \begin{pmatrix} 0.001 & 1 \\ 1 & 1 \end{pmatrix}, which has LU decomposition B = LU = \begin{pmatrix} 1 & 0 \\ 1000 & 1 \end{pmatrix} \begin{pmatrix} 0.001 & 1 \\ 0 & -999 \end{pmatrix}.
Although B is not ill-conditioned, you have to be careful using row reduction to solve equations with this matrix because both L and U in the LU decomposition for B are ill-conditioned.

In Matlab/Octave, you can use the norm(A, inf) function to find the induced \infty-norm of a matrix. The function norm(A) computes the induced 2-norm by default. You can also compute the condition number of a matrix using cond(A) or cond(A, inf).

  1. Use Octave to find the condition numbers for the matrices BB, LL, and UU above.

     B  = [0.001 1; 1 1]
     L = [1 0; 1000 1]
     U = [0.001 1; 0 -999]
     cond(B)
     cond(L)
     cond(U)

It is possible to avoid this problem using the method of partial pivoting. The idea is to find a permutation matrix P such that PA has a well-behaved LU decomposition PA = LU. The formula PA = LU is known as the PLU-decomposition or sometimes the LUP-decomposition of A. This method fixes two problems: it avoids breaking down when a pivot entry happens to be zero, and it avoids the very small pivots that lead to ill-conditioned L and U factors like the ones above.

The algorithm to find the PLU-decomposition is almost the same as the LU decomposition, except you also swap rows so that the largest available entry in a column (in absolute value) becomes the pivot at each step. When you swap two rows of the echelon form matrix U, you also have to swap the same rows of A and the same rows in the completed columns of L (leave the unfinished columns alone). The permutation matrix P is the matrix you get by swapping the same rows of the identity matrix as you swapped while you found the PLU decomposition.

In Octave, you can just use the following command to get the PLU-decomposition:

[L, U, P] = lu(A)

One nice thing to know about permutation matrices is that they are always invertible and P^{-1} = P^T, where P^T is the transpose of P obtained by converting every row of P into a column of P^T.

  1. Show that when you use partial pivoting to row reduce B = \begin{pmatrix} 0.001 & 1 \\ 1 & 1 \end{pmatrix} to echelon form, the resulting L and U matrices are not ill-conditioned.

  2. Use the PLU-decomposition to solve B\mathbf{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
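If you have SciPy installed, the analogous Python call is scipy.linalg.lu. Note that SciPy returns factors with B = PLU, so its P is the transpose of the one in the PA = LU convention:

import numpy as np
from scipy.linalg import lu

B = np.array([[0.001, 1], [1, 1]])
P, L, U = lu(B)   # B = P @ L @ U
print(L)          # [[1, 0], [0.001, 1]]
print(U)          # [[1, 1], [0, 0.999]]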

Finding the PLU decomposition by hand is tedious, especially if you need to swap more than 2 rows, but here is a good video explanation of how it works:

  1. Find the PLU decomposition for A = \begin{pmatrix} 0 & 1 & 2 & 1 & 2 \\ 1 & 0 & 0 & 0 & 1 \\ 2 & 1 & 2 & 1 & 5 \\ 1 & 2 & 4 & 3 & 6 \end{pmatrix}. (https://youtu.be/E3cCRcdFGmE)

Wed, Feb 25

The inner product of two vectors \mathbf{x}, \mathbf{y} in \mathbb{R}^n is \mathbf{x}^T \mathbf{y}. We proved some important facts about inner products and got some practice with matrix algebra in this workshop.

Fri, Feb 27

In exercise 2 from the workshop last time, we needed the property that (\mathbf{x} + \mathbf{y})^T = \mathbf{x}^T + \mathbf{y}^T. This is a special case of one of the rules for transposes: (A + B)^T = A^T + B^T, along with (cA)^T = c A^T, (AB)^T = B^T A^T, and (A^T)^T = A.

Today we talked about two special types of matrices.

A set of vectors S={๐ฏ1,โ€ฆ,๐ฏd}S = \{\mathbf{v}_1, \ldots, \mathbf{v}_d\} is an orthogonal set if every vector in SS is orthogonal to every other vector in SS. An orthogonal set where every vector also has length equal to 1 is called an orthonormal set.

  1. Suppose that we have an orthonormal set in \mathbb{R}^2 that contains two vectors \mathbf{x} and \mathbf{y}. If \mathbf{x} = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix} for some angle \theta, then there are only two possible vectors that \mathbf{y} could be. What are they? Hint: Draw a picture!

  2. If a matrix U in \mathbb{R}^{m \times n} has orthonormal columns, then U^T U is the n-by-n identity matrix. Use this fact to prove that U preserves inner products. That is, (U\mathbf{x})^T (U\mathbf{y}) = \mathbf{x}^T \mathbf{y} for any vectors \mathbf{x}, \mathbf{y} in \mathbb{R}^n.

  3. If U has orthonormal columns, then show that U preserves lengths, that is, \|U\mathbf{x}\| = \|\mathbf{x}\| for all \mathbf{x} in \mathbb{R}^n.

A square matrix with orthonormal columns is called an orthogonal matrix.

  1. Show that every 2-by-2 orthogonal matrix must be either \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix} \text{ or } \begin{pmatrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \end{pmatrix}.

The first possibility represents all possible rotations of \mathbb{R}^2. The second represents all possible reflections. Those are the only length-preserving linear transformations of the plane!

We finished by talking about symmetric matrices. A matrix A in \mathbb{R}^{n \times n} is symmetric if A^T = A.

  1. Suppose that \mathbf{x} and \mathbf{y} are eigenvectors of a symmetric matrix A that correspond to two different eigenvalues \lambda and \mu respectively. Prove that \mathbf{x} is orthogonal to \mathbf{y}.
    Hint: Observe that (A\mathbf{x})^T \mathbf{y} = \mathbf{x}^T (A\mathbf{y}).

It turns out that symmetric matrices can only have real eigenvalues and they always have an orthonormal basis of eigenvectors.

Since we had a little extra time, we finished by talking about the conjugate transpose of a complex matrix. For any matrix A in \mathbb{C}^{m \times n}, the conjugate transpose is A^* = (\bar{A})^T. It combines taking the transpose of A with taking the complex conjugate of every entry. Recall that the complex conjugate of a complex number is \overline{a+ib} = a - ib. For matrices with real number entries, A^* and A^T are the same thing. In Matlab/Octave you use an apostrophe to get the conjugate transpose:

A = [1 2i; 3 4i]
A'

The conjugate transpose mostly has the same properties as the transpose, for example (A + B)^* = A^* + B^*, (AB)^* = B^* A^*, and (A^*)^* = A.

One very important exception is when you work with inner products of complex vectors. The complex inner product of \mathbf{x}, \mathbf{y} in \mathbb{C}^n is \mathbf{x}^* \mathbf{y}. But unlike regular inner products, order matters for complex inner products because \mathbf{y}^* \mathbf{x} = \overline{\mathbf{x}^* \mathbf{y}}.
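In NumPy, np.vdot conjugates its first argument, so it matches \mathbf{x}^* \mathbf{y}. A quick check with made-up vectors:

import numpy as np

x = np.array([1 + 1j, 2j])
y = np.array([3, 4 - 2j])
print(np.vdot(x, y))                             # x* y
print(np.vdot(y, x) == np.conj(np.vdot(x, y)))   # True: swapping the order conjugates the result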


Week 8 Notes

Day Topic
Mon, Mar 2 Gram-Schmidt algorithm
Wed, Mar 4 QR decomposition
Fri, Mar 6 Orthogonal projections

Mon, Mar 2

Last week we introduced the orthogonal complement W^\perp of a set W in \mathbb{R}^n. You should know that W^\perp is always a subspace of \mathbb{R}^n, even when W is not.

The Fundamental Theorem of Linear Algebra. Let A be a real m-by-n matrix. Then the row space of A and the null space of A are orthogonal complements of each other, and the column space of A and the null space of A^T are orthogonal complements of each other.

  1. We looked at the problem of finding \left\{ \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \right\}^\perp in the context of the fundamental theorem of linear algebra. The orthogonal complement is the nullspace of the matrix \begin{pmatrix} 1 & 1 & 1 \end{pmatrix}.
    1. What is the dimension of the nullspace?
    2. What is a basis for the null space?

After that we talked about why orthogonal bases are better. We used this example:

Since the two components of the force of gravity are orthogonal, it is easy to find the right coefficients for the force of gravity in the angled basis.

Gram-Schmidt Algorithm. Converts a basis \mathbf{v}_1, \ldots, \mathbf{v}_p into an orthogonal basis \mathbf{u}_1, \ldots, \mathbf{u}_p for the same subspace.

Start with \mathbf{u}_1 = \mathbf{v}_1. Then for each k from 2 to p, find \mathbf{u}_k using these steps:

  1. Find the orthogonal projection of \mathbf{v}_k onto the span of \mathbf{u}_1, \ldots, \mathbf{u}_{k-1}: \operatorname{Proj} \mathbf{v}_k = \left(\frac{\mathbf{v}_k \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\right) \mathbf{u}_1 + \left(\frac{\mathbf{v}_k \cdot \mathbf{u}_2}{\mathbf{u}_2 \cdot \mathbf{u}_2}\right) \mathbf{u}_2 + \ldots + \left(\frac{\mathbf{v}_k \cdot \mathbf{u}_{k-1}}{\mathbf{u}_{k-1} \cdot \mathbf{u}_{k-1}}\right) \mathbf{u}_{k-1}.
  2. Subtract the orthogonal projection from \mathbf{v}_k: \mathbf{u}_k = \mathbf{v}_k - \operatorname{Proj} \mathbf{v}_k.

If you want an orthonormal basis, then just normalize by dividing each \mathbf{u}_i by its length.

  1. Apply Gram-Schmidt to \mathbf{x}_1 = \begin{pmatrix} 3 \\ 6 \\ 0 \end{pmatrix}, \mathbf{x}_2 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}. (https://youtu.be/Rz3O6hJ9xZQ) (A check appears after this list.)

  2. Apply Gram-Schmidt to \mathbf{x}_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \mathbf{x}_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \end{pmatrix}, \mathbf{x}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}. (https://youtu.be/Rz3O6hJ9xZQ?t=350)
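Here is a check of example 1 in NumPy:

import numpy as np

v1 = np.array([3.0, 6.0, 0.0])
v2 = np.array([1.0, 2.0, 2.0])
u1 = v1
u2 = v2 - (v2 @ u1) / (u1 @ u1) * u1   # subtract the projection of v2 onto u1
print(u2)        # [0. 0. 2.]
print(u1 @ u2)   # 0.0, so the new basis is orthogonal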

Wed, Mar 4

We started with these two warm-up problems.

  1. Find the orthogonal projection of \mathbf{y} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} onto the line spanned by \mathbf{x} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}. Then find two orthogonal vectors \mathbf{a} and \mathbf{b} such that \mathbf{a} lies in the span of \mathbf{x} and \mathbf{a} + \mathbf{b} = \mathbf{y}.

  2. Find the orthogonal projection of \mathbf{z} = \begin{pmatrix} 6 \\ -3 \\ 0 \end{pmatrix} onto the plane spanned by \mathbf{x} and \mathbf{y}. (A code sketch appears after the projection formula below.)

In order to calculate the second orthogonal projection, we used the following formula:

Orthogonal Projection onto a Subspace. If V is a subspace of \mathbb{R}^n with an orthogonal basis \mathbf{v}_1, \ldots, \mathbf{v}_d, then \operatorname{Proj}_V(\mathbf{x}) = \sum_{k=1}^d \left( \frac{\mathbf{x} \cdot \mathbf{v}_k}{\mathbf{v}_k \cdot \mathbf{v}_k} \right) \mathbf{v}_k.

Notice that this formula is even simpler if the basis is orthonormal. Why is that?
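Here is a sketch of warm-up 2 using this formula: first orthogonalize \mathbf{x}, \mathbf{y} with Gram-Schmidt, then project.

import numpy as np

x = np.array([1.0, 0.0, -1.0])
y = np.array([1.0, 2.0, 3.0])
z = np.array([6.0, -3.0, 0.0])
u1 = x
u2 = y - (y @ u1) / (u1 @ u1) * u1   # orthogonal basis for span{x, y}
proj = (z @ u1) / (u1 @ u1) * u1 + (z @ u2) / (u2 @ u2) * u2
print(proj)   # [ 4.  1. -2.]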

QR Decomposition. If A is a matrix in \mathbb{R}^{m \times n}, then there is a matrix Q with orthonormal columns and an upper triangular matrix R such that A = QR.

You can find the QR decomposition by applying Gram-Schmidt to the columns of A and normalizing to get the columns of Q. Then compute R = Q^T A.

We did the following examples.

  1. Use Octave to find the QR decomposition for A = \begin{pmatrix} 1 & 1 & 6 \\ 0 & 2 & -3 \\ -1 & 3 & 0 \end{pmatrix}.

    A = [1 1 6; 0 2 -3; -1 3 0];
    [Q, R] = qr(A)
  2. Find the QR decomposition for B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.

Here is a video example with a tall-skinny matrix A that we did not do in class.

  1. Find the QR decomposition for A = \begin{pmatrix} 2 & 3 \\ 2 & 4 \\ 1 & 1 \end{pmatrix}. (https://youtu.be/J41Ypt6Mftc)

Fri, Mar 6

Today we introduced the idea of floating point operations (FLOPs). Every time a computer needs to add, subtract, multiply, or divide two floating point numbers, we count that as one flop. We also count every square root computation as one flop. Counting flops gives a theoretical estimate of how long an algorithm will take to run.

After we got started with the workshop, we stopped to talk about computational issues with the Gram-Schmidt algorithm. We analyzed the following Octave implementation of the classical Gram-Schmidt algorithm.

function Q = cgs(A)
    % Returns a matrix Q with orthonormal columns by applying 
    % the classical Gram-Schmidt algorithm to the columns of A. 
    
    [m,n] = size(A);
    Q = zeros(m,n);
    for i = 1:n
        v = A(:,i);
        for k = 1:i-1
            v = v - (Q(:,k)' * A(:,i)) * Q(:,k);
        end
        Q(:,i) = v / norm(v);
    end
end

We calculated that this algorithm requires 2mn^2 + mn - \tfrac{1}{2}n^2 + \tfrac{1}{2}n flops for a matrix with m rows and n columns. Typically we only focus on the leading term, so we say that asymptotically the algorithm requires 2mn^2 flops.

Unfortunately the Gram-Schmidt algorithm is numerically unstable. One way to see this is to start with a large ill-conditioned matrix A and then compute Q using the algorithm. The columns of Q usually won't be orthogonal, which is a problem. We used the Hilbert matrix, with (i,j) entry equal to \tfrac{1}{i+j-1}, to show this.

n = 3
A = hilb(n);
Q = cgs(A);
norm(Q'*Q - eye(n))

Week 9 Notes

Day Topic
Mon, Mar 16 Least squares problems
Wed, Mar 18 Least squares problems - con'd
Fri, Mar 20 Orthogonal functions

Week 10 Notes

Day Topic
Mon, Mar 23 Continuous least squares
Wed, Mar 25 Fourier series
Fri, Mar 27 Fourier series - con'd

Week 11 Notes

Day Topic
Mon, Mar 30 Numerical integration
Wed, Apr 1 Newton-Cotes methods
Fri, Apr 3 Error in Newton-Cotes methods

Week 12 Notes

Day Topic
Mon, Apr 6 Numerical differentiation
Wed, Apr 8 Review
Fri, Apr 10 Midterm 2

Week 13 Notes

Day Topic
Mon, Apr 13 Eigenvectors and eigenvalues
Wed, Apr 15 Power iteration
Fri, Apr 17 Schur triangular decomposition

Week 14 Notes

Day Topic
Mon, Apr 20 QR algorithm
Wed, Apr 22 Singular value decomposition
Fri, Apr 24 Applications of the SVD
Mon, Apr 27 Last day, recap & review