Consider your examination project as an extra homework assignment. Put the project in a separate folder of your repository with an indicative name (like "exam") that I can easily find. Supply a README.TXT (or similar) file with a short description of what you did and (optionally) a self-evaluation of the project on the scale [0,10].
Remember to check that your repository is publicly browseable and publicly cloneable, and that everything builds.
The list of projects below will be reshuffled randomly on the day the examination period starts.
The index of the project assigned to you (from the indexed list below) is given by your number (index) in our wiki-list, modulo the number N of projects (N=31). For example, if your number in the wiki-list is 35, you get the project with index 35 mod 31 = 4.
— 1 —
Symmetric rank-1 update of a size-n symmetric eigenvalue problem
The matrix A to diagonalize is given in the form
A = D + u uᵀ,
where D is a diagonal matrix and u is a column-vector.
Given the diagonal elements of the matrix D and the elements of the vector u, find the eigenvalues of the matrix A using only O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
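A minimal sketch of how this could look (a hypothetical C# routine using plain double[] arrays instead of your own vector class; it assumes the d_k are sorted in increasing order and distinct and that all u_k are non-zero, so that the eigenvalues are the roots of the secular equation 1 + Σ_k u_k²/(d_k − λ) = 0, one root per interval, found here by bisection):

using System;

public static class Rank1Update {
    // Secular function f(λ) = 1 + Σ_k u_k²/(d_k − λ); its roots are the eigenvalues of D + u uᵀ.
    static double Secular(double[] d, double[] u, double lambda){
        double s = 1.0;
        for(int k = 0; k < d.Length; k++) s += u[k]*u[k]/(d[k] - lambda);
        return s;
    }

    // Eigenvalues of A = D + u uᵀ in O(n²) operations (n roots, each bracketed in a known
    // interval and found by bisection with O(n) work per function evaluation).
    public static double[] Eigenvalues(double[] d, double[] u, double acc = 1e-12){
        int n = d.Length;
        double unorm2 = 0; for(int k = 0; k < n; k++) unorm2 += u[k]*u[k];
        double[] e = new double[n];
        for(int k = 0; k < n; k++){
            // the k-th eigenvalue lies in (d[k], d[k+1]); the largest one in (d[n-1], d[n-1]+‖u‖²]
            double a = d[k], b = (k < n-1) ? d[k+1] : d[n-1] + unorm2;
            for(int it = 0; it < 200 && b - a > acc*(1 + Math.Abs(a)); it++){
                double m = (a + b)/2;               // the secular function is increasing on (a,b)
                if(Secular(d, u, m) > 0) b = m; else a = m;
            }
            e[k] = (a + b)/2;
        }
        return e;
    }
}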
— 2 —
Implicit Heun's stepper for Ordinary Differential Equations
The step is given by the formula,
y_{x+h} = y_x + (h/2)(f(x,y_x) + f(x+h,y_{x+h})),
where the vector y_{x+h} has to be found by solving the above equation numerically using your own root-finder.
The starting point for your root-finding can be the Euler step,
y_{x+h} ≈ y_x + h·f(x,y_x),
and the error estimate could be the difference between the Euler step and the implicit Heun step.
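A minimal sketch of such a stepper in C# (for illustration the implicit equation is solved by simple fixed-point iteration started from the Euler step; in the project you would call your own root-finder instead; plain double[] arrays stand in for your vector class):

using System;

public static class ImplicitHeun {
    // One implicit Heun step: returns (y_{x+h}, error estimate).
    public static (double[] yh, double[] err) Step(
        Func<double, double[], double[]> f, double x, double[] y, double h)
    {
        int n = y.Length;
        double[] fy = f(x, y);
        double[] yEuler = new double[n];
        for(int i = 0; i < n; i++) yEuler[i] = y[i] + h*fy[i];     // Euler step = starting point

        double[] yh = (double[])yEuler.Clone();
        for(int iter = 0; iter < 100; iter++){                     // fixed-point iteration on the implicit equation
            double[] fh = f(x + h, yh);
            double change = 0;
            for(int i = 0; i < n; i++){
                double ynew = y[i] + h/2*(fy[i] + fh[i]);
                change += Math.Abs(ynew - yh[i]);
                yh[i] = ynew;
            }
            if(change < 1e-12) break;
        }

        double[] err = new double[n];                              // error estimate: implicit Heun − Euler
        for(int i = 0; i < n; i++) err[i] = yh[i] - yEuler[i];
        return (yh, err);
    }
}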
— 3 —
In mathematics the Rayleigh quotient for a given real symmetric matrix H and a non-zero vector v is defined as
R(H,v) = (vᵀHv)/(vᵀv).
It can be shown that the Rayleigh quotient reaches its minimum value λ_min (the smallest eigenvalue of H) when v equals v_min (the corresponding eigenvector):
∀v≠0: R(H,v) ≥ λ_min, and R(H,v_min) = λ_min if Hv_min = λ_min v_min.
Analogously,
∀v≠0: R(H,v) ≤ λ_max, and R(H,v_max) = λ_max if Hv_max = λ_max v_max.
In quantum mechanics this is called the variational method.
Implement a function that calculates the lowest eigenvalue and the corresponding eigenvector of a given symmetric matrix H by minimizing its Rayleigh quotient R(H,v), using the components of the vector v as free parameters. The function should use Newton's method with an analytic gradient and an optimized line search.
The gradient ∂R/∂v_k is given as
∂R/∂v = 2[Hv − Rv]/(vᵀv) ∝ (H − R)v.
Therefore one can search for the optimal Newton's step in the form −αΔv, where
Δv = (H − R)v,
and where α is to be found by minimizing R(v − αΔv), which is given as
R(v−αΔv) = (v−αΔv)ᵀH(v−αΔv) / (v−αΔv)ᵀ(v−αΔv) = (vᵀHv − 2αΔvᵀHv + α²ΔvᵀHΔv) / (vᵀv + α²ΔvᵀΔv).
This is a one-dimensional minimization that can be done numerically to high precision with your own minimizer.
Hint: you might need to keep the norm of your v-vector close to unity.
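A sketch of how the iteration could be organized in C# (plain arrays instead of your matrix/vector classes; the one-dimensional minimization over α uses a simple golden-section search as a stand-in for your own minimizer, and the random starting vector and tolerances are arbitrary choices):

using System;

public static class RayleighMinimization {
    static double[] MatVec(double[,] H, double[] v){
        int n = v.Length; var w = new double[n];
        for(int i = 0; i < n; i++){ double s = 0; for(int j = 0; j < n; j++) s += H[i,j]*v[j]; w[i] = s; }
        return w;
    }
    static double Dot(double[] a, double[] b){ double s = 0; for(int i = 0; i < a.Length; i++) s += a[i]*b[i]; return s; }

    // Lowest eigenvalue and eigenvector of a symmetric H by minimizing the Rayleigh quotient.
    public static (double R, double[] v) Lowest(double[,] H, double acc = 1e-9){
        int n = H.GetLength(0);
        var rnd = new Random(1);
        var v = new double[n]; for(int i = 0; i < n; i++) v[i] = rnd.NextDouble() - 0.5;
        while(true){
            double norm = Math.Sqrt(Dot(v, v));
            for(int i = 0; i < n; i++) v[i] /= norm;                // keep ‖v‖ = 1 (see the hint above)
            double[] Hv = MatVec(H, v);
            double R = Dot(v, Hv);                                  // Rayleigh quotient, since vᵀv = 1
            var dv = new double[n];                                 // Δv = (H − R)v
            for(int i = 0; i < n; i++) dv[i] = Hv[i] - R*v[i];
            if(Math.Sqrt(Dot(dv, dv)) < acc) return (R, v);

            double[] Hdv = MatVec(H, dv);                           // scalars of the formula for R(v − αΔv)
            double B = Dot(dv, Hv), C = Dot(dv, Hdv), M = Dot(dv, dv);
            Func<double, double> phi = a => (R - 2*a*B + a*a*C)/(1 + a*a*M);

            double lo = 0, hi = 1;                                  // bracket the minimum of φ(α)
            while(phi(hi) < phi(hi/2) && hi < 1e6) hi *= 2;
            double g = (Math.Sqrt(5) - 1)/2;                        // golden-section search on [lo,hi]
            double x1 = hi - g*(hi - lo), x2 = lo + g*(hi - lo);
            while(hi - lo > 1e-9){
                if(phi(x1) < phi(x2)){ hi = x2; x2 = x1; x1 = hi - g*(hi - lo); }
                else                 { lo = x1; x1 = x2; x2 = lo + g*(hi - lo); }
            }
            double alpha = (lo + hi)/2;
            for(int i = 0; i < n; i++) v[i] -= alpha*dv[i];
        }
    }
}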
Investigate the scaling of this method (the time to find the eigenvalue as a function of the size of the matrix) and find out how far in the size of the matrix you can realistically go (that is, within a few seconds of calculation time).
Calculate the ground state of the Hydrogen atom.
Calculate also the second smallest eigenvalue.
— 4 —
Cubic (sub-)spline for data with derivatives
Consider a table {x_i, y_i, y'_i}, i = 1,…,n, of a function together with its derivatives. Build an interpolating (sub-)spline in the form
S(x) = S_i(x) for x ∈ [x_i, x_{i+1}],
where
S_i(x) = y_i + b_i(x−x_i) + c_i(x−x_i)² + d_i(x−x_i)³.
For each interval the three coefficients b_i, c_i, d_i are determined by the three conditions,
S_i(x_{i+1}) = y_{i+1},
S'_i(x_i) = y'_i,
S'_i(x_{i+1}) = y'_{i+1}.
See the subsection "Akima sub-spline interpolation" for the inspiration.
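As a sketch, the three conditions above determine the coefficients in closed form on each interval; in C# (with plain arrays; y'_i passed as yp[i]) this could look like:

using System;

public class CubicSplineWithDerivatives {
    double[] x, y, b, c, d;

    // With h = x[i+1]−x[i] and Δ = (y[i+1]−y[i])/h the three conditions give
    //   b[i] = yp[i],  c[i] = (3Δ − 2·yp[i] − yp[i+1])/h,  d[i] = (yp[i] + yp[i+1] − 2Δ)/h².
    public CubicSplineWithDerivatives(double[] x, double[] y, double[] yp){
        int n = x.Length;
        this.x = x; this.y = y;
        b = new double[n-1]; c = new double[n-1]; d = new double[n-1];
        for(int i = 0; i < n-1; i++){
            double h = x[i+1] - x[i], D = (y[i+1] - y[i])/h;
            b[i] = yp[i];
            c[i] = (3*D - 2*yp[i] - yp[i+1])/h;
            d[i] = (yp[i] + yp[i+1] - 2*D)/(h*h);
        }
    }

    public double Evaluate(double z){
        int i = 0, j = x.Length - 1;                 // binary search for the interval containing z
        while(j - i > 1){ int m = (i + j)/2; if(x[m] > z) j = m; else i = m; }
        double t = z - x[i];
        return y[i] + b[i]*t + c[i]*t*t + d[i]*t*t*t;
    }
}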
One can also consider the extended form
S_i(x) = y_i + b_i(x−x_i) + c_i(x−x_i)² + d_i(x−x_i)³ + e_i(x−x_i)²(x−x_{i+1})²,
choosing the coefficients e_i such that the spline has a continuous second derivative.
— 5 —
Adaptive recursive integrator with subdivision into three subintervals.
Implement a (one-dimensional) adaptive recursive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals. Reuse points. Compare its effectiveness with the adaptive integrator from your homework.
For example, a rule like this (check that the numbers are correct),
x_i = {1/6, 3/6, 5/6} (points reusable upon subdivision by 3);
w_i = {3/8, 2/8, 3/8} (higher-order rule; check this);
v_i = {1/3, 1/3, 1/3} (rectangle rule; check this).
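A sketch of such an integrator in C# (the acc/√3 splitting of the error budget and the error estimate |Q−q| are choices to check against your homework integrator):

using System;

public static class AdaptiveIntegrator3 {
    // Adaptive recursive integrator that splits each interval into three subintervals,
    // reusing the already computed points as the midpoints of the subintervals.
    public static double Integrate(Func<double,double> f, double a, double b,
                                   double acc = 1e-6, double eps = 1e-6, double f2 = double.NaN){
        double h = b - a;
        if(double.IsNaN(f2)) f2 = f(a + h/2);              // only the first call computes the midpoint
        double f1 = f(a + h/6), f3 = f(a + 5*h/6);         // two new points per call
        double Q = h*(3*f1 + 2*f2 + 3*f3)/8;               // higher-order rule, w = {3/8, 2/8, 3/8}
        double q = h*(f1 + f2 + f3)/3;                     // lower-order rule,  v = {1/3, 1/3, 1/3}
        double err = Math.Abs(Q - q);
        if(err <= acc + eps*Math.Abs(Q)) return Q;
        double acc3 = acc/Math.Sqrt(3);
        return Integrate(f, a,         a + h/3,   acc3, eps, f1)   // f1, f2, f3 are reused here as
             + Integrate(f, a + h/3,   a + 2*h/3, acc3, eps, f2)   // the midpoints of the three thirds
             + Integrate(f, a + 2*h/3, b,         acc3, eps, f3);
    }
}

For example, AdaptiveIntegrator3.Integrate(Math.Sin, 0, Math.PI) should return a value close to 2.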
— 6 —
Particle swarm optimization
Implement the particle swarm optimization algorithm.
— 7 —
Implement the least-squares signal declipping method.
— 8 —
Implement the Accelerated Particle Swarm Optimization algorithm (APSO).
In this variant of the Particle Swarm Optimization (PSO) algorithm one dispenses with the particle's velocity and the particle's best position, and instead updates the position of the particle according to the following rule,
x_i ← (1−β)x_i + βg + αv,
where x_i is the particle's position, g is the global best, β ~ 0.1–0.7 and α ~ 0.1–0.5 are the parameters of the algorithm, and v is a random vector given as v_k = U(−(b−a)_k, (b−a)_k), where a and b are the "lower-left" and "upper-right" corners of the search volume and U(c,d) is a uniform distribution from c to d.
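A minimal sketch in C# (the population size, the number of iterations, and the defaults β = 0.3 and α = 0.2 are arbitrary choices; f is the function to be minimized over the box [a,b]):

using System;

public static class APSO {
    public static (double[] best, double fbest) Minimize(Func<double[], double> f, double[] a, double[] b,
                                                         int population = 30, int iterations = 1000,
                                                         double beta = 0.3, double alpha = 0.2){
        var rnd = new Random();
        int dim = a.Length;
        var x = new double[population][];
        double[] g = null; double fg = double.PositiveInfinity;
        for(int i = 0; i < population; i++){                      // random initial positions in the search volume
            x[i] = new double[dim];
            for(int k = 0; k < dim; k++) x[i][k] = a[k] + rnd.NextDouble()*(b[k] - a[k]);
            double fi = f(x[i]);
            if(fi < fg){ fg = fi; g = (double[])x[i].Clone(); }
        }
        for(int t = 0; t < iterations; t++)
            for(int i = 0; i < population; i++){
                for(int k = 0; k < dim; k++){
                    double vk = (b[k] - a[k])*(2*rnd.NextDouble() - 1);   // v_k = U(−(b−a)_k, (b−a)_k)
                    x[i][k] = (1 - beta)*x[i][k] + beta*g[k] + alpha*vk;  // the APSO update rule above
                }
                double fi = f(x[i]);
                if(fi < fg){ fg = fi; g = (double[])x[i].Clone(); }       // update the global best
            }
        return (g, fg);
    }
}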
— 9 —
Adaptive two-dimensional integrator
Implement a two-dimensional integrator for integrals in the form
∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y)
which applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like
static double integ2D( Func<double,double,double> f, double a, double b, Func<double,double> d, Func<double,double> u, double acc, double eps)
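A sketch of the nesting (the inner integral over y is itself a function of x which the outer integrator integrates; the stand-in 1D integrator below is an ordinary recursive adaptive rule and should be replaced by your own routine):

using System;

public static class Integrator2D {
    public static double integ2D(Func<double,double,double> f, double a, double b,
                                 Func<double,double> d, Func<double,double> u,
                                 double acc = 1e-6, double eps = 1e-6){
        Func<double,double> F = x => integ1D(y => f(x, y), d(x), u(x), acc, eps);   // inner integral at given x
        return integ1D(F, a, b, acc, eps);                                          // outer integral over x
    }

    // stand-in adaptive 1D integrator (open 4-point rule with point reuse), to be replaced by your own
    static double integ1D(Func<double,double> f, double a, double b, double acc, double eps){
        double f2 = f(a + (b - a)/3), f3 = f(a + 2*(b - a)/3);
        return integ1D(f, a, b, acc, eps, f2, f3);
    }
    static double integ1D(Func<double,double> f, double a, double b, double acc, double eps,
                          double f2, double f3){
        double h = b - a;
        double f1 = f(a + h/6), f4 = f(a + 5*h/6);
        double Q = h*(2*f1 + f2 + f3 + 2*f4)/6;      // higher-order estimate
        double q = h*(f1 + f2 + f3 + f4)/4;          // lower-order estimate
        if(Math.Abs(Q - q) <= acc + eps*Math.Abs(Q)) return Q;
        return integ1D(f, a, (a + b)/2, acc/Math.Sqrt(2), eps, f1, f2)
             + integ1D(f, (a + b)/2, b, acc/Math.Sqrt(2), eps, f3, f4);
    }
}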
— 10 —
Inverse iteration algorithm for eigenvalues (and eigenvectors)
Implement the inverse iteration algorithm that calculates the eigenvalue that is closest to the given shift-value s (and the corresponding eigenvector) of the given symmetric matrix A. Use your own linear algebra routines.
The inverse iteration algorithm allows calculation of the eigenvalue (and the corresponding eigenvector) that is closest to a given value. Read the book and/or the [inverse iteration] article in Wikipedia.
The interface could be like this (check everything),
(double,vector) inverse_iteration(matrix A, double s, vector x0=null):
    if x0==null: x0 = random_vector
    Q,R = qrdecomp(A - s*1)
    x1 = qrsolve(Q,R,x0)
    λ1 = s + (x1•x0)/(x1•x1)
    do:
        x0 = x1
        x1 = qrsolve(Q,R,x0)
        λ0 = λ1
        λ1 = s + (x1•x0)/(x1•x1)
        if abs(λ1-λ0) < acc: break
    while true
    return (λ1, x1)
— 11 —
Implement the LU-decomposition of a real square matrix.
Implement also the corresponding functions to solve linear equations, calculate the determinant, and calculate the inverse matrix.
Is your LU faster than your QR?
— 12 —
Implement the Lanczos tridiagonalization algorithm for real symmetric matrices.
Consider the hydrogen atom using our discretization method with the given Δr and r_max = NΔr. Use the Lanczos iteration algorithm and build a size-n (where n ≤ N) tridiagonal representation T of the Hamiltonian. Calculate the ground-state energy of the matrix T. Investigate the convergence of the ground-state energy as a function of n.
Check whether the Jacobi eigenvalue algorithm can be tuned to take advantage of the tridiagonal form of a matrix.
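A sketch of the Lanczos recurrence in C# (plain arrays; no re-orthogonalization is performed, so the loss of orthogonality of the Lanczos vectors, and its effect on the convergence, is left for the investigation):

using System;

public static class Lanczos {
    // Returns the diagonal alpha[0..n-1] and off-diagonal beta[0..n-2] of the size-n
    // tridiagonal representation T of the symmetric matrix A (of size N ≥ n).
    public static (double[] alpha, double[] beta) Tridiagonalize(double[,] A, int n, int seed = 1){
        int N = A.GetLength(0);
        var rnd = new Random(seed);
        double[] q = new double[N], qprev = new double[N];
        for(int i = 0; i < N; i++) q[i] = rnd.NextDouble() - 0.5;         // random normalized start vector
        double norm = Norm(q); for(int i = 0; i < N; i++) q[i] /= norm;

        var alpha = new double[n]; var beta = new double[n-1];
        for(int j = 0; j < n; j++){
            var w = new double[N];                                         // w = A q_j
            for(int i = 0; i < N; i++){ double s = 0; for(int k = 0; k < N; k++) s += A[i,k]*q[k]; w[i] = s; }
            double a = 0; for(int i = 0; i < N; i++) a += q[i]*w[i];       // α_j = q_jᵀ A q_j
            alpha[j] = a;
            if(j == n-1) break;
            for(int i = 0; i < N; i++) w[i] -= a*q[i] + (j > 0 ? beta[j-1]*qprev[i] : 0);
            beta[j] = Norm(w);                                             // β_j (≈0 would signal breakdown)
            for(int i = 0; i < N; i++){ qprev[i] = q[i]; q[i] = w[i]/beta[j]; }
        }
        return (alpha, beta);
    }
    static double Norm(double[] v){ double s = 0; foreach(double vi in v) s += vi*vi; return Math.Sqrt(s); }
}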
— 13 —
Implement least-squares signal smoothing as described in the Book.
— 14 —
Hessenberg factorization of a real square matrix using Jacobi transformations
A lower Hessenberg matrix H is a square matrix that has zero elements above the first superdiagonal: H_ij = 0 for j > i+1.
An upper Hessenberg matrix H is a square matrix that has zero elements below the first subdiagonal: H_ij = 0 for i > j+1.
Hessenberg factorization is a representation of a square real matrix A in the form A = QHQᵀ, where Q is an orthogonal matrix and H is a Hessenberg matrix.
If A is a symmetric matrix, its Hessenberg factorization produces a tridiagonal matrix H.
If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular form, reduction to Hessenberg form is often the next best thing. Reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps. Many linear algebra algorithms require significantly less computational effort when applied to Hessenberg matrices.
Consider a Jacobi transformation with the matrix J(p,q),
A → J(p,q)ᵀ A J(p,q),
where the rotation angle is chosen not to zero the element A_{p,q} but rather to zero the element A_{p−1,q}. Argue that the following sequence of such Jacobi transformations,
J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)
reduces the matrix to the lower Hessenberg form.
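A sketch of the reduction in C# (zero-based indices, so the rotation J(p,q) is chosen to zero the element A[p-1,q]; the rotation convention, with J_pp = J_qq = cosθ and J_pq = −J_qp = sinθ, should be checked against your own Jacobi routines):

using System;

public static class HessenbergJacobi {
    // Reduces A (in place) to lower Hessenberg form and returns the accumulated Q,
    // so that A_original = Q·H·Qᵀ with H the transformed A.
    public static double[,] Reduce(double[,] A){
        int n = A.GetLength(0);
        var Q = new double[n,n]; for(int i = 0; i < n; i++) Q[i,i] = 1;
        for(int p = 1; p < n-1; p++)
        for(int q = p+1; q < n; q++){
            double theta = Math.Atan2(-A[p-1,q], A[p-1,p]);        // angle that zeroes A[p-1,q]
            double c = Math.Cos(theta), s = Math.Sin(theta);
            for(int i = 0; i < n; i++){                            // A ← A·J : mix columns p and q
                double aip = A[i,p], aiq = A[i,q];
                A[i,p] = c*aip - s*aiq;  A[i,q] = s*aip + c*aiq;
            }
            for(int j = 0; j < n; j++){                            // A ← Jᵀ·A : mix rows p and q
                double apj = A[p,j], aqj = A[q,j];
                A[p,j] = c*apj - s*aqj;  A[q,j] = s*apj + c*aqj;
            }
            for(int i = 0; i < n; i++){                            // Q ← Q·J
                double qip = Q[i,p], qiq = Q[i,q];
                Q[i,p] = c*qip - s*qiq;  Q[i,q] = s*qip + c*qiq;
            }
        }
        return Q;
    }
}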
Find out (by time measurements) which factorization is faster, Hessenberg or QR.
Check if QR-factorization of an upper Hessenberg matrix using Givens rotations can be done in O(n²) operations.
Check if the Jacobi eigenvalue algorithm, applied cleverly (eigenvalue-by-eigenvalue, and without zeroing the elements that are already zero) to a tridiagonal matrix, takes O(n) operations.
— 15 —
Bi-linear interpolation on a rectilinear grid in two dimensions
The signature of the interpolating subroutine can be
static double bilinear(double[] x, double[] y, matrix F, double px, double py)
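A possible implementation in C# (double[,] stands in for the matrix class; the routine assumes the grids x and y are sorted and that the point (px,py) lies inside the grid):

using System;

public static class Bilinear {
    public static double bilinear(double[] x, double[] y, double[,] F, double px, double py){
        int i = BinSearch(x, px), j = BinSearch(y, py);       // the cell containing (px,py)
        double t = (px - x[i])/(x[i+1] - x[i]);               // normalized coordinates inside the cell
        double u = (py - y[j])/(y[j+1] - y[j]);
        return (1-t)*(1-u)*F[i,j] + t*(1-u)*F[i+1,j]
             + (1-t)*u*F[i,j+1]   + t*u*F[i+1,j+1];
    }
    static int BinSearch(double[] z, double p){               // index i such that z[i] ≤ p ≤ z[i+1]
        int i = 0, j = z.Length - 1;
        while(j - i > 1){ int m = (i + j)/2; if(z[m] > p) j = m; else i = m; }
        return i;
    }
}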
— 16 —
One-sided Jacobi algorithm for Singular Value Decomposition
The singular value decomposition (SVD) of a real square matrix A is a representation of the form
A = U D Vᵀ,
where the matrix D is diagonal with non-negative elements and the matrices U and V are orthogonal. The diagonal elements of D can always be chosen non-negative by multiplying the relevant columns of U by (−1). SVD can be used to solve a number of problems in linear algebra.
The one-sided Jacobi SVD algorithm iterates the transformation
A → A J(θ,p,q)
where the indices (p,q) are swept cyclically (p=1..n, q=p+1..n) and where the angle θ is chosen such that columns p and q of the matrix AJ(θ,p,q) become orthogonal. One can show that the angle should be taken from the following equation (you should use the atan2 function),
tan(2θ) = 2a_pᵀa_q / (a_qᵀa_q − a_pᵀa_p),
where a_i is the i-th column of the matrix A (check this). After the iterations converge and the matrix A' = AJ (where J is the accumulation of the individual rotations) has orthogonal columns, the SVD is simply given as
A = U D Vᵀ,
where V = J, D_ii = ‖a'_i‖, u_i = a'_i/‖a'_i‖,
and where a'_i is the i-th column of the matrix A' and u_i is the i-th column of the matrix U.
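A sketch of the cyclic sweeps in C# (plain arrays; the rotation convention J_pp = J_qq = cosθ, J_pq = −J_qp = sinθ and the convergence criterion are choices to check; a singular A, where some column norm vanishes, is not handled):

using System;

public static class OneSidedJacobiSVD {
    // On return A is overwritten by A·J = U·D (its columns are mutually orthogonal);
    // the singular values are the column norms and V accumulates the rotations.
    public static (double[,] U, double[] D, double[,] V) Decompose(double[,] A, double tol = 1e-12){
        int n = A.GetLength(0);
        var V = new double[n,n]; for(int i = 0; i < n; i++) V[i,i] = 1;
        bool changed;
        do {
            changed = false;
            for(int p = 0; p < n; p++)
            for(int q = p+1; q < n; q++){
                double apq = 0, app = 0, aqq = 0;                 // a_pᵀa_q, a_pᵀa_p, a_qᵀa_q
                for(int i = 0; i < n; i++){ apq += A[i,p]*A[i,q]; app += A[i,p]*A[i,p]; aqq += A[i,q]*A[i,q]; }
                if(Math.Abs(apq) < tol*Math.Sqrt(app*aqq)) continue;   // columns already orthogonal
                changed = true;
                double theta = 0.5*Math.Atan2(2*apq, aqq - app);       // tan(2θ) = 2a_pᵀa_q/(a_qᵀa_q − a_pᵀa_p)
                double c = Math.Cos(theta), s = Math.Sin(theta);
                for(int i = 0; i < n; i++){                            // A ← A·J and V ← V·J
                    double aip = A[i,p], aiq = A[i,q]; A[i,p] = c*aip - s*aiq; A[i,q] = s*aip + c*aiq;
                    double vip = V[i,p], viq = V[i,q]; V[i,p] = c*vip - s*viq; V[i,q] = s*vip + c*viq;
                }
            }
        } while(changed);
        var D = new double[n]; var U = new double[n,n];
        for(int j = 0; j < n; j++){
            double nrm = 0; for(int i = 0; i < n; i++) nrm += A[i,j]*A[i,j];
            D[j] = Math.Sqrt(nrm);
            for(int i = 0; i < n; i++) U[i,j] = A[i,j]/D[j];           // u_j = a'_j/‖a'_j‖
        }
        return (U, D, V);
    }
}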
— 17 —
Gauss-Newton algorithm
Implement the Gauss-Newton algorithm.
— 18 —
— 19 —
Implement the Bare Bones Particle Swarm Optimization (BBPSO) algorithm (see the book).
In this variant of the Particle Swarm Optimization (PSO) algorithm one dispenses with the particle's velocity and instead updates the position of the particle according to the following rule,
x_i ← G((p_i+g)/2, ‖p_i−g‖),
where x_i is the particle's position, p_i is the particle's best position, g is the global best, ‖…‖ signifies the norm of the vector, and G(m,σ) is the multi-dimensional Gaussian (normal) distribution with the mean m and variance σ.
Hint: the normal distribution can be approximated by the n=12 Irwin-Hall distribution (read the book),
G(0,1) ≈ Σ_{i=1..12} u_i(0,1) − 6,
where u(0,1) is a random number from the uniform [0,1] distribution.
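A sketch in C# (population size and iteration count are arbitrary; ‖p_i − g‖ is interpreted here as the common, per-component width of the Gaussian, which you should check against the book's definition):

using System;

public static class BBPSO {
    static Random rnd = new Random();
    // standard normal variate via the n=12 Irwin-Hall approximation mentioned above
    static double Gauss01(){ double s = 0; for(int i = 0; i < 12; i++) s += rnd.NextDouble(); return s - 6; }

    public static (double[] best, double fbest) Minimize(Func<double[], double> f, double[] a, double[] b,
                                                         int population = 30, int iterations = 1000){
        int dim = a.Length;
        var x = new double[population][]; var p = new double[population][]; var fp = new double[population];
        double[] g = null; double fg = double.PositiveInfinity;
        for(int i = 0; i < population; i++){                       // random initialization in the box [a,b]
            x[i] = new double[dim];
            for(int k = 0; k < dim; k++) x[i][k] = a[k] + rnd.NextDouble()*(b[k] - a[k]);
            p[i] = (double[])x[i].Clone(); fp[i] = f(x[i]);
            if(fp[i] < fg){ fg = fp[i]; g = (double[])p[i].Clone(); }
        }
        for(int t = 0; t < iterations; t++)
            for(int i = 0; i < population; i++){
                double sigma = 0;                                  // σ = ‖p_i − g‖
                for(int k = 0; k < dim; k++) sigma += (p[i][k] - g[k])*(p[i][k] - g[k]);
                sigma = Math.Sqrt(sigma);
                for(int k = 0; k < dim; k++)                       // x_i ← G((p_i+g)/2, ‖p_i−g‖)
                    x[i][k] = (p[i][k] + g[k])/2 + sigma*Gauss01();
                double fx = f(x[i]);
                if(fx < fp[i]){ fp[i] = fx; p[i] = (double[])x[i].Clone(); }    // personal best
                if(fx < fg){ fg = fx; g = (double[])x[i].Clone(); }             // global best
            }
        return (g, fg);
    }
}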
— 20 —
The ordinary cubic spline sometimes makes unpleasant wiggles, for example around an outlier, or around a step-like feature of the tabulated function (read the Akima sub-spline chapter in the lecture notes). Here is yet another attempt to reduce the wiggles by building a sub-spline.
Consider a data set {x_i, y_i}, i = 1,…,n, which represents a tabulated function.
Implement a sub-spline of this data using the following algorithm:
S_i(x) = y_i + b_i(x−x_i) + c_i(x−x_i)² + d_i(x−x_i)³,
where p_i is a suitable estimate of the derivative of the tabulated function at the point x_i, and where for each interval the three coefficients b_i, c_i, d_i are determined by the three conditions,
S_i(x_{i+1}) = y_{i+1},
S'_i(x_i) = p_i,
S'_i(x_{i+1}) = p_{i+1}.
— 21 —
Implement the Berrut B1 rational function interpolation algorithm.
— 22 —
Cholesky decomposition of a real symmetric positive-definite matrix
For a real symmetric positive-definite matrix A the Cholesky decomposition, A = LLᵀ (where L is lower-triangular), is more efficient than the LU-decomposition. Read the book and/or the Wikipedia article on Cholesky decomposition.
Implement the Cholesky decomposition of a real symmetric positive-definite matrix using the Cholesky-Banachiewicz or Cholesky-Crout algorithm.
Try to implement also the corresponding linear-equation solver, the calculation of the determinant, and the calculation of the inverse matrix.
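A sketch of the Cholesky-Banachiewicz factorization and the corresponding triangular solver in C# (plain arrays instead of your matrix/vector classes):

using System;

public static class Cholesky {
    // Returns the lower-triangular L with A = L·Lᵀ; throws if A is not positive definite.
    public static double[,] Decompose(double[,] A){
        int n = A.GetLength(0);
        var L = new double[n,n];
        for(int i = 0; i < n; i++)
        for(int j = 0; j <= i; j++){
            double s = A[i,j];
            for(int k = 0; k < j; k++) s -= L[i,k]*L[j,k];
            if(i == j){
                if(s <= 0) throw new ArgumentException("matrix is not positive definite");
                L[i,j] = Math.Sqrt(s);
            }
            else L[i,j] = s/L[j,j];
        }
        return L;
    }

    // Solves A x = b given the factor L: forward substitution L y = b, then back substitution Lᵀ x = y.
    public static double[] Solve(double[,] L, double[] b){
        int n = b.Length;
        var y = new double[n];
        for(int i = 0; i < n; i++){
            double s = b[i];
            for(int k = 0; k < i; k++) s -= L[i,k]*y[k];
            y[i] = s/L[i,i];
        }
        var x = new double[n];
        for(int i = n-1; i >= 0; i--){
            double s = y[i];
            for(int k = i+1; k < n; k++) s -= L[k,i]*x[k];
            x[i] = s/L[i,i];
        }
        return x;
    }
    // det(A) = Π_i L[i,i]²; the inverse can be built by solving A x = e_k for the unit vectors e_k.
}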
— 23 —
Akima sub-spline
Implement the Akima (sub-)spline interpolation. See the section "Akima sub-spline interpolation" in the book for the inspiration.
— 24 —
Eigenvalues via rootfinding: Variational method with Lagrange multipliers
Consider the problem of finding the stationary points of E(v) = vᵀAv under the constraint vᵀv = 1.
This can be done using the method of Lagrange multipliers: the original problem is reformulated into unconstrained search for the stationary point (the point where all partial derivatives are zero) of the Lagrangian function defined as
L(v,λ) = vᵀAv − λ(vᵀv − 1)
in the space {v,λ}. At the stationary point λ is the eigenvalue and v is the corresponding eigenvector normalized to unity. Indeed, at the stationary point
∂L(v,λ)/∂v = 2(Av − λv) = 0,  ∂L(v,λ)/∂λ = −(vᵀv − 1) = 0.
Of course this is equivalent to finding the root, in the (n+1)-dimensional space x = {v,λ}, of the (n+1)-dimensional vector-function
f(x) = {Av − λv, vᵀv − 1},
where n is the size of v.
Note that the Jacobian of this function is analytic,
∂f_i(x)/∂v_j = A_ij − λδ_ij, for i,j ≤ n;
∂f_i(x)/∂λ = −v_i, for i ≤ n;
∂f_{n+1}(x)/∂v_j = 2v_j, for j ≤ n;
∂f_{n+1}(x)/∂λ = 0.
The analytic Jacobian greatly facilitates the search for the root.
Implement a subroutine that calculates the eigenvalue (close to a given value λ_start) and the corresponding eigenvector of a given symmetric matrix A by searching for the root of the function
f(v,λ) = {Av − λv, vᵀv − 1},
using the components of the vector v and the (Lagrange multiplier) λ as free parameters. Use your own Newton's root-finding routine and customize it to use the analytic Jacobian.
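A sketch of the two ingredients that the Newton root-finder needs, the function f and its analytic Jacobian, in C# (the vector x packs the components of v followed by λ; plain arrays stand in for your matrix/vector classes):

using System;

public static class EigenByRootfinding {
    // f(v,λ) = {Av − λv, vᵀv − 1}, with x = {v_0,…,v_{n-1}, λ}.
    public static double[] F(double[,] A, double[] x){
        int n = x.Length - 1; double lambda = x[n];
        var f = new double[n+1];
        for(int i = 0; i < n; i++){
            double s = -lambda*x[i];
            for(int j = 0; j < n; j++) s += A[i,j]*x[j];
            f[i] = s;                                    // (Av − λv)_i
        }
        double vv = 0; for(int i = 0; i < n; i++) vv += x[i]*x[i];
        f[n] = vv - 1;                                   // vᵀv − 1
        return f;
    }

    // The analytic (n+1)×(n+1) Jacobian from the formulas above.
    public static double[,] Jacobian(double[,] A, double[] x){
        int n = x.Length - 1; double lambda = x[n];
        var J = new double[n+1, n+1];
        for(int i = 0; i < n; i++){
            for(int j = 0; j < n; j++) J[i,j] = A[i,j] - (i == j ? lambda : 0);   // A_ij − λδ_ij
            J[i,n] = -x[i];                                                        // ∂f_i/∂λ = −v_i
            J[n,i] = 2*x[i];                                                       // ∂f_{n+1}/∂v_i = 2v_i
        }
        J[n,n] = 0;
        return J;
    }
    // Newton's method then iterates x ← x + Δx with J(x)·Δx = −f(x), starting from
    // x = {v_start, λ_start} with some normalized guess v_start.
}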
Investigate the scaling of this method (the time to find the eigenvalue as a function of the size of the matrix) and find out how far in the size of the matrix you can realistically go (that is, within a few seconds of calculation time).
Calculate several of the lowest states of the hydrogen atom.
It seems that the line search here can be done analytically; you might want to investigate this possibility.
— 25 —
Implement the least-squares signal extrapolation method (linear prediction).
— 26 —
Two-sided Jacobi algorithm for Singular Value Decomposition (SVD)
The singular value decomposition (SVD) of a real square matrix A is a representation of the form
A = U D Vᵀ,
where the matrix D is diagonal with non-negative elements and the matrices U and V are orthogonal. The diagonal elements of D can always be chosen non-negative by multiplying the relevant columns of U by (−1). SVD can be used to solve a number of problems in linear algebra.
Recall the Jacobi eigenvalue algorithm for symmetric matrices: one applies the transformation
A → Jᵀ A J,
where J ≐ J(θ,p,q) is the Jacobi rotation matrix (with the angle θ chosen to eliminate the off-diagonal elements A_pq = A_qp), to all upper off-diagonal matrix elements in cyclic sweeps until the matrix becomes diagonal.
The two-sided Jacobi SVD algorithm for general real square matrices is the same as the Jacobi eigenvalue algorithm, except that the elementary transformation is slightly different,
A → Jᵀ Gᵀ A J,
where G ≐ G(θ,p,q) is an additional rotation whose angle is given by
tan(θ) = (A_pq − A_qp)/(A_qq + A_pp)
(check that this is correct); this rotation makes the (p,q) sub-matrix of GᵀA symmetric, so that the subsequent Jacobi rotation J can eliminate its off-diagonal elements. The matrices U and V are updated after each transformation as U ← U G J and V ← V J (check this as well).
Of course you should not actually multiply Jacobi matrices—which costs O(n³) operations—but rather perform updates which only cost O(n) operations.
— 27 —
ODE: a two-step method
Implement a two-step stepper for solving ODE (as in the book). Use your own adaptive stepsize driver.
— 28 —
Symmetric row/column update of a size-n symmetric eigenvalue problem
The matrix to diagonalize is given in the form
A = D + e(p) uᵀ + u e(p)ᵀ,
where D is a diagonal matrix with diagonal elements {d_k, k=1,…,n}, u is a given column-vector, and the vector e(p), with components e(p)_i = δ_ip, is the unit vector in the direction p, where 1 ≤ p ≤ n.
Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
— 29 —
Implement the least-squares missing samples recovery method.
— 30 —
Implement a stochastic global optimizer using the following algorithm,
— 31 —
Implement a function to solve the generalized eigenvalue problem AV = BVE (where A is a real symmetric matrix, B is a real symmetric positive-definite matrix, V is the matrix of the generalized eigenvectors, and E is the diagonal matrix of the generalized eigenvalues) via diagonalization of the matrix B. Specifically, the diagonalization of the matrix B, given as B = QSQᵀ, where S is a diagonal matrix with positive elements and Q is an orthogonal matrix, turns the eigenproblem
AV = BVE
into
AV = QSQᵀVE.
Multiplying from the left by S⁻½Qᵀ and inserting the identity QS⁻½·S½Qᵀ between A and V gives
S⁻½QᵀAQS⁻½ · S½QᵀV = S½QᵀV · E,
that is, the ordinary symmetric eigenvalue problem
Ã X = X E,
where
Ã = S⁻½QᵀAQS⁻½,
X = S½QᵀV.
After solving it for E and X with your ordinary symmetric eigensolver, the generalized eigenvectors are recovered as
V = QS⁻½X.
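A sketch of the two transformation steps in C# (plain arrays; the diagonalizations themselves, B = QSQᵀ and ÃX = XE, are assumed to be done with your own Jacobi routine, so only Q, the diagonal elements s of S, and X appear as inputs here; the O(n⁴) quadruple loop is for clarity and would normally be two O(n³) matrix multiplications):

using System;

public static class GenEig {
    // Builds Ã = S⁻½ Qᵀ A Q S⁻½ from A and the diagonalization B = Q·diag(s)·Qᵀ.
    public static double[,] MakeAtilde(double[,] A, double[,] Q, double[] s){
        int n = s.Length;
        var At = new double[n,n];
        for(int i = 0; i < n; i++)
        for(int j = 0; j < n; j++){
            double sum = 0;
            for(int k = 0; k < n; k++)
            for(int l = 0; l < n; l++) sum += Q[k,i]*A[k,l]*Q[l,j];     // (QᵀAQ)_ij
            At[i,j] = sum/Math.Sqrt(s[i]*s[j]);                         // Ã_ij = (QᵀAQ)_ij/√(s_i s_j)
        }
        return At;
    }

    // Recovers the generalized eigenvectors V = Q S⁻½ X from the eigenvectors X of Ã.
    public static double[,] RecoverV(double[,] Q, double[] s, double[,] X){
        int n = s.Length;
        var V = new double[n,n];
        for(int i = 0; i < n; i++)
        for(int j = 0; j < n; j++){
            double sum = 0;
            for(int k = 0; k < n; k++) sum += Q[i,k]/Math.Sqrt(s[k])*X[k,j];
            V[i,j] = sum;
        }
        return V;
    }
}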
Find the ground state of the hydrogen atom with the (radial, s-wave) Schrödinger equation
(−1/2 d²/dr² − 1/r) u(r) = ε u(r),
by expanding the reduced radial wave-function u(r) in a set of Gaussians,
u(r) = Σ_{i=1..n} c_i φ_i(r),
φ_i(r) = r·e^(−α_i r²),
with some chosen parameters α_i. This turns the Schrödinger equation into a generalized eigenvalue problem for the coefficients c_i,
H c = ε N c,
where
H_ij = ∫_0^∞ dr φ_i(r) (−1/2 d²/dr² − 1/r) φ_j(r),  N_ij = ∫_0^∞ dr φ_i(r) φ_j(r).
The needed integrals are
∫_0^∞ dr (r e^(−αr²)) (r e^(−βr²)) = (1/4) √π (α+β)^(−3/2),
∫_0^∞ dr (r e^(−αr²)) d²/dr² (r e^(−βr²)) = −(3/2) √π αβ (α+β)^(−5/2),
∫_0^∞ dr (r e^(−αr²)) (1/r) (r e^(−βr²)) = (1/2) (α+β)^(−1).
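A sketch of how the matrices H and N could be assembled from these integrals in C# (the basis parameters alpha[i] are your choice, for example a geometric sequence):

using System;

public static class HydrogenGauss {
    // Hamiltonian and overlap matrices in the basis φ_i(r) = r·exp(−α_i r²).
    public static (double[,] H, double[,] N) BuildMatrices(double[] alpha){
        int n = alpha.Length;
        var H = new double[n,n]; var N = new double[n,n];
        for(int i = 0; i < n; i++)
        for(int j = 0; j < n; j++){
            double a = alpha[i], b = alpha[j], s = a + b;
            double overlap = Math.Sqrt(Math.PI)/4*Math.Pow(s, -1.5);                 // ∫ φ_i φ_j
            double kinetic = -0.5*(-1.5*Math.Sqrt(Math.PI)*a*b*Math.Pow(s, -2.5));   // −(1/2) ∫ φ_i φ_j''
            double coulomb = -0.5/s;                                                 // −∫ φ_i (1/r) φ_j
            N[i,j] = overlap;
            H[i,j] = kinetic + coulomb;
        }
        return (H, N);
    }
    // H c = ε N c is then solved with the reduction described above;
    // the lowest ε should approach the exact ground-state energy −1/2.
}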
Try n = 1, 2, 3, … Gaussians in your basis and investigate the convergence. The method will probably break down at about 5 Gaussians due to their excessive overlap.