Math 5610 - Computational Linear Algebra


Project maintained by BrandonFurman. Hosted on GitHub Pages; theme by mattgraham.

Software Manual

Table of Contents

Completed Homework Tasks

Homework 1

  1. Task 1 was to create routines that return the machine epsilon (machine precision) for any computer. Those routines are smaceps for single precision and dmaceps for double precision.
  2. Task 2 was to create a github repository.
  3. Task 3 was to create this web page.
  4. Task 4 was to create a folder for the software manual.
  5. Task 5 was to create a table of contents for the software manual. It can be found here.
  6. Task 6 was to create a shared library containing the routines created in Task 1. It can be found here.
  7. Task 7 was to use OpenMP to find out how many cores our computers have. The results can be found here.
  8. Task 8 was to write brief paragraphs on three disasters caused by bad numerical computing. Those paragraphs can be found here.
  9. Task 9 was to create a routine that returns a random matrix of given size. It is entry randMat in the software manual.
  10. Task 10 was to discuss current linear algebra packages available. That discussion can be found here.
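
As an illustration of Task 1, machine epsilon can be found by halving a candidate value until adding it to 1.0 no longer changes the result. The sketch below is in Python for brevity; the actual smaceps/dmaceps routines may be written in another language and have a different interface.

```python
def dmaceps():
    """Estimate double-precision machine epsilon: the smallest eps
    such that 1.0 + eps > 1.0 under floating-point rounding."""
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps
```

For IEEE 754 double precision this returns 2^-52, roughly 2.22e-16.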

Homework 2

  1. Task 1 was to publish this page; it is complete by virtue of being visible here.
  2. Task 2 was to create a routine that returns the absolute error between two numbers. This routine is detailed in the absErr entry of the software manual.
  3. Task 3 was to create a routine that returns the relative error between two numbers. This routine is detailed in the relErr entry of the software manual.
  4. Task 4 was to create a routine to add two vectors of the same length. This routine is detailed in the addVec entry of the software manual.
  5. Task 5 was to create a routine to multiply a vector by a scalar. This routine is detailed in the scaleVec entry of the software manual.
  6. Task 6 was to create a routine that returns the 2-norm of a given vector. This routine is detailed in the twoNormVec entry of the software manual.
  7. Task 7 was to create a routine that returns the 1-norm of a given vector. This routine is detailed in the oneNormVec entry of the software manual.
  8. Task 8 was to create a routine that returns the infinity-norm of a given vector. This routine is detailed in the infNormVec entry of the software manual.
  9. Task 9 was to create a routine that returns a random symmetric matrix. This routine is detailed in the randSymMat entry of the software manual.
  10. Task 10 was to write a brief summary of induced matrix norms. That summary can be found here.
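
The three vector norms from Tasks 6 through 8 can each be sketched in a line or two. Python is used here purely for illustration, and the function names simply mirror the manual entries:

```python
import math

def twoNormVec(v):
    # Euclidean norm: square root of the sum of squares
    return math.sqrt(sum(x * x for x in v))

def oneNormVec(v):
    # sum of absolute values of the entries
    return sum(abs(x) for x in v)

def infNormVec(v):
    # largest absolute entry
    return max(abs(x) for x in v)
```

For example, twoNormVec([3.0, 4.0]) returns 5.0.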

Homework 3

  1. Task 1 was to create a routine that returns the absolute error between two vectors when the 2-norm is used. This routine is detailed in the absErrVecTwoNorm entry of the software manual.
  2. Task 2 was to create a routine that returns the absolute error between two vectors when the 1-norm is used. This routine is detailed in the absErrVecOneNorm entry of the software manual.
  3. Task 3 was to create a routine that returns the absolute error between two vectors when the infinity-norm is used. This routine is detailed in the absErrVecInfNorm entry of the software manual.
  4. Task 4 was to create a routine that returns the one-norm of a given square matrix. This routine is detailed in the oneNormMat entry of the software manual.
  5. Task 5 was to create a routine that returns the infinity-norm of a given square matrix. This routine is detailed in the infNormMat entry of the software manual.
  6. Task 6 was to create a routine that returns the dot product of two vectors of same length. This routine is detailed in the dotProduct entry of the software manual.
  7. Task 7 was to create a routine that returns the cross product of three vectors of length three. This routine is detailed in the crossProduct entry of the software manual.
  8. Task 8 was to create a routine that returns the product of two matrices with equal inner dimension. This routine is detailed in the multMat entry of the software manual.
  9. Task 9 was to create a routine that returns a random diagonally dominant matrix with real values in every entry. This routine is detailed in the randDiagDomMat entry of the software manual.
  10. Task 10 was to discuss the Frobenius Norm. That discussion can be found here.
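
For Tasks 4 and 5, recall that the induced matrix 1-norm is the maximum absolute column sum and the induced infinity-norm is the maximum absolute row sum. A Python sketch (names mirror the manual entries; matrices are lists of rows):

```python
def oneNormMat(A):
    # induced 1-norm: maximum absolute column sum
    rows, cols = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(rows)) for j in range(cols))

def infNormMat(A):
    # induced infinity-norm: maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in A)
```

For A = [[1, -2], [3, 4]], the 1-norm is 6 (second column) and the infinity-norm is 7 (second row).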

Homework 4

  1. Task 1 was to implement a method that returns the scalar multiple of a given matrix. This method is detailed in the scaleMat entry of the software manual.
  2. Task 2 was to create a routine that returns the sum of two matrices of the same dimensions. This method is detailed in the addMat entry of the software manual.
  3. Task 3 was to create a routine that returns the outer product of two vectors. This method is detailed in the vecOuterProduct entry of the software manual.
  4. Task 4 was to create a routine that solves a linear system whose coefficient matrix is diagonal. This method is detailed in the diagSolver entry of the software manual.
  5. Task 5 was to create a routine that solves an upper-triangular system by back substitution. This method is detailed in the backSub entry of the software manual.
  6. Task 6 was to create a routine that solves a lower-triangular system by forward substitution. This method is detailed in the forwardSub entry of the software manual.
  7. Task 7 was to create a routine that row reduces a matrix. This method is detailed in the matRowReduction entry of the software manual.
  8. Task 8 was to create a routine that solves a square linear system. This method is detailed in the slowSquareSystemSolver entry of the software manual.
  9. Task 9 was to create a routine that returns a random symmetric diagonally dominant matrix. This method is detailed in the randSymDiagDomMat entry of the software manual.
  10. A discussion of parallel matrix-vector and matrix-matrix multiplication algorithms can be found here.
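
Back substitution (the method behind the backSub entry) solves an upper-triangular system Ux = b from the last row upward, since each row then involves only unknowns already computed. A Python sketch, with the name borrowed from the manual entry:

```python
def backSub(U, b):
    # Solve U x = b for upper-triangular U, last unknown first.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the contribution of already-solved unknowns
        s = b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]      # assumes a nonzero diagonal
    return x
```

For example, backSub([[2, 1], [0, 3]], [5, 6]) gives x = [1.5, 2.0].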

Homework 5

  1. An inlined square system solver is detailed in the SquareSystemSolver entry of the software manual. A comparison between this function and a non-inlined version of the same function is detailed here.
  2. Task 2 was to create a routine that computes the LU factorization of a square matrix. This method is detailed in the LUDecomp entry of the software manual.
  3. Task 3 was to create a routine that solves a square linear system using an LU factorization. This method is detailed in the LUSquareSystemSolver entry of the software manual.
  4. Task 4 was to create a routine that returns a random symmetric positive-definite matrix. This method is detailed in the randSymPosDefMat entry of the software manual.
  5. Task 5 was to create a routine that computes the Cholesky factorization of a symmetric positive-definite matrix. This method is detailed in the CholeskyDecomp entry of the software manual.
  6. Task 6 was to create a routine that solves the least-squares problem via the normal equations. This method is detailed in the normalEqSolver entry of the software manual.
  7. Task 7 was to create a routine that computes a QR factorization using the Classical Gram-Schmidt process. This method is detailed in the QRDecomp_CGS entry of the software manual.
  8. A discussion on the performance of the Classical Gram-Schmidt process with respect to Hilbert matrices can be found here.
  9. This method is detailed in the randDiagDomMat entry of the software manual.
  10. A summary of the limitations of direct methods can be found here.
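
The idea behind an LU factorization (the LUDecomp entry) is Gaussian elimination that records its multipliers: eliminating below each pivot produces U, and the multipliers fill in a unit-lower-triangular L. A Python sketch of the Doolittle variant without pivoting; the course routine's interface may differ, and a practical version would pivot:

```python
def LUDecomp(A):
    # Doolittle LU without pivoting: A = L U with unit-diagonal L.
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]           # work on a copy of A
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # multiplier; assumes nonzero pivot
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]  # eliminate entry (i, k)
    return L, U
```

Once L and U are in hand, a square system is solved by one forward and one back substitution, which is the structure of LUSquareSystemSolver.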

Homework 6

  1. Task 1 was to create a routine that solves a square linear system using a QR factorization. This method is detailed in the QRSquareSystemSolver entry of the software manual.
  2. Task 2 was to create a routine that computes a QR factorization using the Modified Gram-Schmidt process. This method is detailed in the QRDecomp_MGS entry of the software manual. A comparison between this function and the Classical Gram-Schmidt procedure can be found here.
  3. WIP
  4. WIP
  5. WIP
  6. Task 6 was to create a routine that solves a linear system using Jacobi iteration. This method is detailed in the jacobiSolver entry of the software manual. The second example shows the function solving a system of 1000 equations in 1000 unknowns.
  7. Task 7 was to create a routine that solves a linear system using the Gauss-Seidel method. This method is detailed in the gaussSeidelSolver entry of the software manual. The second example shows the function solving a system of 1000 equations in 1000 unknowns.
  8. A comparison between the Jacobi and Gauss-Seidel algorithms can be found here. It shows how the number of iterations each algorithm needs to converge changes with matrix size.
  9. WIP
  10. WIP
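
Jacobi iteration (the jacobiSolver entry) updates every unknown from the previous iterate only, which is what makes it easy to parallelize. A Python sketch, with an illustrative interface; convergence is guaranteed for strictly diagonally dominant matrices:

```python
def jacobiSolver(A, b, tol=1e-10, maxit=10000):
    # Solve A x = b by Jacobi iteration: each component is updated
    # using only values from the previous iterate.
    n = len(b)
    x = [0.0] * n
    for _ in range(maxit):
        xn = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
              for i in range(n)]
        if max(abs(xn[i] - x[i]) for i in range(n)) < tol:
            return xn
        x = xn
    return x
```

Gauss-Seidel differs only in using each newly computed component immediately within the sweep, which typically cuts the iteration count.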

Homework 7

  1. A comparison between Gaussian Elimination and Jacobi Iteration is given here.
  2. A comparison between Gaussian Elimination and the Gauss-Seidel algorithm is given on the same page as problem 1 (here).
  3. Task 3 was to create a routine implementing the Steepest Descent Method. This method is detailed in the steepestDescent entry of the software manual.
  4. A discussion on the performance of the Steepest Descent Method for Hilbert matrices can be found here.
  5. Task 5 was to create a routine implementing the Conjugate Gradient Method. This method is detailed in the conjGrad entry of the software manual.
  6. A discussion on the performance of the Conjugate Gradient Method for Hilbert matrices can be found here.
  7. A list of iterative methods to solve linear systems of equations can be found here.
  8. A list of preconditioning strategies for iterative methods can be found here.
  9. A comparison between Jacobi Iteration and the Conjugate Gradient Method can be found here.
  10. WIP
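
The Conjugate Gradient Method (the conjGrad entry) applies to symmetric positive-definite systems and, in exact arithmetic, converges in at most n steps. A Python sketch of the standard algorithm; the names and interface are illustrative and not necessarily those of the course routine:

```python
def conjGrad(A, b, tol=1e-10, maxit=1000):
    # Conjugate gradient for symmetric positive-definite A.
    n = len(b)
    x = [0.0] * n
    r = b[:]                # residual b - A*x for x = 0
    p = r[:]                # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # exact line search
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        # new direction is A-conjugate to the previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

Steepest descent uses the same step length but always searches along the raw residual, which is why it stalls badly on ill-conditioned matrices such as Hilbert matrices.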

Homework 8 (fixed #6; added #5, #8, and #10)

  1. Task 1 was to create a routine implementing the Power Method. This method is detailed in the powerMethod entry of the software manual, which also includes an example for a Hilbert matrix of size 8.
  2. Task 2 was to create a routine implementing inverse iteration. This method is detailed in the inverseIteration entry of the software manual, which also includes an example for a Hilbert matrix of size 8.
  3. Task 3 was to create a routine that approximates the condition number of a matrix. This method is detailed in the condNumApprox entry of the software manual, which also includes an example for a Hilbert matrix of size 8.
  4. A graph of Condition Number vs. Hilbert Matrix size is available here.
  5. A function that attempts to locate multiple eigenvalues of a given matrix by subdividing the interval between the largest and smallest eigenvalue can be found in the eigenFind entry of the software manual.
  6. Task 6 was to create a Rayleigh quotient routine. This method is detailed in the rayleighQuotient entry of the software manual, which also includes an example for a Hilbert matrix of size 8.
  7. WIP
  8. WIP
  9. WIP
  10. The routine for Task 10 is detailed in the inverseIteration_JACOBI entry of the software manual, which includes an example using a randomly generated diagonally dominant matrix.
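
The Power Method (the powerMethod entry) repeatedly applies A to a vector and normalizes, so the iterate aligns with the dominant eigenvector. A Python sketch; the course routine's interface and stopping rule may differ, and the method assumes a single dominant eigenvalue and a starting vector not orthogonal to its eigenvector:

```python
import math

def powerMethod(A, tol=1e-12, maxit=10000):
    # Approximate the dominant (largest-magnitude) eigenvalue of A.
    n = len(A)
    v = [1.0] * n       # starting vector (assumed to have a component
    lam = 0.0           # along the dominant eigenvector)
    for _ in range(maxit):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]            # keep the iterate unit length
        # eigenvalue estimate: Rayleigh quotient of the normalized iterate
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        new_lam = sum(v[i] * Av[i] for i in range(n))
        if abs(new_lam - lam) < tol:
            return new_lam
        lam = new_lam
    return lam
```

Inverse iteration applies the same loop to (A - sI)^-1, so it converges to the eigenvalue nearest the shift s instead of the largest one.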