
Introduction to Computational Linear Algebra
242 Pages · 2016 · 1.143 MB · English

INTRODUCTION TO COMPUTATIONAL LINEAR ALGEBRA

Nabil Nassif, American University of Beirut, Lebanon
Jocelyne Erhel, INRIA, Rennes, France
Bernard Philippe, INRIA, Rennes, France

CRC Press, Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
© 2016 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
Version Date: 20150529
International Standard Book Number-13: 978-1-4822-5871-4 (eBook - PDF)
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Contents

Preface
List of Figures
List of Tables
List of Algorithms

1 Basic Linear Algebra Subprograms - BLAS
  1.1 An Introductory Example
  1.2 Matrix Notations
  1.3 IEEE Floating Point Systems and Computer Arithmetic
  1.4 Vector-Vector Operations: Level-1 BLAS
  1.5 Matrix-Vector Operations: Level-2 BLAS
  1.6 Matrix-Matrix Operations: Level-3 BLAS
    1.6.1 Matrix Multiplication Using GAXPYs
    1.6.2 Matrix Multiplication Using Scalar Products
    1.6.3 Matrix Multiplication Using External Products
    1.6.4 Block Multiplications
    1.6.5 An Efficient Data Management
  1.7 Sparse Matrices: Storage and Associated Operations
  1.8 Exercises
  1.9 Computer Project: Strassen Algorithm

2 Basic Concepts for Matrix Computations
  2.1 Vector Norms
  2.2 Complements on Square Matrices
    2.2.1 Definition of Important Square Matrices
    2.2.2 Use of Orthonormal Bases
    2.2.3 Gram-Schmidt Process
    2.2.4 Determinants
    2.2.5 Eigenvalue-Eigenvector and Characteristic Polynomial
    2.2.6 Schur's Decomposition
    2.2.7 Orthogonal Decomposition of Symmetric Real and Complex Hermitian Matrices
      2.2.7.1 A Real and Symmetric: A = A^T
      2.2.7.2 A Complex Hermitian: A = A^*
    2.2.8 Symmetric Positive Definite and Positive Semi-Definite Matrices
  2.3 Rectangular Matrices: Ranks and Singular Values
    2.3.1 Singular Values of a Matrix
    2.3.2 Singular Value Decomposition
  2.4 Matrix Norms
  2.5 Exercises
  2.6 Computer Exercises

3 Gauss Elimination and LU Decompositions of Matrices
  3.1 Special Matrices for LU Decomposition
    3.1.1 Triangular Matrices
    3.1.2 Permutation Matrices
  3.2 Gauss Transforms
    3.2.1 Preliminaries for Gauss Transforms
    3.2.2 Definition of Gauss Transforms
  3.3 Naive LU Decomposition for a Square Matrix with Principal Minor Property (pmp)
    3.3.1 Algorithm and Operations Count
    3.3.2 LDL^T Decomposition of a Matrix Having the Principal Minor Property (pmp)
    3.3.3 The Case of Symmetric and Positive Definite Matrices: Cholesky Decomposition
    3.3.4 Diagonally Dominant Matrices
  3.4 PLU Decompositions with Partial Pivoting Strategy
    3.4.1 Unscaled Partial Pivoting
    3.4.2 Scaled Partial Pivoting
    3.4.3 Solving a System Ax = b Using the LU Decomposition
  3.5 MATLAB Commands Related to the LU Decomposition
  3.6 Condition Number of a Square Matrix
  3.7 Exercises
  3.8 Computer Exercises

4 Orthogonal Factorizations and Linear Least Squares Problems
  4.1 Formulation of Least Squares Problems: Regression Analysis
    4.1.1 Least Squares and Regression Analysis
    4.1.2 Matrix Formulation of Regression Problems
  4.2 Existence of Solutions Using Quadratic Forms
    4.2.1 Full Rank Cases: Application to Regression Analysis
  4.3 Existence of Solutions through Matrix Pseudo-Inverse
    4.3.1 Obtaining Matrix Pseudo-Inverse through Singular Value Decomposition
  4.4 The QR Factorization Theorem
    4.4.1 Householder Transforms
    4.4.2 Steps of the QR Decomposition of a Matrix
    4.4.3 Particularizing When m > n
    4.4.4 Givens Rotations
  4.5 Gram-Schmidt Orthogonalization
  4.6 Least Squares Problem and QR Decomposition
  4.7 Householder QR with Column Pivoting
  4.8 MATLAB Implementations
    4.8.1 Use of the Backslash Operator
    4.8.2 QR Decompositions
  4.9 Exercises
  4.10 Computer Exercises
5 Algorithms for the Eigenvalue Problem
  5.1 Basic Principles
    5.1.1 Why Compute the Eigenvalues of a Square Matrix?
    5.1.2 Spectral Decomposition of a Matrix
    5.1.3 The Power Method and its By-Products
  5.2 QR Method for a Non-Symmetric Matrix
    5.2.1 Reduction to an Upper Hessenberg Matrix
    5.2.2 QR Algorithm for an Upper Hessenberg Matrix
    5.2.3 Convergence of the QR Method
  5.3 Algorithms for Symmetric Matrices
    5.3.1 Reduction to a Tridiagonal Matrix
    5.3.2 Algorithms for Tridiagonal Symmetric Matrices
  5.4 Methods for Large Size Matrices
    5.4.1 Rayleigh-Ritz Projection
    5.4.2 Arnoldi Procedure
    5.4.3 The Arnoldi Method for Computing Eigenvalues of a Large Matrix
    5.4.4 Arnoldi Method for Computing an Eigenpair
    5.4.5 Symmetric Case: Lanczos Algorithm
  5.5 Singular Value Decomposition
    5.5.1 Full SVD
    5.5.2 Singular Triplets for Large Matrices
  5.6 Exercises
  5.7 Computer Exercises

6 Iterative Methods for Systems of Linear Equations
  6.1 Stationary Methods
    6.1.1 Splitting
    6.1.2 Classical Stationary Methods
  6.2 Krylov Methods
    6.2.1 Krylov Properties
    6.2.2 Subspace Condition
    6.2.3 Minimization Property for spd Matrices
    6.2.4 Minimization Property for General Matrices
  6.3 Method of Steepest Descent for spd Matrices
    6.3.1 Convergence Properties of the Steepest Descent Method
    6.3.2 Preconditioned Steepest Descent Algorithm
  6.4 Conjugate Gradient Method (CG) for spd Matrices
    6.4.1 Krylov Basis Properties
    6.4.2 CG Algorithm
    6.4.3 Convergence of CG
    6.4.4 Preconditioned Conjugate Gradient
    6.4.5 Memory and CPU Requirements in PCG
    6.4.6 Relation with the Lanczos Method
    6.4.7 Case of Symmetric Indefinite Systems: SYMMLQ Method
  6.5 The Generalized Minimal Residual Method
    6.5.1 Krylov Basis Computation
    6.5.2 GMRES Algorithm
    6.5.3 Convergence of GMRES
    6.5.4 Preconditioned GMRES
    6.5.5 Restarted GMRES
    6.5.6 MINRES Algorithm
  6.6 The Bi-Conjugate Gradient Method
    6.6.1 Orthogonality Properties in BiCG
    6.6.2 BiCG Algorithm
    6.6.3 Convergence of BiCG
    6.6.4 Breakdowns and Near-Breakdowns in BiCG
    6.6.5 Complexity of BiCG and Variants of BiCG
    6.6.6 Preconditioned BiCG
  6.7 Preconditioning Issues
  6.8 Exercises
7 Sparse Systems to Solve Poisson Differential Equations
  7.1 Poisson Differential Equations
  7.2 The Path to Poisson Solvers
  7.3 Finite Differences for Poisson-Dirichlet Problems
    7.3.1 One-Dimensional Dirichlet-Poisson
    7.3.2 Two-Dimensional Poisson-Dirichlet on a Rectangle
    7.3.3 Complexity for Direct Methods: Zero-Fill Phenomenon
  7.4 Variational Formulations
    7.4.1 Integration by Parts and Green's Formula
    7.4.2 Variational Formulation to One-Dimensional Poisson Problems
    7.4.3 Variational Formulations to Two-Dimensional Poisson Problems
    7.4.4 Petrov-Galerkin Approximations
  7.5 One-Dimensional Finite-Element Discretizations
    7.5.1 The P1 Finite-Element Spaces
    7.5.2 Finite-Element Approximation Using S1(Π)
    7.5.3 Implementation of the Method
    7.5.4 One-Dimensional P2 Finite-Elements
  7.6 Exercises
  7.7 Computer Exercises

Bibliography
Index

Preface

This work results from two decades of common academic experience shared by the authors in teaching, between 1996 and 2003, introductory and advanced material in computational linear algebra and its application to numerical solutions of partial and ordinary differential equations.
During that period, the authors worked as a team in a Master's program on "Mathematical Modeling and Numerical Simulation" managed jointly in Beirut by French, Swiss and Lebanese universities. Since 2003, that common experience has continued and is still pursued through multiple French-Lebanese collaborations in various research projects, teaching missions and co-tutoring of Master's and PhD theses. The core of the book is adapted to a course on Numerical Linear Algebra offered yearly at the American University of Beirut to senior undergraduate students in Mathematics, Applied Mathematics and Engineering. Additional applications are also included; these are usually given to first-year graduate students in Engineering and Computational Science.

The main learning objectives of the book stand as follows:

1. In Chapter 1, the reader is exposed to BLAS operations of types 1, 2 and 3. These are particularly adapted to a scientific computing environment such as MATLAB® version 7. (MATLAB is a registered trademark of The MathWorks, Inc. For product information, please contact: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, USA. Tel: 508-647-7000. Fax: 508-647-7001. E-mail: [email protected]. Web: www.mathworks.com)

2. Chapter 2 presents the basic mathematical tools needed in Numerical Linear Algebra: ranks, determinants, eigenvalues, vector and matrix norms, orthogonal matrices, the Gram-Schmidt process, Schur's Decomposition and the Singular Value Decomposition (SVD).

3. Chapter 3 gives the classical material on Gauss decompositions followed by LU and Cholesky factorizations of matrices. Additionally, it provides the use of condition numbers for evaluating the effect of finite precision arithmetic when solving directly a system of linear equations Ax = b.

4. Chapter 4 illustrates the use of Householder transforms in obtaining the QR orthogonal factorization of a rectangular matrix, which leads to finding its pseudo-inverse. This is followed by applications to least squares solutions of rectangular systems and statistical regression analysis.

5. Chapter 5 is a detailed numerical treatment of the algebraic eigenvalue problem, starting with the power method and followed by the QR and Householder-Givens algorithms. Several applications are given as exercises, in particular examples from population dynamics and "Google" matrices.

6. Chapter 6 discusses at length (indirect) iterative methods to solve a system of linear equations Ax = b. It exposes stationary methods based on matrix splitting (Jacobi, Gauss-Seidel, SOR, SSOR) as well as Krylov space methods (steepest descent, Conjugate Gradient, GMRES and Bi-Conjugate Gradient). The decisive role of preconditioners is also exhibited.

7. Finally, Chapter 7 illustrates practices on solving discretized sparse systems of linear equations AU = F, obtained using either finite differences or finite elements when approximating the solutions of ordinary and partial differential equations. It provides a complete treatment of the problem, from generating nodes and elements to computing local coefficients and "assembling" the sparse linear system. Various solvers are then implemented and compared in a number of computer projects.

The core material can easily be covered in one semester by including:

• Sections 1.1 to 1.6 of Chapter 1.
• Chapter 2, without necessarily "insisting" on the proof of Schur's decomposition theorem.
• All of Chapter 3.
• Sections 4.1, 4.2, 4.4.1, 4.4.2, 4.5, 4.6 and 4.8 of Chapter 4.
• Sections 5.1.3, 5.2, 5.3 and 5.5 of Chapter 5.
• Sections 6.1, 6.2, 6.3 and 6.4 of Chapter 6.
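The three BLAS operation levels named in Chapter 1 can be sketched briefly. The book expresses its algorithms in MATLAB; the pure-Python sketch below is only illustrative and not taken from the book, and the helper names `axpy`, `gaxpy` and `gemm` follow the conventional BLAS routine names rather than anything defined in the text.

```python
def axpy(alpha, x, y):
    """Level-1 BLAS (vector-vector): return alpha*x + y."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def gaxpy(A, x, y):
    """Level-2 BLAS (matrix-vector): return A@x + y,
    accumulating one column of A per entry of x."""
    z = list(y)
    for j, xj in enumerate(x):
        z = [zi + A[i][j] * xj for i, zi in enumerate(z)]
    return z

def gemm(A, B):
    """Level-3 BLAS (matrix-matrix): return the product A@B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]
```

The level number tracks the ratio of arithmetic to data movement: Level-3 operations do O(n³) work on O(n²) data, which is why block (Level-3) formulations dominate high-performance implementations.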
The selection of additional topics, particularly that of applications, is left to the course instructor; this concerns in particular Sections 1.7 and 1.9 (sparse matrices and the Strassen algorithm), Sections 4.11, 5.7 and 5.8, and selected material from Chapter 7 on sparse systems resulting from finite difference and finite element discretizations of ordinary and partial differential equations. Throughout the book, special attention is given to the implementation of algorithms using MATLAB's syntax. As a matter of fact, each of the numerical methods explained in any of the seven chapters is directly expressed either using a pseudo-code or a detailed MATLAB program.

Exercises and Computer Projects

Each chapter ends with a large number of exercises. Throughout the seven chapters, several computer projects are proposed. These aim at testing the student's understanding of both the mathematics of numerical methods and the art of computer programming.

Nabil Nassif, Jocelyne Erhel and Bernard Philippe
