Linear Algebra

Misc

  • Packages
    • {sparsevctrs} - Sparse Vectors for Use in Data Frames or Tibbles
      • Sparse matrices are not a great format for general data manipulation; they are best left until the very end of a pipeline, when the mathematical calculations happen.
      • Some computational tools expect sparse matrices as input, notably the Matrix package and some modeling packages (e.g., xgboost, glmnet).
      • The package provides a sparse representation of data that works with modern data manipulation interfaces, keeps memory overhead low, and can be efficiently converted to a more primitive matrix format so that Matrix and the other packages can do what they do best (see the sparsevctrs sketch after this list).
    • {quickr} - R to Fortran Transpiler
      • Only atomic vectors, matrices, and arrays are currently supported: integer, double, logical, and complex.

      • The return value must be an atomic array (e.g., not a list)

      • Only a subset of R’s vocabulary is currently supported.

        #>  [1] !=        %%        %/%       &         &&        (         *        
        #>  [8] +         -         /         :         <         <-        <=       
        #> [15] =         ==        >         >=        Fortran   [         [<-      
        #> [22] ^         c         cat       cbind     character declare   double   
        #> [29] for       if        ifelse    integer   length    logical   matrix   
        #> [36] max       min       numeric   print     prod      raw       seq      
        #> [43] sum       which.max which.min {         |         ||
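
A minimal sketch of what a {quickr}-compatible function might look like, given the constraints above. It assumes, per the package README, that declare(type(...)) annotates argument types/shapes and that quickr::quick() returns a compiled version of the function; treat both as assumptions rather than verified API details.

    # Sketch only: a sum-of-squares kernel written using only constructs from
    # the supported vocabulary above (for, :, length, [, arithmetic).
    library(quickr)

    sum_sq <- function(x) {
      declare(type(x = double(NA)))   # assumed syntax: x is a double vector of any length
      total <- 0
      for (i in 1:length(x)) {
        total <- total + x[i] * x[i]
      }
      total
    }

    quick_sum_sq <- quick(sum_sq)     # assumed: compiles sum_sq to Fortran
    quick_sum_sq(as.double(1:10))     # should equal sum((1:10)^2) == 385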

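A similar sketch of the {sparsevctrs} workflow described above. The function names sparse_double() and coerce_to_sparse_matrix() are assumptions based on my reading of the package documentation; check the package reference before relying on them.

    library(sparsevctrs)
    library(tibble)

    # A mostly-zero column stored sparsely: only the non-zero values, their
    # positions, and the total length are kept (assumed constructor).
    x <- sparse_double(values = c(3.2, 1.1), positions = c(2, 7), length = 10)
    y <- sparse_double(values = 5, positions = 4, length = 10)

    # Sparse vectors are meant to behave like ordinary columns in a tibble,
    # so the usual data-manipulation workflow still applies.
    dat <- tibble(x = x, y = y)

    # At the very end, hand the data to the Matrix world for the math
    # (assumed coercion helper returning a sparse Matrix object).
    m <- coerce_to_sparse_matrix(dat)
    m
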
Resources

Matrix Multiplication

Matrix Algebra

  • Multiplying an expected value equation by a constant matrix, C (VC stands for variance-covariance in the example):

    • \(C\) is factored out of an expected value as \(C\): \(E[CX] = C\,E[X]\)
    • \(C\) is factored out of a transpose as \(C^T\): \((CX)^T = X^T C^T\)
    • Applying both rules to the definition of the variance-covariance gives \(VC(CX) = E\big[(CX - C\,E[X])(CX - C\,E[X])^T\big] = C\,VC(X)\,C^T\), as the sketch below checks numerically.
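
A quick numerical check of the variance-covariance rule above, using simulated data (base R only; the specific matrices are made up for illustration):

    set.seed(42)

    # 5000 draws of a 3-dimensional random vector, stored as rows of X
    X <- matrix(rnorm(5000 * 3), ncol = 3)

    # A constant 2 x 3 matrix C
    C <- matrix(c(1, 2, 0,
                  0, 1, 3), nrow = 2, byrow = TRUE)

    # Each row of Y is (C x)^T for the corresponding draw x
    Y <- X %*% t(C)

    # Sample variance-covariance of the transformed draws ...
    cov(Y)

    # ... should be close to C %*% VC(X) %*% t(C)
    C %*% cov(X) %*% t(C)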

Factorization

Methods

The Moore-Penrose Inverse

  • The Moore-Penrose inverse, often denoted as \(A^+\), is a generalization of the ordinary matrix inverse that applies to any matrix, even those that are singular (non-invertible) or rectangular. It was independently developed by E.H. Moore and Roger Penrose.
  • Generalized Inverse
    • Unlike a regular inverse, which only exists for square, non-singular matrices, the Moore-Penrose inverse exists for all matrices. This is incredibly useful in situations where you have more equations than unknowns (overdetermined systems) or fewer equations than unknowns (underdetermined systems), or when your matrix is singular.
  • Four Penrose Conditions
    • The Moore-Penrose inverse \(A^+\) of a matrix \(A\) is uniquely defined by four conditions:
      1. \(A A^+ A = A\)
      2. \(A^+ A A^+ = A^+\)
      3. \((A A^+)^T = A A^+\)
      4. \((A^+ A)^T = A^+ A\)
  • Least Squares Solution
    • One of its most important applications is in finding the “best fit” (least squares) solution to a system of linear equations \(Ax = b\). When an exact solution doesn’t exist (e.g., in overdetermined systems), the Moore-Penrose inverse provides the \(x\) that minimizes the Euclidean norm of the residual error, \(\|Ax - b\|^2\). The solution is given by \(x = A^+ b\). If there are multiple solutions (e.g., in underdetermined systems), it provides the solution with the minimum Euclidean norm \(\|x\|^2\). Both cases are worked through in a sketch after this list.
  • Computation
    • The most common and robust method for computing the Moore-Penrose inverse is through Singular Value Decomposition (SVD). If \(A = U \Sigma V^T\) is the SVD of \(A\), then \(A^+ = V \Sigma^+ U^T\), where \(\Sigma^+\) is obtained by taking the reciprocal of the non-zero singular values in \(\Sigma\) and transposing the resulting matrix.
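
A sketch of the SVD route in base R, plus a numerical check of the four Penrose conditions. The example matrix is arbitrary, and MASS::ginv() is used only as a cross-check (it computes the same pseudoinverse via SVD).

    # Moore-Penrose inverse via SVD; works for rectangular and rank-deficient A
    pinv <- function(A, tol = 1e-10) {
      s <- svd(A)                                         # A = U diag(d) V^T
      d_inv <- ifelse(s$d > tol * max(s$d), 1 / s$d, 0)   # reciprocal of non-zero singular values
      s$v %*% diag(d_inv, length(d_inv)) %*% t(s$u)       # A^+ = V Sigma^+ U^T
    }

    A  <- matrix(c(1, 2,
                   2, 4,
                   3, 1), nrow = 3, byrow = TRUE)         # a 3 x 2 (rectangular) matrix
    Ap <- pinv(A)

    # The four Penrose conditions (each should be TRUE up to rounding error)
    all.equal(A %*% Ap %*% A, A)        # 1. A A+ A = A
    all.equal(Ap %*% A %*% Ap, Ap)      # 2. A+ A A+ = A+
    all.equal(t(A %*% Ap), A %*% Ap)    # 3. (A A+)^T = A A+
    all.equal(t(Ap %*% A), Ap %*% A)    # 4. (A+ A)^T = A+ A

    # Cross-check against MASS
    all.equal(Ap, MASS::ginv(A))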
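
And a sketch of the least-squares use: for an overdetermined system the pseudoinverse solution matches the ordinary least-squares solution, and for an underdetermined system it returns the minimum-norm solution. MASS::ginv() stands in for \(A^+\) here; qr.solve() is base R's QR-based least-squares solver.

    library(MASS)
    set.seed(7)

    # Overdetermined: 10 equations, 3 unknowns -- no exact solution exists
    A <- cbind(1, matrix(rnorm(10 * 2), ncol = 2))
    b <- rnorm(10)

    x_pinv <- ginv(A) %*% b          # x = A+ b, minimizes ||A x - b||^2
    x_ls   <- qr.solve(A, b)         # least-squares solution via QR
    all.equal(unname(drop(x_pinv)), unname(drop(x_ls)))

    # Underdetermined: 2 equations, 4 unknowns -- infinitely many solutions;
    # A+ b is the one with the smallest Euclidean norm ||x||^2
    A2 <- matrix(rnorm(2 * 4), nrow = 2)
    b2 <- c(1, -1)
    x_min_norm <- ginv(A2) %*% b2
    all.equal(drop(A2 %*% x_min_norm), b2)   # still satisfies A2 x = b2 exactly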