Matrix Operations

Matrices also support addition and scalar multiplication, identically to vectors. In addition, they support the following operations:

Transpose
\(A^\T\) denotes the transpose of matrix \(A\), which just switches the rows and columns of \(A\). That is, if \(A \in \Reals^{m \times n}\) and \(B = A^\T\), then \(B \in \Reals^{n \times m}\) and \(b_{ij} = a_{ji}\).
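The definition \(b_{ij} = a_{ji}\) can be checked with a minimal sketch; NumPy is used here purely for illustration (the matrix values are made up for the example):

```python
import numpy as np

# A hypothetical 2x3 matrix: A in R^{2x3}, so its transpose is in R^{3x2}.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = A.T  # b_{ij} = a_{ji}

print(B.shape)   # (3, 2)
print(B[0, 1])   # 4, i.e. the entry a_{10} of A (0-based indexing)
```

Note that NumPy's `.T` returns a view with swapped axes rather than copying the data.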
Multiplication

Unlike vectors, two matrices can be multiplied, provided their dimensions align. If \(A \in \Reals^{m \times k}\) and \(B \in \Reals^{k \times n}\) (so \(A\) has the same number of columns as \(B\) has rows), then the multiplication \(C = A B\) yields a matrix \(C \in \Reals^{m \times n}\) such that:

\[c_{ij} = \sum_{e=1}^{k} a_{ie} b_{ej}\]

This is the same as the inner product: \(c_{ij}\) is the inner product of row \(i\) of \(A\) and column \(j\) of \(B\).

Note that unlike scalar multiplication, matrix multiplication is not commutative (\(A B\) is not necessarily the same as \(B A\), even if the matrices are square so that both products are defined). Multiplication also interacts with transpose, by reversing the order of the matrices: \((AB)^\T = B^\T A^\T\).
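Both the entrywise formula and the transpose identity can be verified numerically. The following NumPy sketch (with arbitrarily chosen random matrices) checks that \(c_{ij}\) equals the inner product of row \(i\) of \(A\) and column \(j\) of \(B\), and that \((AB)^\T = B^\T A^\T\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))  # A in R^{2x3}
B = rng.standard_normal((3, 4))  # B in R^{3x4}

C = A @ B                        # C in R^{2x4}

# c_{ij} is the inner product of row i of A and column j of B:
assert np.isclose(C[1, 2], A[1, :] @ B[:, 2])

# Transpose reverses the factors: (AB)^T = B^T A^T
assert np.allclose(C.T, B.T @ A.T)
```

The `@` operator performs matrix multiplication; for the 1-D slices `A[1, :]` and `B[:, 2]` it reduces to the inner product.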

Matrix-vector multiplication

You can also multiply a matrix by a vector. If \(A \in \Reals^{m \times n}\) and \(\mathbf{x} \in \Reals^n\), then the multiplication \(\mathbf{y} = A \mathbf{x}\) is defined, with \(\mathbf{y} \in \Reals^m\); the result is the same as multiplying \(A\) by a matrix with the single column \(\mathbf{x}\), so \(y_i = \sum_j a_{ij} x_j\).

If instead \(A \in \Reals^{m \times n}\) and \(\mathbf{x} \in \Reals^m\) (the vector matches the number of rows), then you can multiply with the vector on the left, as in \(\mathbf{y} = \mathbf{x} A\). In this case \(\mathbf{y} \in \Reals^n\), and \(y_j = \sum_i x_i a_{ij}\).
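Both forms can be illustrated in a short NumPy sketch (the matrix and vectors below are arbitrary example values):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # A in R^{2x3}
x = np.array([1., 0., -1.])    # x in R^3

y = A @ x                      # y_i = sum_j a_{ij} x_j, so y in R^2
print(y)                       # [-2. -2.]

w = np.array([1., 1.])         # w in R^2 matches the rows of A
z = w @ A                      # z_j = sum_i w_i a_{ij}, so z in R^3
print(z)                       # [5. 7. 9.]
```

NumPy does not distinguish row from column vectors for 1-D arrays; `A @ x` and `w @ A` contract over the matching dimension automatically.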

Matrix inversion

The matrix inverse \(A^{-1}\) is the matrix such that \(A^{-1}A = A A^{-1} = \mathbb{1}\), the identity matrix. Not all matrices have inverses; if the inverse of a matrix \(A\) exists, then \(A\) is called invertible, and if no inverse exists then \(A\) is called singular.

While linear algebra packages include routines to compute matrix inverses, we very rarely use them. The primary use for a matrix inverse is to solve a system of equations, and there are usually better ways to solve the system (see Linear Systems).

Frobenius norm
The Frobenius norm of a matrix, denoted \(\|A\|_F\), is the matrix version of the L₂ norm: the square root of the sum of the squares of all the entries. Its square is \(\|A\|_F^2 = \sum_i \sum_j a_{ij}^2\).
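As a minimal check of the definition, NumPy's `np.linalg.norm` with `ord='fro'` computes the same value as summing the squared entries directly (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

fro = np.linalg.norm(A, ord='fro')        # sqrt(1 + 4 + 9 + 16) = sqrt(30)
assert np.isclose(fro, np.sqrt((A**2).sum()))
print(fro)
```

For 2-D arrays, `'fro'` is also NumPy's default matrix norm, so `np.linalg.norm(A)` gives the same result.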