What is a Matrix?
A matrix is just a 2-dimensional array of variables or values. Each matrix has a size, determined by its number of rows and columns. For example, a 2×3 matrix (2 rows, 3 columns) can look like this:
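In code, a 2×3 matrix might be stored as a nested list of rows (the values below are arbitrary, and the variable names are just for illustration):

```python
# A 2x3 matrix: 2 rows, each containing 3 columns (example values are arbitrary).
matrix = [
    [1, 2, 3],
    [4, 5, 6],
]

rows = len(matrix)     # number of rows: 2
cols = len(matrix[0])  # number of columns: 3
```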
Matrices can be used for a variety of things, from solving equations to applying transformations to vectors, such as translation, rotation, and scale.
Matrix Addition
Adding two matrices together results in a matrix which, when multiplied by a vector, combines the effects of both matrices on that vector: (A + B)v = Av + Bv.
Essentially it’s the same as applying each matrix to the vector individually, and then adding the resulting vectors together to produce the final result vector.
To add two matrices together, the matrices must have the same dimensions; then we simply add each pair of corresponding elements:
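As a minimal sketch in Python (the function name `mat_add` is just for illustration), element-wise addition looks like this:

```python
def mat_add(a, b):
    """Add two same-sized matrices element by element."""
    if len(a) != len(b) or len(a[0]) != len(b[0]):
        raise ValueError("matrices must have the same dimensions")
    return [[a[r][c] + b[r][c] for c in range(len(a[0]))]
            for r in range(len(a))]

A = [[1, 0], [0, 1]]
B = [[2, 3], [4, 5]]
print(mat_add(A, B))  # [[3, 3], [4, 6]]
```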
Matrix addition has fairly niche uses in game development, but it does appear in linear blend skinning, a technique for calculating vertex positions from bone weights and bone transforms, which would look something like this:
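A rough sketch of that blending idea: each bone matrix is scaled by its skin weight, the weighted matrices are summed, and the result is applied to the vertex. The 2×2 bone matrices and weights below are hypothetical stand-ins (real skinning uses 4×4 bone transforms):

```python
def mat_scale(m, s):
    """Multiply every element of a matrix by a scalar weight."""
    return [[s * x for x in row] for row in m]

def mat_add(a, b):
    """Add two same-sized matrices element by element."""
    return [[a[r][c] + b[r][c] for c in range(len(a[0]))]
            for r in range(len(a))]

def apply(m, v):
    """Multiply a matrix by a column vector."""
    return [sum(m[r][k] * v[k] for k in range(len(v))) for r in range(len(m))]

# Two bone transforms (2x2 here for brevity) and their skin weights.
bone_a = [[1, 0], [0, 1]]
bone_b = [[2, 0], [0, 2]]
w_a, w_b = 0.25, 0.75

# Blend: (w_a * A + w_b * B) applied to the vertex position.
blended = mat_add(mat_scale(bone_a, w_a), mat_scale(bone_b, w_b))
print(apply(blended, [4, 4]))  # [7.0, 7.0]
```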
Matrix Multiplication
Matrix multiplication is often used to manipulate points, or vectors in games. We can take a transformation matrix, and when we multiply it by our vector, the result will be our vector transformed in 2D/3D space.
Matrix multiplication can also be used to create composite transformations which apply multiple transformations at once. This is done by multiplying the two transformation matrices.
To multiply two matrices, A and B, matrix B must have the same number of rows as matrix A has columns. The result of the operation is a matrix with as many rows as A and as many columns as B.
In the following example, note that matrix A is a 2×1 matrix, with 2 rows and 1 column. Matrix B must therefore have 1 row, but can have any number of columns; here it is a 1×2 matrix, so the result is a 2×2 matrix.
To calculate each value in the result, we take these steps:
- Start out at matrix A, row 0 (the top row), and matrix B, column 0 (the left column).
- Multiply each value in the current row of A by the corresponding value in matrix B's current column, and sum the results of all of those multiplications.
- The resulting value is placed in the same row as the current matrix A row, and the same column as the current column of matrix B.
- If there is another column in matrix B, move on to that column and repeat the process for the current row of matrix A. If not, reset to column 0 of matrix B.
- Once all columns have been processed for the current row, move on to the next row of matrix A (if there is one), and repeat the process until all values have been computed.
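The steps above can be sketched as a triple loop in Python (names are illustrative):

```python
def mat_mul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), giving an m x p result."""
    if len(a[0]) != len(b):
        raise ValueError("columns of a must equal rows of b")
    result = [[0] * len(b[0]) for _ in range(len(a))]
    for row in range(len(a)):         # each row of matrix A
        for col in range(len(b[0])):  # each column of matrix B
            # Multiply pairwise and sum, placing the value at (row, col).
            result[row][col] = sum(a[row][k] * b[k][col] for k in range(len(b)))
    return result

# A 2x3 matrix times a 3x1 matrix gives a 2x1 result:
print(mat_mul([[1, 2, 3], [4, 5, 6]], [[7], [8], [9]]))  # [[50], [122]]
```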
Here is an example of multiplying a 2×3 matrix by a 3×1 matrix; the result is a 2×1 matrix:
And we get a 3×4 matrix when we multiply a 3×2 matrix with a 2×4 matrix:
Identity Matrices
An identity matrix preserves the values of any matrix multiplied by it; it is the matrix equivalent of multiplying by 1. If you have a matrix which represents an object’s 3D transform (position, rotation, and scale), then multiplying it by an identity matrix will result in no change to the object’s transform.
An identity matrix is square and consists of 0s, except for the diagonal from top left to bottom right, which is composed of 1s. The following are all examples of identity matrices:
In the example below, you can see how multiplying a matrix by the identity matrix results in the original matrix:
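A small sketch of that property in Python (`identity` and `mat_mul` are illustrative helper names):

```python
def identity(n):
    """n x n identity matrix: 1s on the main diagonal, 0s everywhere else."""
    return [[1 if r == c else 0 for c in range(n)] for r in range(n)]

def mat_mul(a, b):
    """Multiply matrix a by matrix b."""
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

M = [[2, 3], [4, 5]]
# Multiplying by the identity leaves the matrix unchanged.
assert mat_mul(M, identity(2)) == M
```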
Transpose of a Matrix
The transpose of a matrix flips it around its main diagonal, turning its row values into column values and vice versa.
For example, given a matrix, M:
If we take the transpose of the matrix (denoted with a superscript T), each value is swapped across the main diagonal, while the diagonal values themselves (a, e, and so on) stay in place:
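As a quick Python sketch (the element at row r, column c moves to row c, column r):

```python
def transpose(m):
    """Turn rows into columns: element (r, c) moves to (c, r)."""
    return [[m[r][c] for r in range(len(m))] for c in range(len(m[0]))]

M = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(M))  # [[1, 4], [2, 5], [3, 6]]
```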
Determinants
A determinant can only be found for square matrices. Determinants of matrices are useful for a number of things:
- Finding out whether a matrix has an inverse matrix or not.
- Determining whether a system of linear equations has a single, unique solution.
- Calculating the signed area of a parallelogram or a triangle between two 2D vectors.
- Calculating the signed volume of a parallelepiped formed by three 3D vectors.
- Measuring how much a matrix will scale area (in 2D) or volume (in 3D).
- Determining whether a matrix will preserve a vector’s orientation or reverse it.
To find the determinant of a 2×2 or 3×3 matrix, we take the sum of the products along the diagonals running from top left to bottom right, and subtract the sum of the products along the diagonals running from top right to bottom left. (This diagonal shortcut only works up to 3×3; for larger square matrices, the determinant is instead found by cofactor expansion.)
As an example, here is how the determinant is calculated for a 3×3 matrix:
And for a 2×2 matrix:
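Those two cases might be sketched in Python like so (`det2` and `det3` are illustrative names; `det3` spells out the diagonal rule term by term):

```python
def det2(m):
    """Determinant of a 2x2 matrix: main diagonal minus anti-diagonal."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Determinant of a 3x3 matrix via the diagonal (Sarrus) rule."""
    return (m[0][0] * m[1][1] * m[2][2]    # top-left to bottom-right diagonals
            + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1]
            - m[0][2] * m[1][1] * m[2][0]  # top-right to bottom-left diagonals
            - m[0][0] * m[1][2] * m[2][1]
            - m[0][1] * m[1][0] * m[2][2])

print(det2([[3, 1], [2, 4]]))  # 10
```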
Checking if a matrix has an inverse
Determinants are useful for quickly determining if a matrix has an inverse. If the determinant is non-zero, and the matrix is a square matrix, it has an inverse.
Inverse of a Matrix
An inverse matrix is a matrix which, when multiplied by the original matrix, results in an identity matrix. Inverse matrices can be used to undo the operation of another matrix. For example, if you use a matrix to rotate a vector, then multiplying by the inverse of that matrix will undo the rotation.
For a matrix to have an inverse, it needs to be a square matrix (meaning it has the same number of rows and columns), and it must have a non-zero determinant.
To calculate the inverse matrix (denoted by a superscript -1), we divide a matrix’s adjugate matrix by the determinant of the matrix:
Calculating the Adjugate Matrix
An adjugate matrix can be found by taking the transpose of a cofactor matrix. A cofactor matrix is formed by taking the minor of each element in a matrix, and then multiplying it by a power of -1, where the power depends on the position of the element in the matrix.
For each element of a matrix, we take the minor by eliminating all values in the row and column that element is in, then taking the determinant of the remaining values.
Consider the matrix below, where each value is attributed with a row and column, for example, the top right is in row 1, column 3.
In order to find the minor for the top-right element, we remove all values in row 1, and all values in column 3, which gives us a matrix like this:
Then we take the cofactor and multiply it by the minor. The cofactor is -1 raised to the power of the element's row plus its column, (-1)^(row + column). So in this case:
It’s more intuitive to look at a matrix of the cofactor signs to understand what the cofactor value for any given element would be. As you can see below, the value alternates between 1 and -1 based on whether the sum of an element's row and column is even or odd. This pattern scales up and down for all matrix sizes:
So, to get our cofactor matrix, we compute the minor for each element in the original matrix, multiply it by the appropriate 1/-1 cofactor sign, and place the result in a matrix in the same row and column as the element being evaluated.
We then take the transpose of this cofactor matrix, and that gives us the adjugate matrix.
Dividing the adjugate by the original matrix’s determinant gives us our inverse matrix:
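The whole minor/cofactor/adjugate process can be sketched in Python for the 3×3 case (function names are illustrative, and this cofactor approach is chosen for clarity rather than speed):

```python
def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, row, col):
    """Delete the given row and column, then take the 2x2 determinant."""
    sub = [[m[r][c] for c in range(3) if c != col]
           for r in range(3) if r != row]
    return det2(sub)

def inverse3(m):
    """Inverse of a 3x3 matrix: adjugate divided by the determinant."""
    # Cofactor matrix: each minor times (-1)^(row + col).
    cof = [[((-1) ** (r + c)) * minor(m, r, c) for c in range(3)]
           for r in range(3)]
    # Determinant via cofactor expansion along row 0.
    det = sum(m[0][c] * cof[0][c] for c in range(3))
    if det == 0:
        raise ValueError("matrix has no inverse (determinant is 0)")
    # Adjugate = transpose of the cofactor matrix; divide every value by det.
    return [[cof[c][r] / det for c in range(3)] for r in range(3)]

# Inverting a diagonal scale matrix gives the reciprocal scales: 0.5, 0.25, 0.2.
print(inverse3([[2, 0, 0], [0, 4, 0], [0, 0, 5]]))
```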
Translation
2D Translation
To translate a vector by some amount in 2D space, we multiply a translation matrix by a matrix representing our 2D vector. First we build the translation matrix from the 3×3 identity matrix, setting the first two values of the right-most column to the X and Y offsets we want to apply to our vector. The vector itself is written as a 3×1 matrix with a 1 as its third value.
We then multiply the translation matrix by our vector matrix:
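A minimal Python sketch of that process (`translation2d` and `apply` are illustrative names; the trailing 1 on the point is the extra homogeneous value):

```python
def translation2d(tx, ty):
    """3x3 identity matrix with the X/Y offsets placed in the last column."""
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3x1 column vector [x, y, 1]."""
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

point = [2, 3, 1]  # the point (2, 3) with a 1 appended
print(apply(translation2d(5, -1), point))  # [7, 2, 1]
```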
3D Translation
To translate a vector by some offset in 3D, we essentially do the same thing as in 2D, but we use a 4×4 identity matrix for our translation matrix, include a Z value in our offset, and use a 4×1 matrix for our vector:
Rotation
2D Rotation
For 2D rotations, the following 2×2 matrix can be used, where θ is the angle of rotation:
Or, if it is used in combination with other transformations such as translation and scale, a 3×3 version (created by embedding the rotation in an identity matrix one size larger):
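The 2×2 case can be sketched in Python like so (the counter-clockwise convention and the helper names are assumptions for illustration):

```python
import math

def rotation2d(theta):
    """2x2 rotation matrix for angle theta (radians), counter-clockwise."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

def apply(m, v):
    """Multiply a 2x2 matrix by a 2x1 column vector."""
    return [sum(m[r][k] * v[k] for k in range(2)) for r in range(2)]

# Rotating (1, 0) by 90 degrees gives (0, 1), up to floating-point error.
x, y = apply(rotation2d(math.pi / 2), [1, 0])
print(round(x, 9), round(y, 9))  # 0.0 1.0
```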
3D Rotation
Rotations can be performed on a 3D vector on a per-axis basis, where θ is the rotation angle around that axis. The following matrices rotate around the X, Y, and Z axes respectively:
And when using rotation matrices in combination with other transformations, such as translation and scale, the following matrices can be created by embedding the rotation matrices in an identity matrix one size larger (4×4):
Additionally, for rotations around an arbitrary axis (in any direction), we can define a unit vector, u, as the axis we want to rotate around, and use the following matrix for the transformation:
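The per-axis matrices can be sketched in Python as below (the arbitrary-axis case is omitted for brevity; function names and the counter-clockwise convention are assumptions for illustration):

```python
import math

def rot_x(t):
    """Rotation around the X axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    """Rotation around the Y axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    """Rotation around the Z axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3x1 column vector."""
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

# Rotating (1, 0, 0) by 90 degrees around Z gives (0, 1, 0).
v = apply(rot_z(math.pi / 2), [1, 0, 0])
```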
Scale
To scale a vector by some amount, we simply replace the diagonal values in an identity matrix with our scale values for each dimension, and then multiply the matrix by our vector, v, to get our scaled result, v′. In 2D:
And in 3D:
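Both cases can be sketched with one Python helper (`scale_matrix` and `apply` are illustrative names):

```python
def scale_matrix(factors):
    """Identity matrix with the scale factors placed on the main diagonal."""
    n = len(factors)
    return [[factors[r] if r == c else 0 for c in range(n)] for r in range(n)]

def apply(m, v):
    """Multiply a square matrix by a column vector."""
    return [sum(m[r][k] * v[k] for k in range(len(v))) for r in range(len(m))]

print(apply(scale_matrix([2, 3]), [4, 5]))        # 2D: [8, 15]
print(apply(scale_matrix([2, 3, 4]), [1, 1, 1]))  # 3D: [2, 3, 4]
```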
The Transformation Matrix (Translation, Rotation, and Scale – Together)
One of the nice features of matrices is that, using matrix multiplication, we can combine them into a composite matrix. This is often utilized in game engines to improve efficiency and performance, since we can combine our rotation, translation, and scale into a single matrix, reducing the number of per-vertex matrix multiplications from three to one.
But we have to make sure we do these operations in the right order, since the result is order-dependent. Usually, game engines compute the composite transformation matrix such that the scale is applied first, then the rotation, and lastly, the translation, giving a composite matrix M = T·R·S. That looks like this:
It’s important to keep in mind that the right-most matrix is the one applied to the vector first, so we start by multiplying the right-most pair, which gives us the combined rotation and scale:
Then, we multiply the translation matrix by the RS matrix to produce the final composite matrix:
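Putting the pieces together in a 2D homogeneous sketch (helper names, the 90° angle, and the example scale/offset values are all assumptions for illustration):

```python
import math

def mat_mul(a, b):
    """Multiply matrix a by matrix b."""
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# M = T * R * S: scale is applied first, then rotation, then translation.
RS = mat_mul(rotation(math.pi / 2), scale(2, 2))
M = mat_mul(translation(10, 0), RS)

# Transform the point (1, 0): scaled to (2, 0), rotated to (0, 2), moved to (10, 2).
p = [1, 0, 1]
x, y, _ = [sum(M[r][k] * p[k] for k in range(3)) for r in range(3)]
```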