Linear Algebra

The mathematics of vectors, matrices, and linear transformations. Linear algebra is the backbone of modern computation, powering everything from computer graphics and machine learning to quantum mechanics and economic modeling.

Introduction to Linear Algebra

Linear algebra is one of the most widely applicable branches of mathematics. While algebra deals with single unknown quantities, linear algebra deals with collections of unknowns organized into vectors and matrices, and it studies the linear relationships between them.

The word "linear" refers to straight lines and flat planes — the simplest geometric objects. A linear equation in variables x₁, x₂, …, xₙ has the form:

a₁x₁ + a₂x₂ + … + aₙxₙ = b

where a₁, a₂, …, aₙ and b are constants. There are no squares, cubes, products of variables, or other nonlinear terms — just variables multiplied by constants and added together.

Linear algebra asks, and answers, fundamental questions such as:

  1. When does a system of linear equations have a solution, and when is that solution unique?
  2. When is a matrix invertible, and how do we compute its inverse?
  3. Which directions does a linear transformation merely scale, and by how much?

Linear algebra is arguably the single most useful branch of mathematics for the 21st century. It underpins machine learning, data science, computer graphics, signal processing, quantum computing, optimization, and much more. Mastering it will open doors across science and engineering.

The subject was developed over centuries, with key contributions from mathematicians including Carl Friedrich Gauss (elimination methods), Arthur Cayley (matrix theory), Hermann Grassmann (vector spaces), and David Hilbert (abstract spaces). Today, linear algebra is typically the first mathematics course that moves from calculation to abstraction and proof.

Vectors

A vector is a mathematical object that has both magnitude (length) and direction. Geometrically, a vector is an arrow pointing from one location to another. Algebraically, a vector is an ordered list of numbers called components.

Notation and Representation

A vector in ℝⁿ (n-dimensional real space) is written as:

v = (v₁, v₂, …, vₙ) or v = [v₁, v₂, …, vₙ]ᵀ

For example, the vector v = (3, 4) in ℝ² represents a point or an arrow from the origin to the point (3, 4) in the plane.

Common notations for vectors include boldface (v), an arrow above the letter (v⃗), or an underline. We'll use boldface throughout this lesson.

Vector Addition

Vectors of the same dimension are added component by component:

u + v = (u₁ + v₁, u₂ + v₂, …, uₙ + vₙ)

Example: Add (2, 5, -1) and (3, -2, 4)

(2, 5, -1) + (3, -2, 4) = (2+3, 5+(-2), -1+4) = (5, 3, 3)

Geometrically, adding two vectors corresponds to placing them tip-to-tail and drawing the arrow from the start of the first to the end of the second (the parallelogram rule).

Scalar Multiplication

Multiplying a vector by a scalar (a real number) scales every component:

cv = (cv₁, cv₂, …, cvₙ)

Example: Compute 3 · (2, -1, 4)

3 · (2, -1, 4) = (3·2, 3·(-1), 3·4) = (6, -3, 12)

If c > 1, the vector stretches. If 0 < c < 1, it shrinks. If c < 0, it reverses direction and is scaled by |c|.
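These two component-wise operations are easy to express in code. A minimal sketch in plain Python (the function names add and scale are illustrative, not from any library):

```python
def add(u, v):
    """Add two vectors of the same dimension component by component."""
    assert len(u) == len(v), "vectors must have the same dimension"
    return [ui + vi for ui, vi in zip(u, v)]

def scale(c, v):
    """Multiply every component of v by the scalar c."""
    return [c * vi for vi in v]

print(add([2, 5, -1], [3, -2, 4]))  # [5, 3, 3]
print(scale(3, [2, -1, 4]))         # [6, -3, 12]
```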

Magnitude (Length) of a Vector

The magnitude (or norm) of a vector v = (v₁, v₂, …, vₙ) is:

‖v‖ = √(v₁² + v₂² + … + vₙ²)

Example: Find the magnitude of (3, 4)

‖(3, 4)‖ = √(3² + 4²) = √(9 + 16) = √25 = 5

Unit Vectors

A unit vector has magnitude 1. To find the unit vector in the direction of v, divide by its magnitude:

û = v / ‖v‖

Example: Find the unit vector in the direction of (3, 4)

û = (3, 4) / 5 = (3/5, 4/5) = (0.6, 0.8)

Check: ‖(0.6, 0.8)‖ = √(0.36 + 0.64) = √1 = 1 ✓

The standard unit vectors in ℝ³ are i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1).
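The magnitude and unit-vector computations above translate directly into code. A short sketch (norm and unit are illustrative names):

```python
import math

def norm(v):
    """Euclidean length: square root of the sum of squared components."""
    return math.sqrt(sum(vi * vi for vi in v))

def unit(v):
    """Unit vector in the direction of v (undefined for the zero vector)."""
    n = norm(v)
    return [vi / n for vi in v]

print(norm([3, 4]))  # 5.0
print(unit([3, 4]))  # [0.6, 0.8]
```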

Dot Product (Scalar Product)

The dot product of two vectors is a scalar quantity defined as:

u · v = u₁v₁ + u₂v₂ + … + uₙvₙ

The dot product can also be expressed geometrically:

u · v = ‖u‖ ‖v‖ cos θ

where θ is the angle between the two vectors.

Example: Compute the dot product of (1, 2, 3) and (4, -5, 6)

(1)(4) + (2)(-5) + (3)(6) = 4 - 10 + 18 = 12

Two vectors are orthogonal (perpendicular) if and only if their dot product equals zero: u · v = 0. This is one of the most important tests in linear algebra.

Finding the Angle Between Vectors

Rearranging the geometric dot product formula gives:

cos θ = (u · v) / (‖u‖ ‖v‖)

Example: Find the angle between (1, 0) and (1, 1)

u · v = (1)(1) + (0)(1) = 1

‖u‖ = 1, ‖v‖ = √2

cos θ = 1 / (1 · √2) = 1/√2

θ = arccos(1/√2) = 45° (π/4 radians)
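Both the dot product and the angle formula can be sketched in a few lines of Python (dot and angle_deg are illustrative names, not library functions):

```python
import math

def dot(u, v):
    """Dot product: sum of products of corresponding components."""
    return sum(ui * vi for ui, vi in zip(u, v))

def angle_deg(u, v):
    """Angle between u and v via cos θ = (u · v) / (‖u‖ ‖v‖)."""
    nu = math.sqrt(dot(u, u))
    nv = math.sqrt(dot(v, v))
    return math.degrees(math.acos(dot(u, v) / (nu * nv)))

print(dot([1, 2, 3], [4, -5, 6]))  # 12
print(angle_deg([1, 0], [1, 1]))   # approximately 45
```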

Cross Product (Vector Product)

The cross product is defined only in ℝ³ and produces a new vector that is perpendicular to both input vectors:

u × v = (u₂v₃ - u₃v₂, u₃v₁ - u₁v₃, u₁v₂ - u₂v₁)

Example: Compute (1, 2, 3) × (4, 5, 6)

First component: (2)(6) - (3)(5) = 12 - 15 = -3

Second component: (3)(4) - (1)(6) = 12 - 6 = 6

Third component: (1)(5) - (2)(4) = 5 - 8 = -3

u × v = (-3, 6, -3)

Verify orthogonality:

(-3, 6, -3) · (1, 2, 3) = -3 + 12 - 9 = 0 ✓

(-3, 6, -3) · (4, 5, 6) = -12 + 30 - 18 = 0 ✓

The magnitude of the cross product equals the area of the parallelogram formed by the two vectors:

‖u × v‖ = ‖u‖ ‖v‖ sin θ

The cross product obeys the right-hand rule: if you curl the fingers of your right hand from u toward v, your thumb points in the direction of u × v. Note that the cross product is anti-commutative: u × v = −(v × u).
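The component formula for the cross product, together with the orthogonality check from the worked example, can be sketched as:

```python
def cross(u, v):
    """Cross product in ℝ³, per the component formula above."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

w = cross([1, 2, 3], [4, 5, 6])
print(w)                  # [-3, 6, -3]
print(dot(w, [1, 2, 3]))  # 0 -- orthogonal to u
print(dot(w, [4, 5, 6]))  # 0 -- orthogonal to v
```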

Matrices

A matrix is a rectangular array of numbers arranged in rows and columns. An m × n matrix has m rows and n columns. Matrices are the central data structure of linear algebra and provide a compact way to represent systems of equations, transformations, and data.

Matrix Notation

A general m × n matrix A is written as:

A = [a₁₁ a₁₂ … a₁ₙ]
    [a₂₁ a₂₂ … a₂ₙ]
    [ ⋮ ⋮ ⋱ ⋮ ]
    [aₘ₁ aₘ₂ … aₘₙ]

The entry in row i and column j is denoted aᵢⱼ.

Types of Matrices

Several special matrices recur throughout linear algebra:

  1. Square matrix: the same number of rows and columns (n × n)
  2. Identity matrix I: square, with 1s on the main diagonal and 0s elsewhere; AI = IA = A
  3. Zero matrix: every entry is 0
  4. Diagonal matrix: all entries off the main diagonal are 0
  5. Symmetric matrix: equal to its own transpose (A = Aᵀ)

Matrix Addition and Scalar Multiplication

Matrices of the same dimensions are added entry by entry, and scalar multiplication scales every entry:

(A + B)ᵢⱼ = aᵢⱼ + bᵢⱼ
(cA)ᵢⱼ = c · aᵢⱼ

Matrix Multiplication

The product AB is defined when the number of columns in A equals the number of rows in B. If A is m × n and B is n × p, then AB is m × p:

(AB)ᵢⱼ = Σₖ aᵢₖ bₖⱼ = aᵢ₁b₁ⱼ + aᵢ₂b₂ⱼ + … + aᵢₙbₙⱼ

Each entry of AB is the dot product of the corresponding row of A with the corresponding column of B.

Example: Matrix Multiplication

Let A = [1 2] and B = [5 6]

        [3 4]        [7 8]

AB:

(AB)₁₁ = (1)(5) + (2)(7) = 5 + 14 = 19

(AB)₁₂ = (1)(6) + (2)(8) = 6 + 16 = 22

(AB)₂₁ = (3)(5) + (4)(7) = 15 + 28 = 43

(AB)₂₂ = (3)(6) + (4)(8) = 18 + 32 = 50

AB = [19 22]

     [43 50]

Matrix multiplication is NOT commutative: in general, AB ≠ BA. Even when both products are defined, they usually give different results. However, matrix multiplication is associative: (AB)C = A(BC).
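The entry formula (AB)ᵢⱼ = Σₖ aᵢₖbₖⱼ translates directly into nested loops. A minimal sketch using lists of lists (matmul is an illustrative name), which also demonstrates the non-commutativity just noted:

```python
def matmul(A, B):
    """(AB)_ij = sum_k A_ik * B_kj, where A is m x n and B is n x p."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
print(matmul(B, A))  # [[23, 34], [31, 46]] -- AB != BA
```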

Transpose

The transpose of an m × n matrix A, written Aᵀ, is the n × m matrix obtained by swapping rows and columns:

(Aᵀ)ᵢⱼ = aⱼᵢ

Example: Find the Transpose

A = [1 2 3]

    [4 5 6]

Aᵀ = [1 4]

     [2 5]

     [3 6]

Key transpose properties:

  1. (Aᵀ)ᵀ = A
  2. (A + B)ᵀ = Aᵀ + Bᵀ
  3. (cA)ᵀ = cAᵀ
  4. (AB)ᵀ = BᵀAᵀ (note the reversed order)

Systems of Linear Equations

A system of linear equations is a collection of one or more linear equations involving the same set of variables. Linear algebra provides systematic methods for solving such systems efficiently, even when they involve thousands of variables.

Matrix Form of a System

Any system of linear equations can be written in matrix form as:

Ax = b

where A is the coefficient matrix, x is the vector of unknowns, and b is the vector of constants.

Example: Writing a System in Matrix Form

The system:

  2x + 3y = 7

  x − y = 1

In matrix form:

[2  3] [x]   [7]

[1 -1] [y] = [1]

Augmented Matrix

To solve a system, we form the augmented matrix [A | b] by appending the constants column to the coefficient matrix:

[A | b] = [2  3 | 7]
          [1 -1 | 1]

Gaussian Elimination

Gaussian elimination is the fundamental algorithm for solving systems of linear equations. It uses three elementary row operations to systematically reduce the augmented matrix:

  1. Row swap: Interchange two rows (Rᵢ ↔ Rⱼ)
  2. Row scaling: Multiply a row by a nonzero constant (Rᵢ → cRᵢ)
  3. Row replacement: Add a multiple of one row to another (Rᵢ → Rᵢ + cRⱼ)

Example: Solve using Gaussian Elimination

System:

  x + y + z = 6

  2x + 3y + z = 14

  x + 2y + 3z = 16

Step 1: Form augmented matrix:

[1 1 1 | 6]

[2 3 1 | 14]

[1 2 3 | 16]

Step 2: R₂ → R₂ - 2R₁:

[1 1 1 | 6]

[0 1 -1 | 2]

[1 2 3 | 16]

Step 3: R₃ → R₃ - R₁:

[1 1 1 | 6]

[0 1 -1 | 2]

[0 1 2 | 10]

Step 4: R₃ → R₃ - R₂:

[1 1 1 | 6]

[0 1 -1 | 2]

[0 0 3 | 8]

Step 5: Back-substitute:

From R₃: 3z = 8 → z = 8/3

From R₂: y - z = 2 → y = 2 + 8/3 = 14/3

From R₁: x + y + z = 6 → x = 6 - 14/3 - 8/3 = 6 - 22/3 = -4/3

Solution: x = -4/3, y = 14/3, z = 8/3
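The elimination and back-substitution steps above can be sketched in Python. This is a minimal illustration (the function name solve is ours, not a library's); it uses exact Fraction arithmetic and partial pivoting, and assumes the system has a unique solution:

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]                   # augmented matrix [A | b]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]          # row swap
        for r in range(col + 1, n):                  # zero out below the pivot
            f = M[r][col] / M[col][col]
            M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):                   # back-substitute
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve([[1, 1, 1], [2, 3, 1], [1, 2, 3]], [6, 14, 16]))
# [Fraction(-4, 3), Fraction(14, 3), Fraction(8, 3)]
```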

Row Echelon Form (REF)

A matrix is in row echelon form if:

  1. All rows consisting entirely of zeros are at the bottom
  2. The first nonzero entry (called the pivot) in each nonzero row is to the right of the pivot in the row above
  3. All entries below a pivot are zero

Reduced Row Echelon Form (RREF)

A matrix is in reduced row echelon form if it satisfies all REF conditions plus:

  1. Each pivot is 1
  2. Each pivot is the only nonzero entry in its column

Example: RREF

The matrix:

[1 0 0 | 2]

[0 1 0 | 3]

[0 0 1 | 5]

is in RREF. The solution is immediately readable: x = 2, y = 3, z = 5.

Every matrix has a unique reduced row echelon form. This is extremely useful because it means the RREF gives a canonical description of the solution set. The process of converting REF to RREF is called Gauss-Jordan elimination.

Types of Solutions

A system of linear equations has exactly one of three possibilities:

  1. Exactly one solution (a consistent system with no free variables)
  2. No solution (an inconsistent system)
  3. Infinitely many solutions (a consistent system with at least one free variable)

Determinants

The determinant is a scalar value associated with every square matrix. It encodes important information about the matrix: whether it's invertible, how it scales area/volume, and more. The determinant of matrix A is written det(A) or |A|.

2 × 2 Determinant

For a 2 × 2 matrix:

det [a b] = ad - bc
    [c d]

Example: 2 × 2 Determinant

det [3 7] = (3)(2) - (7)(1) = 6 - 7 = -1
    [1 2]

3 × 3 Determinant (Cofactor Expansion)

For a 3 × 3 matrix, we expand along the first row using cofactor expansion:

det [a b c]
    [d e f] = a(ei - fh) - b(di - fg) + c(dh - eg)
    [g h i]

Example: 3 × 3 Determinant

Find det(A) where:

A = [2 1 -1]

    [3 0 2]

    [1 4 -3]

Expanding along the first row:

= 2[(0)(-3) - (2)(4)] - 1[(3)(-3) - (2)(1)] + (-1)[(3)(4) - (0)(1)]

= 2[0 - 8] - 1[-9 - 2] + (-1)[12 - 0]

= 2(-8) - 1(-11) + (-1)(12)

= -16 + 11 - 12

= -17
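Cofactor expansion generalizes to any size by recursion on minors. A short sketch (det is an illustrative name; this is exponential-time and meant for small matrices only, where row reduction is used in practice):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 0, column j
        total += (-1) ** j * A[0][j] * det(minor)        # alternating signs
    return total

print(det([[3, 7], [1, 2]]))                     # -1
print(det([[2, 1, -1], [3, 0, 2], [1, 4, -3]]))  # -17
```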

Properties of Determinants

Useful algebraic facts about determinants include:

  1. det(AB) = det(A) det(B)
  2. det(Aᵀ) = det(A)
  3. Swapping two rows multiplies the determinant by -1
  4. The determinant of a triangular matrix is the product of its diagonal entries

Geometrically, the absolute value of the determinant of a 2 × 2 matrix gives the area of the parallelogram formed by its column vectors. For a 3 × 3 matrix, it gives the volume of the parallelepiped. A negative determinant indicates that the transformation reverses orientation.

Determinant and Invertibility

A square matrix A is invertible (nonsingular) if and only if det(A) ≠ 0. If det(A) = 0, the matrix is singular — it squashes space into a lower dimension, losing information irreversibly.

Matrix Inverse

An n × n matrix A is invertible if there exists a matrix A⁻¹ such that:

AA⁻¹ = A⁻¹A = I

where I is the n × n identity matrix. The inverse "undoes" the action of A.

Inverse of a 2 × 2 Matrix

If A = [a b; c d] and det(A) = ad - bc ≠ 0, then:

A⁻¹ = (1/det(A)) [ d -b]
                 [-c a]

Example: Find the Inverse of a 2 × 2 Matrix

A = [4 7]

    [2 6]

det(A) = (4)(6) - (7)(2) = 24 - 14 = 10

A⁻¹ = (1/10) [ 6 -7] = [ 0.6 -0.7]

             [-2 4]   [-0.2 0.4]

Verify:

AA⁻¹ = [4 7] [ 0.6 -0.7] = [4(0.6)+7(-0.2) 4(-0.7)+7(0.4)]

       [2 6] [-0.2 0.4]   [2(0.6)+6(-0.2) 2(-0.7)+6(0.4)]

= [2.4-1.4 -2.8+2.8] = [1 0] ✓

  [1.2-1.2 -1.4+2.4]   [0 1]
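The 2 × 2 inverse formula is simple enough to code directly. A minimal sketch (inv2 is an illustrative name):

```python
def inv2(A):
    """Inverse of a 2x2 matrix via A^-1 = (1/(ad - bc)) [d -b; -c a]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: det = 0")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inv2([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```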

Finding the Inverse Using Row Reduction

For larger matrices, augment A with the identity matrix and row reduce to RREF:

[A | I] → row operations → [I | A⁻¹]

If the left side reduces to I, the right side is A⁻¹. If the left side cannot be reduced to I (you get a row of zeros), then A is not invertible.

Example: Finding a 3 × 3 Inverse by Row Reduction

Find the inverse of:

A = [1 0 1]

    [0 1 1]

    [1 1 0]

Form [A | I]:

[1 0 1 | 1 0 0]

[0 1 1 | 0 1 0]

[1 1 0 | 0 0 1]

R₃ → R₃ - R₁:

[1 0 1 | 1 0 0]

[0 1 1 | 0 1 0]

[0 1 -1 | -1 0 1]

R₃ → R₃ - R₂:

[1 0 1 | 1 0 0]

[0 1 1 | 0 1 0]

[0 0 -2 | -1 -1 1]

R₃ → R₃ / (-2):

[1 0 1 | 1 0 0]

[0 1 1 | 0 1 0]

[0 0 1 | 1/2 1/2 -1/2]

R₁ → R₁ - R₃ and R₂ → R₂ - R₃:

[1 0 0 | 1/2 -1/2 1/2]

[0 1 0 | -1/2 1/2 1/2]

[0 0 1 | 1/2 1/2 -1/2]

A⁻¹ = (1/2) [ 1 -1 1]

            [-1 1 1]

            [ 1 1 -1]

Properties of the Inverse

  1. (A⁻¹)⁻¹ = A
  2. (AB)⁻¹ = B⁻¹A⁻¹ (note the reversed order)
  3. (Aᵀ)⁻¹ = (A⁻¹)ᵀ
  4. (cA)⁻¹ = (1/c)A⁻¹ for c ≠ 0

Cramer's Rule

Cramer's Rule provides an explicit formula for the solution of a system Ax = b when A is invertible. For each variable xᵢ:

xᵢ = det(Aᵢ) / det(A)

where Aᵢ is the matrix formed by replacing the i-th column of A with b.

Example: Solve Using Cramer's Rule

System:

  3x + 2y = 16

  x + 5y = 18

A = [3 2], b = [16]

    [1 5]        [18]

det(A) = (3)(5) - (2)(1) = 15 - 2 = 13

A₁ (replace column 1 with b):

det [16 2] = (16)(5) - (2)(18) = 80 - 36 = 44

    [18 5]

A₂ (replace column 2 with b):

det [3 16] = (3)(18) - (16)(1) = 54 - 16 = 38

    [1 18]

x = 44/13, y = 38/13

Solution: x = 44/13 ≈ 3.385, y = 38/13 ≈ 2.923

Cramer's Rule is elegant and useful for small systems or theoretical work, but it's computationally expensive for large systems. In practice, Gaussian elimination is far more efficient for systems with more than a few variables.
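For a 2 × 2 system, Cramer's Rule is a few lines of code. A sketch using exact fractions (cramer2 and the helper det2 are illustrative names):

```python
from fractions import Fraction

def cramer2(A, b):
    """Solve a 2x2 system Ax = b by Cramer's Rule (requires det(A) != 0)."""
    def det2(M):
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = Fraction(det2(A))
    A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # replace column 1 with b
    A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # replace column 2 with b
    return det2(A1) / d, det2(A2) / d

x, y = cramer2([[3, 2], [1, 5]], [16, 18])
print(x, y)  # 44/13 38/13
```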

Vector Spaces

A vector space (over ℝ) is a set V together with two operations — addition and scalar multiplication — that satisfy ten axioms. Vector spaces abstract the familiar properties of arrows in the plane to much more general settings.

Axioms of a Vector Space

A set V with operations + (addition) and · (scalar multiplication) is a vector space if for all u, v, w ∈ V and scalars c, d ∈ ℝ:

  1. u + v ∈ V (closure under addition)
  2. u + v = v + u (commutativity)
  3. (u + v) + w = u + (v + w) (associativity of addition)
  4. There exists a zero vector 0 such that v + 0 = v
  5. For each v, there exists -v such that v + (-v) = 0
  6. cv ∈ V (closure under scalar multiplication)
  7. c(u + v) = cu + cv (distributivity over vector addition)
  8. (c + d)v = cv + dv (distributivity over scalar addition)
  9. c(dv) = (cd)v (associativity of scalar multiplication)
  10. 1v = v (multiplicative identity)

Common examples of vector spaces include:

  1. ℝⁿ: n-tuples of real numbers with component-wise operations
  2. Pₙ: polynomials of degree at most n
  3. Mₘₓₙ: all m × n matrices
  4. The set of continuous real-valued functions on an interval

Subspaces

A subspace of a vector space V is a nonempty subset W ⊆ V that is itself a vector space under the same operations. To verify W is a subspace, check three conditions:

  1. The zero vector is in W
  2. W is closed under addition: if u, v ∈ W, then u + v ∈ W
  3. W is closed under scalar multiplication: if v ∈ W and c ∈ ℝ, then cv ∈ W

Example: Subspace Verification

Is W = {(x, y, z) ∈ ℝ³ : x + y + z = 0} a subspace of ℝ³?

Zero vector: (0, 0, 0) → 0 + 0 + 0 = 0 ✓

Addition: If (x₁, y₁, z₁) and (x₂, y₂, z₂) are in W, then:

  (x₁+x₂) + (y₁+y₂) + (z₁+z₂) = (x₁+y₁+z₁) + (x₂+y₂+z₂) = 0 + 0 = 0 ✓

Scalar multiplication: If (x, y, z) ∈ W and c ∈ ℝ, then:

  cx + cy + cz = c(x + y + z) = c(0) = 0 ✓

W is a subspace of ℝ³.

Linear Combinations and Span

A linear combination of vectors v₁, v₂, …, vₖ is any sum of the form:

c₁v₁ + c₂v₂ + … + cₖvₖ

where c₁, c₂, …, cₖ are scalars.

The span of a set of vectors is the set of all possible linear combinations of those vectors. It is the "smallest" subspace containing all the given vectors:

span{v₁, v₂, …, vₖ} = {c₁v₁ + c₂v₂ + … + cₖvₖ : c₁, …, cₖ ∈ ℝ}

Example: Span in ℝ²

Let v₁ = (1, 0) and v₂ = (0, 1).

span{v₁, v₂} = {c₁(1,0) + c₂(0,1)} = {(c₁, c₂) : c₁, c₂ ∈ ℝ} = ℝ²

These two vectors span all of ℝ² because any point (a, b) can be written as a(1,0) + b(0,1).

Linear Independence

A set of vectors {v₁, v₂, …, vₖ} is linearly independent if the only solution to:

c₁v₁ + c₂v₂ + … + cₖvₖ = 0

is c₁ = c₂ = … = cₖ = 0. Otherwise, the set is linearly dependent, meaning at least one vector can be expressed as a linear combination of the others.

Example: Testing Linear Independence

Are v₁ = (1, 2, 0), v₂ = (0, 1, 1), v₃ = (1, 0, -2) linearly independent?

We need to determine if c₁(1,2,0) + c₂(0,1,1) + c₃(1,0,-2) = (0,0,0) has only the trivial solution.

This gives the system:

  c₁ + c₃ = 0

  2c₁ + c₂ = 0

  c₂ - 2c₃ = 0

Form the augmented matrix and row reduce:

[1 0 1 | 0]

[2 1 0 | 0]

[0 1 -2 | 0]

R₂ → R₂ - 2R₁:

[1 0 1 | 0]

[0 1 -2 | 0]

[0 1 -2 | 0]

R₃ → R₃ - R₂:

[1 0 1 | 0]

[0 1 -2 | 0]

[0 0 0 | 0]

There's a free variable (c₃), so the system has nontrivial solutions. The vectors are linearly dependent.

Setting c₃ = 1: c₁ = -1, c₂ = 2, so v₃ = v₁ - 2v₂.

Basis and Dimension

A basis for a vector space V is a set of vectors that is:

  1. Linearly independent
  2. Spans V

A basis is a minimal spanning set — every vector in V can be written uniquely as a linear combination of the basis vectors.

The dimension of a vector space is the number of vectors in any basis (all bases have the same size).

Example: Standard Basis for ℝ³

The standard basis for ℝ³ is:

e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1)

These are linearly independent and span all of ℝ³, so dim(ℝ³) = 3.

However, there are infinitely many other valid bases, such as:

{(1, 1, 0), (1, 0, 1), (0, 1, 1)}

Any set of three linearly independent vectors in ℝ³ forms a basis.

Key dimension facts: dim(ℝⁿ) = n, dim(Pₙ) = n + 1 (the basis is {1, x, x², …, xⁿ}), and dim(M₂ₓ₂) = 4. The Rank-Nullity Theorem states that for any linear transformation T: V → W, dim(V) = rank(T) + nullity(T).

Linear Transformations

A linear transformation is a function T: V → W between two vector spaces that preserves the operations of addition and scalar multiplication:

T(u + v) = T(u) + T(v)
T(cv) = cT(v)

Equivalently, a function is linear if and only if T(c₁v₁ + c₂v₂) = c₁T(v₁) + c₂T(v₂) for all vectors and scalars.

Linear transformations include familiar operations like:

  1. Rotations about the origin
  2. Reflections across a line through the origin
  3. Scalings and shears
  4. Projections onto a line or plane through the origin

Example: Verifying Linearity

Is T(x, y) = (2x + y, x - 3y) a linear transformation?

Let u = (x₁, y₁) and v = (x₂, y₂):

T(u + v) = T(x₁+x₂, y₁+y₂) = (2(x₁+x₂) + (y₁+y₂), (x₁+x₂) - 3(y₁+y₂))

= (2x₁+y₁ + 2x₂+y₂, x₁-3y₁ + x₂-3y₂)

= (2x₁+y₁, x₁-3y₁) + (2x₂+y₂, x₂-3y₂)

= T(u) + T(v) ✓

T(cu) = T(cx₁, cy₁) = (2cx₁+cy₁, cx₁-3cy₁) = c(2x₁+y₁, x₁-3y₁) = cT(u) ✓

Yes, T is linear.

Matrix Representation

Every linear transformation T: ℝⁿ → ℝᵐ can be represented as multiplication by an m × n matrix A:

T(x) = Ax

The matrix A is found by applying T to each standard basis vector and using the results as columns:

A = [T(e₁) | T(e₂) | … | T(eₙ)]

Example: Finding the Matrix of a Linear Transformation

Find the matrix for T(x, y) = (2x + y, x - 3y).

T(e₁) = T(1, 0) = (2(1)+0, 1-3(0)) = (2, 1)

T(e₂) = T(0, 1) = (2(0)+1, 0-3(1)) = (1, -3)

A = [2 1]

    [1 -3]

Verify: A(3, 2)ᵀ = [2  1] [3]   [2(3)+1(2)]   [ 8]

                   [1 -3] [2] = [1(3)-3(2)] = [-3]

T(3, 2) = (2(3)+2, 3-3(2)) = (8, -3) ✓

Common Transformation Matrices in ℝ²

Here are standard transformation matrices for geometric operations:

Rotation by θ (counterclockwise): [cos θ -sin θ]
                                  [sin θ  cos θ]

Scaling by factors sₓ, sᵧ: [sₓ 0]
                           [0 sᵧ]

Reflection across the x-axis: [1  0]
                              [0 -1]

Horizontal shear by factor k: [1 k]
                              [0 1]

Kernel and Image

Two fundamental subspaces are associated with every linear transformation T: V → W:

The kernel (or null space) of T is the set of all vectors that T maps to zero:

ker(T) = {v ∈ V : T(v) = 0}

The image (or range) of T is the set of all possible output vectors:

im(T) = {T(v) : v ∈ V}

Example: Finding the Kernel

Find the kernel of T(x, y, z) = (x + y, y + z).

The matrix is A = [1 1 0]

                  [0 1 1]

Solve Ax = 0:

[1 1 0 | 0]

[0 1 1 | 0]

R₁ → R₁ - R₂:

[1 0 -1 | 0]

[0 1 1 | 0]

From RREF: x = z, y = -z. Setting z = t:

ker(T) = {t(1, -1, 1) : t ∈ ℝ}

The kernel is a line in ℝ³ through the origin. The nullity is 1.

A linear transformation T is one-to-one (injective) if and only if ker(T) = {0}. It is onto (surjective) if im(T) = W. The Rank-Nullity Theorem guarantees: dim(V) = dim(ker(T)) + dim(im(T)).

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are among the most important concepts in linear algebra. They reveal the "natural" behavior of a linear transformation — the directions that are merely scaled, not rotated.

Definition

Let A be an n × n matrix. A nonzero vector v is an eigenvector of A if:

Av = λv

for some scalar λ. The scalar λ is called the corresponding eigenvalue. In other words, multiplying v by A simply scales v by the factor λ — the direction of v is preserved (or reversed if λ < 0).

The Characteristic Equation

To find eigenvalues, we rearrange Av = λv:

(A - λI)v = 0

For a nonzero solution v to exist, the matrix (A - λI) must be singular, which means:

det(A - λI) = 0

This is the characteristic equation. The left side, when expanded, is a polynomial of degree n in λ called the characteristic polynomial.

Example: Find Eigenvalues and Eigenvectors of a 2 × 2 Matrix

A = [4 1]

    [2 3]

Step 1: Find eigenvalues.

det(A - λI) = det [4-λ 1 ] = (4-λ)(3-λ) - (1)(2)

                  [ 2 3-λ]

= λ² - 7λ + 12 - 2

= λ² - 7λ + 10

= (λ - 5)(λ - 2) = 0

Eigenvalues: λ₁ = 5 and λ₂ = 2

Step 2: Find eigenvectors for λ₁ = 5.

(A - 5I)v = 0:

[-1  1] [x]   [0]

[ 2 -2] [y] = [0]

From R₁: -x + y = 0, so y = x.

Eigenvector: v₁ = t(1, 1) for any t ≠ 0. We often choose v₁ = (1, 1).

Step 3: Find eigenvectors for λ₂ = 2.

(A - 2I)v = 0:

[2 1] [x]   [0]

[2 1] [y] = [0]

From R₁: 2x + y = 0, so y = -2x.

Eigenvector: v₂ = t(1, -2) for any t ≠ 0. We often choose v₂ = (1, -2).

Verify: A(1,1)ᵀ = [4+1, 2+3]ᵀ = [5, 5]ᵀ = 5(1,1)ᵀ ✓

Verify: A(1,-2)ᵀ = [4-2, 2-6]ᵀ = [2, -4]ᵀ = 2(1,-2)ᵀ ✓
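For 2 × 2 matrices, the characteristic equation is just a quadratic in λ, so eigenvalues follow from the quadratic formula. A sketch assuming real eigenvalues (eig2 is an illustrative name):

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix from lambda^2 - tr(A) lambda + det(A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2([[4, 1], [2, 3]]))  # (5.0, 2.0)
```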

Example: Eigenvalues of a 3 × 3 Matrix

A = [2 0 0]

    [0 3 1]

    [0 1 3]

det(A - λI) = det [2-λ 0 0 ]

                  [ 0 3-λ 1 ]

                  [ 0 1 3-λ]

Expanding along the first column (since it has two zeros):

= (2-λ)[(3-λ)(3-λ) - (1)(1)]

= (2-λ)(9 - 6λ + λ² - 1)

= (2-λ)(λ² - 6λ + 8)

= (2-λ)(λ-2)(λ-4)

= -(λ-2)²(λ-4)

Eigenvalues: λ₁ = 2 (multiplicity 2) and λ₂ = 4 (multiplicity 1)

Key Properties of Eigenvalues

  1. The sum of the eigenvalues equals the trace of A (the sum of its diagonal entries)
  2. The product of the eigenvalues equals det(A)
  3. The eigenvalues of a triangular matrix are its diagonal entries
  4. A is invertible if and only if 0 is not an eigenvalue

Diagonalization

An n × n matrix A is diagonalizable if it can be written as:

A = PDP⁻¹

where D is a diagonal matrix of eigenvalues and P is the matrix whose columns are the corresponding eigenvectors. A matrix is diagonalizable if and only if it has n linearly independent eigenvectors.

Example: Diagonalize the Matrix

From our earlier example, A = [4 1; 2 3] with eigenvalues λ₁ = 5, λ₂ = 2 and eigenvectors v₁ = (1, 1), v₂ = (1, -2).

P = [1 1]    D = [5 0]

    [1 -2]        [0 2]

Then A = PDP⁻¹.

This is incredibly useful! For example, computing Aⁿ becomes easy:

Aⁿ = PDⁿP⁻¹ = P [5ⁿ 0 ] P⁻¹

                [ 0 2ⁿ]

This converts an expensive matrix power into simply raising scalars to powers!

Diagonalization is one of the most powerful tools in linear algebra. It simplifies matrix computations dramatically and has deep applications in differential equations, Markov chains, quantum mechanics, and principal component analysis (PCA). Not every matrix is diagonalizable, but symmetric matrices always are.
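As a concrete sketch of the Aⁿ = PDⁿP⁻¹ trick, here is the 2 × 2 example computed with exact fractions (matmul is an illustrative helper; P⁻¹ was worked out by hand from the 2 × 2 inverse formula, using det P = -3):

```python
from fractions import Fraction

def matmul(A, B):
    """Standard matrix product for lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# P and D for A = [[4, 1], [2, 3]], from the eigenvector example above.
P    = [[1, 1], [1, -2]]
Pinv = [[Fraction(2, 3), Fraction(1, 3)],   # inverse of P (det P = -3)
        [Fraction(1, 3), Fraction(-1, 3)]]
n    = 5
Dn   = [[5**n, 0], [0, 2**n]]               # D^n: raise the diagonal entries

An = [[int(x) for x in row] for row in matmul(matmul(P, Dn), Pinv)]
print(An)  # [[2094, 1031], [2062, 1063]], the same as multiplying A out 5 times
```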

Applications of Linear Algebra

Linear algebra is not just an abstract mathematical subject — it is the computational engine behind many of the technologies and scientific methods that shape the modern world. Here are some of its most impactful applications.

Computer Graphics

Every image, animation, and 3D model you see on a screen is rendered using linear algebra. Transformations of objects — rotation, scaling, translation, and perspective projection — are all represented as matrix operations.

In 3D graphics, homogeneous coordinates extend ℝ³ to ℝ⁴ so that translations (which are not linear) can also be represented as matrix multiplication. A point (x, y, z) becomes (x, y, z, 1), and all transformations become 4 × 4 matrices:

Translation: [1 0 0 tₓ ]   Scaling: [sₓ  0   0  0]
             [0 1 0 tᵧ ]            [ 0 sᵧ   0  0]
             [0 0 1 t_z]            [ 0  0 s_z  0]
             [0 0 0  1 ]            [ 0  0   0  1]

Composing multiple transformations is simply multiplying matrices together. The GPU in your computer is essentially a massively parallel linear algebra engine.

Example: 2D Rotation

Rotate the point (3, 1) by 90° counterclockwise.

Rotation matrix: R = [cos 90° -sin 90°] = [0 -1]

                     [sin 90°  cos 90°]   [1  0]

[0 -1] [3]   [(0)(3)+(-1)(1)]   [-1]

[1  0] [1] = [(1)(3)+(0)(1)]  = [ 3]

The rotated point is (-1, 3).

Data Science and Machine Learning

Data science uses linear algebra at every level. Datasets are naturally represented as matrices (rows = samples, columns = features), and most machine learning algorithms are built on linear algebraic operations:

  1. Linear regression is solved by least squares (normal equations, QR, or SVD)
  2. Principal component analysis (PCA) is an eigendecomposition of the data's covariance matrix
  3. Neural network layers compute matrix-vector products followed by nonlinearities
  4. Recommender systems and dimensionality reduction rely on matrix factorizations such as the SVD

If you want to work in data science or AI, linear algebra is non-negotiable. Understanding matrix operations, eigendecomposition, and the SVD (Singular Value Decomposition) gives you deep insight into how algorithms actually work — not just how to call library functions.

Google PageRank

One of the most famous applications of linear algebra is Google's original PageRank algorithm, which ranked web pages by importance. The key insight is modeling the web as a Markov chain:

  1. Represent the web as a directed graph where pages are nodes and links are edges
  2. Construct a transition matrix M where Mᵢⱼ = 1/Lⱼ if page j links to page i (Lⱼ = number of outgoing links from j)
  3. The PageRank vector r is the eigenvector of M corresponding to eigenvalue 1:
Mr = r

In practice, a damping factor d ≈ 0.85 is used:

r = (1-d)/N · 1 + d · Mr

This is solved iteratively (power iteration), and pages with higher PageRank values appear higher in search results.

Example: Simplified PageRank

Consider 3 pages: A links to B and C, B links to C, C links to A.

Transition matrix (each column sums to 1):

M = [  0  0  1]   (A receives from C)
    [1/2  0  0]   (B receives from A)
    [1/2  1  0]   (C receives from A and B)

Starting with r₀ = (1/3, 1/3, 1/3) and iterating rₖ₊₁ = Mrₖ converges to the steady-state PageRank vector.
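The power iteration for this 3-page example can be sketched directly (matvec is an illustrative helper; the damping factor is omitted here for simplicity):

```python
def matvec(M, v):
    """Matrix-vector product M v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Transition matrix from the 3-page example; columns correspond to A, B, C.
M = [[0,   0, 1],    # A receives from C
     [0.5, 0, 0],    # B receives from A
     [0.5, 1, 0]]    # C receives from A and B

r = [1/3, 1/3, 1/3]
for _ in range(100):      # power iteration: r_{k+1} = M r_k
    r = matvec(M, r)

print([round(x, 3) for x in r])  # [0.4, 0.2, 0.4]
```

Pages A and C end up tied for the top rank; B, with only one incoming link, ranks lowest.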

Least Squares Approximation

When a system Ax = b has no exact solution (more equations than unknowns, i.e., the system is overdetermined), we seek the least squares solution — the vector that minimizes the total squared error ‖Ax - b‖².

The least squares solution x̂ satisfies the normal equations:

AᵀAx̂ = Aᵀb

If AᵀA is invertible:

x̂ = (AᵀA)⁻¹Aᵀb

Example: Fitting a Line to Data

Find the best-fit line y = c₀ + c₁x for the data points (1, 2), (2, 3), (3, 6), (4, 8).

Set up the system Ac = b, where c = (c₀, c₁):

A = [1 1]  b = [2]

    [1 2]       [3]

    [1 3]       [6]

    [1 4]       [8]

AᵀA = [1 1 1 1] [1 1]   [ 4 10]
      [1 2 3 4] [1 2] = [10 30]
                [1 3]
                [1 4]

Aᵀb = [1 1 1 1] [2]   [19]
      [1 2 3 4] [3] = [58]
                [6]
                [8]

Solving [ 4 10 | 19] gives c₀ = -0.5, c₁ = 2.1
        [10 30 | 58]

Best-fit line: y = -0.5 + 2.1x
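The line fit above can be reproduced by forming AᵀA and Aᵀb from sums and solving the 2 × 2 normal equations by Cramer's Rule. A sketch with exact fractions (variable names are ours):

```python
from fractions import Fraction

# Fit y = c0 + c1*x to the four data points from the example above.
xs = [1, 2, 3, 4]
ys = [2, 3, 6, 8]
n = len(xs)

# Entries of A^T A and A^T b for the design matrix A = [[1, x_i]].
sx  = sum(xs)
sxx = sum(x * x for x in xs)
sy  = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

# Solve [[n, sx], [sx, sxx]] c = [sy, sxy] by Cramer's Rule.
d  = Fraction(n * sxx - sx * sx)
c0 = (sy * sxx - sx * sxy) / d
c1 = (n * sxy - sx * sy) / d
print(float(c0), float(c1))  # -0.5 2.1
```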

Quantum Mechanics

Quantum mechanics is fundamentally built on linear algebra. The state of a quantum system is represented as a vector in a complex vector space (Hilbert space), and physical observables (like energy, momentum, position) are represented as Hermitian matrices. Measurements correspond to eigenvalues, and the system's state collapses to the corresponding eigenvector after measurement.

Economics and Network Analysis

Linear algebra appears in economics through input-output models (Leontief model), where the relationships between industries are captured in a matrix. In network analysis, matrices encode connections between nodes, and their eigenvalues reveal properties like connectivity, clustering, and influence.

Linear algebra truly lives up to its reputation as the "mathematics of the 21st century." Whether you pursue pure mathematics, engineering, computer science, physics, economics, or data science, the concepts you've learned here — vectors, matrices, transformations, eigenvalues — will be among the most frequently used tools in your mathematical toolkit.