categories: math
tags: linear algebra eigenvector eigenvalues

Eigenvectors and Eigenvalues

BY Jonathan Paek

PUBLISHED March 20, 2023
 

This is a review of the topic for myself.

Return for Review

  • Dimensionality Reduction (General Concept)
  • Dimension Reduction for Machine Learning
  • Clustering Analysis

Topics to return to for writing a separate article.

Basis Vector

Basis vectors may initially be thought of as orthogonal vectors, but that's not necessarily true. More accurately, they are linearly independent vectors which span the vector space. So they do not have to be orthogonal, though they can be; when they are orthogonal, this is helpful for projecting a vector onto the basis.

For example, in $\mathbb{R}^2$ we may have $\hat{i}, \hat{j}$, and in $\mathbb{R}^3$, $\hat{i}, \hat{j}, \hat{k}$.

Instead, we want to think in terms of the span of the vector space. For example, $(1, 1)$ and $(1, 0)$ are non-orthogonal vectors that still form a basis spanning $\mathbb{R}^2$, as checked in the sketch below.
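
As a quick sanity check, here is a minimal NumPy sketch (the vector $(3, 5)$ is an arbitrary choice of mine) confirming that these two vectors are linearly independent and can express any vector in $\mathbb{R}^2$:

```python
import numpy as np

# Candidate basis vectors (1, 1) and (1, 0), stacked as columns.
B = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# A nonzero determinant means the columns are linearly independent,
# so they span R^2 and form a basis.
print(np.linalg.det(B))  # -1.0, nonzero

# Any vector, e.g. (3, 5), can be written in this basis by solving B c = v.
v = np.array([3.0, 5.0])
print(np.linalg.solve(B, v))  # [ 5. -2.], i.e. 5*(1,1) - 2*(1,0) = (3,5)
```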

Eigenvector and Eigenvalue

An eigenvector, if one exists, remains within its own span under a linear transformation. The vector may keep the same magnitude, or be scaled by some factor $\lambda$, possibly down to the zero vector. This scaling factor $\lambda$ is the eigenvalue associated with the eigenvector.
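
To see this numerically, here is a small sketch with NumPy (the matrix is my own toy example) that finds eigenpairs and checks that the transformation only scales each eigenvector:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # arbitrary example matrix

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # A v should equal lambda v: the transformation only scales v.
    print(lam, np.allclose(A @ v, lam * v))  # True for each pair
```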

Conceptual Example in 2D Space

$$A = \begin{bmatrix} 1 & \alpha \\ 0 & 1 \end{bmatrix}$$

An example of a shear matrix is shown above. One such eigenvector here could be the basis vector $\hat{i} = (1, 0)$ itself, along the x-axis, with $\lambda = 1$. Note that during the shear transformation, this eigenvector remains in the same position.
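
A quick numeric check of this claim (I picked $\alpha = 1$ arbitrarily for the shear amount):

```python
import numpy as np

alpha = 1.0  # arbitrary shear amount
A = np.array([[1.0, alpha],
              [0.0, 1.0]])

i_hat = np.array([1.0, 0.0])
print(A @ i_hat)  # [1. 0.] -- unchanged, so i_hat is an eigenvector with lambda = 1
```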

Conceptual Example in 3D Space

Here, imagine a transformation of a 3D object, like a cube rotating about some axis. A vector along this axis of rotation is then an eigenvector, since it does not change during the transformation; its eigenvalue is $1$.
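
A sketch of this idea with a rotation about the z-axis (the angle is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 4  # arbitrary rotation angle
# Standard rotation matrix about the z-axis.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

axis = np.array([0.0, 0.0, 1.0])
print(R @ axis)  # [0. 0. 1.] -- the rotation axis is an eigenvector with lambda = 1
```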

Understanding with Determinants

Since we know the transformation will only scale an eigenvector, we can solve to find such vectors and the associated eigenvalue for each. This can be written as follows, where $\vec{v}$ is the eigenvector:

$$A\vec{v} = \lambda\vec{v}$$

Recall from earlier how convenient it is to know that such a vector is only scaled by a constant under the transformation.

See that:

$$\begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix} \vec{v} = \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \vec{v} = (\lambda I)\vec{v}$$

The right-hand side of $A\vec{v} = \lambda\vec{v}$ has been rewritten from scalar multiplication into matrix multiplication, matching the form of the left-hand side. We can now subtract it from both sides, setting the equation equal to the zero vector:

$$A\vec{v} - (\lambda I)\vec{v} = \vec{0}$$

Factoring out $\vec{v}$ leaves a matrix subtraction inside the parentheses:

$$(A - \lambda I) \vec{v} = \vec{0}$$

Writing out $A - \lambda I$ entry by entry, we arrive at the following equation:

$$\begin{bmatrix} a_{00} - \lambda & a_{01} & a_{02} \\ a_{10} & a_{11} - \lambda & a_{12} \\ a_{20} & a_{21} & a_{22} - \lambda \end{bmatrix} \vec{v} = \vec{0}$$

A nonzero $\vec{v}$ can satisfy this equation only if $A - \lambda I$ collapses space into a lower dimension, which is exactly when its determinant is $0$. So if we solve $\det(A - \lambda I) = 0$, we get the possible values for $\lambda$. Suppose here there are three, with values $\lambda_1, \lambda_2,$ and $\lambda_3$, after solving this cubic polynomial.
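
Numerically, `np.linalg.eigvals` handles this root-finding step for us. A sketch using my own example matrix, extended to $3 \times 3$:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])  # my own example matrix

# Roots of the characteristic polynomial det(A - lambda*I) = 0.
print(np.linalg.eigvals(A))  # eigenvalues 1, 3, 3 (in some order)

# Cross-check: det(A - lambda*I) is ~0 at each eigenvalue.
for lam in np.linalg.eigvals(A):
    print(np.linalg.det(A - lam * np.eye(3)))  # each approximately 0
```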

Then for a given eigenvalue, say $\lambda_1$, there is a span of vectors which makes the following equation true:

$$(A - \lambda_1 I) \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \vec{0}$$

The vector solutions all lie in the span of the eigenvector associated with that eigenvalue; together they form the eigenspace for $\lambda_1$.
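
To close the loop, the solutions of $(A - \lambda_1 I)\vec{v} = \vec{0}$ are exactly the null space of $A - \lambda_1 I$. A final sketch, continuing with the same example matrix (this one assumes SciPy for the null-space helper):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam1 = 1.0  # one eigenvalue of A, found in the previous step

# All solutions of (A - lam1*I) v = 0 form the null space of (A - lam1*I).
eigenspace = null_space(A - lam1 * np.eye(3))
print(eigenspace)  # one basis vector, proportional to (1, -1, 0)

v = eigenspace[:, 0]
print(np.allclose(A @ v, lam1 * v))  # True: A only scales v
```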