This is a review of the topic for myself.
Return for Review
- Dimensionality Reduction (General Concept)
- Dimension Reduction for Machine Learning
- Clustering Analysis
Topics to return to for writing a separate article.
Basis Vector
Basis vectors may initially be thought of as orthogonal vectors (but that's not necessarily true). More accurately, they are linearly independent vectors which span the vector space. So they do not have to be orthogonal, but they can be; when they are orthogonal, this is helpful for projecting a vector onto the basis. For example, in $\mathbb{R}^2$ we may have $\hat{i} = (1, 0)$ and $\hat{j} = (0, 1)$, and in $\mathbb{R}^3$, $\hat{i} = (1, 0, 0)$, $\hat{j} = (0, 1, 0)$, $\hat{k} = (0, 0, 1)$.
Instead of orthogonality, we want to think about the span of the vector space. Using $(1, 1)$ and $(1, 0)$ is a non-orthogonal example which still forms a basis spanning $\mathbb{R}^2$.
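As a quick check, here is a minimal numpy sketch (the matrix `B` and test vector `v` are just illustrative values I chose): a nonzero determinant confirms $(1, 1)$ and $(1, 0)$ are linearly independent, and solving a linear system expresses an arbitrary vector in this basis.

```python
import numpy as np

# Columns are the candidate (non-orthogonal) basis vectors.
B = np.column_stack([[1.0, 1.0], [1.0, 0.0]])

# A nonzero determinant means the columns are linearly independent,
# so they span R^2 and form a basis.
print(np.linalg.det(B))  # -1.0 (nonzero)

# Express an arbitrary vector v in this basis: solve B @ c = v.
v = np.array([3.0, 2.0])
c = np.linalg.solve(B, v)
print(c)  # [2., 1.] -> v = 2*(1, 1) + 1*(1, 0)
```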
Eigenvector and Eigenvalue
An eigenvector, if any exists, remains within the same span during a linear transformation. The vector may keep the same magnitude or be scaled by some factor, including to the zero vector. This scaling factor, $\lambda$, is the eigenvalue for that eigenvector.
Conceptual in 2D space
An example of a shear matrix is $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. One such eigenvector here is the basis vector $(1, 0)$ along the x-axis, with $\lambda = 1$. Note that during the shear transformation, this eigenvector remains in the same position.
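A minimal numpy sketch of this, assuming the shear matrix above:

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # horizontal shear

v = np.array([1.0, 0.0])    # basis vector along the x-axis
print(S @ v)                # [1., 0.] -> unchanged, so lambda = 1

# numpy agrees: both eigenvalues are 1,
# and (1, 0) appears as an eigenvector column.
vals, vecs = np.linalg.eig(S)
print(vals)        # [1. 1.]
print(vecs[:, 0])  # [1. 0.]
```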
Conceptual in 3D space
Here, imagine a transformation of a 3D object, like a cube rotating about some axis. The vector along this rotational axis is an eigenvector, since it does not change during the transformation (its eigenvalue is 1).
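A sketch of this with a rotation about the z-axis (the angle is arbitrary): the axis vector comes back unchanged, so it is an eigenvector with eigenvalue 1. The other two eigenvalues of a 3D rotation are complex, which is why the axis is the only real eigenvector direction.

```python
import numpy as np

theta = np.pi / 4  # rotate 45 degrees about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

axis = np.array([0.0, 0.0, 1.0])  # the rotation axis
print(Rz @ axis)  # [0. 0. 1.] -> fixed by the rotation, eigenvalue 1
```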
Understanding with Determinants
Since we know the transformation will only scale an eigenvector, we can solve for such vectors and the associated eigenvalue of each. This can be written as follows, where $\vec{v}$ is the eigenvector:

$$A\vec{v} = \lambda\vec{v}$$
Recall from earlier the convenience of knowing that a basis vector only scales by a constant.
See that:

$$A\vec{v} - \lambda I\vec{v} = \vec{0}$$

The RHS has been converted from scalar multiplication to matrix multiplication ($\lambda\vec{v} = \lambda I\vec{v}$, where $I$ is the identity matrix) in a similar form to the LHS, so that we can solve for the zero vector and factor out $\vec{v}$.
Perform matrix subtraction:

$$(A - \lambda I)\vec{v} = \vec{0}$$
For a nonzero $\vec{v}$ to satisfy this, the matrix $A - \lambda I$ must squish space into a lower dimension, i.e. its determinant must be zero. Arriving at the following equation:

$$\det(A - \lambda I) = 0$$
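As a worked instance, using the shear matrix from the 2D example above:

$$\det\begin{pmatrix} 1 - \lambda & 1 \\ 0 & 1 - \lambda \end{pmatrix} = (1 - \lambda)^2 = 0 \quad\Rightarrow\quad \lambda = 1$$

which matches the eigenvalue found for $(1, 0)$ earlier.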
Here, if we solve for the determinant to be 0, then we have the possible values for $\lambda$. Suppose $A$ is a $3 \times 3$ matrix; then $\det(A - \lambda I)$ is a cubic polynomial in $\lambda$, and solving it gives up to 3 values, $\lambda_1$, $\lambda_2$, $\lambda_3$.
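A sketch of this step (the matrix `A` below is an arbitrary symmetric example I picked so that all three roots are real): numpy builds the cubic characteristic polynomial and confirms its roots are the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# np.poly(A) returns the coefficients of the degree-3
# characteristic polynomial of A.
coeffs = np.poly(A)
print(np.roots(coeffs))      # roots of the cubic: 1, 2, 11 in some order
print(np.linalg.eigvals(A))  # the same eigenvalues straight from numpy
```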
Then for a given eigenvalue, say $\lambda_1$, there is a span of vectors which makes the equation $(A - \lambda_1 I)\vec{v} = \vec{0}$ true. These vector solutions all lie on the same span as the eigenvector with the associated eigenvalue: any scalar multiple of an eigenvector is still an eigenvector for the same $\lambda$.
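A last sketch, reusing the same assumed matrix `A` from above: scaling an eigenvector by any constant still satisfies the equation, so the solution set is the eigenvector's entire span.

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

vals, vecs = np.linalg.eig(A)
lam, v = vals[0], vecs[:, 0]

# Every scalar multiple w = c*v still satisfies A @ w = lam * w,
# so the solutions form the whole line (span) through v.
for c in (1.0, -2.0, 0.5):
    w = c * v
    print(np.allclose(A @ w, lam * w))  # True each time
```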