Principal Component Analysis

Principal Component Analysis (PCA) is a statistical method for dimensionality reduction and feature extraction. It transforms the original variables into a new set of variables, known as principal components, which are orthogonal (uncorrelated) and ordered by the variance they capture. The first principal component captures the most variance in the data, the second (which is orthogonal to the first) captures the second most, and so on.

History
PCA was invented in 1901 by Karl Pearson as a method of transforming observed correlated variables into a set of uncorrelated variables. The method has since become important across many fields, particularly with the advent of high-dimensional data.

Mathematical Foundations

  • Eigenvalues and Eigenvectors: PCA relies on eigenvalues and eigenvectors, computed from the data's covariance matrix or, equivalently, obtained via singular value decomposition of the centered data matrix.
  • Covariance Matrix: The covariance matrix captures the covariance between each pair of variables. The eigenvectors of this matrix represent the directions of maximum variance (principal components), and the corresponding eigenvalues indicate the magnitude of this variance.
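The link between the covariance matrix and its eigendecomposition can be sketched in NumPy (the data values below are purely illustrative):

```python
import numpy as np

# Toy data: 5 observations of 2 correlated variables (illustrative values).
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Center the data, then compute the covariance matrix.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# Eigendecomposition: eigenvectors give the principal directions,
# eigenvalues give the variance along each direction.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# eigh returns ascending order; sort by descending explained variance.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
```

Note that the eigenvalues sum to the total variance (the trace of the covariance matrix), which is why the ratio of each eigenvalue to their sum is reported as "variance explained".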

Algorithm
  • Data Standardization: The first step is usually to standardize the data so that each variable has a mean of zero and a standard deviation of one.
  • Covariance Matrix Computation: The covariance matrix of the data is computed.
  • Eigenvalue Decomposition: Eigenvalues and eigenvectors of the covariance matrix are computed.
  • Component Selection: The number of principal components to retain is chosen based on the amount of variance that they account for, often visualized through a scree plot.
  • Projection: The original data is then projected onto the selected principal components to create a lower-dimensional representation.
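The steps above can be sketched as a small NumPy function (a minimal illustration, not production code; the random data is only for demonstration):

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA sketch following the steps above."""
    # 1. Standardize: zero mean, unit standard deviation per variable.
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2. Compute the covariance matrix of the standardized data.
    cov = np.cov(X_std, rowvar=False)
    # 3. Eigenvalue decomposition (eigh, since covariance matrices are symmetric).
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    # 4. Select the top components by explained variance.
    order = np.argsort(eigenvalues)[::-1][:n_components]
    components = eigenvectors[:, order]
    explained = eigenvalues[order] / eigenvalues.sum()
    # 5. Project the data onto the selected components.
    return X_std @ components, explained

# Example usage on illustrative random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X_proj, explained = pca(X, n_components=2)
```

In practice a scree plot of the `explained` ratios helps decide how many components to keep.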

Applications
  • Data Visualization: Reducing the number of dimensions for visualization purposes.
  • Machine Learning: Feature extraction and dimensionality reduction.
  • Natural Language Processing: Document classification and clustering.
  • Bioinformatics: Analysis of genomic and gene-expression data.
  • Finance: For risk assessment and portfolio optimization.

Software Implementations
  • R (prcomp, pcaMethods)
  • Python (scikit-learn, PCA)
  • MATLAB (pca function)
  • SPSS
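As a minimal usage sketch with scikit-learn (the random data and the 90% variance threshold are illustrative choices):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative random data: 200 samples, 10 variables.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))

# Standardize first, since PCA is sensitive to variable scales.
X_std = StandardScaler().fit_transform(X)

# A float n_components keeps enough components to explain 90% of the variance.
pca = PCA(n_components=0.90)
X_reduced = pca.fit_transform(X_std)

print(X_reduced.shape)
print(pca.explained_variance_ratio_)
```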

Limitations
  • Linearity: PCA captures only linear relationships between variables; nonlinear structure in the data is not modeled.
  • Loss of Interpretability: Principal components are linear combinations of all original variables and may not be easily interpretable.
  • Sensitivity to Outliers: Because variance is not a robust statistic, outliers can strongly distort the principal components.
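The outlier sensitivity is easy to demonstrate: below, data lies along the direction (1, 1), but a single extreme point along (1, -1) flips the first principal component toward it (a contrived sketch with illustrative values):

```python
import numpy as np

def first_pc(X):
    # First principal component via eigendecomposition of the covariance matrix.
    cov = np.cov(X - X.mean(axis=0), rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argmax(vals)]

# Data lying close to the direction (1, 1).
rng = np.random.default_rng(1)
t = rng.normal(size=100)
X = np.column_stack([t, t + 0.1 * rng.normal(size=100)])
pc_clean = first_pc(X)

# One extreme outlier along (1, -1) dominates the variance.
X_outlier = np.vstack([X, [50.0, -50.0]])
pc_out = first_pc(X_outlier)
```

Robust PCA variants exist precisely to mitigate this behavior.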

See Also