Dimensionality Reduction Method

1.4.1  Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis, or LDA, is a dimensionality reduction technique used as a pre-processing step in Machine Learning and pattern classification applications. The goal of LDA is to project features from a higher-dimensional space onto a lower-dimensional space, both to avoid the curse of dimensionality and to reduce computational cost. In effect, it brings the higher-dimensional variables (which we cannot plot and analyse directly) down to a low-dimensional view such as a 2D graph, and in doing so discards the less informative directions.

LDA is also a ‘Supervised Dimensionality Reduction’ technique, and it is closer to Feature Extraction than to Feature Selection (it creates new variables as combinations of the original features rather than keeping a subset of them). Because it is supervised, it works only on labeled data.

Now, let’s consider a situation where you have plotted the relationship between two variables, with each color representing a different class: one class is shown in red and the other in blue. We have two sets of data points belonging to two different classes, and we need to classify/separate them efficiently.

If you are willing to reduce the number of dimensions to 1, you can just project everything to the x-axis as shown below:

As the 2D graph shows, when the data points are projected onto the x-axis alone, i.e. when only a single feature is used to classify them, there is no point on that axis that cleanly separates the two classes, and the projections of the red and blue points overlap. Simply increasing the number of features for proper classification is not the answer either, since that leads straight back to the high-dimensional situation we are trying to avoid. More importantly, the naive x-axis projection neglects any helpful information provided by the second feature.

However, you can apply LDA to the same plot, which reduces the 2D graph to a 1D graph in a way that maximizes the separability between the two classes.

The advantage of LDA is that it uses information from both features (the x and y axes) to create a new axis that minimizes the within-class variance and maximizes the distance between the two class means.
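To make this concrete, here is a minimal sketch using scikit-learn's LinearDiscriminantAnalysis to reduce two-class, two-feature data to a single discriminant axis. The toy data (two Gaussian blobs standing in for the red and blue classes) and all variable names below are made up purely for illustration:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical toy data: two classes (red vs. blue), two features each
rng = np.random.default_rng(42)
X_red = rng.normal(loc=[1.0, 1.0], scale=0.6, size=(50, 2))
X_blue = rng.normal(loc=[3.0, 3.0], scale=0.6, size=(50, 2))
X = np.vstack([X_red, X_blue])
y = np.array([0] * 50 + [1] * 50)

# Project the 2D data onto a single LDA axis (at most n_classes - 1 components)
lda = LinearDiscriminantAnalysis(n_components=1)
X_1d = lda.fit_transform(X, y)
print(X_1d.shape)  # (100, 1): every point now lies on the new discriminant axis

On this new axis the two classes are far better separated than they would be if we had simply kept the original x feature.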

Linear Discriminant Analysis can be broken up into the following steps (a from-scratch sketch of these steps is shown after the list):

  1. Compute the within-class and between-class scatter matrices
  2. Compute the eigenvectors and corresponding eigenvalues of S_W⁻¹S_B (the inverse of the within-class scatter matrix times the between-class scatter matrix)
  3. Sort the eigenvalues in descending order and select the top k
  4. Create a new matrix whose columns are the eigenvectors corresponding to those top k eigenvalues
  5. Obtain the new features (i.e. LDA components) by taking the dot product of the data and the matrix from step 4
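The following is a minimal from-scratch sketch of those five steps in NumPy. It assumes a small labeled dataset X (rows are samples) and a label vector y; the function name and toy data are hypothetical, and a production implementation would solve the generalized eigenproblem more carefully (or simply use scikit-learn as above):

import numpy as np

def lda_components(X, y, k=1):
    classes = np.unique(y)
    n_features = X.shape[1]
    overall_mean = X.mean(axis=0)

    # Step 1: within-class (S_W) and between-class (S_B) scatter matrices
    S_W = np.zeros((n_features, n_features))
    S_B = np.zeros((n_features, n_features))
    for c in classes:
        X_c = X[y == c]
        mean_c = X_c.mean(axis=0)
        S_W += (X_c - mean_c).T @ (X_c - mean_c)
        diff = (mean_c - overall_mean).reshape(-1, 1)
        S_B += X_c.shape[0] * (diff @ diff.T)

    # Step 2: eigenvectors/eigenvalues of S_W^-1 S_B (pinv used for numerical safety)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)

    # Steps 3 and 4: sort eigenvalues in descending order, keep the top-k eigenvectors
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:k]].real

    # Step 5: project the data onto the new LDA components
    return X @ W

# Hypothetical usage with the same kind of two-class toy data as above
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([4, 4], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(lda_components(X, y, k=1).shape)  # (100, 1)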

For a worked numerical example of LDA, you can refer to this: https://people.revoledu.com/kardi/tutorial/LDA/Numerical%20Example.html

For more information on the implementation of LDA in Python, you can refer to this: https://www.mygreatlearning.com/blog/linear-discriminant-analysis-or-lda/
