There are mainly two types of Dimensionality Reduction techniques:

Feature Selection

Feature Extraction

The main Feature Selection techniques are:

Backward Elimination

Forward Selection

Bidirectional Elimination

Score Comparison

We covered some of these topics in the Regression chapter when we were going through the Multiple Linear Regression models.

In this chapter, we will go through various Feature Extraction techniques:

Principal Component Analysis (PCA)

Linear Discriminant Analysis (LDA)

Kernel PCA

Quadratic Discriminant Analysis (QDA)

Principal Component Analysis

Principal Component Analysis (PCA) is fundamentally a dimensionality reduction algorithm, but it can also be useful as a tool for visualization, noise filtering, feature extraction & engineering, and much more.

PCA is a fast and flexible unsupervised method for dimensionality reduction in data. Its behaviour is easiest to visualize by looking at a two-dimensional dataset. Consider the following 300 points:
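The figure's underlying data is not included here, but a dataset like this might be generated as follows (a sketch; the seed and the random mixing matrix are illustrative choices, not necessarily the original's):

```python
import numpy as np
import matplotlib.pyplot as plt

# 300 two-dimensional points with a strong linear correlation:
# a random 2x2 mixing matrix applied to standard-normal samples
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 300)).T

plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal')
plt.show()
```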

It is clear that there is a nearly linear relationship between the x & y variables. In PCA, this relationship is quantified by finding a list of the principal axes in the data and using those axes to describe the dataset. Using Scikit-learn’s PCA estimator, we can compute this as follows:

from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)

The fit learns some quantities from the data, most importantly the “components” and “explained variance”:

print(pca.components_)

[[ 0.94625072 0.32343403]

[-0.32343403 0.94625072]]

print(pca.explained_variance_)

[0.77935137 0.02004522]

To see what these numbers mean, let’s visualize them as vectors over the input data, using the “components” to define the direction of the vector, and the “explained variance” to define the squared-length of the vector:
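A sketch of that visualization, regenerating the data above so the snippet runs on its own (scaling each arrow by 3 times the standard deviation is a common plotting convention, not part of PCA itself):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Regenerate the illustrative correlated dataset and fit PCA
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 300)).T
pca = PCA(n_components=2).fit(X)

def draw_vector(v0, v1, ax=None):
    # Draw an arrow from point v0 to point v1
    ax = ax or plt.gca()
    ax.annotate('', v1, v0,
                arrowprops=dict(arrowstyle='->', linewidth=2,
                                shrinkA=0, shrinkB=0))

plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
for length, vector in zip(pca.explained_variance_, pca.components_):
    # explained_variance_ is a variance, so sqrt gives a length (std dev)
    v = vector * 3 * np.sqrt(length)
    draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal')
plt.show()
```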

These vectors represent the principal axes of the data, and the length of the vector is an indication of how “important” that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis. The projections of each data point onto the principal axes are the “principal components” of the data.

Principal Component Analysis as a dimensionality reduction algorithm

PCA for dimensionality reduction involves zeroing out one or more of the smallest principal components, resulting in a lower-dimensional projection of the data that preserves the maximal data variance.

Here is an example of using PCA as a dimensionality reduction transform:
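The transform itself might look like the following sketch (the 300-point dataset is regenerated here so the snippet is self-contained):

```python
import numpy as np
from sklearn.decomposition import PCA

# Regenerate the illustrative 300-point dataset
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 300)).T

# Keep only the single largest principal component
pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("original shape:   ", X.shape)
print("transformed shape:", X_pca.shape)

# Map the 1-D data back into 2-D space for plotting against the original
X_new = pca.inverse_transform(X_pca)
```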

original shape: (300, 2)
transformed shape: (300, 1)

The data now has been reduced to a single dimension. To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data:

The light points are the original data, while the dark points are the projected version. This makes clear what a PCA dimensionality reduction means: the information along the least important principal axis or axes is removed, leaving only the component(s) of the data with the highest variance. The fraction of variance that is cut out (proportional to the spread of points about the line formed in this figure) is roughly a measure of how much “information” is discarded in this reduction of dimensionality.

This reduced-dimension dataset is in some senses “good enough” to encode the most important relationships between the points: despite reducing the dimension of the data by 50%, the overall relationships between the data points are mostly preserved.

Let’s now move on to some high-dimensional data and see how PCA deals with the digits dataset.

We start by loading the data:

from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape

Out[76]: (1797, 64)

The data consists of 8×8 pixel images, i.e. 64 dimensions. To gain some intuition into the relationships between these points, we can use PCA to project them down to a more manageable number of dimensions, say 2:

pca = PCA(2)  # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)

(1797, 64)
(1797, 2)

Let’s plot the first two components of each point to learn about the data:
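A sketch of that scatter plot, coloring each projected point by its digit label (the colormap choice is illustrative):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()
projected = PCA(n_components=2).fit_transform(digits.data)

# Each 64-D digit image becomes a single 2-D point, colored by its label
plt.scatter(projected[:, 0], projected[:, 1],
            c=digits.target, alpha=0.5, cmap='tab10')
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar(label='digit')
plt.show()
```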

The full data is a 64-dimensional point cloud, and these points are the projection of each data point along the directions with the largest variance.

Choosing the number of components

A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components:

import numpy as np
import matplotlib.pyplot as plt

pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');

This curve quantifies how much of the total, 64-dimensional variance is contained within the first N components. For example, we see that for the digits the first 10 components contain approximately 75% of the variance, while around 50 components are needed to describe close to 100% of the variance.

Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we’d need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.

PCA as Noise Filtering

PCA can also be used as a noise filtering technique. Any components with variance much larger than the effect of the noise should be relatively unaffected by the noise. So if you reconstruct the data using just the largest subset of principal components, you should be preferentially keeping the signal and throwing out the noise.

Let’s see how this looks with the digits data. First we will plot several of the noise-free input digits:

def plot_digits(data):
    fig, axes = plt.subplots(4, 10, figsize=(10, 4),
                             subplot_kw={'xticks': [], 'yticks': []},
                             gridspec_kw=dict(hspace=0.1, wspace=0.1))
    for i, ax in enumerate(axes.flat):
        ax.imshow(data[i].reshape(8, 8),
                  cmap='binary', interpolation='nearest',
                  clim=(0, 16))

plot_digits(digits.data)

Now let’s add some random noise to create a noisy dataset, and re-plot it:
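The noisy dataset might be created like this (the noise standard deviation of 4 and the seed are illustrative choices):

```python
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()
np.random.seed(42)
# Draw each pixel from a normal distribution centred on the clean value
noisy = np.random.normal(digits.data, 4)
# plot_digits(noisy)  # visualize with the plot_digits helper defined above
```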

The images are noisy, and contain spurious pixels. Let’s train a PCA on the noisy data, requesting that the projection preserve 50% of the variance:

pca = PCA(0.50).fit(noisy)
pca.n_components_

Out[85]: 12

Here 50% of the variance amounts to 12 principal components. Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits:
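A sketch of that reconstruction, regenerating the noisy data so the snippet stands alone (the seed and noise level are illustrative choices):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)

# A float n_components asks PCA to keep enough components
# to explain that fraction of the variance
pca = PCA(0.50).fit(noisy)
components = pca.transform(noisy)             # project onto the kept components
filtered = pca.inverse_transform(components)  # reconstruct in the original 64-D space
# plot_digits(filtered)  # visualize with the plot_digits helper defined above
```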

This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs.

Linear Discriminant Analysis

Properties of LDA

Used as a dimensionality reduction technique

Used in the pre-processing step for pattern classification

Aims to project a dataset onto a lower-dimensional space

Looking at these properties, LDA sounds similar to PCA, but it differs: in addition to finding the component axes, with LDA we are interested in the axes that maximize the separation between multiple classes.

The goal of LDA is to project a feature space (a dataset of n-dimensional samples) onto a smaller subspace k (where k ≤ n−1) while maintaining the class-discriminatory information.

Both PCA and LDA are linear transformation techniques used for dimensionality reduction. PCA is unsupervised, whereas LDA is supervised because it makes use of the dependent variable (the class labels).

In general, dimensionality reduction does not only help reduce computational costs for a given classification task, but it can also be helpful to avoid overfitting by minimizing the error in parameter estimation (the “curse of dimensionality”).

Summarizing the LDA approach in 5 steps:

Compute the d-dimensional mean vectors for the different classes from the dataset.

Compute the scatter matrices (between-class and within-class scatter matrices).

Compute the eigenvectors and corresponding eigenvalues for the scatter matrices.

Sort the eigenvectors by decreasing eigenvalue and choose the k eigenvectors with the largest eigenvalues to form a d × k dimensional matrix W (where every column represents an eigenvector).

Use this d × k eigenvector matrix to transform the samples onto the new subspace. This can be summarized by the equation Y = X × W (where X is an n × d matrix representing the n samples, and Y contains the transformed n × k samples in the new subspace).
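The five steps above can be sketched in plain NumPy. This is a minimal illustration on the iris data (k = 2 is an illustrative choice), not a production implementation:

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
n_features = X.shape[1]          # d = 4
classes = np.unique(y)
mean_overall = X.mean(axis=0)

# Steps 1-2: per-class mean vectors, then within- and between-class scatter
S_W = np.zeros((n_features, n_features))
S_B = np.zeros((n_features, n_features))
for c in classes:
    X_c = X[y == c]
    mean_c = X_c.mean(axis=0)
    S_W += (X_c - mean_c).T @ (X_c - mean_c)
    diff = (mean_c - mean_overall).reshape(-1, 1)
    S_B += len(X_c) * (diff @ diff.T)

# Step 3: eigendecomposition of S_W^{-1} S_B
e_vals, e_vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)

# Step 4: sort by decreasing eigenvalue, keep k = 2 eigenvectors as columns of W
order = np.argsort(e_vals.real)[::-1]
W = e_vecs[:, order[:2]].real    # shape (d, k) = (4, 2)

# Step 5: project the samples: Y = X W, shape (n, k)
Y = X @ W
print(Y.shape)
```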

Python Implementation

from mlxtend.data import iris_data
from mlxtend.preprocessing import standardize
from mlxtend.feature_extraction import LinearDiscriminantAnalysis

X, y = iris_data()
X = standardize(X)
lda = LinearDiscriminantAnalysis(n_discriminants=2)
lda.fit(X, y)
X_lda = lda.transform(X)

import matplotlib.pyplot as plt

with plt.style.context('seaborn-whitegrid'):
    plt.figure(figsize=(6, 4))
    for lab, col in zip((0, 1, 2), ('blue', 'red', 'green')):
        plt.scatter(X_lda[y == lab, 0], X_lda[y == lab, 1],
                    label=lab, c=col)
    plt.xlabel('Linear Discriminant 1')
    plt.ylabel('Linear Discriminant 2')
    plt.legend(loc='lower right')
    plt.tight_layout()
    plt.show()

Example 2 – Plotting the Between-Class Variance Explained Ratio

from mlxtend.data import iris_data
from mlxtend.preprocessing import standardize
from mlxtend.feature_extraction import LinearDiscriminantAnalysis
import numpy as np
import matplotlib.pyplot as plt

X, y = iris_data()
X = standardize(X)
lda = LinearDiscriminantAnalysis(n_discriminants=None)
lda.fit(X, y)
X_lda = lda.transform(X)

tot = sum(lda.e_vals_)
var_exp = [(i / tot) * 100 for i in sorted(lda.e_vals_, reverse=True)]
cum_var_exp = np.cumsum(var_exp)

with plt.style.context('seaborn-whitegrid'):
    fig, ax = plt.subplots(figsize=(6, 4))
    plt.bar(range(4), var_exp, alpha=0.5, align='center',
            label='individual explained variance')
    plt.step(range(4), cum_var_exp, where='mid',
             label='cumulative explained variance')
    plt.ylabel('Explained variance ratio')
    plt.xlabel('Linear discriminants')
    plt.xticks(range(4))
    ax.set_xticklabels(np.arange(1, X.shape[1] + 1))
    plt.legend(loc='best')
    plt.tight_layout()

Kernel PCA

PCA (as a dimensionality reduction technique) tries to find a low-dimensional linear subspace that the data are confined to. But it might be that the data are confined to a low-dimensional nonlinear subspace. What will happen then?

Take a look at this figure, taken from Bishop’s “Pattern Recognition and Machine Learning” textbook (Figure 12.16):

The data points here (on the left) are located mostly along a curve in 2D. PCA cannot reduce the dimensionality from two to one, because the points are not located along a straight line. But still, the data are “obviously” located around a one-dimensional non-linear curve. So while PCA fails, there must be another way! And indeed, kernel PCA can find this non-linear manifold and discover that the data are in fact nearly one-dimensional.

It does so by mapping the data into a higher-dimensional space. This can look like a contradiction, but it is not. The data are mapped into a higher-dimensional space, but then turn out to lie on a lower-dimensional subspace of it. So you increase the dimensionality in order to be able to decrease it.

The essence of the “kernel trick” is that one does not actually need to explicitly consider the higher-dimensional space, so this potentially confusing leap in dimensionality is performed entirely under the hood. The idea, however, stays the same.
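A minimal sketch contrasting the two on a toy dataset (the concentric-circles data and the gamma value are illustrative choices, not from the original):

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: not linearly separable in the original 2-D space
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Plain PCA is just a rotation here, so the circles remain circles
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF kernel "unrolls" the nonlinear structure,
# and the classes tend to separate along the leading components
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=10)
X_kpca = kpca.fit_transform(X)
```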

Let’s compare an example using PCA & Kernel-PCA and see the results:

PCA

# PCA

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Applying PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)

# Fitting Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)

# Predicting the Test set results
y_pred = classifier.predict(X_test)

# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Visualising the Test set results
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

Kernel-PCA

# Kernel PCA

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Applying Kernel PCA
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(n_components=2, kernel='rbf')
X_train = kpca.fit_transform(X_train)
X_test = kpca.transform(X_test)

# Fitting Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)

# Predicting the Test set results
y_pred = classifier.predict(X_test)

# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Visualising the Test set results
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

Now we can see the new feature space formed by the two extracted components: with Kernel PCA our data becomes linearly separable and is much better separated by a linear classifier.

We can obtain good results even if our original data is not linearly separable at all: using Kernel PCA, we can still get a straight line separating the red points & green points. Hence, it is strongly recommended to try Kernel PCA whenever plain PCA doesn’t give good results on your dataset, which may indicate a non-linear problem.
