DIMENSIONALITY REDUCTION.

INTRODUCTION.

In machine learning classification problems, the final classification often depends on a large number of factors. These factors are variables called features. The higher the number of features, the harder it becomes to visualize the training dataset and to work with it. This is where dimensionality reduction algorithms come into the picture.

The number of input features or variables for a dataset is known as its dimensionality.

Dimensionality reduction refers to techniques that reduce the number of input variables in the training data.

COMPONENTS.

There are two components of dimensionality reduction:

Feature selection:     Here, we try to find a subset of the original set of variables that is small enough, yet still sufficient, to model the problem. This is usually done in one of three ways (a sketch follows this list):

  • Filter
  • Wrapper
  • Embedded
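
As a minimal sketch of the filter approach in Python using scikit-learn, where the dataset (iris) and the scoring function (the ANOVA F-test) are illustrative assumptions, not prescribed above:

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_iris(return_X_y=True)

    # Filter method: score each feature against the target independently
    # of any model, then keep the k highest-scoring original features.
    selector = SelectKBest(score_func=f_classif, k=2)  # illustrative k
    X_selected = selector.fit_transform(X, y)

    print(X.shape)           # (150, 4) -- original feature count
    print(X_selected.shape)  # (150, 2) -- a subset of the original features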

Feature extraction:     This component reduces the data in a high-dimensional space to a lower-dimensional space, i.e., a space with a smaller number of dimensions.

DIMENSIONALITY REDUCTION METHODS.

The different methods used for dimensionality reduction are:

  • Linear Discriminant Analysis (LDA)
  • Principal Component Analysis (PCA)
  • Generalized Discriminant Analysis (GDA)

Dimensionality reduction can be either linear or non-linear, depending on the method used in the model.

Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis is a dimensionality reduction technique used for supervised classification problems. It models the differences between groups, i.e., it separates two or more classes.

In order to apply the LDA technique for dimensionality reduction, the target column must first be selected. The maximum number of reduced dimensions m is the number of classes in the target column minus one, or the number of numeric columns in the data, whichever is smaller. All numeric columns in the dataset are projected onto these linear discriminant functions, effectively moving the dataset from n dimensions to m dimensions.
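
As a minimal sketch of this procedure, assuming scikit-learn and the iris dataset (3 classes, 4 numeric columns) purely for illustration:

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)

    # With 3 classes, at most 3 - 1 = 2 discriminant dimensions are available.
    lda = LinearDiscriminantAnalysis(n_components=2)
    X_lda = lda.fit_transform(X, y)  # supervised: the target y is required

    print(X.shape)      # (150, 4) -- n original numeric dimensions
    print(X_lda.shape)  # (150, 2) -- m reduced dimensions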

Principal Component Analysis (PCA)

Principal component analysis (PCA) is a statistical procedure that transforms the n original numeric dimensions of a dataset into a new set of n dimensions called principal components. Dimensionality is reduced by keeping only the first few principal components and discarding the rest.

The first principal component accounts for the largest possible variance in the original data. Each succeeding principal component has the highest possible variance under the constraint that it is uncorrelated with the preceding components.
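
A minimal sketch with scikit-learn, again using iris purely as an illustrative dataset; note that PCA, unlike LDA, needs no target column:

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)

    # Keep only the first 2 of the 4 possible principal components.
    pca = PCA(n_components=2)
    X_pca = pca.fit_transform(X)

    # Explained variance ratios are non-increasing: the first component
    # captures the largest share of the variance, the next component the
    # largest remaining share, and so on.
    print(pca.explained_variance_ratio_)  # roughly [0.92, 0.05] for iris
    print(X_pca.shape)                    # (150, 2)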

ADVANTAGES OF DIMENSIONALITY REDUCTION.

  • Dimensionality reduction helps with data compression, and hence reduces the required storage space.
  • It reduces computation time.
  • It also helps remove redundant features.

DISADVANTAGES OF DIMENSIONALITY REDUCTION.

  • May lead to some data loss.
  • PCA captures only linear correlations between variables, which is sometimes undesirable.
  • PCA fails in cases where the mean and covariance are not enough to characterize the dataset.
