Feature Selection for Dimensionality Reduction

Shaily jain
3 min read · May 14, 2021

Feature Selection is a simple way to reduce the dimensionality of your data, and the result stays easy to understand because the retained features are a subset of the original columns.

The most common techniques are listed below:

  • Ratio of missing values: Data columns with a ratio of missing values greater than a given threshold can be removed (a short pandas sketch follows this list).
  • Low variance in the column values: Columns whose values change very little carry little information, so data columns with a variance lower than a given threshold can be removed. Notice that the variance depends on the column range, and therefore normalization is required before applying this technique. It can be used only for numerical data (see the sketch after this list).
  • High correlation between two columns: Data columns with very similar trends are likely to carry very similar information, and only one of them will suffice for classification. Here we calculate the Pearson product-moment correlation coefficient between numeric columns and Pearson's chi-square statistic between nominal columns. For the final classification, we retain only one column of each pair whose pairwise correlation exceeds a given threshold. Notice that correlation depends on the column range, and therefore normalization is required before applying this technique. We cannot measure correlation between numerical and nominal features with these statistics (a sketch for the numeric case appears after this list).
  • Random Forests/Ensemble Trees: Decision tree ensembles, often called random forests, are useful for column selection in addition to being effective classifiers. Here we generate a large, carefully constructed set of trees to predict the target classes and then use each column's usage statistics to find the most informative subset of columns. We generate a large set (2,000) of very shallow trees (two levels), each trained on a small fraction (three columns) of the total number of columns. If a column is often selected as the best split, it is very likely to be an informative column that we should keep. For each column, we calculate a score as the number of times it was selected for a split, divided by the number of times it was a candidate. The most predictive columns are those with the highest scores (an approximate scikit-learn sketch appears after this list).
  • Backward Feature Elimination: In this technique, at a given iteration, the selected classification algorithm is trained on n input columns. We then remove one input column at a time and train the same model on n-1 columns. The input column whose removal produces the smallest increase in the error rate is removed, leaving us with n-1 input columns. The classification is then repeated using n-2 columns, and so on. Each iteration k produces a model trained on n-k columns and an error rate e(k). By selecting the maximum tolerable error rate, we define the smallest number of columns necessary to reach that classification performance with the selected machine learning algorithm (a plain-loop sketch follows the list).
  • Forward Feature Construction: This is the inverse of backward feature elimination. We start with one column only and progressively add one column at a time, i.e., the column that produces the highest increase in performance. Both algorithms, backward feature elimination and forward feature construction, are quite expensive in terms of time and computation. They are practical only when applied to a dataset with an already relatively low number of input columns (a scikit-learn sketch follows the list).
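
Missing-value ratio: below is a minimal pandas sketch. The DataFrame, its column names, and the 40% threshold are made up for illustration; the idea is simply to drop any column whose fraction of missing values exceeds the threshold.

```python
import pandas as pd

# Toy data; column names and the 40% threshold are hypothetical.
df = pd.DataFrame({
    "age":    [25, 32, None, 41, 29],           # 20% missing -> kept
    "income": [None, None, None, 52000, None],  # 80% missing -> dropped
    "city":   ["NY", "LA", "NY", None, "SF"],   # 20% missing -> kept
})

threshold = 0.4
missing_ratio = df.isna().mean()                    # fraction of missing values per column
df_reduced = df.loc[:, missing_ratio <= threshold]  # keep columns under the threshold
print(df_reduced.columns.tolist())                  # ['age', 'city']
```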
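
Low variance filter: a sketch assuming scikit-learn is available. The toy columns and the 0.01 threshold are arbitrary; the key steps are normalizing first, so one threshold is meaningful for every column, and then dropping columns whose variance falls below it.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "almost_constant": np.r_[np.full(99, 7.0), [7.5]],  # 99 identical values, one outlier
    "informative": rng.uniform(0.0, 100.0, size=100),   # well-spread values
})

# Normalize to [0, 1] first so a single variance threshold applies to every column.
scaled = MinMaxScaler().fit_transform(df)

selector = VarianceThreshold(threshold=0.01)  # hypothetical threshold
selector.fit(scaled)
print(df.columns[selector.get_support()].tolist())  # ['informative']
```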
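
High correlation filter (numeric case): a sketch using pandas' Pearson correlation. The synthetic columns and the 0.95 cut-off are assumptions; for each highly correlated pair, only one column is kept.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
x = rng.normal(size=200)
df = pd.DataFrame({
    "f1": x,
    "f2": x * 2.0 + rng.normal(scale=0.01, size=200),  # almost a copy of f1
    "f3": rng.normal(size=200),                        # independent column
})

threshold = 0.95            # hypothetical correlation cut-off
corr = df.corr().abs()      # Pearson correlation between numeric columns

# Look only at the upper triangle so each pair is checked once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
df_reduced = df.drop(columns=to_drop)
print(df_reduced.columns.tolist())  # ['f1', 'f3']
```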
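
Ensemble-tree score: the sketch below approximates the described setup with scikit-learn's RandomForestClassifier (many two-level trees, three candidate columns per split) and ranks columns by its impurity-based feature_importances_, which is a related score rather than the exact "times selected / times candidate" ratio described above. The synthetic dataset and all parameter values are illustrative.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 10 columns, only 3 of which are informative.
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

# Many very shallow trees, each split choosing among only 3 candidate columns,
# roughly mirroring the setup described in the list item above.
forest = RandomForestClassifier(n_estimators=2000, max_depth=2,
                                max_features=3, random_state=0)
forest.fit(X, y)

scores = pd.Series(forest.feature_importances_,
                   index=[f"col_{i}" for i in range(X.shape[1])])
print(scores.sort_values(ascending=False).head(3))  # most informative columns
```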
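
Backward feature elimination can be written as a plain loop. The sketch below uses a logistic regression with 5-fold cross-validation and a hypothetical tolerable error rate of 10%; at each pass it drops the column whose removal increases the error the least, and stops once the error budget would be exceeded.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           n_redundant=1, random_state=0)
model = LogisticRegression(max_iter=1000)
max_error = 0.10  # hypothetical maximum tolerable error rate

remaining = list(range(X.shape[1]))
while len(remaining) > 1:
    best_cols, best_error = None, None
    for col in remaining:
        candidate = [c for c in remaining if c != col]
        error = 1.0 - cross_val_score(model, X[:, candidate], y, cv=5).mean()
        if best_error is None or error < best_error:
            best_cols, best_error = candidate, error
    if best_error > max_error:  # removing any further column costs too much accuracy
        break
    remaining = best_cols       # drop the column whose removal hurt the least

print("columns kept:", remaining)
```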
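
Forward feature construction maps onto scikit-learn's SequentialFeatureSelector with direction="forward". The estimator, the target of three columns, and the cross-validation setup are assumptions; the selector greedily adds the column that most improves cross-validated performance.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           n_redundant=1, random_state=0)

# Greedily add the column that most improves cross-validated performance.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=3,  # hypothetical target size
                                direction="forward", cv=5)
sfs.fit(X, y)
print("columns kept:", list(sfs.get_support(indices=True)))
```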

Thanks for reading till the end. I will look to expand on each of these techniques with fuller worked examples in future posts.

Please check out >>>

MY SOCIAL SPACE

Instagram https://www.instagram.com/codatalicious/

LinkedIn https://www.linkedin.com/in/shaily-jain-6a991a143/

Medium https://codatalicious.medium.com/

YouTube https://www.youtube.com/channel/UCKowKGUpXPxarbEHAA7G4MA
