Feature Ranking with Recursive Feature Elimination in Scikit-Learn

Using scikit-learn to obtain the optimal number of features

Feature selection is an important task for any machine learning application, and it's especially crucial when the data in question has many features. Working with the optimal number of features also leads to improved model accuracy. The most important features and the optimal number of features can be identified via feature importance or feature ranking. In this piece, we’ll explore feature ranking.

Recursive Feature Elimination

The first item needed for recursive feature elimination is an estimator; for example, a linear model or a decision tree model.

Linear models provide coefficients, and decision tree models provide feature importances. To select the optimal number of features, the estimator is trained and the features are ranked via the coefficients or the feature importances. The least important features are then removed, and this process is repeated recursively until the desired number of features remains.

Application in Sklearn

Scikit-learn makes it possible to implement recursive feature elimination via the sklearn.feature_selection.RFE class. The class takes the following parameters:

  • estimator — a machine learning estimator that can provide feature importances via the coef_ or feature_importances_ attributes.
  • n_features_to_select — the number of features to select. Half of the features are selected if it isn’t specified.
  • step — an integer that indicates the number of features to be removed at each iteration, or a number between 0 and 1 to indicate the percentage of features to remove at each iteration.

Once fitted, the following attributes can be obtained:

  • ranking_ — the ranking of the features.
  • n_features_ — the number of features that have been selected.
  • support_ — an array that indicates whether or not a feature was selected.

Application

As noted earlier, we’ll need to work with an estimator that offers a feature_importances_ or coef_ attribute. Let’s work through a quick example. The dataset has 13 features; we’ll work on getting the optimal number of features.

Let’s obtain the X and y features.
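Since the original dataset isn’t named here, a minimal sketch using scikit-learn’s built-in wine dataset (which also has 13 features) as a stand-in:

```python
from sklearn.datasets import load_wine

# Stand-in dataset with 13 features; swap in your own data as needed
X, y = load_wine(return_X_y=True)
print(X.shape)  # (178, 13)
```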

We’ll split it into a testing and training set to prepare for modeling:
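For example, with train_test_split (the test size and random state below are illustrative choices):

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
```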

Let’s get a couple of imports out of the way; the corresponding import statements are sketched after the list:

  • Pipeline — since we’ll perform cross-validation, it’s best practice to do the feature selection inside a pipeline in order to avoid data leakage.
  • RepeatedStratifiedKFold — for repeated stratified cross-validation.
  • cross_val_score — for evaluating the score on cross-validation.
  • GradientBoostingClassifier — the estimator we’ll use.
  • numpy — so that we can compute the mean of the scores.
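Putting those together, the imports might look like this:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.feature_selection import RFE
from sklearn.ensemble import GradientBoostingClassifier
```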

The first step is to create an instance of the RFE class while specifying the estimator and the number of features you’d like to select. In this case, we’re selecting 6:
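For instance, with GradientBoostingClassifier as the estimator (it exposes feature_importances_):

```python
# Keep the 6 most important features, dropping one feature per iteration
rfe = RFE(estimator=GradientBoostingClassifier(), n_features_to_select=6)
```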

Next, we create an instance of the model we’d like to use:
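Here that’s the gradient boosting classifier named in the imports above:

```python
model = GradientBoostingClassifier()
```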

We’ll use a Pipeline to chain the steps: rfe for the feature selection step, followed by the model that’ll be fitted on the selected features in the next step.
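A sketch of that pipeline (the step names are arbitrary):

```python
pipe = Pipeline([
    ("rfe", rfe),      # feature selection step
    ("model", model),  # model fitted on the selected features
])
```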

We then specify a RepeatedStratifiedKFold with 10 splits and 5 repeats. The stratified K fold ensures that the number of samples from each class is well balanced in each fold. RepeatedStratifiedKFold repeats the stratified K fold the specified number of times, with a different randomization in each repetition.
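Something along these lines, with the mean accuracy computed from cross_val_score (the random state is an arbitrary choice):

```python
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=42)

# Evaluate the whole pipeline (feature selection + model) under cross-validation
scores = cross_val_score(pipe, X_train, y_train, scoring="accuracy", cv=cv, n_jobs=-1)
print(np.mean(scores))
```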

The next step is to fit this pipeline to the dataset.
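Fitting on the training data populates the attributes of the rfe step:

```python
pipe.fit(X_train, y_train)
```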

With that in place, we can check the support and the ranking. The support indicates whether or not a feature was chosen.
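The fitted rfe step can be pulled out of the pipeline via named_steps:

```python
# Boolean mask: True for each of the 6 selected features
print(pipe.named_steps["rfe"].support_)
```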

We can put that into a dataframe and check the result.
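For example, pairing each feature name with its selection flag (the feature names below assume the wine dataset used as a stand-in above):

```python
import pandas as pd
from sklearn.datasets import load_wine

feature_names = load_wine().feature_names
pd.DataFrame({
    "feature": feature_names,
    "selected": pipe.named_steps["rfe"].support_,
})
```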

We can also check the relative rankings.
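Selected features are ranked 1; higher numbers mean a feature was eliminated earlier:

```python
pd.DataFrame({
    "feature": feature_names,
    "ranking": pipe.named_steps["rfe"].ranking_,
}).sort_values("ranking")
```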

Automatic Feature Selection

Instead of manually configuring the number of features, it would be very nice if we could automatically select them. This can be achieved via recursive feature elimination and cross-validation. This is done via the sklearn.feature_selection.RFECV class. The class takes the following parameters:

  • estimator — similar to the RFE class.
  • min_features_to_select — the minimum number of features to be selected.
  • cv — the cross-validation splitting strategy.

The attributes returned are:

  • n_features_ — the optimal number of features selected via cross-validation.
  • support_ — the array containing information on the selection of a feature.
  • ranking_ — the ranking of the features.
  • grid_scores_ — the scores obtained from cross-validation.

The first step is to import the class and create its instance.
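A minimal sketch, keeping the defaults (min_features_to_select=1 and 5-fold cross-validation) and reusing the gradient boosting estimator:

```python
from sklearn.feature_selection import RFECV

rfecv = RFECV(estimator=GradientBoostingClassifier())
```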

The next step is to specify the pipeline and the cv. In this pipeline, we use the rfecv instance we just created.
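As before, the pipeline chains the selector and the model, and the cv is used to evaluate the whole pipeline:

```python
pipe = Pipeline([("rfecv", rfecv), ("model", GradientBoostingClassifier())])

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=42)
scores = cross_val_score(pipe, X_train, y_train, scoring="accuracy", cv=cv, n_jobs=-1)
print(np.mean(scores))
```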

Let’s fit the pipeline and then obtain the optimal number of features.
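Fitting the pipeline populates the attributes of the rfecv step:

```python
pipe.fit(X_train, y_train)
```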

The optimal number of features can be obtained via the n_features_ attribute.
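For example:

```python
# Number of features RFECV settled on during cross-validation
print(pipe.named_steps["rfecv"].n_features_)
```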

The rankings and support can be obtained just like last time.
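Again via the fitted rfecv step:

```python
print(pipe.named_steps["rfecv"].support_)
print(pipe.named_steps["rfecv"].ranking_)
```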

With the grid_scores_ we can plot a graph showing the cross-validated scores.
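A sketch of that plot, assuming matplotlib is available (note that in recent scikit-learn versions grid_scores_ has been replaced by cv_results_["mean_test_score"]):

```python
import matplotlib.pyplot as plt

grid_scores = pipe.named_steps["rfecv"].grid_scores_
plt.plot(range(1, len(grid_scores) + 1), grid_scores)
plt.xlabel("Number of features selected")
plt.ylabel("Cross-validated score")
plt.show()
```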

Final Thoughts

The process for applying this in a regression problem is the same; just be sure to use regression metrics instead of accuracy. I hope this piece has given you some insight into selecting the optimal number of features for your machine learning problems.
