Advances in Feature Selection and Classification

Research in machine learning is converging on more efficient and robust methods for feature selection and classification. Current work aims to improve accuracy while reducing the computational cost of existing techniques. One key direction is hybrid methods that combine several feature selection and classification techniques to achieve better results; another is the use of entropy-based objective functions to optimize classification models. Together, these efforts point toward methods that can handle complex datasets while improving the performance of machine learning models. Noteworthy papers in this area include:

  • A paper proposing a refined random forest classifier that iteratively refines the ensemble and removes redundant trees, achieving higher accuracy than standard random forest models.
  • A paper developing a feature selection method based on sampling techniques and rough set theory that can select a feature subset with high discriminatory ability in a short time, even on massive datasets.
  • A paper presenting a hybrid approach with correlation-aware voting rules for feature selection, which combines parameter-to-parameter and parameter-to-target correlations to eliminate redundant features while retaining relevant ones.
  • A paper proposing a novel classification approach that searches for a parameter vector in a bounded hypercube and a positive vector, both obtained by minimizing an entropy-based function.
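The first idea, pruning an ensemble of redundant trees, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes we already have each tree's predictions on a validation set (a hypothetical `preds` matrix) and greedily drops trees that nearly duplicate another tree, as long as majority-vote accuracy does not fall.

```python
import numpy as np

def prune_redundant_trees(preds, y, agree_threshold=0.95):
    """Greedily drop trees whose predictions nearly duplicate another
    tree's, as long as majority-vote accuracy does not decrease.

    preds : (n_trees, n_samples) integer class predictions on a
            validation set (illustrative input, not the paper's API).
    y     : (n_samples,) true labels.
    """
    def vote_accuracy(idx):
        # Majority vote over the kept trees; ties break toward lower label.
        votes = preds[idx]
        maj = np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, votes)
        return (maj == y).mean()

    kept = list(range(len(preds)))
    base = vote_accuracy(kept)           # accuracy of the full ensemble
    changed = True
    while changed and len(kept) > 1:
        changed = False
        for i in list(kept):
            for j in kept:
                if i == j:
                    continue
                agree = (preds[i] == preds[j]).mean()
                if agree >= agree_threshold:
                    trial = [k for k in kept if k != i]
                    if vote_accuracy(trial) >= base:
                        kept = trial     # tree i is redundant: remove it
                        changed = True
                        break
            if changed:
                break
    return kept
```

Here two identical trees collapse to one while the vote stays as accurate as before; the real refinement procedure also promotes diversity when growing new trees, which this sketch omits.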
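The rough-set notion behind the second paper is the positive region: the set of objects whose equivalence class under a candidate attribute subset is consistent with the decision attribute. A small, self-contained sketch of that definition (the row-of-dicts format and function names are illustrative, and the paper's sampling step is omitted):

```python
from collections import defaultdict

def positive_region(rows, attrs, decision):
    """Objects whose equivalence class under `attrs` has a single
    decision value (the classic rough-set positive region)."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)   # equivalence class key
        classes[key].append(i)
    pos = []
    for members in classes.values():
        decisions = {rows[i][decision] for i in members}
        if len(decisions) == 1:              # class is consistent
            pos.extend(members)
    return sorted(pos)

def dependency_degree(rows, attrs, decision):
    # gamma(B) = |POS_B(D)| / |U|: how much of the universe the
    # attribute subset B classifies consistently.
    return len(positive_region(rows, attrs, decision)) / len(rows)
```

A feature subset that preserves the positive region of the full attribute set classifies exactly the same objects consistently, which is the property the sampling-based method tries to maintain cheaply on massive data.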
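The third bullet's two correlation signals can also be sketched: rank features by their correlation with the target, then drop any feature nearly collinear with one already kept. The thresholds below are illustrative defaults, not the paper's tuned voting rules.

```python
import numpy as np

def select_features(X, y, target_min=0.3, redundancy_max=0.9):
    """Keep features well correlated with the target, dropping any
    feature nearly collinear with one already kept (a simple stand-in
    for correlation-aware voting)."""
    n_features = X.shape[1]
    # |Pearson correlation| of each feature with the target.
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    order = np.argsort(-relevance, kind="stable")   # most relevant first
    kept = []
    for j in order:
        if relevance[j] < target_min:
            continue                     # parameter-to-target filter
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_max
            for k in kept)               # parameter-to-parameter filter
        if not redundant:
            kept.append(int(j))
    return kept
```

A feature that is a linear rescaling of an already-selected one is filtered out even though its own target correlation is perfect, which is exactly the redundancy the parameter-to-parameter rule targets.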

Sources

Diversity Conscious Refined Random Forest

Positive region preserved random sampling: an efficient feature selection method for massive data

HCVR: A Hybrid Approach with Correlation-aware Voting Rules for Feature Selection

Classification by Separating Hypersurfaces: An Entropic Approach
