Measuring the bias of incorrect application of feature selection when using cross-validation in radiomics

Background: Many radiomics studies use feature selection methods to identify the most predictive features, while employing cross-validation to estimate the performance of the developed models. However, if feature selection is performed before cross-validation, data leakage can occur and bias the results. To measure the extent of this bias, we collected ten publicly available radiomics datasets and conducted two experiments. First, models were developed by incorrectly applying feature selection prior to cross-validation. Then, the same experiment was repeated with feature selection applied correctly within cross-validation, i.e., separately to each training fold. The resulting models were then compared in terms of AUC-ROC, AUC-F1, and Accuracy; a sketch of the two protocols follows.
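The contrast between the two protocols can be illustrated with a minimal scikit-learn sketch (not the authors' code). The synthetic data, the SelectKBest selector with k=20, and the logistic regression classifier are illustrative assumptions; the point is only where the selector is fitted relative to the cross-validation splits.

```python
# Minimal sketch: feature selection applied before cross-validation (leaky)
# vs. nested inside each training fold (correct).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a radiomics table: many features, few samples.
X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)

# Incorrect: the selector sees the labels of all samples, including those
# later used as test folds, so information leaks into the evaluation.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
auc_leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y,
                            cv=5, scoring="roc_auc").mean()

# Correct: the selector is refit on the training portion of each fold only.
pipe = Pipeline([("select", SelectKBest(f_classif, k=20)),
                 ("clf", LogisticRegression(max_iter=1000))])
auc_nested = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()

print(f"leaky AUC-ROC:  {auc_leaky:.3f}")
print(f"nested AUC-ROC: {auc_nested:.3f}")
```

On data with this few samples and this many features, the leaky estimate is typically optimistic relative to the nested one, which is the kind of gap the study quantifies.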

Results: Applying feature selection incorrectly, prior to cross-validation, introduced a bias of up to 0.15 in AUC-ROC, 0.29 in AUC-F1, and 0.17 in Accuracy.

Conclusions: Incorrect application of feature selection relative to cross-validation can lead to highly biased results on radiomics datasets.

Rights

Use and reproduction: This work may be used under a Creative Commons Attribution 4.0 License (CC BY 4.0).