How to evaluate classifier performance in the presence of additional effects: A new POD-based approach allowing certification of machine learning approaches

Classifiers are useful and well-known learning algorithms for classification tasks. Whether a classifier is suited to a specific task depends on the application and the available datasets, so selecting an approach for a task requires performance evaluation. Existing measures such as the receiver operating characteristic and precision–recall curves are popular for evaluating classifier performance; however, neither measure directly addresses the influence of additional and possibly unknown (process) parameters on the classification results. In this contribution, this limitation is discussed and addressed by adapting the Probability of Detection (POD) measure. The POD is a probabilistic method for quantifying the reliability of a diagnostic procedure while taking into account the statistical variability of sensor and measurement properties. In this contribution the POD approach is adapted and extended. The introduced approach is applied to driving behavior prediction data serving as an illustrative example. Based on the introduced POD-related evaluation, different classifiers can be clearly distinguished with respect to their ability to predict the correct intended driver behavior as a function of the remaining time (here assumed as the process parameter) before the event itself. The introduced approach provides a new diagnostic and comprehensive interpretation of the quality of a classification model.
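To make the idea of a POD-based classifier evaluation more concrete, the following minimal sketch illustrates one common way such an analysis can be set up: a hit/miss logistic model of correct detections as a function of a process parameter (here, a synthetic "remaining time before the event"). The data, the decaying signal model, and the specific classifier are illustrative assumptions and do not reproduce the authors' exact method.

```python
# Minimal sketch (not the authors' implementation): hit/miss POD curve for a
# classifier as a function of a process parameter t (remaining time, in s).
# All data and parameter choices below are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: the class-dependent signal weakens as t grows, mimicking
# the assumption that driver intent is harder to predict long before the event.
n = 4000
t = rng.uniform(0.0, 5.0, n)                      # process parameter (s)
y = rng.integers(0, 2, n)                         # true class (e.g. maneuver yes/no)
signal = (2 * y - 1) * np.exp(-t / 2.0)           # signal decaying with t
X = np.column_stack([signal + rng.normal(0, 0.5, n),
                     rng.normal(0, 1.0, n)])      # one informative, one noise feature

X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(X, y, t, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Hit/miss analysis: a "hit" is a positive event the classifier detects.
pos = y_te == 1
hit = (clf.predict(X_te)[pos] == 1).astype(int)
t_pos = t_te[pos]

# POD(t): logistic regression of hit/miss on the process parameter.
pod_model = LogisticRegression().fit(t_pos.reshape(-1, 1), hit)
t_grid = np.linspace(0.0, 5.0, 6)
pod = pod_model.predict_proba(t_grid.reshape(-1, 1))[:, 1]
for ti, p in zip(t_grid, pod):
    print(f"POD at t = {ti:.1f} s: {p:.2f}")
```

In this sketch, comparing the resulting POD curves of two classifiers (rather than a single ROC or precision–recall summary) would reveal at which remaining times one model remains reliable while another degrades, which is the kind of parameter-dependent distinction the abstract describes.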

Rights

Use and reproduction:
This work may be used under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 License (CC BY-NC-ND 4.0).