Conference paper, Year: 2006

Model assessment

Abstract

In data mining and machine learning, models come from data and provide insights for understanding data (unsupervised classification) or making predictions (supervised learning) (Giudici, 2003; Hand, 2000). The scientific status of such models therefore differs from the classical view, in which a model is a simplified representation of reality provided by an expert of the field. In most data mining applications a good model is one that not only fits the data but also gives good predictions, even if it is not interpretable (Vapnik, 2006). In this context, model validation and model choice require specific indices and approaches. Penalized likelihood measures (AIC, BIC, etc.) may not be pertinent when there is no simple distributional assumption on the data and/or for models such as regularized regression, SVM and many others where the parameters are constrained. Complexity measures such as the VC dimension are better suited, but very difficult to estimate. In supervised classification, ROC curves and the AUC are commonly used (Saporta & Niang, 2006). Model comparison should be done on validation (hold-out) sets, and resampling is necessary in order to get confidence intervals.
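For reference, the penalized likelihood criteria mentioned above are AIC = -2 log L + 2k and BIC = -2 log L + k log n, where L is the maximized likelihood, k the number of free parameters and n the sample size. The last point of the abstract, assessing a classifier on a hold-out set and using resampling to obtain confidence intervals, can be illustrated with a minimal sketch. The code below is not from the paper; it assumes Python with NumPy and scikit-learn, and the simulated data, logistic regression model and sample sizes are purely illustrative. It computes the AUC on a hold-out set and a 95% bootstrap confidence interval by resampling that set.

    # Minimal sketch: hold-out AUC with a bootstrap confidence interval (illustrative only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Simulated binary classification data, standing in for a real data set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Fit on the training set only; assess on the hold-out set.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, scores)

    # Bootstrap resampling of the hold-out set gives a confidence interval for the AUC.
    rng = np.random.default_rng(0)
    boot_aucs = []
    for _ in range(1000):
        idx = rng.integers(0, len(y_test), len(y_test))
        if len(np.unique(y_test[idx])) < 2:  # both classes are needed to compute an AUC
            continue
        boot_aucs.append(roc_auc_score(y_test[idx], scores[idx]))
    lo, hi = np.percentile(boot_aucs, [2.5, 97.5])

    print(f"hold-out AUC = {auc:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")

The same resampling scheme applies to any hold-out performance index, not only the AUC, which is one way to compare models on a common validation set as the abstract suggests.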
Main file
RC1059.pdf (262.89 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02507614, version 1 (13-03-2020)

Identifiers

  • HAL Id: hal-02507614, version 1

Cite

Gilbert Saporta, Ndèye Niang. Model assessment. KNEMO'06 Knowledge Extraction and Modeling, Jan 2006, Capri, Italy. ⟨hal-02507614⟩

Collections

CNAM CEDRIC-CNAM
55 Views
121 Downloads
