TESTC

Test classifier, error / performance estimation

    [E,C] = TESTC(A*W,TYPE)
    [E,C] = TESTC(A,W,TYPE)
     E = A*W*TESTC([],TYPE)

    [E,F] = TESTC(A*W,TYPE,LABEL)
    [E,F] = TESTC(A,W,TYPE,LABEL)
     E = A*W*TESTC([],TYPE,LABEL)

Input
 A      Dataset
 W      Trained classifier mapping
 TYPE   Type of performance estimate, default: probability of error
 LABEL  Target class, default: none

Output
 E      Error / performance estimate
 C      Number of erroneously classified objects per class, sorted
        according to A.LABLIST
 F      Error / performance estimate of the non-target classes

Description

This routine supplies several performance estimates for a trained classifier W, based on a test dataset A. A should have objects for all classes assigned by W. Class prior probabilities given in A are taken into account. Use TESTD if just the number of incorrectly assigned objects has to be determined.
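
A minimal usage sketch (GENDATB, GENDAT and LDC are standard PRTools
routines; the dataset sizes and the 50/50 split are arbitrary choices):

    A = gendatb([50 50]);    % banana-shaped two-class dataset
    [T,S] = gendat(A,0.5);   % split into train set T and test set S
    W = ldc(T);              % train a linear classifier on T
    [E,C] = testc(S,W);      % E: error estimate, C: errors per class
    E2 = S*W*testc;          % equivalent call in mapping style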

It is possible to supply a cell array of datasets {A*W}, a cell array of datasets {A}, or a cell array of classifiers {W}. If both A and W are cell arrays, W may be 2-dimensional with as many columns as A has datasets. See DISPERROR for an example.
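
For instance, several trained classifiers may be compared on the same
test set (QDC and KNNC are standard PRTools classifiers; T and S are the
train and test sets from the sketch above, and E is expected to hold one
estimate per classifier):

    W = {ldc(T), qdc(T), knnc(T,3)};   % cell array of trained classifiers
    E = testc(S,W);                    % one error estimate per classifier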

Objects in A belonging to classes other than those defined for W, as well as unlabeled objects, are neglected. Note that this implies that TESTC applied to a rejecting classifier (e.g. REJECTC) estimates the performance on the non-rejected objects only. By

    [E,C] = TESTC(A,W);
    E = (C./CLASSSIZES(A))*GETPRIOR(A)';

the classification error with respect to all objects in A may be computed. Use CONFMAT for an overview of the total class assignment, including the unlabeled (rejected) objects.
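
A sketch of such an overview (LABELD retrieves the assigned labels and
GETLABELS the true ones; both are standard PRTools routines):

    D = S*W;                          % classify the test set
    confmat(getlabels(D),labeld(D));  % true versus assigned labels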

If classes defined for W are missing in A, [E,C] = TESTC(A*W) returns NaN in E, but C still contains the number of erroneously classified objects per class.
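
This can be reproduced by testing on a set from which one class has been
removed (SELDAT is the standard PRTools routine for selecting the
objects of a given class):

    S1 = seldat(S,1);      % keep only the class-1 objects of the test set
    [E,C] = testc(S1*W);   % E is NaN; C still counts errors per class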

If LABEL is given, the performance estimate relates to that class only, taken as the target class. If LABEL is not given, a class average weighted by the class priors is returned.
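
A sketch with LABEL (assuming the numeric class labels 1 and 2 as
generated by GENDATB):

    [E,F] = testc(S,W,'precision',1);    % E: precision, F: recall for class 1
    [E,F] = testc(S,W,'sensitivity',2);  % E: sensitivity, F: specificity for class 2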

The following performance measures are supported for TYPE:

 'crisp'        Expected classification error based on error counting,
                weighted by the class priors (default).
 'FN'           E: False negative.  F: False positive.
 'TP'           E: True positive.  F: True negative.
 'soft'         Expected classification error based on soft error
                summation, i.e. a sum of the absolute differences between
                classifier output and target, weighted by class priors.
 'F'            Lissack and Fu error estimate.
 'mse'          Expected mean square difference between classifier output
                and target (based on soft labels), weighted by class
                priors.
 'auc'          Area under the ROC curve (this is an error and not a
                performance!). For multi-class problems this is the
                weighted average (by class priors) of the one-against-rest
                contributions of the classes.
 'precision'    E: Fraction of true target objects among the objects
                classified as target. The target class is defined by
                LABEL. Priors are not used.
                F: Recall, the fraction of correctly classified objects
                in the target class. Priors are not used.
 'sensitivity'  E: Fraction of correctly classified objects in the target
                class (defined by LABEL). Priors are not used.
                Sensitivity as used here is identical to recall.
                F: Specificity, the fraction of non-target objects that
                are not classified into the target class (defined by
                LABEL). Priors are not used.
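
A sketch exercising a few of these TYPE options on one classified test
set (S and W as in the sketches above):

    D = S*W;                           % classified test set
    e_crisp = testc(D);                % default: crisp error counting
    e_soft  = testc(D,'soft');         % soft error summation
    e_auc   = testc(D,'auc');          % area under the ROC curve (an error)
    [e,f] = testc(D,'sensitivity',1);  % sensitivity / specificity, class 1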

Example(s)

prex_plotc

See also

MAPPINGS, DATASETS, CONFMAT, REJECTC
