DisTools User Guide: 2 Hour Course, 4 Day Course, Computing Dissimilarities, Manipulation, Visualization, Dissimilarity Matrix Classification, Dissimilarity Space, PE Embedding, Evaluation

Evaluation
This page belongs to the User Guide of the DisTools Matlab package. It describes some of its commands. Links to other pages are listed above. More information can be found in the pages of the PRTools User Guide. Links are given at the bottom of this page.
Here some examples are given that illustrate how classifiers can be computed and tested for each of the three ways in which a dissimilarity representation can be handled.
[DT,DS] = genddat(D,0.5); W = knndc(DT); E_DM = DS*W*testc;
Split the dissimilarity matrix; DS contains the dissimilarities to the trainset. Compute the kNN classifier (k is optimized) on the given dissimilarities. Classification error on the testset.
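Outside MATLAB the first route can be sketched in a few lines of numpy. This is an illustrative 1-NN rule applied directly to a precomputed dissimilarity matrix, not the actual knndc implementation (which also optimizes k):

```python
import numpy as np

def nn_on_dissimilarities(D_test_train, train_labels):
    """1-NN: assign each test object the label of its nearest
    training object, using the given dissimilarities directly."""
    nearest = np.argmin(D_test_train, axis=1)   # index of closest train object
    return train_labels[nearest]

# Toy example: 4 training objects (2 per class), 2 test objects.
train_labels = np.array([0, 0, 1, 1])
D_test_train = np.array([[0.1, 0.4, 2.0, 2.2],   # close to class 0
                         [1.9, 2.1, 0.2, 0.3]])  # close to class 1
pred = nn_on_dissimilarities(D_test_train, train_labels)
print(pred)  # [0 1]
```

Note that no vector space is involved at all: only the rows of dissimilarities to the training objects are used.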
[DT,DS] = gendat(D,0.5); W = knnc(DT); E_DS = DS*W*testc;
Split the dataset in disspace (based on all data!) into trainset and testset. Compute a classifier in disspace on the trainset. Classification error on the testset.
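The dissimilarity space route treats every column (the dissimilarity to one representation object) as a feature, after which any ordinary vector-space classifier applies. A minimal numpy sketch with a nearest-mean rule (cf. nmc); all names are illustrative, this is not PRTools code:

```python
import numpy as np

def nmc_fit(X, y):
    """Class means in the (dissimilarity) space."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def nmc_predict(classes, means, X):
    # Euclidean distance of every object to every class mean.
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Rows are objects, columns are dissimilarities to the representation set.
D_train = np.array([[0.0, 0.5, 3.0], [0.5, 0.0, 2.8],
                    [3.0, 2.8, 0.0], [2.9, 3.1, 0.4]])
y_train = np.array([0, 0, 1, 1])
classes, means = nmc_fit(D_train, y_train)
D_test = np.array([[0.2, 0.4, 2.9]])
print(nmc_predict(classes, means, D_test))  # [0]
```

Because the dissimilarity space is an ordinary feature space, nothing in the classifier is specific to dissimilarity data; only the construction of the features is.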
[DT,DS] = genddat(D,0.5); W = knnc(DT); E_DS = DS*W*testc;
Split the dataset in disspace (based on the trainset) into trainset and testset. Compute a classifier in disspace on the trainset. Classification error on the testset.
X = D*pe_em(D); [XT,XS] = gendat(X,0.5); W = XT*pe_knnc; E_PE = XS*W*testc;
Compute the PE space from all data. Split into trainset and testset. Compute a classifier in PE space on the trainset. Classification error on the testset.
[DT,DS] = genddat(D,0.5); V = pe_em(DT); XT = DT*V; XS = DS*V; W = XT*pe_knnc; E_PE = XS*W*testc;
Split the dataset in disspace (based on the trainset) into trainset and testset. Find the PE space. Map the trainset on the PE space. Map the testset on the PE space. Compute a classifier in PE space on the trainset. Classification error on the testset.
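The embedding behind the last two examples can be sketched as classical scaling extended to indefinite spaces: double-center the squared dissimilarities, eigendecompose, and scale the eigenvectors by the square roots of the absolute eigenvalues, keeping the signature of positive and negative eigenvalues. A numpy sketch assuming a symmetric D; this is illustrative, not the pe_em implementation:

```python
import numpy as np

def pe_embed(D, k=None):
    """Pseudo-Euclidean embedding of a symmetric dissimilarity matrix D.
    Returns coordinates X and the signs of the retained eigenvalues
    (+1: Euclidean direction, -1: pseudo-Euclidean direction)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # (pseudo-)Gram matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(-np.abs(evals))           # largest |eigenvalue| first
    if k is not None:
        order = order[:k]
    evals, evecs = evals[order], evecs[:, order]
    X = evecs * np.sqrt(np.abs(evals))
    # In a true PE space, squared distances weight the squared coordinate
    # differences by these signs.
    return X, np.sign(evals)

# Euclidean check: exact distances are reproduced with an all-positive signature.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
X, signs = pe_embed(D, k=2)
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=2)
print(np.allclose(D, D_rec))  # True
```

For non-Euclidean dissimilarity data some eigenvalues become negative, and classifiers such as pe_knnc have to respect the resulting signature.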
In the second and the fourth example the spaces are defined on all data, as the gendat function just selects objects in a given space. The labels are not used in the construction of that space. The test results are thereby still fair, but they estimate the performance obtained when test sets of a similar size are used and the classifiers are recomputed. This is also called transductive learning: the classifiers are adapted to the test data.
In the third and the fifth example just the training sets are used for building the spaces, as the genddat function by default uses a representation set (the columns) that is equal to the trainset. Thereby classifiers can be computed that are valid for all future data. For some small, severely non-Euclidean datasets, however, the projection of the test data on a PE space computed from just the training data yields bad, e.g. complex, results. Therefore many studies are based on transductive learning.
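The two regimes differ simply in which columns of D a training or test object keeps. A small numpy illustration of the two splits (hypothetical index names, not the gendat/genddat implementations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
D = rng.random((n, n)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)

idx = rng.permutation(n)
tr, te = idx[:5], idx[5:]

# gendat-style (transductive): the space is built on ALL objects; the split
# only selects rows, so train and test keep dissimilarities to the full repset.
X_train_all, X_test_all = D[tr, :], D[te, :]                    # n columns each

# genddat-style (inductive): the representation set equals the trainset,
# so both row sets keep only the columns of the training objects.
X_train_rep, X_test_rep = D[np.ix_(tr, tr)], D[np.ix_(te, tr)]  # 5 columns

print(X_test_all.shape, X_test_rep.shape)  # (5, 10) (5, 5)
```

In the inductive regime a future object only needs its dissimilarities to the training objects, which is why such classifiers remain valid for all future data.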
PRTools contains special routines for crossvalidation, learning curves and feature curves: crossval, cleval and clevalf. For dissimilarity data, special versions, crossvald and clevald, are sometimes needed in order to use genddat instead of gendat for splitting the datasets. In the transductive approaches this is not needed. Here are some examples.
% Transductive learning: 10-fold crossvalidation of nmc (5 times) in disspace,
% followed by embedding in PE space and running a PE classifier.
% Use of crossvald builds the spaces from the trainset only: disspaces with
% 20 randomly chosen training objects for representation, and PE spaces
% recomputed for every new trainset.
E = 
nmc learning curve in disspace; the repset is all data; 5 repeats. Plot the learning curves of the test and apparent error.
E = 
Compare learning curves of 3 classifiers in disspace with a repset of 1 training object per class. 
E =

Learning curves of knndc (on the dismat) and knnc (in disspace); the repset is the trainset; plot without the apparent error curve.
E = 
Compare learning curves of kNN in PE space, disspace and dismat. The representation set equals the training set. 
E =

Feature curve in disspace using all objects for representation and 50% of the objects for training. 
E =

Feature curve in PE space using all objects for representation and 50% of the objects for training. 
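What crossvald adds over a plain crossvalidation is that the representation set is rebuilt inside every fold from that fold's training objects only. A numpy sketch of this fold logic, with 1-NN standing in for the classifier; the function name and details are illustrative assumptions, not the crossvald API:

```python
import numpy as np

def crossval_dissim(D, y, n_folds=5, seed=0):
    """n-fold crossvalidation on a square dissimilarity matrix D.
    In each fold the columns (representation set) are restricted to that
    fold's training objects; 1-NN is used as the classifier."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    errors = []
    for f in folds:
        tr = np.setdiff1d(idx, f)              # training objects of this fold
        D_te_tr = D[np.ix_(f, tr)]             # test rows, train columns only
        pred = y[tr][np.argmin(D_te_tr, axis=1)]
        errors.append(np.mean(pred != y[f]))
    return float(np.mean(errors))

# Two well-separated clusters: the crossvalidated 1-NN error should be 0.
pts = np.vstack([np.zeros((10, 2)), 5 + np.zeros((10, 2))]) \
      + np.random.default_rng(1).normal(scale=0.1, size=(20, 2))
y = np.repeat([0, 1], 10)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
print(crossval_dissim(D, y, n_folds=5))  # 0.0
```

Restricting the columns per fold is exactly what distinguishes the inductive evaluation from the transductive one, where the full column set would be kept.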
PRTools User Guide
elements: datasets, datafiles, cells and doubles, mappings, classifiers, mapping types
operations: datasets, datafiles, cells and doubles, mappings, classifiers, stacked, parallel, sequential, dyadic
commands: datasets, representation, classifiers, evaluation, clustering and regression, examples, support