Classification of given dissimilarity matrices

This page belongs to the User Guide of the DisTools Matlab package and describes some of its classification commands. More information can be found in the pages of the PRTools User Guide; links are given at the bottom of this page.

In the analysis of given dissimilarity matrices it is assumed that the matrix is square, i.e. that the training set and the representation set coincide. A dissimilarity matrix constructed as a PRTools dataset should carry object labels for the rows and feature labels for the columns; these are taken from the labels of the two datasets used to construct the matrix. The PRTools commands that compute dissimilarity matrices take care of this automatically. If a dissimilarity matrix stored as a PRTools dataset is transposed (E = D'), the two labellings are swapped automatically by PRTools.

nne Leave-one-out nearest neighbor error estimated from a square labeled dissimilarity matrix.
e = D*nne Compute the leave-one-out 1NN error of the square labeled dissimilarity matrix D.
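The idea behind the leave-one-out 1NN error can be sketched in a few lines of NumPy. This is a minimal illustration of the computation, not the DisTools implementation; the function name `loo_nn_error` and the toy data are made up for this example.

```python
import numpy as np

def loo_nn_error(D, labels):
    """Leave-one-out 1NN error from a square dissimilarity matrix.

    D[i, j] is the dissimilarity between objects i and j; labels[i]
    is the class label of object i.  Each object is classified by its
    nearest neighbor, excluding itself (the leave-one-out step)."""
    D = np.asarray(D, dtype=float).copy()
    labels = np.asarray(labels)
    np.fill_diagonal(D, np.inf)      # exclude self-matches
    nearest = np.argmin(D, axis=1)   # index of the nearest other object
    return np.mean(labels[nearest] != labels)

# Toy example: two well-separated 1-D point clouds
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
y = np.array([0, 0, 0, 1, 1, 1])
D = np.abs(x[:, None] - x[None, :])  # Euclidean distances in 1-D
print(loo_nn_error(D, y))            # 0.0: every nearest neighbor agrees
```

Setting the diagonal to infinity is what makes the estimate leave-one-out: without it every object would find itself at distance zero and the error would trivially be zero.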
nnerror1 Exact expected NN error from a dissimilarity matrix as a function of the training set size per class. The exact expected error is obtained by full integration.
e = D*nnerror1; plote(e) Compute and plot the learning curve based on the 1NN rule.
e = D*nnerror1([],20) Compute and show the exact expected 1NN error for 20 objects per class.
nnerror2 Exact expected NN error from a dissimilarity matrix as a function of the overall training set size. The exact expected error is obtained by full integration.
e = D*nnerror2; plote(e) Compute and plot the learning curve based on the 1NN rule.
e = D*nnerror2([],20) Compute and show the exact expected 1NN error for 20 objects.
testkd Test k-NN classifier for dissimilarity data.
e = D*testkd(3,'loo') Compute for a square dismat the classification error of the kNN rule with k=3, using leave-one-out crossvalidation.
e = D*testkd Compute for a dissimilarity dataset of test objects (dissimilarities to a training set) the classification error using the 1NN rule.
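Testing a k-NN rule on a rectangular dissimilarity matrix (rows: test objects, columns: training objects) can be sketched as follows. This is an illustrative NumPy sketch of the principle behind testkd, not its actual code; `knn_test_error` and the toy data are invented for the example.

```python
import numpy as np

def knn_test_error(DS, train_labels, test_labels, k=1):
    """k-NN classification error on a rectangular dissimilarity
    matrix DS: rows are test objects, columns are the training
    (representation) objects."""
    DS = np.asarray(DS, dtype=float)
    train_labels = np.asarray(train_labels)
    idx = np.argsort(DS, axis=1)[:, :k]   # k smallest dissimilarities per row
    errors = 0
    for i, row in enumerate(idx):
        votes = train_labels[row]
        # majority vote among the k nearest training objects
        vals, counts = np.unique(votes, return_counts=True)
        pred = vals[np.argmax(counts)]
        errors += pred != test_labels[i]
    return errors / len(test_labels)

# Toy example: 1-D training and test points, distances as dissimilarities
x_train = np.array([0.0, 0.2, 5.0, 5.2]); y_train = np.array([0, 0, 1, 1])
x_test  = np.array([0.1, 5.1]);           y_test  = np.array([0, 1])
DS = np.abs(x_test[:, None] - x_train[None, :])
print(knn_test_error(DS, y_train, y_test, k=3))  # 0.0 on this toy problem
```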
knndc k-nearest neighbor classifier for dissimilarity data
[DT,DS] = genddat(D,0.5);
W = DT*knndc;
e = DS*W*testc
Generate a training set and a test set from the given dismat D; the representation set equals the training set. Train the k-nearest neighbor classifier (optimizing k) on DT and test it on DS. DT and DS should be dissimilarity datasets based on the same representation set.
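The kind of split that genddat performs can be sketched in NumPy: the rows of a square dissimilarity matrix are divided into training and test objects, while the columns of both parts are restricted to the training objects, so that both parts share the same representation set. The function name `split_dismat` is invented for this sketch, and the per-class stratification that genddat applies is omitted for brevity.

```python
import numpy as np

def split_dismat(D, labels, train_frac=0.5, seed=0):
    """Split a square dissimilarity matrix into a training part DT
    (train x train) and a test part DS (test x train).  Columns of
    both parts refer to the same representation set: the training
    objects.  Objects are sampled at random (no stratification)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    perm = rng.permutation(n)
    n_train = int(round(train_frac * n))
    tr, te = perm[:n_train], perm[n_train:]
    DT = D[np.ix_(tr, tr)]   # training objects vs. training repset
    DS = D[np.ix_(te, tr)]   # test objects vs. the same repset
    return DT, DS, labels[tr], labels[te]

# Toy example: 6 objects, 50/50 split
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
y = np.array([0, 0, 0, 1, 1, 1])
D = np.abs(x[:, None] - x[None, :])
DT, DS, y_tr, y_te = split_dismat(D, y)
print(DT.shape, DS.shape)  # (3, 3) (3, 3)
```

The key point is that DS keeps only the columns belonging to the training objects: a test object is described by its dissimilarities to the representation set, never to other test objects.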
testpd Test Parzen classifier for dissimilarity data.
e = D*testpd(h,'loo') Compute for a square dismat the classification error of the Parzen classifier for a given value of the smoothing parameter (kernel width) h, using leave-one-out crossvalidation.
e = D*testpd Compute for a dissimilarity dataset of test objects (dissimilarities to a training set) the classification error using the Parzen classifier. The smoothing parameter is optimized by parzenddc.
parzenddc Parzen classifier for dissimilarity data.
[DT,DS] = genddat(D,0.5);
W = DT*parzenddc;
e = DS*W*testc
Generate a training set and a test set from the given dismat D; the representation set equals the training set. Train the Parzen classifier (optimizing h) on DT and test it on DS. DT and DS should be dissimilarity datasets based on the same representation set.
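The principle behind a Parzen classifier operating directly on dissimilarities can be sketched as follows: each class density is estimated by averaging Gaussian kernels of width h over the training objects of that class, and a test object is assigned to the class with the largest density. This is an illustrative NumPy sketch, not the parzenddc/testpd code; `parzen_test_error` and the toy data are invented for the example.

```python
import numpy as np

def parzen_test_error(DS, train_labels, test_labels, h):
    """Parzen classification error on a rectangular dissimilarity
    matrix DS (test x train) for a fixed kernel width h."""
    DS = np.asarray(DS, dtype=float)
    train_labels = np.asarray(train_labels)
    K = np.exp(-(DS ** 2) / (2.0 * h ** 2))  # Gaussian kernel values
    classes = np.unique(train_labels)
    # per-class mean kernel value (density estimate) per test object
    dens = np.stack([K[:, train_labels == c].mean(axis=1)
                     for c in classes], axis=1)
    preds = classes[np.argmax(dens, axis=1)]
    return np.mean(preds != np.asarray(test_labels))

# Toy example: same 1-D setting as above, kernel width h = 1
x_train = np.array([0.0, 0.2, 5.0, 5.2]); y_train = np.array([0, 0, 1, 1])
x_test  = np.array([0.1, 5.1]);           y_test  = np.array([0, 1])
DS = np.abs(x_test[:, None] - x_train[None, :])
print(parzen_test_error(DS, y_train, y_test, h=1.0))  # 0.0 on this toy problem
```

In DisTools the width h is either supplied (as in testpd(h,'loo')) or optimized on the training data, which is what parzenddc does; in this sketch it is simply a fixed parameter.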

PRTools User Guide
elements: datasets, datafiles, cells and doubles, mappings, classifiers, mapping types
operations: datasets, datafiles, cells and doubles, mappings, classifiers, stacked, parallel, sequential, dyadic
commands: datasets, representation, classifiers, evaluation, clustering and regression, examples, support
