In previous posts the use of features and pixels for representing objects numerically was discussed, and their pros and cons were sketched. Here a third alternative is considered: the direct use of dissimilarities. First, we summarize the conclusions on the use of features and pixels.
Features are well suited to represent objects by numbers if it is known where to look and if this knowledge can be crystallized into measurable quantities. These two conditions may raise problems. If it is not certain what the characteristic properties are, a number of irrelevant ones might be measured and, even worse, the essential ones may be overlooked. This is the main cause of the perceived or assumed class overlap in feature vector spaces. Consequently, a probabilistic approach is needed to build an optimal recognition system.
The second problem of the feature representation is that human knowledge is not naturally specified in measurable properties. We prefer qualities. We may describe somebody as being a distinguished person, as having a sharp look, or as uncertain. This works for communication between people, but a lot of work has to be done before such properties can be used to build an automatic recognition system.
If objects are sampled with sufficient resolution, such as pixels describing an image, then everything that is needed is covered, or so it seems. Classes no longer overlap, as objects belonging to different classes are different in this representation. However, besides the enormous dimensionality this approach requires, its essential defect is that the structure of the object is sacrificed to the desire for completeness. As a result, we get a heap of samples, but the relations and connections between them are lost.
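The loss of structure can be made concrete with a small sketch (my own illustration, not from the original post): once an image is flattened into a pixel vector, any fixed shuffling of the pixels leaves all vector-space distances unchanged, so a classifier working on the vector alone is blind to which pixels were neighbors.

```python
import numpy as np

# A tiny 8x8 "image": sampling at this resolution gives 64 numbers per object.
rng = np.random.default_rng(0)
image = rng.random((8, 8))

# The pixel representation flattens the image into one long vector.
vector = image.flatten()  # shape (64,)

# A fixed permutation of the pixel positions destroys all spatial structure.
perm = rng.permutation(64)

# Yet distances between consistently scrambled vectors equal distances
# between the originals: the representation ignores pixel neighborhoods.
other = rng.random(64)
d_original = np.linalg.norm(vector - other)
d_scrambled = np.linalg.norm(vector[perm] - other[perm])
print(np.isclose(d_original, d_scrambled))  # True
```

The same argument holds for any permutation-invariant distance, which is why the heap-of-samples view discards exactly the relations between the samples.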
An entirely different road is taken by abandoning the idea of describing objects in isolation, without reference to other objects. When we have a first, fresh look at a set of objects, the first thing that strikes us is the differences between them. Some objects are similar, others are very different. We compare. Only after that may the question arise of how to characterize these differences in terms of properties. Finally, and only when pressed to it, we may attempt to specify these characteristics as measurable properties. The observation of differences between the objects comes first; their specification follows later.
This has brought many researchers to approaches in which dissimilarities between objects are measured directly on the raw data and not on the basis of measured properties. The advantage over the feature representation is that every aspect of an object may be taken into account: no important properties can be forgotten. At the same time, this does not fragment the objects as the pixel representation does, since the definition of the dissimilarity measure can take the object as a whole, including its structure, into account.
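As a minimal illustration of a dissimilarity defined directly on raw data, consider the edit (Levenshtein) distance between character sequences. No features are extracted; the order of the symbols, i.e. the structure of the object, enters the measure itself. The word list below is made up for the example.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic programming over prefixes of the two strings:
    # prev[j] holds the distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# A small dissimilarity matrix between raw objects (here: words),
# obtained without any prior choice of measurable properties.
words = ["spine", "shine", "whine", "spain"]
matrix = [[edit_distance(u, v) for v in words] for u in words]
```

Such a matrix, rather than a feature vector per object, is the starting point of the dissimilarity representation.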
Originally, the use of object dissimilarities led to classification procedures that relied on template matching (matching with an "ideal" or an "average" object per class) or on the nearest neighbor rule. For a long time, more advanced learning approaches were not studied in relation to dissimilarities. Recent years have brought new advances, which will be described in future posts.
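These two classical procedures can be sketched side by side. The toy data and the Euclidean dissimilarity below are my own assumptions; any dissimilarity measure could take its place.

```python
import numpy as np

# Toy raw objects: short signals from two classes (assumed data).
class_a = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]])
class_b = np.array([[3.0, 3.1], [2.9, 3.0], [3.1, 2.9]])

def dissim(x, y):
    # Euclidean distance stands in for any dissimilarity measure.
    return np.linalg.norm(x - y)

# Template matching: one "average" object per class; assign the class
# whose template is least dissimilar to the new object.
templates = np.stack([class_a.mean(axis=0), class_b.mean(axis=0)])

def classify_template(x):
    return int(np.argmin([dissim(x, t) for t in templates]))

# Nearest neighbor rule: compare the new object with every training
# object directly and copy the label of the closest one.
train = np.vstack([class_a, class_b])
labels = np.array([0, 0, 0, 1, 1, 1])

def classify_1nn(x):
    return int(labels[np.argmin([dissim(x, t) for t in train])])

new_object = np.array([1.2, 1.0])
print(classify_template(new_object), classify_1nn(new_object))  # 0 0
```

Both rules use only dissimilarity values, never an explicit feature space, which is what makes them the natural ancestors of the dissimilarity representation.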