Classification Archives

Who invented the nearest neighbor rule?

The Nearest Neighbor (NN) rule is a classic in pattern recognition. It is so intuitive that no algorithm needs to be described: everybody who programs it obtains the same results. It is thereby very suitable as a base routine in comparative studies. But who invented it? Marcello Pelillo looked back in history and tried…

Read the rest of this entry
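As an aside, the rule's simplicity is easy to make concrete. Here is a minimal 1-NN sketch in Python; the data and function name are our own illustration, not taken from the post. Ties aside, any implementation of this rule makes the same decisions.

    import numpy as np

    def nn_classify(X_train, y_train, x):
        # assign x the label of its nearest training object (Euclidean distance)
        distances = np.linalg.norm(X_train - x, axis=1)
        return y_train[np.argmin(distances)]

    # tiny illustrative training set with two classes
    X_train = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
    y_train = np.array([0, 0, 1, 1])
    print(nn_classify(X_train, y_train, np.array([0.8, 0.9])))  # -> 1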

The error in the error

How large is the classification error? What is the performance of the recognition system? In the end this is the main question, in applications, in proposing novelties, and in comparative studies. But how trustworthy is the number that is measured? How accurate is the error estimate? The most common way to estimate the error of a…

Read the rest of this entry
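The "error in the error" can be made concrete for the simplest case, a test-set estimate. Assuming n independent test objects, the counted mistakes are binomially distributed, so an estimated error e carries a standard error of about sqrt(e(1-e)/n). A small sketch with illustrative numbers of our own choosing:

    import numpy as np

    rng = np.random.default_rng(3)
    true_error, n_test = 0.10, 200
    errors = rng.random(n_test) < true_error    # simulated per-object mistakes
    e_hat = errors.mean()                       # the measured error estimate
    se = np.sqrt(e_hat * (1 - e_hat) / n_test)  # its (estimated) standard error
    print(f"estimated error {e_hat:.3f} +/- {se:.3f}")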

Classifying the exception

Exceptions do not follow the rules. That is their nature. Humans know how to handle them. Can that be learnt? Learning a rule: one of the first real-world datasets I had to handle consisted of the examination results of the two-year propedeuse in physics. Students passed or failed depending on their scores for 15…

Read the rest of this entry

Recognition, belief or knowledge

Recognition systems have to be trained. An expert is necessary to act as a teacher. He has to know what is what. But… does he really know, or does he just believe that he knows? Or, does he know that he just believes? And, does he know how good his belief is? Nils Nilsson,…

Read the rest of this entry

Is the neural network model good for pattern recognition? Or is it too complex, too vague, too clumsy to be of any use in building applications or in developing understanding? The relation between the pattern recognition community and these questions has always been very sensitive. Its history is also interesting for observing how science may…

Read the rest of this entry

Peaking summarized

Pattern recognition learns from examples. Thereby, generalization is needed. This is only possible if the objects, or at least the differences between the pattern classes, have a finite complexity. That is what peaking teaches us. We will go once more through the steps. (See also our previous discussions on peaking, dimensionality problems and Hughes’ phenomenon.)…

Read the rest of this entry

Trunk’s example of the peaking phenomenon

In 1979 G.V. Trunk published a very clear and simple example of the peaking phenomenon. It has been cited many times to explain the existence of peaking. Here we summarize and discuss it for those who want a better idea of the peaking problem. The paper presents an extreme example. Its value…

Read the rest of this entry
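For readers who want to see the peaking effect appear, here is a minimal simulation sketch of Trunk's construction as it is usually stated: two Gaussian classes with identity covariance and means +mu and -mu, with mu_i = 1/sqrt(i), classified by the nearest-mean rule with the means estimated from a fixed training sample. Sample sizes and dimensionalities below are our own choices. The error first drops and then climbs back towards 0.5 as the dimensionality grows.

    import numpy as np

    rng = np.random.default_rng(0)

    def trunk_error(d, n_train, n_test=2000):
        mu = 1.0 / np.sqrt(np.arange(1, d + 1))        # true class means are +mu and -mu
        X_a = rng.normal(+mu, 1.0, size=(n_train, d))  # training sample, class A
        X_b = rng.normal(-mu, 1.0, size=(n_train, d))  # training sample, class B
        m_a, m_b = X_a.mean(axis=0), X_b.mean(axis=0)  # estimated class means
        w = m_a - m_b                                  # nearest-mean discriminant direction
        b = 0.5 * (m_a + m_b)                          # midpoint of the estimated means
        T = rng.normal(+mu, 1.0, size=(n_test, d))     # test objects, class A
        return np.mean((T - b) @ w < 0)                # by symmetry this is the overall error

    for d in (2, 10, 50, 200, 1000):
        print(d, trunk_error(d, n_train=20))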

A crisis in the theory of pattern recognition

The Russian scientist A. Lerner published in 1972 a paper under the title “A crisis in the theory of Pattern Recognition”. This is definitely a title that attracts the attention of researchers interested in the history of the field. What did he present as the crisis? The answer is surprising; in short, it…

Read the rest of this entry

The curse of dimensionality

Imagine a two-class problem represented by 100 training objects in a 100-dimensional feature (vector) space. If the objects are in general position (not lying by accident in a lower-dimensional subspace), then they still fit perfectly in a 99-dimensional subspace. This is a ‘plane’, formally a hyperplane, in the 100-dimensional feature space. We will argue that this…

Read the rest of this entry
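The geometric claim is easy to check numerically. A small sketch of our own (not from the post): 100 random points in a 100-dimensional space span a 99-dimensional affine subspace, and any two-class labelling of them can be fitted exactly by a linear discriminant, which is precisely why zero training errors prove nothing here.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 100))            # 100 objects in a 100-dimensional space
    rank = np.linalg.matrix_rank(X - X.mean(axis=0))
    print(rank)                                # 99: the affine span is a hyperplane

    y = rng.choice([-1.0, 1.0], size=100)      # an arbitrary two-class labelling
    A = np.hstack([X, np.ones((100, 1))])      # append a bias term
    w, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit a linear discriminant
    print(np.all(np.sign(A @ w) == y))         # True: zero errors on the training set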

Generalization by dissimilarities

Dissimilarities have the advantage over features that they potentially consider the entire object and may thereby avoid class overlap. Dissimilarities have the advantage over pixels that they potentially consider objects as connected totalities, whereas pixels tear them apart into thousands of pieces. Consequently, the use of dissimilarities may result in better classification performances…

Read the rest of this entry
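To make the idea concrete, here is a minimal sketch of a dissimilarity representation in the spirit described above. All details are illustrative choices of our own: Gaussian data standing in for real objects, Euclidean distance as the dissimilarity measure, a prototype set of 10 objects, and a simple linear discriminant in the resulting dissimilarity space.

    import numpy as np

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 8)),   # class 0, stand-in for real objects
                   rng.normal(1.5, 1.0, (50, 8))])  # class 1
    y = np.r_[np.zeros(50), np.ones(50)]

    R = X[rng.choice(len(X), size=10, replace=False)]  # prototype (representation) set

    def dis_rep(Z, R):
        # D[i, j] = Euclidean dissimilarity between object i and prototype j
        return np.linalg.norm(Z[:, None, :] - R[None, :, :], axis=2)

    D = dis_rep(X, R)  # every object becomes a vector of 10 dissimilarities
    # linear discriminant in the dissimilarity space (total-covariance form)
    m0, m1 = D[y == 0].mean(axis=0), D[y == 1].mean(axis=0)
    w = np.linalg.pinv(np.cov(D.T)) @ (m1 - m0)
    pred = (D @ w > 0.5 * (m0 + m1) @ w).astype(float)
    print("training accuracy:", np.mean(pred == y))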
