History Archives

Who invented the nearest neighbor rule?

The Nearest Neighbor (NN) rule is a classic in pattern recognition. It is so intuitive that there is hardly a need to describe the algorithm: everybody who programs it obtains the same results. It is therefore very suitable as a base routine in comparative studies. But who invented it? Marcello Pelillo looked back in history and tried…
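As an illustration of how little needs to be specified (a minimal sketch, not code from the post), the rule fits in a few lines of Python with NumPy: a test object receives the label of its nearest training object.

```python
import numpy as np

def nn_classify(X_train, y_train, x):
    """1-NN rule: return the label of the training object nearest to x
    (Euclidean distance). Illustrative sketch only."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

# Tiny hypothetical example: three labeled objects in 2-D.
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array([0, 0, 1])
print(nn_classify(X_train, y_train, np.array([4.0, 4.5])))  # → 1
```

Ties and the choice of metric are the only real design decisions; everything else is forced, which is exactly why independent implementations agree.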


Classifying the exception

Exceptions do not follow the rules. That is their nature. Humans know how to handle them. Can that be learnt?

Learning a rule

One of the first real-world datasets I had to handle consisted of the examination results of the two-year propedeuse in physics. Students passed or failed depending on their scores for 15…


A crisis in the theory of pattern recognition

In 1972 the Russian scientist A. Lerner published a paper under the title “A crisis in the theory of Pattern Recognition”. This is definitely a title that attracts the attention of researchers interested in the history of the field. What did he see as the crisis? The answer is surprising; in short it…


The curse of dimensionality

Imagine a two-class problem represented by 100 training objects in a 100-dimensional feature (vector) space. If the objects are in general position (not accidentally confined to a lower-dimensional subspace), they still fit perfectly in a 99-dimensional subspace. This is a ‘plane’, formally a hyperplane, in the 100-dimensional feature space. We will argue that this…
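The subspace claim can be checked numerically: the centered data matrix of n points has rank at most n − 1, so 100 points in general position span a 99-dimensional affine subspace. A minimal sketch with NumPy and random data (not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 100
X = rng.standard_normal((n, d))  # 100 objects in a 100-dimensional space

# Centering removes one degree of freedom: n points in general position
# span an (n - 1)-dimensional affine subspace, so the centered matrix
# has rank at most n - 1 = 99 (and exactly 99 for random data).
rank = np.linalg.matrix_rank(X - X.mean(axis=0))
print(rank)  # 99
```

With continuous random data the rank equals 99 with probability 1; only a degenerate (non-general-position) configuration would give a lower rank.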


Hughes phenomenon

The peaking paradox was heavily discussed in pattern recognition after Hughes published a general mathematical analysis of the phenomenon in 1968. It puzzled researchers for at least a decade. Peaking is a real-world phenomenon that has been observed many times. Although the explanation by Hughes seemed general and convincing,…


The peaking paradox

To measure is to know. Thereby, if we measure more, we know more. This is a fundamental tenet of science, as phrased by Kelvin. If we want to know more, or want to increase the accuracy of our knowledge, we should observe more. How to realize this in pattern recognition, however, is a perpetually recurring problem…


Personal history on the dissimilarity representation

Personal and historical notes

The previous post briefly explains the arguments for the steps we took between 1995 and 2005. From our present perspective it has become much clearer what we did in those years. Below, a few historical remarks sketch how research may proceed. It all started when…


The dissimilarity space – a step into the darkness

Features are defined such that they focus on isolated aspects of objects. They may neglect other relevant aspects, leading to class overlap. Pixels in images describe everything, but pixel-based vector spaces tear objects apart because their structure is not explicitly encoded in the representation. Structural descriptions are rich and describe the structure well, yet they…
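The core construction of a dissimilarity space can be sketched in a few lines (a hedged illustration with hypothetical names and random data, not the post's own code): each object is represented by its distances to a set of prototypes, so the prototypes span the axes of a new vector space in which any classifier can be trained.

```python
import numpy as np

def dissimilarity_representation(X, prototypes):
    """Represent each object by its Euclidean distances to the prototypes.
    Rows of the result are vectors in the dissimilarity space."""
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))        # 20 objects, 5 original features
R = X[:3]                               # 3 prototypes drawn from the data
D = dissimilarity_representation(X, R)  # 20 objects in a 3-D dissimilarity space
print(D.shape)  # (20, 3)
```

Euclidean distance is used here only for illustration; the appeal of the representation is that any dissimilarity measure, including non-metric ones computed from structural descriptions, yields a vector space in the same way.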


PRTools History

Scientists should build their own instruments. Or at least, be able to open, investigate and understand the tools they are using. If, however, the tools are provided as a black box, there should be a manual or literature available that fully explains the ins and outs. In principle, scientists should be able to create their…


Artificial Intelligence and Pattern Recognition

What is the difference between Artificial Intelligence (AI) and Pattern Recognition (PR)? Is one a subfield of the other, or do they stand next to each other? Historically the two fields are strongly connected. Meetings in the 1950s and 1960s attracted researchers from both domains, and for many the interest was so broad or not…
