Prototype Learning Schemes (PLS) started appearing over 30 years ago (Hart 1968, \cite{Hart68a}) to alleviate the drawbacks of nearest neighbour classifiers (NNC). These drawbacks include:

\be
\item high computation time,
\item large storage requirements,
\item the sensitivity of classification results to outliers,
\item degraded performance on data sets with non-separable and/or overlapping classes,
\item and low tolerance for noise.
\ee

To that end, all PLS endeavor to create or select a \emph{good} representation of the training data that is a mere fraction of its original size; in most of the literature this fraction is approximately 10\%. The aim of this work is to present solutions to these drawbacks of NNC. To accomplish this, the design, implementation and evaluation of CLIFF, a collection of new prototype learning schemes (CLIFF1, CLIFF2 and CLIFF3), are described. The basic structure of the CLIFF algorithms involves a ranking measure that ranks the values of each attribute in a training set. The values with the highest ranks are then used as a rule or criterion to select the instances/prototypes that obey it. Intuitively, each of these prototypes best represents the region or neighborhood it comes from, and so they are expected to eliminate the drawbacks of NNC, particularly 3, 4 and 5 above.
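The rank-then-select structure described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the actual CLIFF ranking measure is not given in this excerpt, so a simple class-conditional frequency score (how strongly an attribute value concentrates in a single class) stands in for it, and the selection rule shown keeps the rows that carry the top-ranked value on every attribute.

```python
from collections import Counter

def rank_values(rows, labels, attr):
    """Score each value of attribute `attr` by how strongly it
    concentrates in one class (a stand-in ranking measure)."""
    overall = Counter(row[attr] for row in rows)
    scores = {}
    for value, total in overall.items():
        per_class = Counter(lab for row, lab in zip(rows, labels)
                            if row[attr] == value)
        # 1.0 means this value occurs in only one class.
        scores[value] = max(per_class.values()) / total
    return scores

def select_prototypes(rows, labels):
    """Keep the rows whose value on every attribute is the
    top-ranked value for that attribute (the selection rule)."""
    n_attrs = len(rows[0])
    best = {}
    for a in range(n_attrs):
        scores = rank_values(rows, labels, a)
        best[a] = max(scores, key=scores.get)
    return [(row, lab) for row, lab in zip(rows, labels)
            if all(row[a] == best[a] for a in range(n_attrs))]

# Toy training set: 4 instances, 2 categorical attributes.
rows = [("sunny", "hot"), ("sunny", "mild"),
        ("rainy", "mild"), ("rainy", "hot")]
labels = ["yes", "yes", "no", "yes"]
prototypes = select_prototypes(rows, labels)
```

Requiring a match on every attribute is the strictest version of the rule; a practical scheme would likely relax it (e.g. require matches on some fraction of attributes) so the retained set is a tunable fraction of the training data.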

With 13 standard data sets from the UCI repository \cite{Frank+Asuncion:2010}, the results of this work demonstrate that CLIFF yields results that are statistically indistinguishable from those of NNC. Finally, in a forensic case study on a data set composed of the infrared spectra of the clear-coat layer of a range of cars, the performance analysis showed strong results, with nearly 100\% of the validation set matched to the correct target. Prototype learning is also applied successfully, reducing brittleness while maintaining statistically indistinguishable results on the validation sets.
