\section{Prototype Learning for Nearest Neighbor Classifiers}

Research in prototype learning is an active field of study \cite{Devi2002, Bezdek2001, Chang74, Dasarathy94, Li2009, Bezdek98, Cano2005, Garain2008, Veenman2005, García2008, Kohonen98, Kohonen90}. A review of the literature in this field reveals two categories of PLS: 1) instance selection and 2) instance abstraction. Instance selection involves selecting a subset of instances from the original training set as prototypes. Using what Dasarathy terms \emph{edit rules}, instance selection can take place in four different ways.

\be
\item incremental (CNN \cite{Hart68a})
\item decremental (RNN)
\item a combination of 1 and 2
\item border points, non-border points or central points
\ee

Instance abstraction involves creating prototypes by merging the instances in a training set according to pre-determined rules. For example, Chang \cite{Chang74} merges two instances if they have the same class, are closer to each other than to any other instance, and the result of merging does not degrade the performance of the nearest neighbor classifier (NNC). The following sections give a brief survey of PLS for both instance selection and instance abstraction.

\subsection{Instance Selection}

\subsubsection{Condensed Nearest Neighbor (CNN)}
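Hart's CNN rule \cite{Hart68a} grows the prototype set incrementally: starting from a single stored instance, any training sample that the current store misclassifies under the 1-NN rule is absorbed into the store, and sweeps repeat until a full pass adds nothing. A minimal sketch (the Euclidean metric and the seeding choice are illustrative assumptions):

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's CNN rule (minimal sketch): incrementally absorb every
    training instance that the current store misclassifies under 1-NN."""
    store = [0]                      # seed with the first instance (arbitrary)
    changed = True
    while changed:                   # sweep until a full pass absorbs nothing
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            # 1-NN prediction of X[i] using the current store
            d = np.linalg.norm(X[store] - X[i], axis=1)
            nearest = store[int(np.argmin(d))]
            if y[nearest] != y[i]:   # misclassified -> absorb it
                store.append(i)
                changed = True
    return store
```

At termination the condensed set is consistent: every training sample is correctly classified by its nearest stored prototype.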

\subsubsection{Reduced Nearest Neighbor (RNN)}
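RNN works in the opposite, decremental direction: each instance is tentatively removed, and the removal is kept only if every training sample is still correctly classified by the remaining prototypes under the 1-NN rule. A rough sketch (for simplicity this starts from the full training set; Gates' original formulation starts from the CNN result):

```python
import numpy as np

def reduced_nn(X, y):
    """Decremental RNN-style reduction (rough sketch): drop each
    instance whose removal keeps the training set consistent."""
    keep = list(range(len(X)))
    for i in range(len(X)):
        trial = [j for j in keep if j != i]   # tentatively remove instance i
        if not trial:
            continue
        # keep the removal only if 1-NN consistency on X survives
        consistent = True
        for k in range(len(X)):
            d = np.linalg.norm(X[trial] - X[k], axis=1)
            if y[trial[int(np.argmin(d))]] != y[k]:
                consistent = False
                break
        if consistent:
            keep = trial
    return keep
```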

\subsubsection{Minimal Consistent Set (MCS)}
In 1994, Dasarathy \cite{Dasarathy94} presented the MCS algorithm, a PLS designed to select exemplar samples of a training set with the aim of reducing the computational demands of the NNC while maintaining consistency, that is, all the original samples are correctly classified by the prototypes under the NN rule. Dasarathy's selection process is based on the concept of the Nearest Unlike Neighbor Subset (NUNS). He first defines the initial subset as the entire training set. Then, for each instance in the subset, the distance to its nearest unlike neighbor (NUN) is found, and all instances of the same class that lie closer than this NUN distance are stored and counted as votes. The instance with the most votes is designated a prototype, and every instance that contributed a vote to this prototype is removed from the remaining vote lists. The process continues until the subset can no longer be reduced.
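The voting loop can be sketched as follows. The coverage criterion used here (a candidate covers a same-class sample if it lies closer to that sample than the sample's own NUN) is one reading of the description above, and the helper names are illustrative, not Dasarathy's:

```python
import numpy as np

def mcs_prototypes(X, y):
    """Sketch of the MCS-style voting loop: the candidate covering the
    most still-uncovered samples becomes a prototype; repeat until every
    sample is covered. Assumes at least two classes and no identical
    points with different labels."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # NUN distance of every sample: distance to its nearest unlike neighbor
    nun = np.array([D[i][y != y[i]].min() for i in range(n)])
    # candidate i "covers" sample j if they share a class and i lies
    # closer to j than j's nearest unlike neighbor
    covers = [set(j for j in range(n)
                  if y[j] == y[i] and D[i, j] < nun[j]) for i in range(n)]
    uncovered = set(range(n))
    protos = []
    while uncovered:
        # most votes = largest coverage among still-uncovered samples
        best = max(range(n), key=lambda i: len(covers[i] & uncovered))
        protos.append(best)
        uncovered -= covers[best]   # covered samples stop voting
    return protos
```

Because every sample is covered by some prototype closer than its NUN distance, and any unlike-class prototype is at least that NUN distance away, the selected set classifies all training samples correctly under the NN rule.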


\subsection{Instance Abstraction}
\subsubsection{Chang}
Chang's work \cite{Chang74} deals with finding prototypes for nearest neighbor classifiers. The basic idea of his algorithm is to start with every sample in the training set as a prototype, then merge any two nearest prototypes of the same class, $p_1$ and $p_2$, into a new prototype $p^*$ as long as the recognition rate is not degraded. The new prototype $p^*$ can be formed by simply averaging $p_1$ and $p_2$, or by taking a weighted average of the two; $p^*$ carries the same class as $p_1$ and $p_2$. The merging process continues until the number of incorrect classifications of patterns in the data set starts to increase. \fig{chang} shows Chang's algorithm.

\begin{figure}[h!]
\small
\begin{center}
\begin{tabular}{ p{8cm} }
\hline
\begin{verbatim}
Step 1: Start with an arbitrary point tj
        in B* and assign it to A*
Step 2: For all points tk in B* such that
        class (tk) is not equal to class (tj),
        update bk to be the distance between
        tk and tj if this distance is smaller
        than the present bk. Otherwise, bk is
        unchanged.
Step 3: Among all the points in B*, find the
        point ts which has the smallest bs
        associated with it.
Step 4: If tj is not the nearest point ts such
        that the classes of tj and ts are
        different, go to Step 6. Otherwise
        continue.
Step 5: Check whether or not d(tj, ts) is less
        than bj. If no, go to Step 6. If yes,
        let bj=d(tj, ts) and continue.
Step 6: Let j=s, move ts from B* to A*, and go
        to Step 2 until B* is empty. When B* is
        empty, the final b1,...,bm are the
        desired ones.
\end{verbatim}
 \\ \hline
    \end{tabular}
\end{center}
\caption{Chang's algorithm for finding prototypes}\label{fig:chang}
\end{figure} 
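Stripped of the $A^*$/$B^*$ bookkeeping of \fig{chang}, the merging idea can be sketched as follows: repeatedly merge the closest same-class pair of prototypes into their unweighted average, keeping the merge only when training-set consistency survives. The variable names and the one-pair-per-round simplification are assumptions of this sketch, not Chang's exact procedure:

```python
import numpy as np

def chang_merge(X, y):
    """Sketch of Chang-style prototype merging: fuse the closest
    same-class prototype pair into its mean while every training
    sample remains correctly classified under 1-NN."""
    P, labels = X.copy(), y.copy()

    def consistent(P, labels):
        # all original samples must still be classified correctly
        for i in range(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)
            if labels[int(np.argmin(d))] != y[i]:
                return False
        return True

    merged = True
    while merged and len(P) > 1:
        merged = False
        # find the closest same-class prototype pair
        best = None
        for a in range(len(P)):
            for b in range(a + 1, len(P)):
                if labels[a] == labels[b]:
                    d = np.linalg.norm(P[a] - P[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
        if best is None:
            break
        _, a, b = best
        p_star = (P[a] + P[b]) / 2.0          # unweighted average merge
        trial_P = np.vstack([np.delete(P, [a, b], axis=0), p_star])
        trial_l = np.append(np.delete(labels, [a, b]), labels[a])
        if consistent(trial_P, trial_l):      # keep only non-degrading merges
            P, labels, merged = trial_P, trial_l, True
    return P, labels
```

On well-separated clusters this collapses each cluster toward a single averaged prototype while the 1-NN training accuracy stays at 100\%.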
