\begin{abstract}

Many solutions exist for the evaluation of forensic evidence. Most, however, fail to address the real problem of \emph{brittleness} that plagues these models, and only one method has been presented to reduce it. Brittleness here refers to what Ken Smalldon labelled the ``fall-off-the-cliff effect''. In this work, three causes of brittleness are identified:

\begin{enumerate}
\item A small error in data collection can lead to a spurious outcome from a forensic model.
\item The model may depend on statistical assumptions, such as assuming that the refractive indices of glass recovered from a crime scene or a suspect follow a normal distribution.
\item The model may require measured parameters from surveys to estimate the \emph{frequency of occurrence} of trace evidence in a population, a value used in models that follow the Bayesian approach.
\end{enumerate}

The aim of this work is to present solutions for these causes of brittleness, together with a method for reducing brittleness. To accomplish this, this paper describes the design, implementation, and evaluation of CLIFF, a forensic evaluation tool for measurements on trace evidence. CLIFF avoids the three causes of brittleness mentioned above and goes two steps further, by quantifying and reducing brittleness. A novel approach to quantifying brittleness is introduced, and prototype learning is used to reduce brittleness in CLIFF.

With a data set composed of the infrared spectra of the clear-coat layer of a range of cars, the performance analysis showed that CLIFF is accurate, with nearly 100\% of the validation set matching the correct target. Finally, prototype learning is applied successfully, reducing brittleness while maintaining statistically indistinguishable results on validation sets.


\end{abstract}
