The results here follow the procedure outlined in the experimental design, reporting the rankings of the different learner combinations.  The rankings are generated by first comparing each learner with every other learner across the 7 error measures using a paired Wilcoxon signed-rank test for significance.  If the test shows a significant difference, a descriptive statistic (the mean or median error) is compared, and the algorithm with the lower error is marked the winner while the other is marked the loser.  If both methods are equal, or the test does not show significantly different distributions, a tie is reported.
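The pairwise win/loss/tie decision described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the function name `compare`, the significance level of 0.05, and the use of the median as the tie-breaking statistic are assumptions for the example.

```python
# Sketch of one pairwise comparison: Wilcoxon signed-rank test, then a
# descriptive-statistic tie-break (median here; the study may use the mean).
import numpy as np
from scipy.stats import wilcoxon

def compare(errors_a, errors_b, alpha=0.05):
    """Return 'win' if a's errors are significantly lower than b's,
    'loss' if significantly higher, and 'tie' otherwise."""
    if np.array_equal(errors_a, errors_b):
        return "tie"  # identical samples: no test needed
    _, p = wilcoxon(errors_a, errors_b)
    if p >= alpha:
        return "tie"  # distributions not significantly different
    # Significant difference: lower median error wins.
    if np.median(errors_a) < np.median(errors_b):
        return "win"
    return "loss"
```

Repeating this comparison for all learner pairs and all 7 error measures yields the win/loss/tie counts that the rankings below are built from.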

The percentage of possible losses is computed for each dataset by comparing the number of times an algorithm lost against the number of times it could have lost.  With 96 algorithm combinations and 7 error measures, an algorithm can lose up to $95 \times 7 = 665$ times on a given dataset.  The losses on each dataset are summed and divided by the total number of possible losses across all datasets.  This measure is referred to as the percentage of losses, or loss percentage. \fig{DataLosses,AlgorithmLosses,AllLosses}
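The aggregation above amounts to a simple ratio, sketched below. The function name `loss_percentage` and its argument shape (a list of per-dataset loss counts for one algorithm) are illustrative assumptions.

```python
# Sketch of the loss-percentage computation described above.
# losses_per_dataset: one loss count per dataset for a single algorithm.
def loss_percentage(losses_per_dataset, n_algorithms=96, n_measures=7):
    # Each algorithm faces (n_algorithms - 1) opponents on each of
    # n_measures error measures, so 95 * 7 = 665 possible losses per dataset.
    max_per_dataset = (n_algorithms - 1) * n_measures
    total_possible = max_per_dataset * len(losses_per_dataset)
    return 100.0 * sum(losses_per_dataset) / total_possible
```

For example, an algorithm that never loses scores 0%, and one that loses every possible comparison on every dataset scores 100%.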

Algorithms with a lower loss percentage are ranked higher in the sort order.\fig{SortedAlgorithms}  Under this ranking, none\_SLReg is the best-performing algorithm while norm\_PlSR is the worst.  These rankings are consistent with an earlier experiment performed in a different simulation environment.


\begin{figure}[H]
\begin{center} \includegraphics[width=3in]{plots/rankings.png} \end{center}
\caption{All permutations of data miner and learner pairs, ordered from fewest losses to most.}\label{fig:SortedAlgorithms}
\end{figure}

\begin{figure}[H]
\begin{center} \includegraphics[width=3in]{plots/algorithmPlot.png} \end{center}
\caption{The 96 algorithms sorted by percentage of losses.}\label{fig:DataLosses}
\end{figure}

\begin{figure}[H]
\begin{center} \includegraphics[width=3in]{plots/datasetPlot.png} \end{center}
\caption{The 20 datasets sorted by percentage of losses.}\label{fig:AlgorithmLosses}
\end{figure}

\begin{figure}[H]
\begin{center} \includegraphics[width=3in]{plots/both.png} \end{center}
\caption{
The 20 datasets and 96 algorithms, with order determined by rankings in previous figures.}
\label{fig:AllLosses}
\end{figure} 



