\documentclass{sig-alternate}
\usepackage{times}
\usepackage{algorithm}
\usepackage{algorithmic}



%\usepackage[none]{hyphenat} 
\newenvironment{smallitem}
 {\setlength{\topsep}{0pt}
  \setlength{\partopsep}{0pt}
  \setlength{\parskip}{0pt}
  \begin{itemize}
  \setlength{\leftmargin}{.2in}
  \setlength{\parsep}{0pt}
  \setlength{\parskip}{0pt}
  \setlength{\itemsep}{0pt}}
 {\end{itemize}}

\newenvironment{smallenum}
 {\setlength{\topsep}{0pt}
  \setlength{\partopsep}{0pt}
  \setlength{\parskip}{0pt}
  \begin{enumerate}
  \setlength{\leftmargin}{.2in}  \setlength{\parsep}{0pt}
  \setlength{\parskip}{0pt}
  \setlength{\itemsep}{0pt}}
 {\end{enumerate}}


\usepackage[table]{xcolor}
\usepackage{url,graphicx}
\newcommand{\G}{\cellcolor[rgb]{0.8,0.8,0.8}}
\newcommand{\fig}[1]{Figure~\ref{fig:#1}}
\newcommand{\eq}[1]{Equation~\ref{eq:#1}}
\newcommand{\hyp}[1]{Hypothesis~\ref{hyp:#1}}
\newcommand{\tion}[1]{\S\ref{sec:#1}}

\newcommand{\bi}{\begin{smallitem}}
\newcommand{\ei}{\end{smallitem}}
\newcommand{\be}{\begin{smallenum}}
\newcommand{\ee}{\end{smallenum}}
\newcommand{\bd}{\begin{description}}
\newcommand{\ed}{\end{description}}

\begin{document}
%
% --- Author Metadata here ---
\conferenceinfo{CS736}{2010 Lecture Project}
\CopyrightYear{2010} % Allows default copyright year (20XX) to be over-ridden - IF NEED BE.
%\crdata{X-XXXXX-XX-X/XX/XX}  % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE.
% --- End of Author Metadata ---

%\title{An Empirical Study on the Influence of Evaluation Criteria used in Software Cost Estimation  }
\title{Investigation of the paper: A Method for Design and Performance Modeling of Client/Server Systems}

%\numberofauthors{4}
%\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
%\alignauthor
%Author1
% 2nd. author
%\alignauthor
%Author2
% 3rd. author
%\alignauthor 
%Author3
%\and  % use '\and' if you need 'another row' of author names
% 4th. author
%\alignauthor 
%Author4

%}


\numberofauthors{2} %  in this sample file, there are a *total*
% of EIGHT authors. SIX appear on the 'first-page' (for formatting
% reasons) and the remaining two appear in the \additionalauthors section.
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\\
\alignauthor  Ekrem Kocaguneli, Katerina Goseva\\
            \affaddr{CS \& EE, WVU, Morgantown, USA}\\
       \email{ekocagun@mix.wvu.edu, katerina.goseva@mail.wvu.edu }
}



\maketitle
\begin{abstract}
\textbf{Problem:} 
It is hard for performance engineers to design complex, distributed client/server (C/S) applications that meet their performance goals.
\\
\textbf{Aim:} 
When design and performance modeling activities are combined, they will help one another and lead to better design of C/S systems.
\\
\textbf{Method:} 
A performance engineering language developed by the authors maps the use cases to performance specifications and generates an analytical performance model of the system.
Service demands at servers, storage boxes and the network are also generated from the system specifications.
\\
\textbf{Results:} As an example, the query optimizer of a DBMS was modeled in order to generate more accurate I/O estimates.
A clear improvement was observed when the database configuration was changed in accordance with the suggestions of the proposed method.
\\
\textbf{Conclusion:} 
A method that integrates design and performance modeling activities was proposed and applied to a practical system.
The method enabled predictive performance models, and their parameters, to be derived early in the design.
\end{abstract}

% A category with the (minimum) three required fields
%\category{H.4}{Information Systems Applications}{Miscellaneous}
%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]
\category{Software Engineering}{Software Performance Engineering}{Client Server Systems}

\terms{Software Performance Engineering}

\keywords{Performance Models, Client Server Systems}

\section{Introduction}

This paper is organized as follows: I will follow the original paper~\cite{Menasce2000} and its experiments while discussing the topic, occasionally branching out to related papers in the literature.

When designing a distributed C/S system, many design decisions must be made at very early stages of development, such as:
\bi
\item Work distribution between client and server
\item Type of servers and clients
\item Distribution of functions among servers
\ei
Usually the impact of early design decisions on performance is not clear.
Furthermore, wrong decisions may result in an expensive re-design or re-write of the code, or may even doom an entire project.

The idea advocated in \cite{Menasce2000} is that the design and software performance engineering (SPE) activities can be integrated in an iterative manner.
The aim of that approach is to analyze the design from a performance standpoint and to consider alternative design configurations.
However, to carry out performance engineering activities at the design level (i.e. to model the application and access logic), we need the message communication and the functionality at the client and server sides.

As an example application, a relational-database-intensive C/S system is chosen and followed from beginning to end using the described strategy.
For the design of the project an object-oriented approach was chosen, in which use cases, a structural view (object models) and a dynamic view (collaboration diagrams) of the program were used.
The performance modeling of the project was carried out through a language called CLISSPE (Client/Server Software Performance Evaluation).
This language was also developed by the authors of the paper and details of the language are given in \cite{Menasce1997}.
As explained in \cite{Menasce1997}, the language was originally developed for the remodeling of a very large mission-critical system.
The use of CLISSPE in this work, however, aims at bringing the design and performance activities together by enabling software developers and performance analysts to work collaboratively on the use cases on the same platform.
In more detail, by using CLISSPE designers of the C/S platform specify objects (server, client, database, tables, transactions and networks), the relations between these objects and the transactions executed in the system. 
Then the CLISSPE compiler translates the design specifications into a queuing network and also derives the system parameters, so that the performance analyst can work on that network.
 
I will structure the rest of the paper as follows: in \tion{background}, some background information regarding SPE and the CLISSPE language is provided.
In \tion{application} some details of the application that motivated this study will be provided.
Then the steps of the proposed model will be given in \tion{model_steps}.
The analytic models that were utilized by the compiler of the proposed model are provided in \tion{analytic_models} and the parameter gathering activities of the reported project are summarized in \tion{parameter_gathering}.
Finally the reported results will be summarized in \tion{results} and a brief discussion of my own ideas regarding the approach as well as the paper will be presented in \tion{discussion}.


\section{Background of CLISSPE}
\label{sec:background}

To predict the performance of a new system that is under development, we need performance models.
The models chosen in this paper are queuing networks, which require two kinds of parameters:

\bi
\item workload intensity (e.g. arrival rates)
\item demand at each resource
\ei

The former (workload intensity) can be obtained from the performance requirements.
However, the latter (demand at each resource) is trickier, in the sense that it requires a deep understanding of the domain and of the software under development.
Performance analysts rely on the developers' knowledge to estimate the demands; however, if the developers are too busy to provide that information at the initial stages, or are unwilling to cooperate with the analysts, obtaining the demand information proves extremely difficult.
In the case of this paper, CLISSPE is capable of estimating those demands, provided that it is given the objects, mappings and transactions.
CLISSPE is reported to be composed of three parts:
\begin{enumerate}
\item \textbf{Declaration Part:} The following objects are declared in this part: Client, client type, server, server type, disk(s), disk type(s), database management system, database tables, network, network types, transactions, remote procedure calls and constants.
\item \textbf{Mapping part:} The objects defined in the previous step are mapped to one another (e.g. clients and servers to networks or DB tables to servers). 
\item \textbf{Transaction Specification Part:} This section explains the logic of each transaction. For example if it is a loop then the estimate of how many times it is supposed to be executed is given or if it is a branching operation, then the probability of every branch is given.
\end{enumerate}

The details of the language and the specifications of the aforementioned steps are given in \cite{Menasce1997a}.
I have also downloaded and gone through the specification document for the language and chosen an example for each step, to give a feel for the language and its syntax.
Below is an example statement for each of the steps explained above:
\begin{enumerate}
\item \textbf{Declaration Example:} 
\begin{verbatim}
# below is the syntax
network_type NetworkType bandwidth= number
          type= { ATM | Ethernet 
          | Fast_Ethernet | TokenRing 
          | FDDI | WAN } ;
# below is an example
network_type Ring16 bandwidth= 16.0 
          type= TokenRing ;
\end{verbatim}
\item \textbf{Mapping Example:} 
\begin{verbatim}
# below is the syntax
network NetworkName  type= NetworkType ;
# below is an example
network StationLAN  type= Ring16 ;
\end{verbatim}
\item \textbf{Transaction Specification Example:} 
\begin{verbatim}
# below is the syntax
transaction TransactionName rate= number ;
# below is an example
transaction assign rate= 0.02 ;
\end{verbatim}
\end{enumerate}
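To give a flavor of how such statements might be processed, below is a small, purely hypothetical sketch (the real CLISSPE compiler is described in \cite{Menasce1997a}; this is not its code) that splits one declaration into a kind, a name and key/value attributes:

```python
# Hypothetical helper, not part of CLISSPE: tokenize one declaration
# statement of the form shown above into (kind, name, attributes).
def parse_declaration(stmt):
    tokens = stmt.replace(";", " ").split()
    kind, name = tokens[0], tokens[1]
    attrs = {}
    i = 2
    while i < len(tokens):
        # attribute keys end with '=' in the statement, e.g. 'bandwidth='
        key = tokens[i].rstrip("=")
        attrs[key] = tokens[i + 1]
        i += 2
    return kind, name, attrs

kind, name, attrs = parse_declaration(
    "network_type Ring16 bandwidth= 16.0 type= TokenRing")
print(kind, name, attrs)
# network_type Ring16 {'bandwidth': '16.0', 'type': 'TokenRing'}
```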

\section{Application to be Modeled}
\label{sec:application}

Since the paper I am presenting is built around an application, a small section explaining that application is helpful.
The application is reported to be a recruitment and training system (RTS) that is used by a government agency.
Before the renewal, the application was a more than 20-year-old mainframe application with legacy code written in multiple languages.
Furthermore, due to its mainframe nature, it had a limited capability to scale.

The renewal process aims at transferring the RTS to a C/S architecture.
The new system will consist of several recruitment centers connected via a 10-Mbps Ethernet LAN.
While the recruitment centers may or may not have a local application and database server, the headquarters will have multiple application and database servers.
Furthermore, the current system uses the old virtual storage access method (VSAM), whereas the new system will use an ORACLE database.

\section{Steps of the Model}
\label{sec:model_steps}

The proposed model is composed of 8 steps. 
However, before discussing the steps of the model, some fundamental information regarding the notation and employed modeling language should be provided.

The Unified Modeling Language (UML) is used in this paper in conjunction with an object-oriented analysis and modeling method.
This method uses a combination of use cases, object modeling, statecharts and sequence diagrams.
The UML notation used in the paper is based on that specified in \cite{Booch2005, Gomaa2006}.
The functional requirements of the system are defined by use cases and the actors.
Structural modeling (the static view of the system) is accomplished via classes and class relations, whereas behavioral modeling (the dynamic view) is achieved via the interactions within use cases.
The collaboration of the objects when executing a use case is shown via collaboration and sequence diagrams.
Finally, the aspects of the system that depend on particular system states are shown via statecharts.

Now that I have summarized the methodology used, I will continue with the steps of the proposed iterative, integrated, object-oriented method for the design and analysis of C/S systems.
A figure describing the architecture of the steps explained in the following subsections is given in \fig{architecture}.

\begin{figure}[!h]
\begin{center} \includegraphics[width=0.4\textwidth]{architecture.png} \end{center}
\caption{The architecture of the integrated software design and analysis method for C/S systems.}\label{fig:architecture}
\end{figure}

\subsection{Step 1: Use Case Definition}
In this stage the functional requirements of the system are modeled via use cases.
A use case is basically an interaction scenario that defines the interaction between the user/actor and the system~\cite{Booch2005}.
The requirements are elicited first, with the use case treated largely as a black box, and are then mapped to actions and user types.
When we analyze separate use cases, we can see many common actions.
Abstract use cases let us factor out this common functionality and share it across different use cases.

A nice example given in the paper for the training system is included here as \fig{usecase}.
It shows the main user of the system (the personnel specialist) and the actions available to him for two different tasks (checking the skills of a new or an existing employee).
There are in fact two concrete use cases here, and because they share common abstract use cases the two can be integrated into one, as shown in \fig{usecase}.

\begin{figure}[!h]
\begin{center} \includegraphics[width=0.4\textwidth]{architecture.png} \end{center}
\caption{The use case example: checking the skills of an existing or a new employee.}\label{fig:usecase}
\end{figure}



\subsection{Step 2: Structural Model Definition}
The structural model describes the static structure of the system by modeling real-world objects as classes.
It defines the classes, their attributes and operations, and the interactions between the classes.

The first consideration of the structural model is the entity classes, which are mapped to the database.
Entity classes bear particular importance in the sense that they persist in the system for a very long time and serve a number of use cases.
Another example is the association class, which is used to define the associations between other classes.
For example, each skill may have sub-skills and each sub-skill may require one or more \textit{``course''} classes.
The structural model of the training system is provided in \fig{structural_model}.
Note that this static structure will be fundamental when we design the relations of the relational database in \tion{database-mapping}.

\begin{figure}[!h]
\begin{center} \includegraphics[width=0.4\textwidth]{structural-model.png} \end{center}
\caption{The structural model of the training system.}\label{fig:structural_model}
\end{figure}


\subsection{Step 3: Behavioral Model Definition}
Behavioral modeling captures the dynamic behavior of the system.
After the use cases are defined, the objects of every use case are identified, together with the message interactions between them.
The identified sequence of interactions between the objects is shown on an object collaboration diagram.

An example object collaboration diagram (OCD) for the \textit{``check skills''} use case is provided in \fig{collaboration_diagram}.
Note in \fig{collaboration_diagram} that there is a user interface object, which in reality may consist of several different GUI components.
However, in an object collaboration diagram the objects are modeled as application-level objects, hence the interface is shown as a single object.
Also note from \fig{collaboration_diagram} that each OCD has an associated message sequence.
This sequence is nothing more than a structured, ordered refinement of the activities that were listed in the use cases.

\begin{figure}[!h]
\begin{center} \includegraphics[width=0.4\textwidth]{collaboration-diagram.png} \end{center}
\caption{The object collaboration diagram for the \textit{``check skills''} use case.}\label{fig:collaboration_diagram}
\end{figure}


\subsection{Step 4: Mapping Structural Model to Relational Database}
\label{sec:database-mapping}
This stage of the method entails the mapping of the objects to particular relations in a relational database.
The relations between objects reflect the static structure of the system, which was identified during structural modeling.
The relations derived from the classes identified during structural modeling can be summarized as follows:
\bi
\item Applicant(\underline{SSN}, name, ...) 
\item Skill(\underline{SkillCode}, SkillName,...)
\item SkillPrerequisite(\underline{SkillCode}, \underline{PreReqSkillCode})
\item Course(\underline{CourseNum}, CourseName,...)
\item Section(\underline{CourseNum}, \underline{SectionNum},...)
\item ApplicantHasSkill(\underline{SSN}, \underline{SkillCode}, SkillValue,...)
\item Enrollment(\underline{CourseNum}, \underline{SectionNum}, \underline{SSN})
\ei
Note that in the above listing, the underlined field names are the primary keys.
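As an illustration (this SQL is mine, not from the paper), two of these relations could be rendered as tables with a composite primary key for the prerequisite relation; column lists are abbreviated, just as in the listing above:

```python
import sqlite3

# Illustrative rendering of two relations from the listing above.
# Column lists are abbreviated; the data rows are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Skill (
    SkillCode TEXT PRIMARY KEY,
    SkillName TEXT
);
CREATE TABLE SkillPrerequisite (
    SkillCode       TEXT REFERENCES Skill(SkillCode),
    PreReqSkillCode TEXT REFERENCES Skill(SkillCode),
    PRIMARY KEY (SkillCode, PreReqSkillCode)  -- composite key
);
""")
conn.execute("INSERT INTO Skill VALUES ('S1', 'Networking')")
conn.execute("INSERT INTO Skill VALUES ('S2', 'Routing')")
conn.execute("INSERT INTO SkillPrerequisite VALUES ('S2', 'S1')")
rows = conn.execute("SELECT * FROM SkillPrerequisite").fetchall()
print(rows)  # [('S2', 'S1')]
```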


\subsection{Step 5: Client Server Software Architecture Development}
In this step we assign the objects identified earlier to different C/S architectures.
Our aim is to identify a C/S structure that provides us with different configuration alternatives.
Later on, these alternatives will be used for separate performance evaluations.
For our problem at hand, for example, we can come up with two types of C/S configurations:
\bi
\item Two-tier architecture: the user interface as well as the application layer reside on the client side, whereas the database is kept on the server node.
\item Three-tier architecture: the user interface is kept on the client node, application functionality resides on an application server, and the third tier is the database server, which may be kept on a single node or distributed over several servers.
\ei
However, the actual mapping from architecture to system configuration will be done in software and hardware mapping step.

\subsection{Step 6: Transaction Specification}
The transactions specified in this step define the business logic of the C/S system.
The sequence of transactions written in CLISSPE is derived from the object collaboration diagram given in \fig{collaboration_diagram}.
The transaction specifications are coded in the CLISSPE language.

Another point worth mentioning is that a transaction has two parts: the client side and the server side.
With reference to the collaboration diagram of \fig{collaboration_diagram}, the client part of the transaction corresponds to the user interface object, whereas the server side corresponds to the control and entity objects.

A very simple example of a transaction is provided below.
This transaction basically shows the skills that a particular applicant is qualified to train for.
Note that since we need information from the server side, there is also a remote procedure call (RPC) to the server in that transaction.

\begin{verbatim}
transaction CheckSkills running_on client
  ! Actor enters applicant SSN
  ! check applicant skills
  rpc check_skills to_server ApplicServer;
  ! Display skills applicant is qualified
  ! to train for
end_transaction;
\end{verbatim}

\subsection{Step 7: Hardware/Software Mappings Definition}
Once we have defined the system architecture and the transactions, it is time to map the C/S architecture components to physical components (such as CPUs and networks).
The architecture components are also given particular characteristics, such as latencies and bandwidth requirements.
However, these mappings happen at the language level rather than by actually building the physical system.
The CLISSPE language allows its users to define physical components in the language and to specify particular characteristics of those components.
After the physical components are defined in the language, they are mapped to one another.
For example, server and client machines are tied to a network (note that servers, clients and networks are objects defined to represent the physical items).
Furthermore, DB tables are assigned to DBMS objects.
Below is a simple example for a DBMS system, where the server type is set to an IBM machine and the DBMS running on it is set to ORACLE with an 8,192-KByte buffer and two CPUs.

\begin{verbatim}
! this goes in the declaration section
server DBServer type= IBM-RS-6000-M43P133
dbms= Oracle DB_BuffSize= 8192 num_CPUs= 2
disk dsk01 type= ServerDisk
disk dsk02 type= ServerDisk;
network_type HQType bandwidth= 100
type= Fast_Ethernet;
network HQLan type= HQType;
\end{verbatim}

\subsection{Step 8: Performance Modeling and Assessment}
In this step, all the previously specified system configurations, mappings and transactions are used by the CLISSPE system.
The system is composed of the following parts: specifications, compiler, model parameters, model solver and model results (throughputs, response times, utilizations).
For ease of understanding, \fig{clisspe_system} presents these components in their order of execution.

\begin{figure}[!h]
\begin{center} \includegraphics[width=0.4\textwidth]{clisspe-system.png} \end{center}
\caption{The CLISSPE system, with its components shown in their order of execution.}\label{fig:clisspe_system}
\end{figure}

Now that we are aware of the organization of the CLISSPE system, I will introduce the notation that we are going to use throughout the rest of the paper to calculate the performance of the system.
After specifying the classes of the system as well as their interactions with one another, the CLISSPE compiler will come up with a multiclass open queuing network (QN).
The QN is represented as $Q=\left( R, W, \lambda, D \right)$, where $R$ is the set of resources, $W$ the set of workload classes, $\lambda$ a vector of transaction arrival rates, one per class, and $D$ a matrix of service demands.
The matrix $D=\left[ D_{i,r} \right]$ is a $|R| \times |W|$ matrix of service demands, where $D_{i,r}$ is the service demand of workload class $r$ at resource $i$.
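The paper does not spell out the solver's equations, but for an open multiclass QN the standard operational formulas are $U_i = \sum_r \lambda_r D_{i,r}$ for utilizations and $R_r = \sum_i D_{i,r}/(1-U_i)$ for response times at queuing resources. A minimal sketch in those terms, with made-up numbers and without any claim to be the actual CLISSPE solver:

```python
# Sketch of the standard open multiclass QN solution, following the notation
# in the text: lam[r] = arrival rate of class r, D[i][r] = service demand of
# class r at resource i. Not the actual CLISSPE solver; numbers are made up.
def solve_open_qn(lam, D):
    n_res, n_cls = len(D), len(lam)
    # Utilization of resource i: U_i = sum_r lam_r * D_{i,r}
    U = [sum(lam[r] * D[i][r] for r in range(n_cls)) for i in range(n_res)]
    assert all(u < 1.0 for u in U), "unstable: some resource has U >= 1"
    # Response time of class r: R_r = sum_i D_{i,r} / (1 - U_i)
    R = [sum(D[i][r] / (1.0 - U[i]) for i in range(n_res))
         for r in range(n_cls)]
    return U, R

# One transaction class at 0.5 tps over two resources (e.g. CPU and disk):
U, R = solve_open_qn(lam=[0.5], D=[[0.4], [0.6]])
print(U)               # [0.2, 0.3]
print(round(R[0], 3))  # 1.357
```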

\subsubsection{Computing Service Demands} 
Given the notation above, in this section we continue with the calculation of the service demands in $D$.
Before starting the computation we define $s$, a statement of a transaction associated with class $r$.
Keep in mind that $s$ may be associated with various CPU and DB demands.
The general demand formula is given in Equation~\ref{equ:general_demand}:

\begin{equation}
D_{i,r} = \sum^{}_{s \in S_{i,r}}{n_s \times p_s \times D_{i,r}^{s} }
\label{equ:general_demand}
\end{equation}

In Equation~\ref{equ:general_demand}, $S_{i,r}$ is the set of all statements that contribute to the service demand of class $r$ at resource $i$, $n_s$ is the number of times statement $s$ is executed and $p_s$ is the probability of that statement being executed.
Finally, $D_{i,r}^{s}$ is the average service demand at resource $i$ due to a single execution of $s$.
The values $n_s$ and $p_s$ are determined by loops and by conditional statements (if, else-if, switch), respectively.
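Equation~\ref{equ:general_demand} is simple enough to sketch directly; the statement tuples below (iteration counts, probabilities, per-execution demands) are hypothetical numbers of my own:

```python
# Sketch of the general demand formula: each statement s contributes
# n_s * p_s * D^s_{i,r} to D_{i,r}. Tuples are (n_s, p_s, per-execution
# demand); all numbers are made up for illustration.
def service_demand(statements):
    return sum(n * p * d for (n, p, d) in statements)

stmts = [
    (10, 1.0, 0.002),  # statement inside a loop executed ~10 times
    (1, 0.3, 0.050),   # conditional DB call taken with probability 0.3
]
print(round(service_demand(stmts), 6))  # 0.035
```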

\subsubsection{Modeling the database statements}
When $s$ is a database access statement such as $select$, the calculation of $D_{i,r}^{s}$ becomes somewhat complicated.
In this sub-section, for each database access statement $s$, the associated number of I/Os, the disk time and the CPU time are calculated.
Note that these calculations depend heavily on how the DBMS handles $select$ statements.
For example, the existence of indexes in the DBMS, the size of the buffer and the database access method (hashing, B-trees) all influence the calculation.
For general concepts on DBMS systems, indexing and access strategies, some of the references given in the paper are \cite{Swami1994, Murray1995, Yao1979}.

The complexity increases further when we consider the possible access plans that are built on top of these different access methods.
Some of the possible access plans are:
\bi
\item Table space scan, which scans all the rows of a table
\item Indexed scan, where one or more indices are present to ease the select statement
\bi
\item Single table single index
\item Single table multiple index
\item Two table joins
\item More than two table joins
\ei
\ei

The assumption made in the CLISSPE language when calculating the cost of an access plan is that the CPU cost is linearly related to the number of I/Os generated by the access plan.
Therefore, the general formula may be summarized as follows:

\begin{equation}
C_{CPU}(AccessPlan) = a \times N_{I/O}(AccessPlan) + b
\label{equ:access_plan_cost}
\end{equation}

In Equation~\ref{equ:access_plan_cost}, the number of I/O accesses depends on the access plan, and the linear relationship between the I/O operations and the CPU cost is defined by the constants $a$ and $b$.
The constant $b$ represents the start-up CPU cost and the constant $a$ the CPU cost per I/O.

The selection of the optimum access plan for calculating the I/O and CPU costs of a $select$ statement is done by the CLISSPE compiler.
By trying all the possible access plans, the compiler chooses the one with the lowest cost.
Note that, depending on how the DBMS was specified in the CLISSPE language, the available access plans may change or be restricted.
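The selection step can be sketched as follows; the plan names, I/O counts and the constants $a$ and $b$ are invented for illustration (the compiler's real plan enumeration and cost estimates are not given in the paper):

```python
# Sketch of the compiler's plan selection: apply Equation (2),
# C_CPU = a * N_IO + b, to each candidate access plan and keep the cheapest.
# Plan names, I/O counts and the constants a, b are hypothetical.
def pick_plan(plans, a, b):
    cost = {name: a * n_io + b for name, n_io in plans.items()}
    best = min(cost, key=cost.get)
    return best, cost[best]

plans = {"table_space_scan": 5000, "single_index": 40, "multi_index": 25}
best, c = pick_plan(plans, a=0.1, b=2.0)
print(best, c)  # multi_index 4.5
```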

\subsubsection{Performance Parameter Gathering}
A considerable number of parameters regarding the performance calculations have to be known before the assessment of the new C/S system can start.
These parameters have to be estimated because, at the design level, the system is not yet operational and the actual numbers cannot be known.
Among the estimated quantities is the frequency of execution of each use case; in our case, this is the number of times an existing or a new employee is processed through the system over a certain amount of time.

If the system is a renewal of a legacy system, then the logs of the old system can be adapted to the new system and the use case frequencies observed there can be taken as the baseline for the transaction frequencies.
To test the system under higher workloads, a multiplier greater than 1 can be applied to the frequencies elicited from the legacy system.

Not only the frequencies of the use cases, but also other parameters such as loop execution counts or branch probabilities, need to be estimated.
For those parameters, the legacy system can again be used (if retrospective data was collected) or domain experts may be consulted (when no retrospective data exists).
The paper reports that for the particular application on which this research was based, all of these methods were employed to different degrees.
In a similar manner, for the size of the dataset both investigation of the legacy system and interviews were used.
The CLISSPE system allows its users to code those parameters as constants and probabilities.
For the reported system, the authors arrived at 65 constants and 35 probabilities.

\subsubsection{Assessment of the Performance}
As noted earlier, the system whose performance analysis was performed in this paper is a recruitment system of a US government agency.
Rather than providing all the details of the calculation for the new system, the authors chose to include a summary table for the new-applicant transaction.
This table is comprehensive to some extent, in the sense that it summarizes the response times of the whole system under 8 different configurations.
The summary table is given in \fig{response_values}.
The authors explain that this particular transaction was chosen because it is the most critical and most demanding transaction of the system.
In other words, if the system is capable of handling that transaction, then it can handle less demanding jobs more easily.


\begin{figure}[!h]
\begin{center} \includegraphics[width=0.4\textwidth]{response-values.png} \end{center}
\caption{The response times for the transaction: Check new applicant.}\label{fig:response_values}
\end{figure}

To understand \fig{response_values}, we first need to explain the notation used.
In \fig{response_values} there are 8 scenarios.
Each scenario is represented in one of two forms: $mD(p)nA(q)$ or $Comb(p)$.
In $mD(p)nA(q)$, $m$ is the number of database servers in the scenario and $p$ is the number of processors in every database server, whereas $n$ is the number of application servers and $q$ is the number of processors in each application server.
In $Comb(p)$, the application and database servers are on the same machine, which has $p$ processors.
Lastly, notice that some of the scenarios have the suffix SC, which stands for selective caching, i.e. some of the database tables are stored entirely in main memory.
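As a quick reading aid, the scenario labels can be decoded mechanically; this is a hypothetical helper of my own, with the grammar inferred from the text rather than taken from CLISSPE:

```python
import re

# Hypothetical decoder for the scenario labels mD(p)nA(q)[SC] and Comb(p)[SC];
# the grammar is inferred from the text, not part of CLISSPE.
def parse_scenario(label):
    m = re.fullmatch(r"(\d+)D\((\d+)\)(\d+)A\((\d+)\)(SC)?", label)
    if m:
        return {"db_servers": int(m.group(1)), "db_cpus": int(m.group(2)),
                "app_servers": int(m.group(3)), "app_cpus": int(m.group(4)),
                "selective_caching": m.group(5) == "SC"}
    m = re.fullmatch(r"Comb\((\d+)\)(SC)?", label)
    if m:
        return {"combined_cpus": int(m.group(1)),
                "selective_caching": m.group(2) == "SC"}
    raise ValueError("unrecognized scenario label: " + label)

print(parse_scenario("3D(2)2A(1)SC"))
print(parse_scenario("Comb(6)SC"))  # {'combined_cpus': 6, 'selective_caching': True}
```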

Now that we know how to read \fig{response_values}, we can start interpreting the results.
For each scenario there are 4 different multipliers.
A multiplier of $1$ means that the new C/S system carries the same workload as the old system, whereas a multiplier of $2.5$ means that the new system was tried with $2.5$ times the old workload.
As we can see from \fig{response_values}, the first three scenarios ($1D(2)1A(1)$, $1D(2)1A(1)SC$, $3D(2)1A(1)$) have far too high response times and are not usable.

We are now left with $5$ workable scenarios.
Among these, we first compare $3D(2)2A(1)SC$ to the scenarios $1D(4)1A(2)SC$ and $1D(3)1A(1)SC$, to observe the effect of having a distributed C/S system.
We can observe that, for each multiplier, the distributed scenario $3D(2)2A(1)SC$ has about $20\%$ higher response times than the centralized scenarios.
My first guess for that difference was the latency due to the network.
However, the authors report a different reason: the $20\%$ extra response time is mainly attributed to the fact that the processors in the $3D(2)2A(1)SC$ scenario are about $67\%$ slower than those in the other scenarios.

As reported in the paper, the CLISSPE system also allows users to determine the bottleneck, i.e. where most of the time is spent.
As expected, for the scenarios where the database server(s) and the application server(s) were connected via a LAN ($3D(2)2A(1)SC$, $1D(4)1A(2)SC$ and $1D(3)1A(1)SC$), the bottleneck was the LAN.
In the decentralized scenario ($3D(2)2A(1)SC$) the average time spent in the LAN was $43\%$ of the total response time, whereas for the centralized versions ($1D(4)1A(2)SC$ and $1D(3)1A(1)SC$) it was $36\%$.
On the other hand, since the combined scenarios ($Comb(6)SC$ and $Comb(4)SC$) have no LAN connection, their response times are much lower than the rest.
For the combined scenarios, since the transaction described is very CPU intensive, the bottleneck device becomes the CPU of the combined server.


\section{Results}
\label{sec:results}

\section{Discussion}
\label{sec:discussion}


\bibliographystyle{abbrv}
\bibliography{myref}
\end{document}
