Gmail - Tim Menzies

Decision Re: TSE-2008-03-0107
9 messages

tse@computer.org  Wed, Jul 9, 2008 at 5:45 AM
To: tim@menzies.us
Cc: tse@computer.org

RE: TSE-2008-03-0107, "Identifying the 'Best' Software Prediction Models Requires a Combination of Methods"
Manuscript Type: Regular

Dear Dr. Menzies,

We have completed the review process of the above referenced paper that was submitted to the IEEE Transactions on Software Engineering. Enclosed are your reviews. We hope that you will find the editor's and reviewers' comments and suggestions helpful. I regret to inform you that based on the reviewer feedback, Associate Editor Dr. Mark Harman could not recommend publishing your paper to our Editor-in-Chief. We hope that this decision does not deter you from submitting to us again. Thank you for your interest in the IEEE Transactions on Software Engineering.

Thank you,
Mrs. Joyce Arnold
on behalf of Dr. Mark Harman
Transactions on Software Engineering
10662 Los Vaqueros Circle
Los Alamitos, CA 90720 USA
tse@computer.org
Phone: +714.821.8380
Fax: +714.821.9975

***********
Editor Comments

Editor: 1

Comments to the Author:

Dear Tim,

I am afraid that the referees have some serious concerns about this work. The concerns are not the kind that can be addressed by a revision (even a major one) and so I have to recommend rejection of the paper. The reviewers are all expert in the field and they have provided quite a lot of feedback on the work. I hope that you will find this helpful and that you will continue to consider TSE as a potential submission venue.

Regards,
Mark.

***********************
Reviewer Comments

Reviewer: 1

Recommendation: Author Should Prepare A Major Revision For A Second Review

Comments:

This paper studies 158 variants of the COSEEKMO effort estimation workbench and applies them to 19 (partially overlapping) COCOMO style data sets.
Applying sound empirical and statistical techniques, the authors conclude that 4 out of the 158 methods outperform the rest.

While the paper is technically sound, my reservation is on "What do the results really mean?" My answer is: they show that under the given specific circumstances, these 4 winner methods are enough. However, I would expect different results if different methods and different data sets were studied.

More specifically, the authors should address the following questions for a revised version of the paper:

1. The title is not clear. The search is not for the best prediction models (as there is no universally best). At best, the results are for specific sets of COCOMO type of data.

2. If the search is for the best method, from any consideration you need to specify the criteria you are looking at. There are three criteria selected, but to what extent are these criteria representative and sufficient?

3. In the same vein, how representative are the 158 methods taken from the COSEEKMO effort estimation workbench?

4. This paper was motivated by the conclusion INSTABILITY of prediction models across data sets. The authors should give a rigorous definition of this term. I believe that this property is related to the methods under consideration. So, how would the authors argue if the four winner methods were not part of the set, and only the remaining 154 methods were looked at? Maybe the results would be quite different then.

5. From a practical perspective, I would not rely on these results for any new data sets. The external validity of the results is limited, which should be made more explicit (the authors claim more or less external validity at least for COCOMO type of data).

6.
The term "feature" to me means "A set of logically related requirements that provide a capability to the user and enable the satisfaction of business objectives [Wiegers '03]". Not sure why feature is used in the paper to describe attributes.

7. There is no discussion relating both the ranking method and the results to the instability problem. We do not know whether the four "best" models obtained through the ranking method are conclusion-stable or not. If the four "best" models are not conclusion-stable, please explain how the conclusion was obtained on page 17 regarding "we were able to find stable conclusions across different data sets", and on page 18 regarding "Here, we found conclusion stability after ...". If the four "best" models are conclusion-stable as a result of the ranking, then obviously we can conclude from the above result that the majority of the prediction models (154 out of 158) are not conclusion-stable across the 19 sub data sets. This conclusion is actually consistent with that from Shepperd and Kadoda [10].

8. In fact, Robert Glass has discussed this problem in general for any technology to match a problem domain in: "Matching methodology to problem domain", Communications of the ACM, 47 (5), 19-21. According to that, any technology needs to be customized for a specific domain. In the case of software prediction, any prediction model must be customized to the data sets. In other words, there are no general prediction models that are suitable to any data sets with conclusion stability, because the prediction models are data set-specific, as indicated in the conclusion of the paper.

9. Section B regarding the Experimental Procedure does not present the procedure at all. Instead, a procedure is given in section V regarding Results.
The authors are suggested to define the experimental procedure explicitly in a manner that other researchers can replicate the experiment by following this procedure.

How relevant is this manuscript to the readers of this periodical? Please explain under the Public Comments section below.: Relevant

Is the manuscript technically sound? Please explain under the Public Comments section below.: Partially

1. Are the title, abstract, and keywords appropriate? Please explain under the Public Comments section below.: No

2. Does the manuscript contain sufficient and appropriate references? Please explain under the Public Comments section below.: References are sufficient and appropriate

3. Please rate the organization and readability of this manuscript. Please explain under the Public Comments section below.: Readable - but requires some effort to understand

Please rate the manuscript. Explain your rating under the Public Comments section below.: Fair

Reviewer: 2

Recommendation: Reject

Comments:

I found the title of the paper to be quite interesting but unfortunately the paper did not live up to my expectations. The topic of software prediction is of interest to many readers of TSE, so the paper is within the scope of the journal. However, the main problem that I have with this paper is emphasized by the title, which uses the word "method" incorrectly, in my opinion. I have read the authors' 'comments on reviewer feedback', so I know that this terminology was also questioned by a previous reviewer. I am not convinced by the authors' reply to this query. I must re-iterate the previous reviewer's comment: 'just changing some parameters in a process does not create a new method'. I would go further and suggest that the paper is actually comparing a number of techniques or models, not methods.
To quote the Concise Oxford Dictionary: a method is "a special form of procedure". Further evidence for the use of the word "model" is given in the reference list: many of the cited papers use the term "prediction models". Shepperd and Kadoda use the term "prediction techniques" (reference 10). None of the references use the word "method" in this context. COCOMO has always been referred to as a cost estimation model. Although this may seem like a small point I think it is fundamental as it relates to the importance of the contribution: a combination of methods (such as linear regression and case-based reasoning) would be a breakthrough, whereas a minor improvement on a model has less significance. In my opinion this paper represents the latter rather than the former. This lax use of terminology is evident throughout the paper. The introduction uses the terms "methods", "techniques", "models" and "paradigms" interchangeably.

I was surprised to see the tutorial introduction to COCOMO included in section II. The features and local calibration for COCOMO were all performed in a specific application domain (aerospace) and this also reduces the generalizability of this work. The authors do discuss this as a source of sampling bias in section VII but dismiss this concern on the grounds that other researchers have also used this data previously. I am also concerned about the use of just two data sets and the overlap in the data subsets. Again this was raised by reviewer 2 in the previous review but is dismissed out of hand by the authors.

In summary, it does not seem to me that the significance of the paper is sufficient to merit publication in TSE.

How relevant is this manuscript to the readers of this periodical?
Please explain under the Public Comments section below.: Interesting - but not very relevant

Is the manuscript technically sound? Please explain under the Public Comments section below.: Yes

1. Are the title, abstract, and keywords appropriate? Please explain under the Public Comments section below.: No

2. Does the manuscript contain sufficient and appropriate references? Please explain under the Public Comments section below.: References are sufficient and appropriate

3. Please rate the organization and readability of this manuscript. Please explain under the Public Comments section below.: Easy to read

Please rate the manuscript. Explain your rating under the Public Comments section below.: Poor

Reviewer: 3

Recommendation: Author Should Prepare A Major Revision For A Second Review

Comments:

The paper reports on an experimental evaluation of a range of estimation techniques over two sources of data. The authors have clearly received some criticism regarding the use of these two sources (I'm carefully avoiding the use of the word "sets"!), and have attempted to address this in their comments. However this is an issue that cannot be avoided, not only because of the limited range of data, but also because of its relevance. Most of this data was collected from projects 20 to 30 years ago when features such as development strategies, design methods, programming languages and project scale (to name just a few) were very different to what they are now. I deeply appreciate the problems of acquiring data sets to carry out this kind of work, but if it was possible for the authors to also apply the evaluation on a more recent set of data it would improve the paper significantly.
The remainder of my criticisms are more minor, and mainly relate to the readability of the paper:

The introduction is quite terse and fairly detailed and does not introduce the paper well. For example, in the third paragraph the concept of stacked meta-learning schemes is raised with very little supporting context. And when discussing the Shepperd and Kadoda results the use of random seeds is mentioned, again without explanation of what is being seeded. Shortly afterwards, the COCOMO features are touched upon, but without introducing the reader to what these features are (they are finally defined in Figure 1 in section 2). Straight after this we're into pruning... The introduction needs to be rewritten to introduce the context of the work, the problem being addressed, the solution adopted, and a summary of the results.

In the related work section the authors focus on two methods (COCOMO and CBR) but without providing a justification for this. Also, the description of CBR is very brief and doesn't actually describe what it is. Furthermore I don't understand the reason for the two bullet points on lines 21-24 of page 6 - they seem to be a non sequitur to the preceding statements.

Section III is described as a "Brief Tutorial..." This is a rather generous description.

Typos on page 7: The sentence beginning "Minimum..." (line 27) should begin "The minimum...", and on line 35 "automatics" should be "automatic".

On line 35 COCOMIN is described as "far less thorough" - than what? And how is the subset of features that it explores selected? In section B (Experimental Procedure), it would be helpful for the reader who is unfamiliar with such strategies to explain the rationale for creating subsets of the data sources. The section also mentions that the features used are described in the appendix - this is not the case (but should be).
A similarly inaccurate reference to material in the appendix appears in section C (lines 46-47 of page 10), where it is suggested that the effort multipliers and scale factors appear there (in the guise of Figure 2). Again this is not the case. Later on in the same paragraph the authors mention using perturbations of Boehm's values - what is the rationale behind this and how were the perturbations formed?

Figure 5 on page 10 is confusing - the entry for method b uses COCOMIN, which is a column pruner I believe, but the cell for column pruning is crossed and there is an entry for the row pruning. The converse is the case for entry d. I would have expected these to be the other way round, but perhaps I'm misunderstanding something.

Section D (and the end of section C) talks about "preferred methods". In what sense are they preferred and why? When describing method "e" the authors state that domain-type knowledge is ignored. This implies that in the other methods such information is not ignored. If this is the case, can the authors confirm this and then explain how this is taken into account, and what the limits of this are (e.g. over what features does it extend, and how are values of these features grouped)? At the end of section D it is stated that details of these methods are contained in the appendix - again this is not the case.

In the results section, as I understand it Figures 6, 7 and 8 all show the number of losses, and none of them show the size of the error. Is there also value in considering the size of the error in addition to the number of losses to provide further insight into the accuracy and stability of a model? Also, why are random seeds just included for figure 8?
There is a typo on line 42 of page 12: "NASA9" -> "NASA93".

The results from the experiments are interesting and the authors make solid recommendations on the basis of this (but there is also the issue raised at the start regarding the relevance of the data). It is curious that all the attempts to improve the methods did so badly, and some further insights into this, and the results as a whole, would be welcome in the Discussion section - method "i" is discussed briefly but the other results are glossed over.

One small final issue is the question of the title. As the authors admit, the results cannot be extrapolated to non-COCOMO style data sets, so perhaps this limitation should be reflected in the title.

How relevant is this manuscript to the readers of this periodical? Please explain under the Public Comments section below.: Very Relevant

Is the manuscript technically sound? Please explain under the Public Comments section below.: Appears to be - but didn't check completely

1. Are the title, abstract, and keywords appropriate? Please explain under the Public Comments section below.: No

2. Does the manuscript contain sufficient and appropriate references? Please explain under the Public Comments section below.: References are sufficient and appropriate

3. Please rate the organization and readability of this manuscript. Please explain under the Public Comments section below.: Readable - but requires some effort to understand

Please rate the manuscript. Explain your rating under the Public Comments section below.: Fair

Reviewer: 4

Recommendation: Reject

Comments:

Overall I think there are a number of problems with the basic approach.

1. General approach

I am not sure that automating a large number of different techniques to throw at a dataset is a good idea.
It is not clear to me that each technique is used appropriately, i.e. is used in a manner consistent with expert use of the method. For example, using regression should involve:

i. Determining whether a data transformation is necessary - not just to allow for non-linearity but to allow for heteroscedasticity (i.e. errors being proportional to size rather than independent) and to reduce the impact of outliers.

ii. Using stepwise regression when there are many variables (which is referred to as column pruning in this paper) - for which there exist well-defined algorithms in the statistical literature and statistical software packages.

iii. Performing sensitivity analysis such as checking for high leverage data points (for which there are well-defined statistical procedures) and removing data points that de-stabilize the model (which is referred to as row pruning in the paper).

iv. Determining whether the dataset is suitable for regression. The model would only be considered as a candidate prediction model if the model was statistically significant.

Using standard statistical packages, the above steps are based on well-defined statistical tests, not ad hoc "learning" procedures.

Also see the recent TSE paper by Keung et al. for statistical tests and sensitivity methods for analogy (CBR): http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/trans/ts/&toc=comp/trans/ts/5555/01/e1toc.xml&DOI=10.1109/TSE.2008.34.

The critical point is that if the alternative methods are not "best practice" for each method, the comparisons are invalid.

2. Use of the COCOMO dataset

It is hardly a surprise that the COCOMO-based estimation methods work well on the COCOMO-1 dataset. However, I think that says more about the dataset than the efficacy of the cost model.
The COCOMO dataset was used to help develop the COCOMO model. To quote Software Engineering Economics (1st ed), page 493: "The three-mode model was then developed and calibrated to the 56 projects, resulting in additional relatively small corrections to the effort multipliers as well."

Furthermore, the COCOMO dataset published on pages 496 & 497 of Software Engineering Economics includes adjustment values that are inconsistent with the COCOMO multipliers that you quote in Figure 2 of your paper. For example, 13 projects have TIM values that are not consistent with the COCOMO model. Other variables are not as bad as the TIM variable but most have one or two projects with inconsistent values. This implies "over-fitting" of the input variables to the expected output.

The method by which the dataset was collected is not defined but clearly a large number of values of the adjustment factors (and size) were assessed post-hoc. This would automatically lead to better predictions than the COCOMO model would produce in a genuine estimation role. The post-hoc data collection process may have collected the output value (effort) at the same time as the input values were identified, leading to the problem of over-fitting.

I have no information about the details of the other dataset - but if it was collected post-hoc as a COCOMO validation dataset it may suffer from the same problems as the COCOMO-1 dataset.

3. Non-paired tests

No matter what variables (or projects) are included in a model developed from the training data set, the model is always used to predict the same outcome variable (i.e. effort in the test data set). Therefore for a specific test data set all models will be used to produce an effort output for each project in the validation dataset.
This suggests that the outcomes from different models are not independent (because they predict the same variable, based on the same test data). It is not clear that under these conditions the assumptions underlying the Mann-Whitney test are fulfilled.

How relevant is this manuscript to the readers of this periodical? Please explain under the Public Comments section below.: Relevant

Is the manuscript technically sound? Please explain under the Public Comments section below.: No

1. Are the title, abstract, and keywords appropriate? Please explain under the Public Comments section below.: Yes

2. Does the manuscript contain sufficient and appropriate references? Please explain under the Public Comments section below.: References are sufficient and appropriate

3. Please rate the organization and readability of this manuscript. Please explain under the Public Comments section below.: Difficult to read and understand

Please rate the manuscript. Explain your rating under the Public Comments section below.: Poor

Tim Menzies  Wed, Jul 9, 2008 at 6:12 AM
Reply-To: tim@menzies.us
To: Omid Jalali, Jairus Hihn, Daniel Baker, Karen Lum

total and complete thumbs down.

it was the big picture they didn't get. worse, even cocomo seems dull to them.

it is like the international effort estimation community has no interest in commenting on standard industrial methods. all the methodological trivia mentioned below seems ignorant to me of common industrial practice. cocomo-based estimation is common++ especially in government. that work often uses the historical cocomo data. there is no other source. yet where is the audit for that work?

back to the drawing board. begin one week of denial and anger followed by some action, later on.

t

[Quoted text hidden]

--
Reality is nothing but a collective hunch.
-- Lily Tomlin

Tim Menzies  Wed, Jul 9, 2008 at 7:59 AM
Reply-To: tim@menzies.us
To: Omid Jalali, Jairus Hihn, Daniel Baker, Karen Lum

further to the below

i'm open to ways to rewrite the paper. one thing we could do is a paper called "how not to do effort estimation". e.g. manic exploration of n data miners is not useful (two exceptions: row/column pruning)

t

[Quoted text hidden]

Daniel Baker  Wed, Jul 9, 2008 at 5:56 PM
To: tim@menzies.us

Don't feel bad Tim. They are selective and sometimes they won't like it. Sorry I haven't written that journal article for you yet. Work and other things tend to get me distracted and I forget about it. Give me a call this Sunday and we can talk and I'll start writing it.

-Dan

[Quoted text hidden]

Omid Jalali  Wed, Jul 9, 2008 at 10:32 PM
To: tim@menzies.us
Cc: Jairus Hihn, Daniel Baker, Karen Lum

I don't know where the problem is. For one, I didn't know that 7 reviewers (3+4) would think so badly about this paper (at least, they all agree on something though!). One way to put it is to say "they aren't ready for this" but it is just a basic set of experiments comparing some methods. How can you reject a comparison unless you find something systematically wrong with the way they are compared? We are not exactly trying to turn the whole effort estimation world upside down. (We are just showing a way we think is good.) And if anyone is saying that our results are not new (which we know is not true) and not worthy of publication, then how come they allow the publication of Foss et al. and the second paper based on that (or vice-versa) in TSE within 2 (3?) years with so much similarity and not a worthy conclusion but negative ones? Anyhow, I am wondering if another journal would be a possible solution. Maybe if others see these results in another journal, they could use these results somehow in a positive way (and at the end, publish something in TSE based on our paper!!!!).
Omid

[Quoted text hidden]

Karen Lum  Thu, Jul 10, 2008 at 11:34 AM
To: Omid Jalali, tim@menzies.us
Cc: Jairus Hihn, Daniel Baker

I agree, a different journal is a good idea. Just which one is the right one is the difficult problem.

-karen

[Quoted text hidden]

************************************
Karen T. Lum
JPL Mission Systems Concepts Section
Systems Analysis and Modeling Group
4800 Oak Grove Drive
Mail Stop 301-180
Pasadena, CA 91109
phone: (818) 354-5036
fax: (818) 393-9815
************************************
DISCLAIMER: All personal and professional opinions presented herein are my own and do not, in any way, represent the opinion or policy of JPL, NASA or Caltech.

Tim Menzies  Thu, Jul 10, 2008 at 12:01 PM
Reply-To: tim@menzies.us
To: Karen Lum
Cc: Omid Jalali, Jairus Hihn, Daniel Baker

empirical software engineering should be our next port of call. maybe a paper not crowing about our success but lamenting how little success we've had. "Strawman studies in effort estimation". despite five years of work looking at ways to improve effort estimation, found very little improvement over methods that are not decades old. caution to other researchers in the field - need to benchmark the more complex against the simplest

something like that.

or does someone else have other suggestions?

t

[Quoted text hidden]

Jairus Hihn  Thu, Jul 10, 2008 at 12:18 PM
To: tim@menzies.us, Karen Lum
Cc: Omid Jalali, Daniel Baker

are the reviewers different? if they are then just send the same paper

[Quoted text hidden]

--
Jairus M Hihn, Ph.D.
Manager, SQI Measurement, Estimation & Analysis Element (MESA)
Mission Systems Concepts Section
Jet Propulsion Laboratory/California Institute of Technology
ms 301-285
4800 Oak Grove Drive, Pasadena, CA 91109
Phone (818) 354-1248 Cell (818) 726-1676 Fax (818) 393-4100
*****************************************************************************
The weaker the data available upon which to base one's conclusions, the greater the precision which is quoted to give the data authenticity.
- Augustines Law XXXV (modified)
*****************************************************************************
DISCLAIMER: All personal and professional opinions presented herein are my own and do not, in any way, represent the opinion or policy of JPL, NASA or Caltech.

Tim Menzies  Thu, Jul 10, 2008 at 7:11 PM
Reply-To: tim@menzies.us
To: Jairus Hihn, Karen Lum, Omid Jalali, Daniel Baker

Could be the same population. So we have to address the population, somehow

T

[Quoted text hidden]

--
Sent from Gmail for mobile | mobile.google.com