  <item>
    <category rank="1000" >lecture</category>
    <category rank="1000" >week8</category>
     <id>172</id> 
     <title>
From Cog. Psych. to Rules and Beyond
     </title>
     <pubdate secs="1204597282" around="Mar08">Mon Mar  3 18:21:22 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?172</link>
     <guid>http://menzies.us/csx72/?172</guid>
     <description><![CDATA[<p>
 
<h2>Models of expert and novice behavior</h2>
<p><em>(Reference: 
	Science, 20 June 1980,
	Vol. 208. no. 4450, pp. 1335 - 1342
	"Expert and Novice Performance in Solving Physics Problems",
	Jill Larkin, John McDermott, Dorothea P. Simon, and Herbert A. Simon. 
	)</em>
<p>Much of the early AI research was informed by a 
<a href="http://en.wikipedia.org/wiki/Working_memory#Working_memory_capacity">
cognitive
psychology model of human expertise</a>. The model has two parts:
<ul>
	<li> LTM: A large long term memory (100,000s of patterns)
	<li>STM: A (much) smaller short term memory (seven, plus/minus two)
</ul>
<p>Reasoning, according to this model, is a 
<em>match-act</em> cycle:
<ol>
	<li>Contents of the STM match patterns in the LTM
	<li>LTM patterns have actions that, when they <em>fire</em>,
		rewrite the STM contents
	<li>Goto 1.
</ol>
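<p>The cycle above can be sketched in a few lines of Python (a toy illustration; the rules and facts are invented for this example, not taken from any particular production system):

```python
# Toy match-act cycle. STM is a set of symbols; LTM is a list of
# (pattern, actions) pairs. A pattern "matches" when it is a subset of
# the STM; firing adds its actions to the STM. Repeat until quiescence.
LTM = [
    ({"hungry", "has-food"}, {"eat"}),
    ({"eat"}, {"full"}),
]

def match_act(stm, ltm, max_cycles=100):
    for _ in range(max_cycles):
        fired = False
        for pattern, actions in ltm:            # 1. match STM against LTM
            if pattern <= stm and not actions <= stm:
                stm |= actions                  # 2. firing rewrites the STM
                fired = True
        if not fired:                           # 3. goto 1, until nothing fires
            break
    return stm

stm = match_act({"hungry", "has-food"}, LTM)    # gains "eat", then "full"
```

Note how the second rule only fires after the first has rewritten the STM: chains of inference emerge from repeated matching, not from any explicit control flow.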
<p>
	In this model:
	<ul>
		<li>
		experts are experts because of the powerful patterns they have
		laboriously learned and added to their long term
		memory.
		<li>
		novices are novices because they clog short term memory 
		with irrelevant goals
		<li>
		experts dodge that since their LTM patterns tell them
		what is relevant
		</ul>
		<p>Q: How do experts know what is <em>relevant</em>? 
		<ul>
			<li>A: feature extractors- gizmos that experts learn that
			let them glance at a situation and extract the salient details.
			<li>E.g. chess experts can reproduce from memory all the
			positions on a chess board
			<li>But if you show the same experts a gibberish game (one where all the rules are broken- e.g. white pawns on white's back row) then 
			they can't reproduce the board.
			<li>Why? Well, when they glance at a board, their
			feature extractors fire to offer them a succinct summary.
			Gibberish boards baffle the feature extractors, so
			no summary
		</ul>

		<h2>Enter rule-based programming</h2>	
		<p>Much, much work has gone into mapping this cognitive
	psychological insight into AI programs.
	<h3>Success story #1: PIGE (not well known)</h3>
	<p><em>(Reference:

		"An Expert System for Raising Pigs" by T.J. Menzies, J. Black, J. Fleming and M. Dean. The First Conference on Practical Applications of Prolog, 1992. Available from 
		<a href="http://menzies.us/pdf/ukapril92.pdf">  
			http://menzies.us/pdf/ukapril92.pdf</a>  
		)</em>
	<p>PIGE: Australia's first exported rule-based expert system.
	<p>Written by me in 1987; it captured the expertise of pig nutritionists
	in a few hundred rules.
	<ul>
		<li>PIGE would find factors to control, try them on a simulator of a farm, then adjust its factors accordingly.
		<li>Here's PIGE out-performing the experts:
		<img width=300 align=top src="http://menzies.us/csx72/doc/kbs/pige.png">
		<li>Why did PIGE succeed so well? Limits to short-term memory. PIGE had none. The humans, on the other hand, had the standard human limitations
	</ul>
	<h3>Success story #2: MYCIN (famous)</h3>
	<p><em>(
  V.L. Yu and L.M. Fagan and S.M. Wraith and
                  W.J. Clancey and A.C. Scott and J.F. Hanigan and
                  R.L. Blum and B.G. Buchanan and S.N. Cohen,
   1979,
   Antimicrobial Selection by a Computer: a
                  Blinded Evaluation by Infectious Disease
                  Experts,
   Journal of the American Medical Association,
   vol. 242, pages
   1279-1282)</em>
	<p>Given patient symptoms, what antibiotics should be prescribed?
	<p>Approx 500 rules.
	<p>Extensively studied:
	<img align=top src="http://www.cs.colostate.edu/~howe/EMAI/ch3/img27.gif" 
	width=500>
	<p>Compared to humans, performed very well:
	<img align=top width=300 src="http://www.cs.colostate.edu/~howe/EMAI/ch3/img28.gif">
	<p>Why did it perform so well?
	<ul><li>Not because of the power of its algorithm (recursive descent through the rules- backward chaining)
		<li>But only because of the knowledge in its rules
		<li>In this case, knowledge, not algorithms, is power.	
	</ul>
	<h3>Success story #3: XCON and VT (very famous)</h3>
	<p>John McDermott, DEC, 1980s. Two systems:
	<ul><li>XCON, early 1980s, auto-configured DEC computers for customers.
At its peak, 8,000 to 12,000 rules.
Maintained by a team of 20 developers. Saving DEC (approx) $20M/year.
<ul>
<em>(Reference:
J. McDermott,
   1993,
  R1 ("XCON") at age 12: lessons from an elementary
                  school achiever,
   Artificial Intelligence,
  59,
  241-247)</em>
</ul>
<li>VT, mid to late 1980s, designer of elevators, 7000 rules.
<ul>
<em>(Reference:
S. Marcus and J. McDermott,
  1989,
  SALT: A Knowledge Acquisition Language for
                  Propose-and-Revise Systems,
   Artificial Intelligence,
   39,
   1,
   pages 1-37.)</em>
</ul>
</ul>
<h2>The First AI Bubble</h2>
<p>With strong theoretical support and excellent industrial
case studies, the future for AI seemed bright.
<p>Begin the AI summer (mid-1980s). Just like the Internet bubble of 2000:
fledgling technology, over-hyped, immature tool kits.
<p>And after the summer, <a href="http://menzies.us/csx72/doc/intro/ai-business99.pdf">came the fall</a>. Enough said.
<p>Happily, then came the 1990s, data mining, and results from real-time AI planners, and stochastic theorem provers.
<p>By 2003, I could say I was an AI-nerd and people would not run away.
<h2>Inside Rule-based programming</h2>
<p>Here's a <a href="http://menzies.us/csx72/doc/kbs/rules.pdf">good introduction on rule-based programming</a>. Read it (only pages 1,2,3,4,5,6,7)
carefully to learn the meaning of:
<ul>
	<li>Conditions,
	<li>Actions
	<li>Backward chaining
	<li>Forward chaining
	<li>Working memory
</ul>
<p>Backward chaining supports <em>how</em> and <em>why</em> queries:
<ul>
	<li>How did you prove this goal?
	<li>Why are you trying to prove this goal?
</ul>
<p>Forward chaining supports only <em>how</em>, not <em>why</em>.
<ul>
	<li>Explain.
	</ul>
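<p>To make the how/why distinction concrete, here is a toy backward chainer (a Python sketch; the rules and facts are invented for illustration). For each goal it records <em>how</em> it was proved (its subproofs) and <em>why</em> it was attempted (the stack of goals above it):

```python
# Toy backward chainer. RULES are (conditions, goal) pairs; FACTS need
# no proof. prove() returns a proof tree: the "how" field answers "how
# did you prove this goal?", and the "why" field (the goal stack at the
# time) answers "why are you trying to prove this goal?".
RULES = [
    (["has-feathers", "lays-eggs"], "bird"),
    (["bird", "flies"], "can-migrate"),
]
FACTS = {"has-feathers", "lays-eggs", "flies"}

def prove(goal, why=()):
    if goal in FACTS:
        return {"goal": goal, "how": "fact", "why": list(why)}
    for conds, head in RULES:
        if head == goal:
            subproofs = [prove(c, why + (goal,)) for c in conds]
            if all(subproofs):
                return {"goal": goal, "how": subproofs, "why": list(why)}
    return None   # goal is unprovable

proof = prove("can-migrate")   # succeeds via "bird", then the base facts
```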
	<p>Define match, resolve, act.
	<ul><li>With respect to the BAGGER system, define with examples
		conflict resolution using:
		<ul><li>Rule ordering
			<li>Data ordering
			<li>Size ordering
			<li>Specificity ordering
		</ul>
		(Note: BAGGER is a toy example of how XCON configured computers.)
	</ul>
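<p>For reference, the <em>resolve</em> step can be sketched as follows (a toy Python sketch, not BAGGER itself; the rules and recency tags are invented, and only three of the orderings above are shown):

```python
# Toy resolve step. Both rules match working memory, forming the
# "conflict set"; different ordering strategies pick different winners.
rules = [
    {"id": 1, "conds": {"a"}},        # listed first, less specific
    {"id": 2, "conds": {"a", "b"}},   # listed second, more specific
]
wm = {"a": 0, "b": 1}                 # fact -> recency (higher = newer)

conflict_set = [r for r in rules if r["conds"] <= set(wm)]

by_rule_order  = min(conflict_set, key=lambda r: r["id"])           # first listed rule wins
by_specificity = max(conflict_set, key=lambda r: len(r["conds"]))   # most conditions wins
by_data_order  = max(conflict_set,                                  # newest matching data wins
                     key=lambda r: max(wm[c] for c in r["conds"]))
```

Here rule ordering picks rule 1, while specificity and data ordering both pick rule 2: the choice of strategy, not the rules themselves, decides what fires next.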
	<h2>Rules: the Real Story</h2>
	<h3>Rules- the good news</h3>
	<p>Some nice advantages for software engineering:
	<ul>
		<li>Uniform representation simplified tool construction:
		<ul><li>Rules are rules;
			<li>Feature extractors could be written as rules that 
				run before anything else runs;
				<li>Conflict resolution operators could be written as rules. </ul>
			<li>For example, CMU's SOAR language added a whole meta-layer called
				<a href="http://menzies.us/csx72/doc/kbs/yost99.pdf">TAQL</a>
				to handle:
				<ul>
					<li>Within each <em>problem space</em>:
					problem space proposal, problem space initialization;
					<li>For each <em>state</em> in a problem space:
						state initialization, state elaboration,
						state evaluation
						<li>For each <em>operator</em> that can
						jump you between states: operator proposal,
						operator selection, operator implementation,
						how to handle operator failure, operator evaluation
				</ul>
				And guess what- all this was coded with rules!
		</ul>
	</ul>
	<h3>But Rules Have Problems</h3>
	<p>As rule-based programs became more elaborate, and their support tools
	more intricate, the likelihood that they had any connection to human
	psychology became less and less.
	<p>As we built larger and larger rule bases, other non-cognitive 
	issues became apparent:
	<ul>
		<li>Speed: within match-resolve-act, 80% of the time 
		is spent in <em>match</em>. 
		<li>Maintenance:  any rule can change the working memory
			to affect any other rule. 
			XCON was the commercial AI success story
		   	of the early 1980s and  the maintenance
			nightmare of the mid-1980s.
			<ul><li>
		What works for 400 rules does not work for 8000
		<li>
		At its peak, DEC claimed XCON saved them $20M/year, 
			but if any of XCON's 20 developers left,
			ouch!
		</ul>
	</ul>
	<P>Note that both these problems come from the
	global nature of match: all rules over all working memory.
	<ul><li> A search by all rules
			through all possible working memory contents is <em>slowwww</em>.
			<li>When humans maintain rules, understanding the connections
			between rules is difficult.
		</ul>

		<h3>Improving Rule-based Programming</h3>
		<h4>Solution #1: throw away rules.</h4>
<p>
		Go to simpler representations,
			e.g. state machines (used in gaming, next lecture).
			<h4>Solution #2: the RETE network</h4>
<p>As used in many rule-based
			tools including OPS5,
		   ART, JESS.	
			Compile all the rules into a big network
			wiring all the conditions together.
			<ul>
				<li>If 10 rules make the same test, represent the
					test once in the network.
					<li>New facts get dropped into
					the top, bubble over the 
					network, and matched actions pop out at the 
					bottom.
					<li>A little complex to implement (to say the least):<br>

					<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/9/92/Rete.JPG/800px-Rete.JPG">
						<img width=500 src="http://upload.wikimedia.org/wikipedia/commons/thumb/9/92/Rete.JPG/800px-Rete.JPG"></a>
					<ul>
						<li>Alpha nodes: matching within one test (e.g. a &le; 7);
						<li>Beta nodes: matching between tests (joins across
						tables)
						<li>But what are the "Type", "select", and "dummy" nodes?
					</ul>
					<li>RETE supports standard resolution operators:
					e.g.:
					<ul>
						<li>Specificity: number of links in the network;
						<li>Data ordering: when advancing over the net,
						work left to right, most recent to oldest
					</ul>
<li>Note: RETE changes some constant time factors in the search
but does not tame the fundamental problem of all rules looking in 
all places all the time.
					</ul>
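<p>The sharing idea at the heart of RETE can be sketched as follows (a toy Python sketch; real RETE also needs beta/join nodes and per-node memories, which are omitted here, and the rules are invented):

```python
# Toy alpha network: each distinct test is compiled once, shared by every
# rule that uses it. New facts are "dropped in the top" and routed to the
# rules whose tests they satisfy.
rules = {
    "r1": [("color", "red"), ("size", "big")],
    "r2": [("color", "red"), ("size", "small")],
}

alpha = {}                                     # test -> rules fed by it
for rid, tests in rules.items():
    for test in tests:
        alpha.setdefault(test, set()).add(rid)
# ("color", "red") appears ONCE in the network, though two rules use it.

def assert_fact(fact, matched):
    """Drop a new fact in the top; record which rules saw a test succeed."""
    for rid in alpha.get(fact, ()):
        matched.setdefault(rid, set()).add(fact)

matched = {}
assert_fact(("color", "red"), matched)   # feeds both r1 and r2
assert_fact(("size", "big"), matched)    # feeds only r1
ready = [rid for rid, facts in matched.items()
         if len(facts) == len(rules[rid])]    # rules with all tests satisfied
```

Only the two new facts were examined; nothing was re-matched against the whole rule base. That is the constant-factor saving the notes describe, though, as noted above, every rule is still wired into one global network.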
					<h4>Solution #3: divide the global space</h4>
<p>
					If searching/understanding all is too much,
					find ways to build the whole from local parts.
					Have separate rule bases for each part.
					<p>Q: How to divide the system?
					<ol><li>
						A1: Using domain knowledge; e.g. from 
						<a href="http://menzies.us/csx72/doc/kbs/tamb95_1.pdf">[Tambe91]</a>:
						<a href="http://menzies.us/csx72/doc/kbs/tambe.png"><img  
							width=300
						src="http://menzies.us/csx72/doc/kbs/tambe.png"></a>
						<li>A2:  using background knowledge of problem
						solving types. Add tiny specialized rule bases to implement
knowledge-level operators.
						<p>E.g. here's Clancey's abstraction of MYCIN:
<img align=top src="http://menzies.us/csx72/doc/kbs/mycinPsm.png">
						<p>And here's his abstraction of an electrical
						fault localization expert system:
<img align=top src="http://menzies.us/csx72/doc/kbs/sophie.png">
				<p>Note that the same abstract problem solving method occurs in both.
<p>A whole mini-industry arose in the late 1980s, early 1990s defining libraries of supposedly  reusable problem solving methods.
<p>e.g. Here's "diagnosis"
<img align=top src="http://menzies.us/csx72/doc/kbs/diagnosis.png"><br>
<p>Here's "monitoring":
<img align=top src="http://menzies.us/csx72/doc/kbs/monitor.png"><br>
Note that you could write and reuse rule bases to handle select, compare, specify.
<p>And there's some evidence that this is useful. Marcus and McDermott built 
rule bases containing 7000+ rules via <em>role-limiting methods</em>. That is,
once they knew their problem solving methods, they wrote specialized editors that only
let users enter the control facts needed to run those methods.
<ul>
<li>If you can't use it...
<li>... don't ask for it.
</ul>

<p>For a catalog of problem solving methods, see
<em>Cognitive Patterns</em>
By Karen M. Gardner
(contributors James Odell, Barry McGibbon),
1998, Cambridge University Press
</ol>
<h4>Patch in context</h4>

<p>We study AI to learn useful tricks. Sometimes, we also learn something about
people. Here's a radically different, and successful, method from the above. What does it tell
us about human cognition?
<p>
<em>Ripple-down rules</em>
is  a maintenance technique for rule-based
programs initially developed by
Compton.
Ripple-down rules are best understood by comparison with standard
rule-based programming. In standard rule-based programming, each
rule takes the form
<pre> rule ID IF condition THEN action</pre> 
where <em>condition</em> is some test on the <em>current case</em>. In
the 1970s and early 1980s rule-based systems were hailed as the cure
to the ills of software and knowledge
engineering. Rules are useful, it was
claimed, since they
represent the high-level logic of the system
expressed in a simple form that can be rapidly modified. 
<p>However, as
these systems grew in size, it became apparent that rules were
surprisingly hard to manage. One reason for this was that, in
standard rule-based programming, all rules exist in one large global
space where any rule can trigger any other rule. Such an open
architecture is very flexible. However, it is also difficult to
prevent unexpected side-effects after editing a rule. Consequently,
the same rule-based programs hailed in the early 1980s (XCON)
became case studies about how hard it can be to maintain rule-based
systems.

<p>
Many  researchers argued that rule authors must work in precisely
constrained environments, lest their edits lead to spaghetti
knowledge and a maintenance
nightmare. 
One such
constrained environment is ripple-down rules that adds an 
EXCEPT  slot to each rule:
<pre> rule ID1 IF condition THEN conclusion EXCEPT rule ID2 BECAUSE EXAMPLE</pre> 


<p>
Here,
ID1,ID2 are unique identifiers for different rules;
EXAMPLE is the case that prompted the
creation of the rule (internally, EXAMPLEs are
conjunctions of features); and
the <em>condition</em> is some subset of the EXAMPLE, i.e., <em>condition
&sube; EXAMPLE</em> (the method for selecting that subset is
discussed below).
Rules and their exceptions form a ripple-down rule tree:

<center>
<img align=top src="http://menzies.us/csx72/doc/kbs/rdr.png">
</center>


<p>At run time, a ripple-down rule interpreter explores the rules and
their exceptions. If the condition of rule ID1 is found
to be true, the interpreter checks the rule referenced in the <em>
except</em> slot. If ID2's rule condition is false, then the interpreter
returns the conclusion of ID1. Else, the interpreter recurses into
rule ID2.
<p>
<img src="http://www.hermes.net.au/pvb/thesis/thesis010.gif">
<p>
(Note the unbalanced nature of the tree: most patches are shallow. This
is a common feature of RDRs.)
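<p>The interpreter described above can be sketched in a few lines (a toy Python sketch; the rules are invented, and real RDR conditions are richer than these feature sets):

```python
# Toy ripple-down rule interpreter. A rule is a dict with a condition
# (a set of features that must all hold), a conclusion, and an optional
# exception rule. A case is just a set of features.
def interpret(rule, case, default=None):
    if rule is None or not rule["if"] <= case:
        return default                 # rule absent, or condition false
    # Condition true: this conclusion stands unless the exception fires.
    return interpret(rule.get("except"), case, default=rule["then"])

rdr = {"if": {"rain"}, "then": "no-golf",
       "except": {"if": {"dedicated"}, "then": "golf"}}

interpret(rdr, {"rain"})                 # parent fires, exception does not: "no-golf"
interpret(rdr, {"rain", "dedicated"})    # exception overrides the parent: "golf"
```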
<p>Ripple-down rules <em>can</em> be easier to read than normal rules:
<center>
<img src="http://pages.cpsc.ucalgary.ca/~gaines/reports/ML/JIIS95/JIIS9510.gif">
</center>
<p>
But in practice, Compton advises <em>hiding</em>
the tree behind a <em>difference-list</em>
editor. 
Such an editor  constrains rule authoring as follows:
<ul>
<li> Recall that
these rules are only ever added in response to some new EXAMPLE.
That is, each rule is useful for at least one example.
<li> Hence
ripple-down rules never delete old rules; rather they are patched
with EXCEPT rules. 
<li> These EXCEPT rules cover the special
case that confused the parent rule. If the parent rule with
<em>condition1</em> has
 EXAMPLE1 and the new rule
is being created in response to EXAMPLE2, then the new rule's
<em>condition2</em> must be formed from the features not used in the parent
rule and must hold for the new EXAMPLE2; i.e.
<pre>
condition2 = EXAMPLE1 - condition1
condition2 &sube;  EXAMPLE2
</pre>
</ul>
<p>
When users work in a difference list editor, they watch
EXAMPLEs running over the ripple-down rules tree, intervening only
when they believe that the  wrong conclusion is generated. At that
point, the editor generates a list of features to add to
<em>condition2</em> (this list is generated automatically from the above equations).
 The expert picks some items from this list and the patch
rule is automatically added to some leaf of the ripple-down rules tree.
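<p>Reading the two equations above literally, the candidate features for <em>condition2</em> can be computed with simple set operations (a Python sketch; the feature names are invented for illustration):

```python
# Toy difference-list computation, following the equations above:
# condition2 is drawn from EXAMPLE1 minus condition1, restricted to
# features that also hold for the new EXAMPLE2.
def difference_list(example1, condition1, example2):
    return (example1 - condition1) & example2

example1   = {"fever", "cough", "rash"}   # case behind the parent rule
condition1 = {"fever"}                    # parent rule used only "fever"
example2   = {"fever", "cough"}           # new case that got the wrong answer

candidates = difference_list(example1, condition1, example2)
# the expert then picks condition2 from these candidates
```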

<p>
Ripple-down rules are a very rapid rule
maintenance environment. Compton et al. report average rule edit
times between 30 and 120 seconds for rule bases up to 1000 rules in
size
<a href="http://menzies.us/csx72/doc/kbs/rdr2005.pdf">[Compton 2005]</a>
	
<p>AFAIK, ripple-down rules are the current high-water mark in knowledge
maintenance.
<p>Q: what does this tell us about human knowledge?						





     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >review</category>
    <category rank="1000" >week6</category>
     <id>171</id> 
     <title>
	Review : week 6     </title>
     <pubdate secs="1203283657" around="Feb08">Sun Feb 17 13:27:37 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?171</link>
     <guid>http://menzies.us/csx72/?171</guid>
     <description><![CDATA[<p>
 

<ol>
<li>
Distinguish between classical logic and fuzzy logic. Your answer should include "membership function", "crisp", "fuzzy" and "set".
<li>
Define  with examples, fuzzification, rule evaluation, de-fuzzification
<li>
Write down the three Zadeh operators. Illustrate each with a diagram.
<li>
Here is the truth table for classical logic
<pre>
A	B	A and B		A or B		not A
--	--	-----------	--------	-------
0	0	0		0		1
0	1	0		1		1
1	0	0		1		0
1	1	1		1		0
</pre>
Reproduce this table using the Zadeh operators for (A B) = (0.3 0.7).
<li>
Define the extension principle and its implications for the connection of classical logic to fuzzy logic. Using the Zadeh operators, demonstrate
the extension principle for row 2 of the above table. Show all calculations.
<li>
Draw the following membership function for 
<ol type="a">
<li>(a b crisp)= ( -50,50,0.1) and
<li>(a b crisp)= ( -50,50,1)
</ol>
<pre>
(defun crisp (x a b crisp)
  "Return a point in a sigmoid function centered at (a + b)/2"
  (labels ((as100 (x min max)
	     (+ -50 (* 100 (/ (- x min) (- max min))))))
    (/ 1 (+ 1 (exp (* -1 (as100 x a b) crisp))))))
</pre>
Mark on the x-axis the key points of the function.
<li>
Draw the following membership function for  (x a b c) = (x 10 20 30).
<pre>
(defun fuzzy-fun1 (x a b c) 
  (max 0 (min (/ (- x a) (- b a))
	      (/ (- c x) (- c b)))))
</pre>
Mark on the x-axis the key points of the function.
<li>
Draw the following membership function for  (x a b c d) = (x 10 20 30 60).
<pre>
(defun fuzzy-fun2 (x a b c d)
  (max 0 (min (/ (- x a) (- b a))
	      1
	      (/ (- d x) (- d c)))))
</pre>
Mark on the x-axis the key points of the function.
<li>
On one plot, draw the following membership functions for 
(dist 'close) (dist 'medium) (dist 'far)
<pre>
(defun dist (d what)  
  (case what
    (range    '(close medium far))
    (close     (fuzzy-triangle   d -30 0 30))
    (medium    (fuzzy-trapezoid  d 10 30  50 70))
    (far       (fuzzy-grade      d 0.3  50 100))
    (t         (warn "~a not known to dist" what))))
</pre>
Mark on the x-axis the key points of these functions.
</ol>





     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >review</category>
    <category rank="1000" >week5</category>
     <id>170</id> 
     <title>
		Review: week5
     </title>
     <pubdate secs="1203283582" around="Feb08">Sun Feb 17 13:26:22 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?170</link>
     <guid>http://menzies.us/csx72/?170</guid>
     <description><![CDATA[<p>
<ol>
<li>DFID
	<ol type="a">
	<li>Describe DFID
	<li> Contrast DFID with breadth-first and depth-first search. Your answer should define breadth-first and depth-first search, and the
maximum running time and memory of each method.
	<li>
	It can be shown that the maximum number of nodes DFID visits is 
            <em>M(b,d) &le; b^d*(1 - 1/b)^(-2)
			</em>  where <em>b</em> is the branching factor of the search space and
<em>d</em> is the depth of the search.
        Here are some values of M(b,d):

    		<ul><li> When b=2, M(b,d) &le; 4 * b^d
   		 <li> When b=3, M(b,d) &le; 9/4 * b^d
   		 <li> When b=4, M(b,d) &le; 16/9 * b^d
    		<li> When b=5, M(b,d) &le; 25/16 * b^d
		</ul>
 
	 Using these values, describe when you would or would not recommend DFID.
          Make sure you explain your answers
	</ol>
<li>ISSAMP
			<ol type="a"><li>Write down the pseudo-code for ISSAMP.
			Make sure your code has line numbers.
(Note: the pseudo-code for ISSAMP includes a "unit propagation" step which is a black box to you. Just assume it means "see what can be quickly inferred from the current solution".)
			<li>Explain the following using a paragraph or two of English and line numbers into your pseudo-code:
						ISSAMP search <ul><li>is uninformed, <li>is incomplete,
							<li>is stochastic, <li>does not use local search,
 <li>uses restarts, <li>and uses very little memory.
							</ul>
</ol>



<li>
A* is a best-first search of a graph with a <em>visited</em> list where states are sorted by "g+h". Explain all the terms in the prior sentence.

<li>MAXWALKSAT
			<ol type="a"><li>Write down the pseudo-code for MAXWALKSAT.
			Make sure your code has line numbers.
			<li>Explain the following using a paragraph or two of English and line numbers into your pseudo-code:
						MAXWALKSAT search <ul><li>is sometimes uninformed, <li>is incomplete,
							<li>is stochastic, <li>sometimes uses local search,
 <li>uses restarts, <li>and uses very little memory.
							</ul>
</ol>
</ol>






     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >lecture</category>
    <category rank="1000" >week6</category>
     <id>169</id> 
     <title>
        Encoding Sudoku
     </title>
     <pubdate secs="1203269775" around="Feb08">Sun Feb 17 09:36:15 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?169</link>
     <guid>http://menzies.us/csx72/?169</guid>
     <description><![CDATA[<p>
 

  <p>A Sudoku  puzzle:
<center>
<img src="http://menzies.us/csx72/img/sukodu.png">
</center>
<p>One of the hardest Sudokus ever found: the dreaded <em>AI Escargot</em> problem:
<center>
<img src="http://menzies.us/csx72/img/alescargpt-sukodu.png">
</center>
<p>Encoding Sudoku into CNF (so it can be solved by, say,
MAXWALKSAT):
<p>

Here <em>s[x,y,z]=true</em> means that the
square at <em>x,y</em> has value <em>z</em>.  The following rules
expand into propositional formulae:
<ul>
<li>
and [x=0..8] and [y=0..8] or [z=1..9] then s[x,y,z]
  <li> and [x=0..8] and [y=0..8] and [z=1..9] and [x'=0..8]
      s[x,y,z] -> x == x' or not s[x',y,z]
  <li> and[x=0..8] and[y=0..8] and[z=1..9] and[y'=0..8]
      s[x,y,z] -> y == y' or not s[x,y',z]
  <li> and[x=0..2] and[y=0..2] and[x'=0..2] and[y'=0..2]
    and[x''=0..2] and[y''=0..2]
      s[3*x + x', 3*y + y', z] -> x' == x'' and y' == y'' or
                                  not s[3*x + x'', 3*y + y'', z]
</ul>
<P>Note that the encoding can be very large. But that is the
game when using MAXWALKSAT: how to encode a domain into
the low level representation used by the solver.
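<p>For instance, here is how the first two rule schemas expand into clauses (a Python sketch; the variable numbering is one arbitrary choice, and clauses are lists of signed integers, DIMACS-style, with negative meaning "not"):

```python
# Expand the first two Sudoku schemas into CNF clauses.
def var(x, y, z):
    """Map s[x,y,z] (x,y in 0..8, z in 1..9) to a unique positive id."""
    return x * 81 + y * 9 + z

clauses = []

# Schema 1: every square has at least one value:
#   or[z=1..9] s[x,y,z]
for x in range(9):
    for y in range(9):
        clauses.append([var(x, y, z) for z in range(1, 10)])

# Schema 2: s[x,y,z] -> (x == x' or not s[x',y,z]), i.e. for x != x'
# the clause (not s[x,y,z] or not s[x',y,z]).
for y in range(9):
    for z in range(1, 10):
        for x in range(9):
            for x2 in range(x + 1, 9):
                clauses.append([-var(x, y, z), -var(x2, y, z)])

# Already 81 + 9*9*36 = 2997 clauses, from just two of the four schemas.
```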


     </p>]]></description>
  </item>

    <item>
    <category rank="1000" >lecture</category>
    <category rank="1000" >week6</category>
     <id>168</id> 
     <title>
        Uncertainty, Fuzzy Logic, Abduction
     </title>
     <pubdate secs="1203269329" around="Feb08">Sun Feb 17 09:28:49 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?168</link>
     <guid>http://menzies.us/csx72/?168</guid>
     <description><![CDATA[<p>
 

<h2>Meta-lesson</h2>

<p>As engineers, you need to  sensibly select technologies. Some exciting candidate technologies are too bleeding edge, or too slow or cumbersome to use in practice.  And you have to decide which is which. 

<p>The next two lectures offer examples of this process. Note that in both cases,
the gaming industry uses technology (b) while theoreticians prefer (a). 

<ol>
<li> Handling uncertainty: from (a) abduction to  (b) fuzzy logic;
<li>Simulations: from (a) rule-based systems to (b) state machines
</ol>

<p>In your career
you'll have to strike your own balance between the limitations of the simpler method (b) vs the extra functionality offered by the more complex method (a).

<h2>Reasoning and Uncertainty</h2>

<P><img align=right width=300 src="http://www.joe-ks.com/archives_aug2005/ImpossibleIllusion1.jpg">
<EM>The words figure and fictitious both derive from the same Latin root, fingere (to form, create). Beware!</EM>
--M.J. Moroney</P>
<P><EM>Everything is vague to a degree you do not realize till you have tried to make it precise.</EM> --Bertrand Russell</P>

<p>I'm pretty certain that I need better ways to handle uncertainty.
Many expressions of human knowledge are not crisp ideas. Rather, they are hedges, shades of gray.
<p>Consider four problems in programming a game:
<ul>
<li><em>Threat assessment:</em> How many battalions to deploy when you don't have total knowledge of the enemy's movements?
<li><em>Control:</em> when chasing a target, you want to see some fluid gradual changes in tactics. Abrupt straight-line movements
that suddenly change course would appear unnatural.
<li><em>Classification:</em>  how to combine information to yield <em>hedged</em> information like <em>wimpy, easy, moderate, tough, formidable</em>?
How to combine these hedged classifications with other hedges to infer further information?
<li>
<em>How to do all the above</em> in a way that introduces some measure of unpredictability into the game?
</ul>

<p>
The problem is that conventional principles of logic
are very unfriendly towards uncertainty.
In  classical logic, things are either true or false. You do, you don't. Worse,
if a model contains a single inconsistency, the whole
theory is false.
So one little doubt, one little "this or not this" and you are screwed.
Not very helpful for introducing gradual degrees of hedged knowledge that may be contradictory.

<p><img 
src="http://upload.wikimedia.org/wikipedia/commons/thumb/2/28/Lotfi_A._Zadeh(2004).jpg/200px-Lotfi_A._Zadeh(2004).jpg" 
align=right width=200>
This is not just a game playing problem.
The drawbacks of classical logic are well known.
Theoreticians like Lotfi Zadeh offer a <em>Principle of Incompatibility</em>:
<ul>
<li> "As the complexity of a system increases, our ability to make precise and yet significant statements about its
behavior diminishes until a threshold is reached beyond which precision and
significance (or relevance) become almost mutually exclusive characteristics."
</ul>
Zadeh goes on to say:
<ul>
<li>
"It is in this sense that precise quantitative analyses of the behavior of humanistic systems are not likely to have much relevance to the real-world societal,
political, economic and other types of problems which involve humans either
as individuals or in groups."
</ul>
<br clear=all>
<p><img align=right 
width=200
src="http://www.visualstatistics.net/East-West/Logical%20Positivism/Popper%201.jpg">Philosophers like Karl Popper claim that accessing 100% evidence on any topic is a fool's goal. All "proofs" must terminate on premises; i.e. some proposition that we accept as
        true without testing, but which may indeed be wrong.
In terms of most human knowledge, a recursion to base premises is
        fundamentally impractical. For example:
<ul>
<li>Consider one individual trying to
        reproduce all the experiments that led to our current understanding of atomic
        physics. 
<li>Such an undertaking could take longer than a lifetime and would be beyond
        the resources of most individuals (e.g. building a five kilometer long linear
        accelerator). 
<li>Such a task has to be divided up and, sooner or later, our single
        researcher would have to accept on faith the validity of another researcher's
        statement that "while you were busy elsewhere, I did this, and I saw that. Trust
        me.".
</ul>


<p>There is much evidence for Popper's thesis.
Gathering all the information required to remove all
uncertainties can be very difficult:
<ul>
<li>In the first study that isolated the
<em>thyrotropin-releasing hormone</em>
(used in the brain), 300,000 sheep brains had to
be filtered. This yielded 1.0 milligrams of purified
hormone.
</li>
<li>The premise of the cs572 project is that we can't collect the information
required to tune 28 variables in a software process model.
I am continually surprised that this is the case. All software
development organizations collect
information, but just try to access consistent samples of it!
<li><img src="http://www.systems-thinking.org/theWay/sap/ap.gif" align=right width=250>  The (in)famous "Limits to Growth"
  study attempted to predict the international effects of continued global economic growth. Less than 0.1% of the data required for the models was available.
</li>
<li>If the planet is too big, how small is small enough?
 In a complex system of only a modest number of variables and interconnections,
  any attempt to describe the system completely and measure the magnitude of all
  the links would be the work of many people over a lifetime.
</li>
</ul>


 <p>Experience with mathematical simulation suggests that 
an over-enthusiasm for quantitative analysis can confuse, rather than clarify, a
domain.
For systems whose parts are not listed in a catalog, which evolve together,
      which are difficult to measure, and which show unexpected capacity to form new
      connections, the results of (quantitative) simulation techniques have been less
      than impressive:
<ul>
<li>The masses of data required make the procedure very costly.
<li>The
      demand for quantitative precision often forces the exclusion of many variables
      (e.g. stress in diabetes: poorly quantified but vitally important); 
<li> the predictions
      apply only to the original single system from which the models were derived, and
      are not easily extended even to similar systems ([207], p4).
</ul>

 
<p>
Anyway,  why seek total knowledge? Some levels of uncertainty are a good thing:
<ul>
<li>In a game simulation,
you want some higgledy-piggledy. It would look unnatural if
 all the leaves on all the trees blew this way then that like soldiers on parade.
<li><img align=right width=400 src="http://www.mgthomas.co.uk/dancebands/American%20Visitors/Pictures/Southern%20Rag-a-Jazz%20Band.jpg">
When co-ordinating multiple agents, it is good to leave some room for
local improvisation. Consider the phrase "close enough for jazz":
<ul> 
&nbsp;<br>
<em>While "close enough for jazz" sounds disparaging, it really isn't, in my view. The nature of jazz is to improvise on the basis of a known musical core, whether it's a melody or a set of harmonies. If it is TOO close to the original melody or other musical archetype, or too close for too long, it is no longer jazz, it's the original. "Close enough," it seems to me, would be close enough to recognize the musical skeleton behind what's being played, but not so close as to hamper the improvised self-expression at the heart of jazz. </em>
<br>
From <a href="http://www.phrases.org.uk/bulletin_board/49/messages/56.html">SS</a>.
</ul>
</ul>

<p>Sometimes we can learn more by allowing some uncertainties.
Creative solutions and insights can sometimes be found in a loosely constrained space that would be impossible in tightly constrained, certain spaces.
For example, 
<ul>
<li>Enabling random search is a very useful tool.
We have seen before that stochastic theorem provers have very nice generality properties.
<li>
If we overfit our solutions to old data then our solutions may be brittle if any part of
that old data changes.
That is,
even if total knowledge is available for a system, it can be useful to mutate
that knowledge to see how changes to the background knowledge affect the outcome.
<ul>
<li>
The premise of the NOVA project is that we don't want <em>one</em> solution.
Rather, we want to find actions that are stable across the space of <em>all possible solutions</em>.
<li>In diagnosis, we may not want to explore scenarios that are exactly like prior
knowledge of normal behavior. Rather, we want to explore abnormal deviations
from normal.
Also,
in diagnosis, you can't reflect over multiple options unless your models
can generate multiple options.  That is, for diagnosis, you want models that can handle
"don't know, it could be this or that".
</ul>
</ul>

Finally, allowing some uncertainty can be pragmatically useful.
When building systems, under-specified qualitative
representations (with some degree of imprecision) help when
precise relations among
the variables in the system to be modeled are hard or impossible to determine, but it is
usually still possible to state some qualitative relations among the variables.
For example:
<ul>
<li>Q: do we call out the marines?
<li>A: depends on the <em>size</em> and <em>proximity</em> of the invading forces
<pre>
           PROXIMITY
SIZE       close   medium  far
--------   ----------------------
tiny       medium  low     low
small      high    low     low
moderate   high    medium  low
large      high    high    medium
</pre>
<li>(But how do we give meaning to all these symbols? See below.)
</ul>

<p>So we need to change classical logic.
Uncertainty is unavoidable, yet we persist, building systems of ever increasing size and internal complexity. Somehow, despite the uncertainty and doubt, we
predict the future a useful percentage of the time. How do we do it? How can we teach AI algorithms to do it?

<H2>Method #1: Fuzzy Logic</H2>
<P><img align=right src="http://www.cartoonstock.com/lowres/dro1116l.jpg">
Officially, there are rules. Unofficially, there are shades of gray. Fuzzy logic is one way to
handle such shades of gray.</P>
<P>For example,
consider the rule ``if not raining then play golf''. Strange to say, sometimes people play golf even in the rain.
Sure, they may not play in a hurricane, but a few drops of rain rarely stop the dedicated golfer.</P>
<P>Fuzzy logic tries to capture these shades of gray. Our
golf rule means that we rush to play golf in the sunshine,
stay home during hurricanes, and in between we may or may not
play golf depending on how hard it is raining.</P>
<P>Fuzzy logic uses fuzzy sets.
Crisp sets have sharp distinctions between truth and falsehood
(e.g. it is either raining or not). 
Fuzzy sets have blurred boundaries (e.g. it is <EM>barely</EM> raining).
Typically, a <EM>membership function</EM> is 
used to denote our belief in some concept.
For example, the following curves all model the belief that a number is positive.
Each is an instance of the same membership function:</P>
<PRE>
 1/(1+exp(-1*x*crisp))</PRE>
<p>Or, in another form...
<pre>
(defun crisp (x a b crisp)
  "Return a point in a sigmoid function centered at (a - b)/2"
  (labels ((as100 (x min max)
	     (+ -50 (* 100 (/ (- x min) (- max min))))))
    (/ 1 (+ 1 (exp (* -1 (as100 x a b) crisp))))))

(defun fuzzy-grade (x &optional (a 0) (b 1) (slope 0.1))
  (crisp x a b slope))
</pre>

<P>where the <CODE>crisp</CODE> parameter controls how sharp the boundary in our beliefs is:</P>
<CENTER> <IMG BORDER=0 width=400 SRC="http://menzies.us/tmp/scant/ref/var/step.png" ALIGN=center></CENTER><P>Note that for large <EM>crisp</EM> values the boundary looks like classical logic: at zero a number
zaps from positive to negative.
However, in the case of our golfing rule, there are
shades of gray in between sunshine and hurricanes.
We might believe in playing golf, even when
there is a little rain around.
For these situations, our beliefs are hardly crisp, as modeled by setting
 <CODE>crisp</CODE> to some value less than (say) one.</P>
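<p>For readers who want to experiment, here is a quick Python sketch of the same sigmoid (the function and parameter names are ours, not from any library):

```python
import math

def crisp_sigmoid(x, a, b, crispness):
    """Sigmoid membership function, 0.5 at the midpoint of [a, b].
    Large `crispness` values approximate a classical true/false step."""
    scaled = -50 + 100 * (x - a) / (b - a)           # rescale [a, b] onto [-50, 50]
    t = max(-700.0, min(700.0, scaled * crispness))  # avoid math.exp overflow
    return 1 / (1 + math.exp(-t))

# With crispness=10 ("definitely"), belief jumps sharply at the midpoint;
# with crispness=0.5 ("sort of"), belief changes more gradually.
for c in (10, 0.5):
    print([round(crisp_sigmoid(x, 0, 1, c), 2) for x in (0.4, 0.5, 0.6)])
```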
<P>The <EM>crisp</EM> value lets us operationalize a set of
<EM>linguistic variables</EM> or <EM>hedges</EM> such as
``barely'',  
``very'', ``more or less'', ``somewhat,'' ``rather,'' ``sort of,'' and so on. 
In this approach,
analysts debrief their local experts on the relative strengths
of these hedges then add hedge qualifiers to every
rule. Internally, this just means mapping hedges to <EM>crisp</EM>
values. For example:</P>
<PRE>
  definitely: crisp=10
  sort of:    crisp=0.5
  etc</PRE>
<P>Alternatively, analysts can ask the local experts to
draw membership functions showing how the degrees of belief in some concept
changes over its range. For example:</P>
<CENTER> <IMG width=400 BORDER=0 SRC="http://menzies.us/tmp/scant/ref/etc/img/pressure.gif" ALIGN=center></CENTER><P>Sometimes, these can be mapped to mathematical
functions as in the following:</P>
<CENTER> <IMG BORDER=0 width=500 SRC="http://menzies.us/tmp/scant/ref/etc/img/DescriptionFig3-MFs.JPG" ALIGN=center></CENTER>
<pre>
(defun fuzzy-triangle (x a b c) 
  (max 0 (min (/ (- x a) (- b a))
	      (/ (- c x) (- c b)))))

(defun fuzzy-trapezoid (x a b c d)
  (max 0 (min (/ (- x a) (- b a))
	      1
	      (/ (- d x) (- d c)))))
</pre>
<P>Note that the 
function need not be represented mathematically. If the local
experts draw any shape at all, we could read off values from
that drawing and store them in an array.
For example, an analyst can 
construct an  array of values for various terms,
either as vectors or matrices. Each term and hedge can be
 represented as
(say) a 7-element vector or 7x7 matrix. 
Each
element of every vector and matrix is a
value between 0.0 and 1.0,
inclusive, assigned in what is, intuitively,
a consistent manner. For
example, the term ``high'' could be assigned the vector</P>
<PRE>
     0.0 0.0 0.1 0.3 0.7 1.0 1.0</PRE>
<P>and ``low'' could be set equal to the reverse of ``high,'' or</P>
<PRE>
     1.0 1.0 0.7 0.3 0.1 0.0 0.0</PRE>
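<p>Hedges can then be applied to such vectors mechanically. One standard proposal (Zadeh's concentration and dilation operators) models ``very'' as squaring each membership value and ``more or less'' as taking its square root. A Python sketch, applied element-wise to the vector for ``high'':

```python
def very(memberships):
    # Concentration: squaring pushes middling beliefs down,
    # leaving full membership (1.0) and non-membership (0.0) unchanged.
    return [round(m ** 2, 2) for m in memberships]

def more_or_less(memberships):
    # Dilation: the square root pulls middling beliefs up.
    return [round(m ** 0.5, 2) for m in memberships]

high = [0.0, 0.0, 0.1, 0.3, 0.7, 1.0, 1.0]
print(very(high))          # "very high" hugs the top of the range
print(more_or_less(high))  # "more or less high" is more generous
```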
<P>
<H3>Zadeh Operators</H3>
<P>The AND, OR, NOT operators of boolean logic exist in fuzzy logic,
usually defined as the minimum, maximum, and complement; i.e.</P>
<PRE>
    [1]  truth (not A)   = 1.0 - truth (A)
    [2]  truth (A and B) = minimum (truth(A), truth(B))
    [3]  truth (A or B)  = maximum (truth(A), truth(B))</PRE>
<p>Or, in another form...
<pre>
(defmacro $and (&rest l) `(min ,@l))
(defmacro $or  (&rest l) `(max ,@l))
(defun    $not (x)       (- 1 x))
</pre>
<P>When they are
defined this way, they are called the <EM>Zadeh operators</EM>, since
that is how Zadeh defined them in his original papers.</P>
<P>Here are some examples of the Zadeh operators in action:</P>
<PRE>
 [1] truth (not A)   = 1.0 - truth (A)</PRE>
<CENTER> <IMG width=400 BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogica.png" ALIGN=center></CENTER><CENTER> <IMG
width=400 BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogicnota.png" ALIGN=center></CENTER><PRE>
 [2] truth (A and B) = minimum (truth(A), truth(B))</PRE>
<CENTER> <IMG width=400 BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogica.png" ALIGN=center></CENTER><CENTER> <IMG BORDER=0 width=400 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogicb.png" ALIGN=center></CENTER><CENTER> <IMG BORDER=0 width=400 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogicaandb.png" ALIGN=center></CENTER><PRE>
 [3]  truth (A or B)  = maximum (truth(A), truth(B))</PRE>
<CENTER> <IMG BORDER=0 width=400 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogica.png" ALIGN=center></CENTER><CENTER> <IMG 
width=400 BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogicb.png" ALIGN=center></CENTER><CENTER> <IMG BORDER=0 
width=400 SRC="http://menzies.us/tmp/scant/ref/var/fuzzylogicaorb.png" ALIGN=center></CENTER>
<p>Here are some more examples:
<center>
<img width=500 src="http://www.mathworks.com/access/helpdesk/help/toolbox/fuzzy/logic_graphs_2.gif">
</center>
<p>Here's yet another example
<center>
<img width=400 src="http://www.scholarpedia.org/wiki/images/8/85/Fuzzy_Logic_f2.gif">
</center>
<P>Some researchers in fuzzy logic have explored the use of other
interpretations of the AND and OR operations, but the definition for the
NOT operation seems to be safe.</P>
<P>Note that if you plug just the values zero and one into the 
definitions [1],[2],[3], you get the same truth tables as you would expect from
conventional Boolean logic. This is known as the EXTENSION PRINCIPLE,
which states that:
<ul>
The classical results of Boolean logic are recovered
from fuzzy logic operations when all fuzzy membership grades are
restricted to the traditional set {0, 1}. 
</ul>
This effectively establishes
fuzzy subsets and logic as a true generalization of classical set theory
and logic. In fact, by this reasoning all crisp (traditional) subsets ARE
fuzzy subsets of this very special type; and there is no conflict between
fuzzy and crisp methods.</P>
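<p>This is easy to check mechanically. A Python sketch, using the Zadeh definitions [1], [2], [3] above:

```python
def f_not(a):    return 1 - a
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)

# Restricted to the crisp truth values {0, 1}, the Zadeh operators
# reproduce the classical Boolean truth tables.
for a in (0, 1):
    assert f_not(a) == (0 if a else 1)
    for b in (0, 1):
        assert f_and(a, b) == (a and b)
        assert f_or(a, b)  == (a or b)
print("Boolean truth tables recovered")
```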
<H3>Example 1</H3>
<P>Assume that we have some fuzzy set membership
functions for combinations of TALL and OLD things, defined as follows:</P>
<PRE>
 function tall(height) {
  if (height &lt; 5 ) return Zero;
  if (height &lt;=7 ) return (height-5)/2;
  return 1;
 }</PRE>
<PRE>
 function old(age) {
   if (age &lt;  18) return Zero;
   if (age &lt;= 60) return (age-18)/42;
   return 1;
 }</PRE>
<PRE>
 function a(age,height)   { return FAND(tall(height),old(age)) }
 function b(age,height)   { return FOR(tall(height), old(age)) }
 function c(height)       { return FNOT(tall(height)) }
 function abc(age,height) { 
        return FOR(a(age,height),b(age,height),c(height)) }</PRE>
<P>(The functions FNOT, FAND, FOR are the Zadeh operators.)</p>
<P>For compactness, we'll call our combination functions:</P>
<PRE>
    a = X is TALL and X is OLD
    b = X is TALL or X is OLD
    c = not (X is TALL)
    abc= a or b or c</PRE>
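<p>The pseudocode above is easily made runnable. A Python sketch, with the Zadeh operators inlined as min, max, and complement:

```python
def tall(height):
    if height < 5:  return 0.0
    if height <= 7: return (height - 5) / 2
    return 1.0

def old(age):
    if age < 18:  return 0.0
    if age <= 60: return (age - 18) / 42
    return 1.0

def a(age, height):   return min(tall(height), old(age))   # TALL and OLD
def b(age, height):   return max(tall(height), old(age))   # TALL or OLD
def c(height):        return 1 - tall(height)              # not TALL
def abc(age, height): return max(a(age, height), b(age, height), c(height))

# A 6-foot-tall 39-year-old is 0.5 TALL and 0.5 OLD, so every
# combination also evaluates to 0.5:
print(a(39, 6), b(39, 6), c(6), abc(39, 6))  # => 0.5 0.5 0.5 0.5
```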
<P>Here's the OLDness functions:</P>
<CENTER> <IMG BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/oldness.png" ALIGN=center></CENTER><P>Here's the TALLness function:</P>
<CENTER> <IMG BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/tallness.png" ALIGN=center></CENTER><P>Here's (c); i.e. the not TALLness function:</P>
<CENTER> <IMG BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/cness.png" ALIGN=center></CENTER><P>Here's  (a): TALL and OLD</P>
<CENTER> <IMG BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/aness.png" ALIGN=center></CENTER><P>Here's  (b): TALL or OLD</P>
<CENTER> <IMG BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/bness.png" ALIGN=center></CENTER><P>Here's  (abc): a or b or c</P>
<CENTER> <IMG BORDER=0 SRC="http://menzies.us/tmp/scant/ref/var/abcness.png" ALIGN=center></CENTER><P>
<H3>In practice</H3>
<P>Methodologically, a fuzzy logic session looks like this:</P>
<DL>
<DT><STRONG><A NAME="item_Fuzzification%3A">Fuzzification:</A></STRONG><BR>
<DD>
Using membership functions, describe a situation.
<DT><STRONG><A NAME="item_Rule_evaluation%3A">Rule evaluation:</A></STRONG><BR>
<DD>
Apply  fuzzy rules (e.g. using the Zadeh operators)
<DT><STRONG><A NAME="item_Defuzzification%3A">Defuzzification:</A></STRONG><BR>
<DD>
Obtaining the crisp or actual results:
<UL>
<LI>
Apply some threshold to declare that any fuzzy belief over (e.g.) 0.2
is true and all others are false.
<LI>
Translate the resulting beliefs back through the
membership functions to get a linguistic summary of the
conclusion; e.g. <EM>happy is sort of true</EM>.
<LI>
Return the centroid of the computed membership function.
This process can be very complex (using full integrals) or, as the following example
shows, very simple.
<P>Suppose we have an
output fuzzy set AMOUNT which has three members: ZERO, MEDIUM, and
HIGH.  We assign to each of these fuzzy set members a corresponding
output value, say 0 for ZERO, 40 for MEDIUM, and 100 for HIGH. A defuzzification
procedure might then be:</P>
<PRE>
 return (ZERO*0 + MEDIUM*40 + HIGH*100)/(ZERO+MEDIUM+HIGH)</PRE>
<P></P></UL>
</DL>
<h3>Example 2: Do we Call out the Marines?</h3>
<p>How close is the enemy?
<ul>
<li>Close, medium, far?
</ul>
<p>How large is the enemy's forces?
<ul>
<li> Tiny, small, moderate, large?
</ul>
<p>What is the size of the threat?
<pre>
           PROXIMITY
SIZE       close   medium  far
--------   ----------------------
tiny       medium  low     low
small      high    low     low
moderate   high    medium  low
large      high    high    medium
</pre>
<p>
We need some membership functions:
<pre>
(defun dist (d what)  
  (case what
    (range    '(close medium far))
    (close     (fuzzy-triangle   d -30 0 30))
    (medium    (fuzzy-trapezoid  d 10 30  50 70))
    (far       (fuzzy-grade      d 50 100 0.3))
    (t         (warn "~a not known to dist" what))))

(defun size (s what)
  (case what
    (range    '(tiny small moderate large))
    (tiny      (fuzzy-triangle  s -10 0 10))
    (small     (fuzzy-trapezoid s 2.5 10 15 20))
    (moderate  (fuzzy-trapezoid s 15 20 25 30))
    (large     (fuzzy-grade     s 25 40 0.3))
    (t         (warn "~a not known to size " what))))
</pre>
<p>And we'll need to model the above table:
<pre>
(defun fuzzy-deploy (d s what)
  (macrolet (($if (x y) `($and (dist d ',x) (size s ',y))))
    (case what
      (range    '(low medium high))
      (low       ($or ($if medium tiny)
		      ($if far    tiny)
		      ($if medium small)
		      ($if far    small)))
      (medium    ($or ($if close  tiny)
		      ($if medium moderate)))
      (high      ($or ($if close  small)
		      ($if close  moderate)
		      ($if close  large)
		      ($if medium moderate)))
      (t         (warn "~a not known to fuzzy-deploy" what)))))
</pre>
<P>Finally, we need a defuzzification function:
<pre>
(defun defuzzy-deploy (&key distance size)
  "Return how many marines to deploy?"
  (let ((low    (fuzzy-deploy distance size 'low   ))
	(medium (fuzzy-deploy distance size 'medium))
	(high   (fuzzy-deploy distance size 'high  )))
    (round 
     (/ (+ (* 10 low) (* 30 medium) (* 50 high))
	(+ low medium high)))))
</pre>
<p>So, how many marines to deploy?
<pre>
(defuzzy-deploy :distance 25 :size 8) ==> 19
</pre>
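<p>To check the arithmetic, here is the whole session re-done as a Python sketch (a direct translation of the Lisp above, with our grade function taking its range endpoints and then a slope):

```python
import math

def triangle(x, a, b, c):
    return max(0, min((x - a) / (b - a), (c - x) / (c - b)))

def trapezoid(x, a, b, c, d):
    return max(0, min((x - a) / (b - a), 1, (d - x) / (d - c)))

def grade(x, a, b, slope=0.3):
    scaled = -50 + 100 * (x - a) / (b - a)       # rescale [a, b] onto [-50, 50]
    t = max(-700.0, min(700.0, scaled * slope))  # avoid math.exp overflow
    return 1 / (1 + math.exp(-t))

def deploy(distance, size):
    close, med_d = triangle(distance, -30, 0, 30), trapezoid(distance, 10, 30, 50, 70)
    far          = grade(distance, 50, 100)
    tiny, small  = triangle(size, -10, 0, 10), trapezoid(size, 2.5, 10, 15, 20)
    moderate     = trapezoid(size, 15, 20, 25, 30)
    large        = grade(size, 25, 40)
    # The threat table: one max-of-mins ($or of $ands) per output level.
    low    = max(min(med_d, tiny), min(far, tiny), min(med_d, small), min(far, small))
    medium = max(min(close, tiny), min(med_d, moderate))
    high   = max(min(close, small), min(close, moderate), min(close, large),
                 min(med_d, moderate))
    # Centroid defuzzification with representative values 10, 30, 50.
    return round((10 * low + 30 * medium + 50 * high) / (low + medium + high))

print(deploy(25, 8))  # => 19, agreeing with the Lisp session above
```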
<center>
<img src="http://www.coolmarines.net/images/cool_marines005014.jpg">
</center>
<h2>Method #2: Abduction</h2>
<h3>Digression</h3>
<p>
<em>(What I'm about to say <em>seemed</em> like a good
idea at the time- early 1990s. But most folks who took this seriously
ran into computational problems and had to switch from discrete-space
logic to some continuous variant. However, IMHO, 
recent empirical stochastic
results suggest that this  all might be worth a second look.
And, much to my surprise,
I see that <a href="http://orsp.rutgers.edu/aro/default.htm">
others think the same</a>.
<p> Also, technically speaking, what I'm about to show you is my own local
variant of abduction called <em>graph-based abduction</em> that comes
from my 
<a href="http://menzies.us/pdf/95thesis.pdf">1995 Ph.D. thesis</a>. 
For a more general treatment, that is more
standard, see
Bylander T., Allemang D., Tanner M.C., Josephson J.R.
 The computational complexity of. abduction, Artificial Intelligence Journal
49(1-3), pp. 25-60, 1991.)</em>
<h3>Uncertainty and Topology</h3>
<p>Republicans aren't pacifists but Quakers are. Is Nixon (who was both a Republican and a Quaker) a
pacifist?
<center>
	<img src="http://menzies.us/csx72/img/nixon.png">
</center>
<p>This is a horrible problem. Observe that it can't be solved
by sitting at the Nixon node and querying his immediate parents.
<em>You have to search into the network for contradictory conclusions</em>.
That is, it can't be solved by some fast local propagation algorithm.
<p>If you can't gather more information, you have two options:
<ul>
	<li><em>Skeptical :</em>   since Nixon can neither be proved to be a pacifist nor the contrary, no conclusion is drawn.
	This is the approach of classical logic (one contradiction? throw out everything and run home crying).
	<li><em>Credulous: </em>    since Nixon can be proved to be a pacifist in at least one case, he is believed to be a pacifist; however, since he can also be proved not to be a pacifist, he is also believed not to be a pacifist. 
Note that option 2 means forking different <em>worlds of beliefs</em> then reasoning separately about each
world. This can lead to an intractable explosion of beliefs if, as we explore
<em>quaker-ness</em> and <em>republican-ness</em> we find that we must generate even more worlds of belief.
</ul>
<p>Alternatively, if we are allowed to probe the world for more information, we can:
<ul>
	<li>Identify the base sources of the contradictions
	<li>Design the smallest number of probes that remove the most contradictions, thus reducing the number of worlds.
	</ul>
	<p>That is, unlike fuzzy logic (where everything is quickly inferred from hard-wired
	numbers set at design time), here we must extract and explore the topology of
	our doubts, then probe the key points. 
	<p>Formally, this is abduction.
<h3>Abduction, induction, deduction</h3>
<p><img align=right width=200 src="http://g-ec2.images-amazon.com/images/I/416YQ63DSQL._.jpg">
Welcome to abduction, the logic of "maybe". Abduction can be
contrasted with induction and deduction as follows:
<ul>
<li>If <em>a -> b</em> are a set of rules and <em>a,b</em>
are antecedents and consequences then..
<li>DEDUCTION is matching the rules to the antecedents  to find
the consequences; 
<pre>
rules + antecedents
==>
consequences
</pre>
<li>INDUCTION means taking lots of 
<em>antecedents,consequences</em> and learning the rules:
<pre>
&lt;antecedent,consequence>
&lt;antecedent,consequence>
&lt;antecedent,consequence>
&lt;antecedent,consequence>
==>
rules
</pre>
<li>ABDUCTION means matching the rules to the consequence to
find antecedents that could explain the consequence
<pre>
rules + consequences
==>
antecedents 

</pre>
Note that abduction is <em>not</em> a certain inference since
there may exist multiple rules that explain the consequence.
For example:
<ul>
<li>rule1: If sprinkler then wet grass
<li>rule2: If rain then wet grass
<li>consequence: wet grass
<li>Abduce: consequence = rain
<ul>
<li> Not a certain inference
</ul>
</ul>
</ul>
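<p>For the wet-grass example, abduction is just a backwards match over the rules. A minimal Python sketch (the rule encoding is ours, for illustration only):

```python
# One-step rules as (antecedent, consequence) pairs.
rules = [("sprinkler", "wet grass"),
         ("rain",      "wet grass")]

def abduce(observation, rules):
    # Collect every antecedent whose rule could explain the observation.
    return [ante for ante, cons in rules if cons == observation]

explanations = abduce("wet grass", rules)
print(explanations)  # more than one candidate, so the inference is not certain
```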
<p>In practice, all these methods run together:
<center>
<img src="http://www.gisdevelopment.net/technology/ip/images/mi3066a.gif">
</center>
<p>(That's the official story. My own experience is that
by the time you get an abductive device going, you have much
of the machinery required for deduction and induction. But
don't tell anyone.) 
<h3>Formally</h3>
<p>Formally, abduction looks like this:
<ol>
<li><em>Theory and assumptions => goal</em>; i.e. do something
<li><em>not(Theory and assumptions => error)</em>; i.e. don't do dumb things.
</ol>
<p>Without rule2, abduction becomes deduction:
<ul><li>Athletes on steroids. Go! Run to conclusions
</ul>
<p>With rule2, abduction gets very slow:
<ul><li>Replace your athletes with nervous accountants
<li>After shuffling forward a few yards they realize that they are
in the wrong order (violating rule2). 
<li>So the race stops while they sort out who should be first, second, third..
<li>Worst case,
<ul>
<li> They realize they can't all run this race together
and the race must fork into <em>worlds</em> 
<li>Now several races are run and not everyone finishes the same race.
</ul>
</ul>
<p>
That is,  when the accountants run the race,
rules 1 &amp; 2 have to be modified:
<em><ol start="3">
<li>
Theory'  	&sube; Theory
<li>goals'  	&sube; goals
<li>Theory' and assumptions => goals'
<li>not(Theory' and assumptions => error)
</ol></em>
<p>Assumptions define worlds of belief.
<ul>
<li>Internally consistent
<li>Maximal w.r.t. size
<li>No world contains another world
</ul>
<p>Assumptions come in two forms:
<ul>
<li>Some assumptions are controversial (they conflict with other
assumptions) and some are not. Those that are not controversial do not drive world
generation.
<li>Some assumptions depend on other assumptions and some
do not. Those that don't are <em>base</em>.
</ul>
<p>Note that the least informative assumptions are the non-base,
non-controversial ones (the <em>yawn</em> set).

<p>(BTW, is your head spinning yet? Don't you wish we were still
doing fuzzy logic?)
<h3>Example</h3>
<p>
An example makes all this clear. Here's a theory:
<center>
<img width=400 src="http://menzies.us/csx72/img/theory.png">
</center>
<p>In this example, our inputs are:
<ul>
<li>foriegnSales=up
<li>domesticSales=down
</ul>
<p>Our goals are to explain the outputs:
<ul>
<li>investorConfidence=up
<li>wages=down
<li>inflation=down
</ul>
<p>We have some background knowledge
<ul>
<li>Variables X,Y,etc have the range {up,down}.
<li>X=up and X=down is an error
<li>X++Y is consistent with
<ul>
<li>X=up and Y=up
<li>X=down and Y=down
</ul>
<li>X-Y is consistent with
<ul>
<li>X=up and Y=down
<li>X=down and Y=up
</ul>
</ul>
<p>The above
theory supports the following consistent chains of reasons
that start at the inputs and end at the outputs:
<ol type="a">
<li>foriegnSales=up, companyProfits=up, investorConfidence=up
<li>domesticSales=down, inflation=down
<li>domesticSales=down, companyProfits=down, wages=down
<li>domesticSales=down, inflation=down, wages=down
</ol>
<p>In the above, companyProfits=up and companyProfits=down
are controversial assumptions. They are also base since
they depend on no upstream controversial assumptions.
<p>We say that one <em>world</em> of belief is defined by each
maximal consistent subset of the base controversial
assumptions. By collecting together chains of reasons consistent with
each such subset, we find two worlds:
<ul>
<li>World1= paths {a,b,d}
<li>World2= paths {b,c,d}
</ul>
<p>So our example leads us to two
possibilities:
<center>
<img width=400 src="http://menzies.us/csx72/img/worlds.png">
</center>
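<p>With only one base controversial assumption, the two worlds above can be computed mechanically. A Python sketch (the chain names a..d come from the text; the encoding is ours):

```python
# Each chain of reasons commits to at most one controversial
# assumption about companyProfits (None = no commitment).
chains = {"a": ("companyProfits", "up"),
          "b": None,
          "c": ("companyProfits", "down"),
          "d": None}

def worlds(chains):
    # One world per value of the base controversial assumption:
    # a world keeps every chain that is silent or that agrees with it.
    values = {assume for assume in chains.values() if assume}
    return sorted(sorted(name for name, assume in chains.items()
                         if assume is None or assume == value)
                  for value in values)

print(worlds(chains))  # => [['a', 'b', 'd'], ['b', 'c', 'd']]
```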

<h3>Applications of Abduction</h3>
<p>
Each world is internally consistent (by definition) so predictions can be made
without checking for inconsistencies. So, ignoring the world generation cost (which can
be scary), the subsequent prediction cost is very cheap.

<p>
Note the connection of abduction to deduction and induction.
<ul>
<li>In this framework, deduction is just running prediction
from the inputs.
<li>As to induction,
if it turns out that (say) world1 generates
the best predictions then we can induce that the
companyProfits to wages link is ignorable.
</ul>
<p>When multiple worlds can be generated, a
<em>best</em> operator selects the preferred world;
i.e.
<ul>
<li> Equations 3,4,5,6 generate world(s)
<li> <em>Believed = (best(worlds) &sube; worlds)</em>
<li>Hint: to find the worlds, seek maximal (w.r.t. size)
consistent subsets of the base
controversial assumptions.
As described above, one world exists 
for each such subset.
</ul>
<p>Different applications for abduction arise from different <em>best</em> operators:
For example,
<em>classification</em> is just a special case
of prediction where some subset of the goals have
special labels (the classes).
<p>
Another special case of prediction is <em>validation</em> where we score a theory by the
maximum of the number of goals found in any world. This is particularly useful for assessing
theories that may contain contradictions (e.g. early life cycle requirements).
<p>
Yet another special case of prediction is  <em>planning</em>. Here,
we have knowledge of how much it <em>costs</em> to use part of the theory and 
planning-as-abduction
tries to maximize coverage of the goals while minimizing the traversal cost to get to those
goals.
Note that the directed
graph in the generated worlds can be interpreted as an order of operations.
<p>
<em>Monitoring</em> can now be built as a post-processor to planning.
Once all the worlds are generated, we cache them (say, to disk). As new information arrives,
any world that contradicts that new data is deleted. The remaining worlds contain the remaining
space of possibilities.
<p>
<em>Minimal fault
diagnosis</em> means favoring worlds that explain the most
diseases (goals) with the fewest inputs; i.e.
<ul>
	<li>maximize <em>world &cap; goals</em> and
	<li> minimize <em>world &cap; In</em>  
</ul>
<p><em>Probing</em> is a special kind of diagnostic activity that
seeks the fewest tests that rule out the most possibilities.
In this framework, we would eschew probes to non-controversial
assumptions and favor probes to the remaining base assumptions.
<p>The list goes on.
<em>Explanation </em> means favoring worlds containing ideas that the user
has seen before. This implies maintaining a persistent <em>user profile</em> storing everything the user
has seen prior to this session with the system.
<p>
<em>Tutoring</em> is a combination of explanation and planning.
If the best worlds we generate via explanation are somehow sub-optimum (measured via some domain-specific
criteria) we use the planner to plot a path from the explainable worlds to the better worlds. This path
can be interpreted as a lesson plan.
<p>
<img align=right width=300 src="http://www.hdforindies.com/uploaded_images/Gollum_iPrecious5-734388.jpg">
The list goes on. But you get the idea. We started with uncertainty and got to everything else. 
Managing uncertainty is a core process in many tasks.
Abduction can serve as a unifying principle for implementing uncertain reasoning.
One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them.
<p>Well, we all know what comes next...
<br clear=all>
<h3>Abduction: Complexity and solutions</h3>
<p>
Abduction belongs to a class of problems called NP-hard. That is, there is no known fast and complete algorithm
for the general abductive problem. And we say that after decades of trying to find one.
<p>So if everything is abduction then everything is hard. Hmmm... sounds about right.
<p>
But I gave up on abduction when all my abductive inference engines ran into computational walls.
The following runtimes come from validation-as-abduction using an interpreted language on a 1993 computer.
But the general shape of this graph has been seen in other abductive inference engines.
<p><center>
	<img width=400 src="http://menzies.us/csx72/img/abductiveValidation.png">
</center>
<p>But I'm starting to think I should come back to abduction.
Here's one experiment with an ISSAMP-style world generation method for validation-as-abduction. 
Repeatedly, this abductive inference
engine built one consistent world, making random choices as it went. The algorithm terminated when new
goals stopped appearing in the generated worlds. This algorithm ran in polynomial time (as opposed to
the complete validation-as-abduction method shown above).
<p>
<center>
	<img width=400 src="http://menzies.us/csx72/img/ht0.png">
</center>
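<p>The stochastic method's control loop can be sketched as follows. This is an illustration only, not the original implementation: the toy world generator below just samples subsets of three hypothetical goals, where a real one would build consistent worlds from a theory, making random choices as it went:

```python
import random

def random_world_search(build_world, goals, patience=20):
    """Repeatedly build one random consistent world; stop once `patience`
    consecutive worlds cover no goals we have not already seen."""
    covered, quiet = set(), 0
    while quiet < patience:
        found = build_world() & goals
        if found - covered:     # a new goal appeared: keep searching
            covered |= found
            quiet = 0
        else:
            quiet += 1
    return covered

goals = {"investorConfidence", "wages", "inflation"}

def build_world():
    # Toy stand-in: each random "world" explains a random subset of the goals.
    return set(random.sample(sorted(goals), random.randint(1, len(goals))))

random.seed(1)
print(sorted(random_world_search(build_world, goals)))
```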
<p>It is hardly surprising that an incomplete random search runs faster than a complete one. But the
really significant finding was that, in the region where both algorithms terminated, the random search found
98% of the goals found by the complete search.
<p>Why? Well, that's another story for another time but if you are interested, 
then <a href="http://menzies.us/pdf/07fix.pdf">read this</a> (which is an ok paper)
<a href="http://menzies.us/pdf/08keys.pdf">or this</a>  (which is a really nice paper).
<h2>Fuzzy Logic or Abduction or ... ?</h2>
<p>By now you probably have a pretty strong view on the relative merits of fuzzy logic or abduction
(implementation complexity, generality, etc). 
So what are you going to use in your commercial AI work?
<P>Well, that depends. If you are just playing games then what-the-hell, use fuzzy logic.
<p>And if you are trying to diagnose potentially life threatening diseases using the least cost
and fewest number of invasive procedures, you might consider the complexity of abduction well worth the
implementation effort.
<p>But you're engineers- you decide.

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >fun</category>
    <category rank="1000" >emacs</category>
     <id>165</id> 
     <title>
Help me keep the shell people alive.
     </title>
     <pubdate secs="1201993343" around="Feb08">Sat Feb  2 15:02:23 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?165</link>
     <guid>http://menzies.us/csx72/?165</guid>
     <description><![CDATA[<p>
<img src="http://cache.viewimages.com/xc/2629770.jpg?v=1&c=ViewImages&k=2&d=C829214BE645D91A8E3F33BF09AC6152A55A1E4F32AD3138" width=300 align=right>

Reply to: XXXa<br>
Date: 2007-11-21, 6:46PM 
</p><p>
There is a sad truth to the world today. I am part of a dying breed of people known as "shell users." We are an old-fashioned bunch, preferring the warm glow of a green screen full of text over the cold blockiness of a graphical interface. We use ssh, scp, and even occasionally ftp. Back in the days before high-speed connections ("broadband"), we would dial up during off-hours to avoid being slammed with huge phone bills. The whole "Microsoft Windows" fad will fade away sooner or later, but in the interim, our kind is facing extinction. 
</p><p>

Because there are fewer and fewer of us, I must help keep our lineage alive. I am looking for someone to help me do this. I need a woman (obviously) who is willing to raise a child with me in the method of Unix. Our child will be introduced to computers at a young age, and will be setting emacs mode before any other child can even read. I earn a sufficient income to support a family in modest comfort. Other than the fact our child will be bright, text-based and sarcastic, we will otherwise be a normal family. We will even go to Disney World and see Mickey Mouse. 
</p><p>

So, if you are a woman between the ages of 23 and 43 who is ready to raise a child in the way of the shell, let me know so we can begin the process. (If you are ready to raise more than one child, even better.) 
</p><p>

PS - yes, this is for real. Given the right person, I would obviously propose before we ... call fork(). 
</p><p>
PPS - I only set emacs mode for my ksh session. I only edit files using vi. Just wanted to clear that up. And I'm looking to raise the child(ren) as a dedicated couple, so if you aren't interested in being married, you may wish to select() a different posting. 
</p><p>

N.B. - on the issue of relocation. I live in a place where my income/expense ratio is proper (i.e., greater than 2:1). I'm willing to live anywhere in the world where this remains true. I've been to much of the country as well as foreign nations. There are no limits to where I will live *so long as the job market for unix admins is robust enough to be sustainable.* And yes, I am interested in a strictly monogamous situation. I've been known to actually turn down offers of "two chicks at the same time." 
</p><p>

Location: Typical Rich Town, CT<br>
it's NOT ok to contact this poster with services or other commercial interests<br>
Original URL: 
<a href="http://www.craigslist.org/about/best/nyc/485967082.html">http://www.craigslist.org/about/best/nyc/485967082.html</a>

     </p>]]></description>
  </item>


  <item>
    <category rank="1000" >2read</category>
    <category rank="1000" >week5</category>
     <id>167</id> 
     <title>
        Reading, week5
     </title>
     <pubdate secs="1202652945" around="Feb08">Sun Feb 10 06:15:45 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?167</link>
     <guid>http://menzies.us/csx72/?167</guid>
     <description><![CDATA[<p>
 
          Nothing new this week- time to play catch up on our existing material:
	<ul>
	<li>My <a href="http://menzies.us/csx72/?140">search lecture</a>.
	<li>My <a href="http://menzies.us/csx72/?158">cognitive science lecture</a>
	</ul>	  


     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >2read</category>
    <category rank="1000" >reading</category>
    <category rank="1000" >week4</category>
     <id>164</id> 
     <title>
        	Reading week 4
     </title>
     <pubdate secs="1201896310" around="Feb08">Fri Feb  1 12:05:10 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?164</link>
     <guid>http://menzies.us/csx72/?164</guid>
     <description><![CDATA[<p>
<ul>
<li>
          Norvig, section 6.4 and the associated <a href="http://menzies.us/csx72/src/week4/search1.lisp">LISP file</a>.
		 <li>
	Re-read my <a href="http://menzies.us/csx72/?140">search lecture</a>.
	<li>Read my <a href="http://menzies.us/csx72/?158">cognitive science lecture</a>
</ul>
     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >review</category>
    <category rank="1000" >week4</category>
     <id>166</id> 
     <title>
        Review: week 4
     </title>
     <pubdate secs="1202652508" around="Feb08">Sun Feb 10 06:08:28 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?166</link>
     <guid>http://menzies.us/csx72/?166</guid>
     <description><![CDATA[<p>
For each of the following search methods: depth-first, breadth-first, best, beam:
<ul>
<li>Define them. Where possible, your definition should include terms like combiner,
ranker, cost function, h(x).
<li>Characterize their maximum memory and run time in terms of a search tree's
branching factor "b" and search tree depth "d".
<li>
              Comment on whether or not the algorithm is guaranteed to find an optimal solution.
			  <li>
			  Comment on the conditions under which
			      the algorithm will not return an acceptable solution.
</ul>
     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >review</category>
    <category rank="1000" >week3</category>
     <id>163</id> 
     <title>
        Review: week 3
     </title>
     <pubdate secs="1201896077" around="Feb08">Fri Feb  1 12:01:17 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?163</link>
     <guid>http://menzies.us/csx72/?163</guid>
     <description><![CDATA[<p>
 
          <ol>
		  <li>Functional programming and sorting
		  <ol type="a">
		  <li>
		  Read http://www.n-a-n-o.com/lisp/cmucl-tutorials/LISP-tutorial-21.html.
		  Write an anonymous lambda function that returns t if the car of one
		  list is numerically less than the car of another. 
		  <li>
		  Write an example of using the above
		  to sort the list ((10 . a) (5 . b) (20 . c)).
		  Do not use the :key argument.
		  <li>
		  Read   http://www.delorie.com/gnu/docs/emacs/cl_50.html.
		  Using string-lessp and :key, write the "words-sort" function
		  needed for the following function.

<pre>
(defun sort-words-demo ()
  (words-sort '(i have gone to seek a great perhaps)))
</pre>

			</ol>
			<li>Search control
			<ul type="a">
			<li>
			Write an anonymous lambda function that  returns true if some argument
			x equals the value 12.
			<li>
			Write a successors function for a binary tree that returns
			a list containing "twice the passed argument" and
			"twice the argument plus one"
			<li>
			Write a successor function of a finite tree that calls
			the above successor function and prunes any successors
			greater than some value n.
			</ul>
			<li>Search
			<ul type="a">
			<li>Give an example of an ordered search problem.
			<li>Give an example of an unordered search problem.
			<li>Consider the task of leaving the class and arriving
			at downtown Morgantown. How can this task be characterized
			in terms of [N,A,S,G], g(x), h(x), d and b?
			<li>Consider this generic search function:
<pre>
(defun tree-search (states goal-p successors combiner)
  (labels ((next ()        (funcall successors (first states)))
           (more-states () (funcall combiner (next) (rest states))))
    (cond ((null states) nil)                              ; failure. boo!
          ((funcall goal-p (first states)) (first states)) ; success!
          (t (tree-search                                  ; more to do
              (more-states) goal-p successors combiner)))))
</pre>

			<li>Starting at number "1", give an example of depth-first search
			using "tree-search". Hints:
			1) you may use the functions described
			in the above questions.
			2) In DFS, new states are "combiner"-ed to the front of the current states.

			<li>Starting at number "1", give an example of beam search
			using "tree-search". Hints:

			1) beam search is a breadth-first search that sorts the list of states
			according to some predicate.

			2) In BFS, new states are "combiner"-ed to the end of the current
			states.
			</ul>

</ol>
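<p>One possible sketch of an answer to the depth-first question (not the
official solution; it assumes a working "tree-search" as in the question above,
and the helper names "binary-tree", "finite-binary-tree" and
"depth-first-search" are illustrative):
<pre>
(defun binary-tree (x)
  "Successors in an infinite binary tree: twice x, and twice x plus one."
  (list (* 2 x) (+ 1 (* 2 x))))

(defun finite-binary-tree (n)
  "Return a successors function that prunes successors greater than n."
  (lambda (x)
    (remove-if (lambda (child) (> child n)) (binary-tree x))))

(defun depth-first-search (start goal-p successors)
  "DFS: the append combiner puts new states at the front of the list."
  (tree-search (list start) goal-p successors #'append))

;; e.g.
;; (depth-first-search 1 (lambda (x) (= x 12)) (finite-binary-tree 15))
;; => 12
</pre>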
     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >emacs</category>
    <category rank="1000" >fun</category>
     <id>162</id> 
     <title>
        	The great editor debate, settled at last
     </title>
     <pubdate secs="1201876163" around="Feb08">Fri Feb  1 06:29:23 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?162</link>
     <guid>http://menzies.us/csx72/?162</guid>
     <description><![CDATA[<p>
 
          <img src="http://imgs.xkcd.com/comics/real_programmers.png">

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >lisp</category>
     <id>161</id> 
     <title>
        	Making a stand-alone executable
     </title>
     <pubdate secs="1201405421" around="Jan08">Sat Jan 26 19:43:41 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?161</link>
     <guid>http://menzies.us/csx72/?161</guid>
     <description><![CDATA[<p>
 
          The SBCL function <em>save-lisp-and-die</em> can be
		  used to snapshot the current state of the virtual machine
		  and write that to disk. Optionally, if the <em>:executable t</em>
		  flag is added, the LISP interpreter can be added to the snapshot file.
		  The result is a (large) stand-alone executable that can be run without
		  needing the SBCL environment.
		  <p>For example, here's a file <em>misc/search1-exe.lisp</em> that builds
		  a stand-alone executable for the <a href="http://menzies.us/csx73/nova/php">NOVA</a>
		  system.
		  <pre>
(load "make.lisp")
(load "apps/search1.lisp")

(defun search1-and-quit ()
  (search1)
  (quit))

(save-lisp-and-die "tmp/search1" 
		   :toplevel 'search1-and-quit
		   :executable t )
</pre>		
<P> Note that:
<ul>
<li>This file loads the required files
<li>It defines a top-level function that does the work, then <em>(quit)</em>s LISP
<li>It calls <em>save-lisp-and-die</em>
</ul>
<P>If this is loaded into LISP, then the system is made and saved to disk. Then...
<pre>
cd tmp
./search1 --noinform
</pre>
will run the file <em>tmp/search1</em>. 
<p>That's the good news. The bad news is that the executable contains <em>all</em> of LISP.
The example above generated a 24MB file from less than 2000 lines of LISP. 
Oh well.
</p>]]></description>
  </item>


  <item>
    <category rank="1000" >week3</category>
    <category rank="1000" >2read</category>
    <category rank="1000" >reading</category>
     <id>160</id> 
     <title>
        	Week3 : reading
     </title>
     <pubdate secs="1201204237" around="Jan08">Thu Jan 24 11:50:37 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?160</link>
     <guid>http://menzies.us/csx72/?160</guid>
     <description><![CDATA[<p>
 
          Same as <a href="http://menzies.us/csx72/?141">week2</a>, plus
		  the lecture on <a href="http://menzies.us/csx72/?158">cognitive science</a>.

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >week2</category>
    <category rank="1000" >review</category>
     <id>159</id> 
     <title>
        	Week 2 review questions
     </title>
     <pubdate secs="1201202920" around="Jan08">Thu Jan 24 11:28:40 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?159</link>
     <guid>http://menzies.us/csx72/?159</guid>
     <description><![CDATA[<p>
(Note to CS472 students. While the quiz will be drawn from questions like the following,
the precise details of the exam questions may be somewhat different from the following. So don't
rote-learn these; rather, strive to understand the principles behind these questions.)
          <ol>
		  	<li>Write a function to exponentiate, or raise a number to an integer power.<br> For example:
			<em>(power 3 2)= 3*3 = 9</em><br>Do it three ways (and, for a hint, see 
				<a href="http://menzies.us/csx72/?151">here</a>)
			<ol type="a">
				<li>Do it using a looping construct (no recursion).
					<li>Do it using a recursive call to the same top-level function
					<li>Do it using a helper function defined inside the top-level function.
				</ol><br>
				<li>Write a function that counts the number of times an expression occurs anywhere within another 
				expression. Example:
				<br><em>(count-anywhere 'a '(a ((a) b) a)) ==> 3</em><br> 
				Hint: your function should 
				recurse on all parts of the list.
	<br>&nbsp;<br>
				<li>Consider the following function:
				<ol type="a">
				<li>what do <em>apply</em>, <em>append</em>, <em>mapcar</em>
				do?
<pre>
(defun mappend (fn list)
  (apply #'append (mapcar fn list)))
</pre>
<li>What is the output of this code?
<pre>
(defun numbers-and-negations (input)
    (mappend #'number-and-negation input))

(defun number-and-negation (x)
  "If x is a number, return a list of x and -x."
    (if (numberp x)
	     (list x (- x))
        nil))
</pre>
</ol><br>

				<li>Write  grammars recording the  steps required to achieve some goal:
					<ol type="a"><li>For getting a university degree e.g. <pre>
(defparameter *education*
		'((graduate -> preschool high-school undergrad)
		  (high-school -> grade-school high-years)
		  (high-years -> hiyear1 hiyear2 hiyear3 hiyear4)
...)
</pre>
<li>Your grammar should support stochastic plan generation. Does it? Why? 
<li>Repeat the above for the task of assembling  a car.
<li>Augment the grammar with some benefit score in the head and some cost score in the tail (so when a goal is achieved,
we add that to a "hooray!" score while adding the cost of the tail to some "boo!" score)
</ol>
</ol>
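<p>For example, one possible sketch of an answer to the "count-anywhere"
question (not the official solution):
<pre>
(defun count-anywhere (item tree)
  "Count how often item appears anywhere within tree."
  (cond ((eql item tree) 1)   ; a match counts as one
        ((atom tree) 0)       ; any other atom counts as zero
        (t (+ (count-anywhere item (car tree))      ; otherwise, recurse on
              (count-anywhere item (cdr tree))))))  ; both halves of the list
</pre>
<p>E.g.
<pre>
(count-anywhere 'a '(a ((a) b) a)) => 3
</pre>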

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >lisp</category>
     <id>157</id> 
     <title>
        	LISP's Local Global Trick 
     </title>
     <pubdate secs="1200868161" around="Jan08">Sun Jan 20 14:29:21 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?157</link>
     <guid>http://menzies.us/csx72/?157</guid>
     <description><![CDATA[<p>
 
          Globals are good; they stop you having to pass around big parameter lists.
		  </p>
		  <p>
		  Globals are bad; side-effects from one function can ruin another.
     </p>
	 <p>
	 With LISP's local-global trick you can have it both ways. A global is copied
	 to a local temporary that can be referenced by functions called from some
	 master function. BUT, and here's the trick, all changes made by these
	 sub-functions just <em>go away</em> when they terminate.
<P>Here's how it works. First, define a global parameter:
<pre>
(defparameter *x* 1)
</pre>
<p>Now, define a main function that calls some worker function.
<pre>
(defun main()
  (format t "before ~a~%" *x*)
  (worker)
  (format t "after ~a~%" *x*)
)
</pre>
<p>The worker rebinds the global using an "&optional" parameter,
then calls some underlings to get the job done.
<pre>
(defun worker (&optional (*x* *x*))
  (underling)
)

(defun underling ()
  (format t "during1 ~a~%" *x*)
  (dotimes (i 3)
    (incf *x* i)
    (format t "during2 ~a~%" *x*)))
</pre>
<p>Now, when the worker is called, all the changes made by the underlings disappear when
the work is done. Observe in the following how the underling makes many changes
to "*x*" which are gone when the worker terminates:
<pre>
CL-USER> (main)
before 1
during1 1
during2 1
during2 2
during2 4
after 1
</pre>
 	]]></description>
  </item>

  <item>
    <category rank="1000" >lisp</category>
     <id>156</id> 
     <title>
        	Sum, mean, median of numbers
     </title>
     <pubdate secs="1200784914" around="Jan08">Sat Jan 19 15:21:54 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?156</link>
     <guid>http://menzies.us/csx72/?156</guid>
     <description><![CDATA[<p>
LISP functions can accept any number of arguments after the "&amp;rest" keyword.
So, to add up any number of "nums":
<pre>
(defun sum (&rest nums)
  (let ((sum 0))
    (dolist (one nums sum)
      (incf sum one))))  
</pre>
<p>E.g.
<pre>
(SUM 1 1 1 1 2 2 4 100) => 112
</pre>
<p>And, if you want the mean value, divide the sum by the number of items in the list:
<pre>
(defun mean (&rest nums)
  (let ((sum 0)
        (n   0))
    (dolist (one nums (/ sum n))
      (incf n)
      (incf sum one))))
</pre>
E.g.
<pre>
(MEAN 1 1 1 1 2 2 4 100) => 14
</pre>
<p>LISP functions can also return more than one value using the "values" keyword. Here,
we compute mean and standard deviation of a set of numbers.
<pre>
(defun mean-sd (&rest nums)
  (let ((sum 0)
        (n   0)
        (sumSq 0))
    (labels ((mean () (/ sum n))
             (sd   () (sqrt (/ (- sumSq(/ (* sum sum) n)) (- n 1) ))))
      (dolist (one nums)
         (incf n)
         (incf sum   one)
         (incf sumSq (* one one)))
      (values 
       (mean) 
       (sd)))))
</pre>
E.g.
<pre>
(MEAN-sd 1 1 1 1 2 2 4 100) => 14 34.764515
</pre>
<p>We can use the same kind of function to return the median and the "spread"
of a set of numbers. The median value is the point at which half the values lie below
it:
<ul><li> If a sorted list of numbers is of odd size, it is just the middle value.
<li>However, if that list is of even size, then it is some number in between
the middle two values.
</ul>
<p>This function also returns "spread", i.e. the difference between the 75% and 50% values.
"Spread" (not the official technical name) is useful for measuring the expected
deviation from the median.

<pre>
(defun median (&rest nums) 
  "return 50% and (75-50)% values"
  (let* ((n1         (sort nums #'<))
         (l          (length n1))
         (mid        (floor (/ l 2)))
         (midval     (nth mid  n1))
         (75percent  (nth (floor (* l 0.75)) n1))
         (50percent  (if (oddp l) 
                         midval
                         (mean midval (nth (- mid 1) n1)))))    
    (values  
     50percent
     (- 75percent 
        50percent))))
</pre>
E.g.
<pre>
(MEDIAN 1 1 1 1 2 2 4 100) => 3/2 5/2
</pre>
     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >lisp</category>
     <id>155</id> 
     <title>
        	How to load files faster
     </title>
     <pubdate secs="1200760531" around="Jan08">Sat Jan 19 08:35:31 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?155</link>
     <guid>http://menzies.us/csx72/?155</guid>
     <description><![CDATA[<p>
 
          In this tutorial, we show how to speed up loading of LISP code. For the case
		  study shown here, 700 lines of lisp were loaded slowly, in 0.975 seconds,
		  then quickly in 0.175 seconds (over five times faster). Obviously,
that speed-up factor is
		  system dependent.
<p>(Oh, and in case there are typos in the following, the on-line code is
<a href="http://unbox.org/wisp/var/timm/08/ai/src/nova/make.lisp">here</a>.)	  
<h3>(maker :files files2load :faslp boolean)</h3>
		  <p>"Maker" is a lisp function that, optionally, pre-processes textual .lisp files
		  into a binary fast load .fasl format.
		  <pre>
#+SBCL (DECLAIM (SB-EXT:MUFFLE-CONDITIONS CL:STYLE-WARNING))

(defun maker ( &key files 
	       faslp   ; if nil,  just load, don't fasl
	       forcep  ; always force a re-compile?
	      )
  (labels ; some one-liners for compiling and loading files
      ((filename (x y)     (string-downcase (format nil "~a.~a"  x y)))
       (src      (f)       (filename f "lisp"))
       (bin      (f)       (filename f "fasl"))   
       (newerp   (f1 f2 )  (> (file-write-date f1)  
			      (file-write-date f2)))
       (compile? (src bin) (or forcep
			       (not (probe-file bin))
			       (newerp src bin)))
       (update   (f)       (if (compile? (src f) (bin f)) 
			     (compile-file (src f))))
       (cake     (f)       (update f) (load (bin f)))
       (loader   (f)       (format t ";;;; FILE: ~a~%" f) (load f))
       (make1    (f)       (if faslp 
			       (cake f) 
			       (loader (src f)))))

    (mapc #'make1 files)))
</pre>
<h3>The slow way: "(maker :files files2load)"</h3>
<p>In this first example, we turn off fasl pre-processing and load stuff the slow way:
<pre>
(defun make-slow () 
  "list your files to load"
  (maker   :files '( 
		  macros
		  eg
		  lib
		  guess-db
		  config
		  guess
		  coc-lib
		  demos
		  )))
</pre>
<h3>Faster:  "(maker :faslp t :files files2load)"</h3>
<p>
In this example, we enable fasl pre-processing. Note that a new .fasl file is created in the 
current directory and, whenever the .lisp file has a newer change date than its .fasl file,
fasl pre-processing is repeated.
<pre>
(defun make-fast () 
  "list your files to load"
  (maker  :faslp t 
          :files '( 
		  macros
		  eg
		  lib
		  guess-db
		  config
		  guess
		  coc-lib
		  demos
		  )))
</pre>
<h3>And the winner is...</h3>
 <p>In the usual case, only a few source files are ever changed so fasl pre-processing
is required for only a small part of the system. 
<p>In the worst case, all the files are loaded the slow way.
<p>In the best case, everything has been fasl-ed
so "compilation" becomes "just load the fasls".
<p>The following times just compare the worst case with the best case:
<pre>
(time (make-slow))

Evaluation took:
  0.975 seconds of real time
  0.912261 seconds of user run time
  0.050491 seconds of system run time
  [Run times include 0.103 seconds GC run time.]
  0 calls to %EVAL
  0 page faults and
  44,846,872 bytes consed.
(MACROS EG LIB GUESS-DB CONFIG GUESS COC-LIB DEMOS)
</pre>
Note that the worst case took 0.975 seconds and the best case, shown below, took 0.175 seconds.
<pre>
(time (make-fast))

Evaluation took:
  0.175 seconds of real time
  0.164378 seconds of user run time
  0.005954 seconds of system run time
  0 calls to %EVAL
  0 page faults and
  9,153,928 bytes consed.
(MACROS EG LIB GUESS-DB CONFIG GUESS COC-LIB DEMOS)
</pre>
</p>]]></description>
  </item>

  <item>
    <category rank="1000" >lisp</category>
     <id>151</id> 
     <title>
        	To iterate or recurse, that is the question.
     </title>
     <pubdate secs="1200679891" around="Jan08">Fri Jan 18 10:11:32 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?151</link>
     <guid>http://menzies.us/csx72/?151</guid>
     <description><![CDATA[<p>
 
          Let's sum up the first 1,000 integers.
		  <p>Solution 1: recursion with funky initializations in variable list:
		  <pre>(defun n1   (&optional (x 1000)) 
  (if (< x 0) 
      0
      (+ x (n1 (- x 1))))) </pre>
          </p>
          <p>Solution 2: recursion, but using a helper function called by the
              main function with the default value:
            <pre>(defun n2 ()
  (labels 
      ((n0 (x)
	 (if (< x 0)
	     0
	     (+ x (n0 (- x 1))))))
    (n0 1000))) </pre>
     </p><p>
     Solution 3: no recursion:
     <pre>(defun n3 ()
  (do* ((x   1000 (1- x))
	(sum x    (+ sum x)))
       ((< x 0) sum))) </pre>
<p>Timings:
<pre>
(defun ns ()
  (let ((r 100000)) 
      (print   t) (time (dotimes (i r) t)) 
      (print 'n1) (time (dotimes (i r) (n1))) 
      (print 'n2) (time (dotimes (i r) (n2))) 
      (print 'n3) (time (dotimes (i r) (n3)))))
</pre>
<p>Results:
<pre>
CL-USER> (ns)

T 
Evaluation took:
  0.0 seconds of real time
  1.03e-4 seconds of user run time
  2.e-6 seconds of system run time

N1 
Evaluation took:
  3.167 seconds of real time
  3.154306 seconds of user run time
  0.002845 seconds of system run time

N2 
Evaluation took:
  2.818 seconds of real time
  2.812621 seconds of user run time
  0.001603 seconds of system run time

N3 
Evaluation took:
  0.705 seconds of real time
  0.703598 seconds of user run time
  5.08e-4 seconds of system run time
</pre>
<p>Conclusions:
<ul>
<li>Recursion increased runtimes by a factor of around 4.
<li>Keyword tricks added little to the runtime (2.818 secs to 3.167 secs)
</ul>

]]></description>

  </item>

  <item>
    <category rank="1000" >lisp</category>
     <id>152</id> 
     <title>
	   Profiling code in SBCL
     </title>
     <pubdate secs="1200710753" around="Jan08">Fri Jan 18 18:45:53 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?152</link>
     <guid>http://menzies.us/csx72/?152</guid>
     <description><![CDATA[<p>
<p>
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
 <ul><li>
Knuth, Donald. Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268.</ul>
Advice to any programmer: code it clean, profile the code,
then only optimize the stuff that needs optimizing.
</p>
<p>
For example, say there is some code "(demo456)" that does "something". In SBCL, this is how we 
could learn where the time goes.
<p>First, we need a list of functions to profile:
<pre>
(defun profiles () 
  (sb-profile:profile
				?bag
				?dr
				?elt
				?em
				?num
				?one-a
				?one-b
				...
   ))
</pre>
<p>Hint: to get a list of all functions in the lisp code in a directory, run
<pre>
grep defun *.lisp | gawk '{print $2}' 
</pre>
<p>
Then add this code to your lisp interpreter:
<pre>
(defun watch (code)
  (sb-profile:unprofile)
  (sb-profile:reset)
  (profiles)
  (eval code)
  (sb-profile:report)
  (sb-profile:unprofile)
)
</pre>
<p>Then, "watch" something:
<pre>
(watch '(demo456))
</pre>
<P>This outputs something like this:
           <pre>
CL-USER> (watch '(demo456))
WARNING: ignoring undefined function @DR
  seconds  |   consed   |  calls  |  sec/call  |  name  
----------------------------------------------------------
     0.143 |  2,781,184 | 170,000 |   0.000001 | EM2EFFORT
     0.101 |  2,813,952 | 170,000 |   0.000001 | EM2CIN
     0.100 |  2,658,304 | 170,000 |   0.000001 | EM2DIN
     0.067 |  2,678,784 | 170,000 |  0.0000004 | EM2RIN
     0.065 |    413,696 |  50,000 |   0.000001 | SF2EFFORT
     0.033 |    348,160 |  50,000 |   0.000001 | SF2CIN
     0.027 |    172,032 |  30,000 |   0.000001 | DR2ROUT
     0.024 |    192,512 |  30,000 |   0.000001 | DR2COUT
     0.023 |    421,888 |  50,000 |  0.0000005 | SF2RIN
     0.021 |    401,408 |  50,000 |  0.0000004 | SF2DIN
     0.008 |    167,936 |  10,124 |   0.000001 | MY-RANDOM
     0.001 |          0 |      28 |   0.000033 | GETA
     0.001 |          0 |       5 |   0.000180 | ?SF
     0.000 |          0 |       1 |   0.000000 | POINT-TO-LINE
     0.000 |          0 |       1 |   0.000000 | LINE-Y
     0.000 |          0 |       1 |   0.000000 | INIT-DB
     0.000 |          0 |      27 |   0.000000 | GUESS
     0.000 |    172,032 |  30,000 |   0.000000 | DR2DOUT
     0.000 |          0 |       1 |   0.000000 | COCOMO-DEFAULTS
     0.000 |          0 |       1 |   0.000000 | ?ONE-B
     0.000 |          0 |       1 |   0.000000 | ?ONE-A
     0.000 |          0 |      17 |   0.000000 | ?EM
     0.000 |          0 |      25 |   0.000000 | ?ELT
     0.000 |          0 |       3 |   0.000000 | ?DR
     0.000 |          0 |      25 |   0.000000 | ?BAG
----------------------------------------------------------
     0.613 | 13,221,888 | 980,260 |            | Total

estimated total profiling overhead: 4.69 seconds
overhead estimation parameters:
  1.8e-8s/call, 4.784e-6s total profiling, 2.264e-6s internal profiling

These functions were not called:
 ?NUM ?QUANTITY AS-LIST COC-LIB1 DEMO DEMO-GUESS DEMO-GUESS1 DEMO123 DEMOF
 EG EG0 EGS GUESS-DEMO-ALL GUESS0A GUESS0B GUESS0C GUESS0D GUESS0E GUESS0F
 GUESS0G HINGED-LINE HINGED-LINE-COQUALMO MAKE MAKER MY-COMMAND-LINE
 MY-GETENV PIVOTED-LINE SYM-PRIM ZAP ZAPS
NIL
CL-USER> 
</pre>
<p>Looking at the above, if we can speed up "em2effort", "em2cin" and "em2din" then we'd halve the runtime.
     </p>]]></description>
  </item>


  <item>
    <category rank="1000" >lisp</category>
     <id>154</id> 
     <title>
		A better way to profile SBCL code
     </title>
     <pubdate secs="1200715288" around="Jan08">Fri Jan 18 20:01:28 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?154</link>
     <guid>http://menzies.us/csx72/?154</guid>
     <description><![CDATA[<p>
 
<a href="http://menzies.us/csx72/?152">Previously</a>, 
we described a simple method to profile selected functions in SBCL Lisp.
     </p>
<p>But what about the related task of profiling <em>all</em> your code?
The code <a href="http://unbox.org/wisp/var/timm/08/ai/src/profile.lisp">http://unbox.org/wisp/var/timm/08/ai/src/profile.lisp</a>
illustrates one method.
<h3>*lisp-funs*</h3>
<p>That code contains a list of all symbols in a standard release of Common LISP and SBCL:
<pre>
(defparameter *lisp-funs* '(
			    *
			    *
			    **
			    ***
			    +
			    +
			    ++
			    +++
			    -
			    -
			    /
			    /
			    //
			    ///
			    1+
			    1-
			    <
			    <=
			    =
			    >
			    >=
			    abort
			    abs
			    acons
			    acos
                 
                etc
))
</pre>
<h3>(my-funs)</h3>
There is also a 
trick to find all the symbols in the current *package* that are bound to a function (i.e.
satisfy "fboundp") but aren't in "*lisp-funs*".
<pre>
(defun my-funs ()
  (let ((out '()))
    (do-symbols  (s)
      (if (and (fboundp s)
	       (find-symbol  (format nil "~a" s) *package*)
	       (not (member s *lisp-funs*)))
	  (push s out)))
    out))
</pre>
<h3>(watch code)</h3>
<p>With "my-funs", we can write a "watch" macro that wraps the SBCL profiling calls:
<pre>
(defmacro watch (code)
  `(progn
    (sb-profile:unprofile)
    (sb-profile:reset)
    (sb-profile:profile ,@(my-funs))
    (eval ,code)
    (sb-profile:report)
    (sb-profile:unprofile)
    t)
)
</pre>
<h3>Example: (watch (main))</h3>
<p>With all the above, then we can profile all the functions called from some high-level "(main)" function, as follows:
<pre>
(watch (main))
</pre>
<p>The result is a standard SBCL profiler output. Note that, in the following,
the "!" function takes 0.447/0.707 (i.e. over half) the run time.
So if we are going to optimize anything, optimize "!".
<pre>
CL-USER> (watch (main))
  seconds  |   consed   |   calls   |  sec/call  |  name  
------------------------------------------------------------
     0.447 |          0 |   990,001 |  0.0000005 | !
     0.105 |  6,819,840 |   170,000 |   0.000001 | EM2EFFORT
     0.073 |  6,795,264 |   170,000 |  0.0000004 | EM2RIN
     0.027 |  6,778,880 |   170,000 |  0.0000002 | EM2CIN
     0.020 |  1,585,152 |    50,000 |  0.0000004 | SF2CIN
     0.016 |  1,630,208 |    50,000 |  0.0000003 | SF2DIN
     0.012 |  6,742,016 |   170,000 |  0.0000001 | EM2DIN
     0.006 |    946,176 |    30,000 |  0.0000002 | DR2ROUT
     0.001 |          0 |        14 |   0.000067 | MAKE-R15
     0.001 |      8,192 |         1 |   0.000539 | COCOMO-DEFAULTS
     0.000 |          0 |         5 |   0.000000 | MAKE-SFS
     0.000 |          0 |         1 |   0.000000 | LINE-Y
     0.000 |          0 |        27 |   0.000000 | ?
     0.000 |          0 |         1 |   0.000000 | INIT-DB
     0.000 |          0 |         5 |   0.000000 | MAKE-R16
     0.000 |          0 |         7 |   0.000000 | MAKE-RIN+
     0.000 |  1,695,744 |    50,000 |   0.000000 | SF2EFFORT
     0.000 |          0 |         1 |   0.000000 | MAKE-R26
     0.000 |          0 |         9 |   0.000000 | MAKE-EM-
     0.000 |          0 |         6 |   0.000000 | MAKE-DIN+
     0.000 |          0 |         1 |   0.000000 | ?ONE-B
     0.000 |          0 |         1 |   0.000000 | MAKE-ONE-A
     0.000 |          0 |        27 |   0.000000 | GUESS
     0.000 |          0 |        25 |   0.000000 | ?ELT
     0.000 |          0 |         5 |   0.000000 | MAKE-DSF
     0.000 |    974,848 |    30,000 |   0.000000 | DR2DOUT
     0.000 |          0 |        34 |   0.000000 | MAKE-EM
     0.000 |          0 |        25 |   0.000000 | ?BAG
     0.000 |          0 |         6 |   0.000000 | MAKE-DR
     0.000 |          0 |        10 |   0.000000 | MAKE-RIN-
     0.000 |          0 |         5 |   0.000000 | MAKE-RSF
     0.000 |          0 |         3 |   0.000000 | ?DR
     0.000 |          0 |         1 |   0.000000 | POINT-TO-LINE
     0.000 |          0 |         3 |   0.000000 | MAKE-RDR
     0.000 |          0 |         1 |   0.000000 | DEMO456
     0.000 |          0 |         1 |   0.000000 | MAKE-LINE
     0.000 |          0 |         3 |   0.000000 | MAKE-R25
     0.000 |          0 |         3 |   0.000000 | MAKE-DDR
     0.000 |          0 |         5 |   0.000000 | MAKE-CSF
     0.000 |          0 |        10 |   0.000000 | MAKE-SF
     0.000 |          0 |         6 |   0.000000 | MAKE-CIN+
     0.000 |          0 |        11 |   0.000000 | MAKE-CIN-
     0.000 |          0 |        28 |   0.000000 | GETA
     0.000 |  1,482,752 |    50,000 |   0.000000 | SF2RIN
     0.000 |          0 |         5 |   0.000000 | ?SF
     0.000 |      8,192 |        17 |   0.000000 | ?EM
     0.000 |          0 |         8 |   0.000000 | MAKE-EM+
     0.000 |          0 |         2 |   0.000000 | MAKE-NUM
     0.000 |          0 |         1 |   0.000000 | ?ONE-A
     0.000 |    958,464 |    30,000 |   0.000000 | DR2COUT
     0.000 |          0 |         3 |   0.000000 | MAKE-CODR
     0.000 |          0 |         1 |   0.000000 | MAKE-ONE-B
     0.000 | 24,076,288 |         1 |   0.000000 | DEMO456A
     0.000 |          0 |        25 |   0.000000 | COCO
     0.000 |          0 |         2 |   0.000000 | MAKE-R36
     0.000 |          0 |        11 |   0.000000 | MAKE-DIN-
     0.000 |          0 |         1 |   0.000000 | MAKE-DB
     0.000 |    155,648 |    10,124 |   0.000000 | MY-RANDOM
------------------------------------------------------------
     0.707 | 60,657,664 | 1,970,493 |            | Total
</pre>
]]></description>
  </item>


  <item>
    <category rank="1000" >news</category>
     <id>153</id> 
     <title>
	Some programming tips
     </title>
     <pubdate secs="1200711437" around="Jan08">Fri Jan 18 18:57:17 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?153</link>
     <guid>http://menzies.us/csx72/?153</guid>
     <description><![CDATA[<p>
 
<ul>
<li><a href="http://menzies.us/csx72/?151">Cost benefits of recursion/iteration</a>.
<li>How to <a href="http://menzies.us/csx72/?152">profile some </a> of your code to find hotspots.
<li>How to <a href="http://menzies.us/csx72/?154">profile all </a> of your code.
</ul>

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >news</category>
     <id>145</id> 
     <title>
        	Week 1 news
     </title>
     <pubdate secs="1200596642" around="Jan08">Thu Jan 17 11:04:02 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?145</link>
     <guid>http://menzies.us/csx72/?145</guid>
     <description><![CDATA[
<ol><li> 
          More AI fun (courtesy of Josh Williams): is LISP <a href="http://menzies.us/csx72?144">God's language</a>?

     <li>Week1 review questions <a href="http://menzies.us/csx72/?150">on-line</a>.</P>
	 <li>Jonathon Lynch has a fix for the <a href="http://wisp.unbox.org/private.cgi/2008-unbox.org/2008-January/000004.html">can't find LISP</a> problem.
	 </ol>]]></description>
  </item>

  <item>
    <category rank="1000" >news</category>
     <id>143</id> 
     <title>
        Week 2 work now on-line.
     </title>
     <pubdate secs="1200288271" around="Jan08">Sun Jan 13 21:24:31 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?143</link>
     <guid>http://menzies.us/csx72/?143</guid>
     <description><![CDATA[<p>
 
          See <a href="http://menzies.us/csx72/?week2">http://menzies.us/csx72/?week2</a>.

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >week2</category>
    <category rank="1000" >2read</category>
     <id>141</id> 
     <title>
Week 2: reading
     </title>
     <pubdate secs="1200270378" around="Jan08">Sun Jan 13 16:26:18 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?141</link>
     <guid>http://menzies.us/csx72/?141</guid>
     <description><![CDATA[<p>
 
All students:
<ul><li>Chapter three of Norvig's Paradigms of AI.
<li>My <a href="http://menzies.us/csx72/?140">search lecture</a>
</ul>
Optional: Norvig, section 6.4.

     </p><p>
	 Cs572 students:
	 <ul>
	 	<li>Pages 4 to 8, sections 7.2 and 8 of 
		<a href="http://menzies.us/csx72/doc/cocomo/xomo102.pdf">XOMO: Understanding Development Options for Autonomy</a>
		</ul></p>]]></description>
  </item>

  <item>
    <category rank="1000" >week2</category>
    <category rank="1000" >lab</category>
     <id>142</id> 
     <title>
        	Week 2: lab
     </title>
     <pubdate secs="1200287368" around="Jan08">Sun Jan 13 21:09:28 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?142</link>
     <guid>http://menzies.us/csx72/?142</guid>
     <description><![CDATA[
<p>Get the code:</p>
<pre>
mkdir -p $HOME/opt/lisp
cd $HOME/opt/lisp
svn export http://unbox.org/wisp/var/timm/08/ai/src/week2/ week2
</pre>
<p>If that works, you should see:
<pre>
A    week2
A    week2/simple.lisp
A    week2/examples-1-2-3.lisp
</pre>  
<p>
"Examples-1-2-3.lisp" contains samples from Norvig's
chapters 1, 2, and 3. Try running and understanding them all
		  (if you
		  can't work out what's going on, try reading more of the
		  textbook).
</p>
<p>To run the examples, place the cursor at the end of each
form and evaluate just that form (using Cnt-x-e).</p>
<p>Note that some of the examples set up some variables,
then destructively modify them. So, to reproduce some
of the examples, you may have to go back a few examples and reset
those variables.
</p>
     ]]></description>
  </item>

  <item>
    <category rank="1000" >lecture</category>
    <category rank="1000" >week2</category>
    <category rank="1000" >search</category>
     <id>140</id> 
     <title>
	Search
     </title>
     <pubdate secs="1200269465" around="Jan08">Sun Jan 13 16:11:05 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?140</link>
     <guid>http://menzies.us/csx72/?140</guid>
     <description><![CDATA[<p>
 
<p>
Search is a universal problem solving mechanism in AI. The sequence of steps required to solve a problem is not known a priori; it must be determined by
searching through the alternatives.
<p>In computer science, a state space is a description of a configuration of discrete states used as a simple model of machines. Formally, it can be defined as a tuple [N, A, S, G] where:
<ul>

   <li>  N is a set of states, each with "successors"
   <li> A is a set of arcs connecting each state to its successors
   <li> S is a nonempty subset of N that contains the start states
   <li> G is a nonempty subset of N that contains the goal states.
</ul>

<p>Often, there are oracles that offer clues on the current progress in the search
<ul>
<li>g(x) is a measure of "distance" of state "x" from start. 
<li>h(x) is a measure of "cost" required to get from state "x" to the goal.  
</ul>
<p>Note that "g(x)" can be known exactly (since it is a log of the past),
while "h(x)" is a heuristic guess-timate (since we have not searched there yet).

<p>
The state space is what state space search searches in. Graph theory is helpful in understanding and reasoning about state spaces.
<p>
A state space has some common properties:
<ul>
    <li> complexity, where branching factor "b" is important
   <li> structure of the space, see also graph theory:
          <ul>
      <li> directionality of arcs
          <li> tree
          <li> rooted graph
</ul>
</ul>

<p>In this lecture, we will explore two kinds of search engines:
<ul>
<li>Ordered-search; e.g. the tree and graph searchers discussed below.
In this kind of search, the solution spreads out in a wave over some solution space.
<li>
Unordered-search, where
a partial solution is quickly (randomly?) generated, then maybe fiddled with.
A common unordered search method is to generate a number of slots, each with random values,
as done in simulated annealing or MAXWALKSAT (discussed below).
 
</ul>
<p>Ordered search is useful for problems with some inherent ordering (e.g. walking a maze).
<P>
Unordered search is  useful for problems where ordering does not matter.
For example, if a simulator accepts N inputs, then an unordered search might supply M &le; N inputs and the rest
are filled in at random.


<h2>Problem Search Strategies</h2>

<p>In practice, the full state space may not be pre-computed and cached. Rather, the successors
to some state "s" may be
computed  using some "successors" function  
only when, or if, we ever reach "s". In the following code:
<ul>
<li> Some "goal-p" function checks if we have arrived at our goal.
<li> Some "combiner" function
 joins the successors with the rest of the states
(maybe removing duplicates).
</ul>
<pre>
(defun tree-search (states goal-p successors combiner)
  (labels ((next ()        (funcall successors (first states)))
           (more-states () (funcall combiner (next) (rest states))))
    (cond ((null states) nil)                              ; failure. boo!
          ((funcall goal-p (first states)) (first states)) ; success!
          (t (tree-search                                  ; more to do
               (more-states) goal-p successors combiner)))))
</pre>
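<p>For readers more comfortable outside LISP, the same idea can be sketched in Python (the names and the toy problem here are my own illustration, not Norvig's code). Note how the "combiner" argument alone decides the search strategy:

```python
def tree_search(states, goal_p, successors, combiner):
    """Generic tree search: the combiner argument fixes the strategy."""
    while states:
        first, rest = states[0], states[1:]
        if goal_p(first):
            return first                     # success!
        states = combiner(successors(first), rest)
    return None                              # failure. boo!

# A toy space: each number x has successors 2x and 2x+1 (up to a bound).
successors = lambda x: [2 * x, 2 * x + 1] if x < 16 else []
goal_p     = lambda x: x == 12

dfs = lambda new, old: new + old   # new states up front: depth-first
bfs = lambda new, old: old + new   # new states at the back: breadth-first

print(tree_search([1], goal_p, successors, dfs))  # both find 12
print(tree_search([1], goal_p, successors, bfs))
```

Swapping "dfs" for "bfs" changes the order in which states are expanded, but not the machinery, which is exactly the point of this design.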
<p>To apply such a searcher to a problem, we must:
<ul>
<li>
    Devise a representation scheme for states
   <li>Describe an initial and a final state
   <li> Describe operators
   <li> Select which state to expand next
   <li>Recognize the goal when generated
</ul>
<p>Example: a monkey pushes a chair under a banana, climbs on the chair, and eats
the banana:
<pre>
(defparameter *banana-ops*
  (list
    (op 'climb-on-chair
        :preconds '(chair-at-middle-room at-middle-room on-floor)
        :add-list '(at-bananas on-chair)
        :del-list '(at-middle-room on-floor))
    (op 'push-chair-from-door-to-middle-room
        :preconds '(chair-at-door at-door)
        :add-list '(chair-at-middle-room at-middle-room)
        :del-list '(chair-at-door at-door))
    (op 'walk-from-door-to-middle-room
        :preconds '(at-door on-floor)
        :add-list '(at-middle-room)
        :del-list '(at-door))
    (op 'grasp-bananas
        :preconds '(at-bananas empty-handed)
        :add-list '(has-bananas)
        :del-list '(empty-handed))
    (op 'drop-ball
        :preconds '(has-ball)
        :add-list '(empty-handed)
        :del-list '(has-ball))
    (op 'eat-bananas
        :preconds '(has-bananas)
        :add-list '(empty-handed not-hungry)
        :del-list '(has-bananas hungry))))
</pre>
<p>If the search space is small, then it is possible to write it manually (see above).
<p>More commonly, the search space is auto-generated from some other representations
(like the two examples that follow).
<p>Here's walking a maze where "op" is inferred from the maze description:
<pre>
(defparameter *maze-ops*
  (mappend #'make-maze-ops
     '((1 2) (2 3) (3 4) (4 9) (9 14) (9 8) (8 7) (7 12) (12 13)
       (12 11) (11 6) (11 16) (16 17) (17 22) (21 22) (22 23)
       (23 18) (23 24) (24 19) (19 20) (20 15) (15 10) (10 5) (20 25))))

(defun make-maze-op (here there)
  "Make an operator to move between two places"
  (op `(move from ,here to ,there)
      :preconds `((at ,here))
      :add-list `((at ,there))
      :del-list `((at ,here))))

(defun make-maze-ops (pair)
  "Make maze ops in both directions"
  (list (make-maze-op (first pair) (second pair))
        (make-maze-op (second pair) (first pair))))
</pre>
Here's an example that stacks boxes till they reach some desired order.
<pre>
(defun move-op (a b c)
  "Make an operator to move A from B to C."
  (op `(move ,a from ,b to ,c)
      :preconds `((space on ,a) (space on ,c) (,a on ,b))
      :add-list (move-ons a b c)
      :del-list (move-ons a c b)))

(defun move-ons (a b c)
  (if (eq b 'table)
      `((,a on ,c))
      `((,a on ,c) (space on ,b))))

(defun make-block-ops (blocks)
  (let ((ops nil))
    (dolist (a blocks)
      (dolist (b blocks)
        (unless (equal a b)
          (dolist (c blocks)
            (unless (or (equal c a) (equal c b))
              (push (move-op a b c) ops)))
          (push (move-op a 'table b) ops)
          (push (move-op a b 'table) ops))))
    ops))
</pre>

<h2>Search Trees</h2>
<p>
General search methods explore:
<ul>
<li>to some depth "d"
<li>a tree that branches at a rate of "b" out-arcs per node
</ul>
   Depth first search:
          <ul><li>running time O(b^d)
          <li> least space: O(b * d)
(at most one branch from root to a leaf, with "b" options per node)
          <li> may not find the minimum cost solution
</ul>
    Breadth first search:
         <ul><li> running time O(b^d)
          <li>most space: O(b^d)
          <li>least cost: finds the shortest path to the goal
	<li>Variants: Given that
you've reached depth "d", score the current  partial solutions.
<ul>
<li>best-first search
<ul>
<li> Sort the solutions and expand them in the order best to worst.
</ul>
<li>beam-search
<ul>
<li>Only expand the "N" best ones
<li>Keep N small (10 or 20) to constrain search size.
<li>Note: small beams mean less memory but make the search incomplete (may miss solutions).
</ul>
</ul>
</ul>
   DFID: depth-first iterative deepening

          <ul><li>Depth-first search to some maximum depth "Max"
<li>Repeat for Max+1, Max+2, ...
<li> running time O(b^d)
         <li> space requirements O(b*d)
          <li>finds the minimum cost solution (shortest path to the goal)
          <li>It can be shown that this algorithm visits at most
<pre>
M(b,d) <= b^d*(1 - 1/b)^(-2)
</pre>
As the branching factor increases, the extra overhead of DFID's repeated
search over breadth-first "b^d" search approaches unity:
<ul>
<li>When  b=2, M(b,d) &le; 4 * b^d
<li>When  b=3, M(b,d) &le; 9/4 * b^d
<li>When  b=4, M(b,d) &le; 16/9 * b^d
<li>When  b=5, M(b,d) &le; 25/16 *  b^d
</ul>
<li>The same trick (iteratively deepening or widening some limit) can be applied to any search
</ul>
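<p>The DFID loop can be sketched as runnable Python (a minimal illustration; the helper names and the toy tree are mine):

```python
def depth_limited(state, goal_p, successors, limit):
    """Depth-first search that refuses to look below depth `limit`."""
    if goal_p(state):
        return state
    if limit == 0:
        return None
    for child in successors(state):
        found = depth_limited(child, goal_p, successors, limit - 1)
        if found is not None:
            return found
    return None

def dfid(start, goal_p, successors, max_depth=20):
    """Depth-first iterative deepening: try depth 0, then 1, then 2, ..."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal_p, successors, limit)
        if found is not None:
            return found
    return None

# Works even on an infinite tree, since every pass is depth-bounded.
successors = lambda x: [2 * x, 2 * x + 1]
print(dfid(1, lambda x: x == 12, successors))  # 12, first found at depth 3
```

Since the b^d nodes of the deepest pass dominate the total work, the repeated shallow passes cost only the constant factor shown above.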
  Bidirectional search:  hands across the water
<ul>
<li>add a "precursors" function that returns parents
of the current state (i.e. the opposite to "successors"). 
<li> Run two searches (forwards and backwards); if they ever meet, stop
</ul>
<h2>Stochastic Tree Search</h2>
ISAMP (iterative sampling):
<ul>
<li>Start at "start".
<li>Take any successor, at random.
<li>Repeat till
you hit a dead-end or "goal".
<li>If dead-end, then restart.
<li>If too many restarts, stop.
</ul>
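<p>Here is a minimal Python sketch of ISAMP (the toy problem and the parameter names are my own assumptions):

```python
import random

def isamp(start, goal_p, successors, max_tries=100, max_depth=50):
    """Iterative sampling: random walks from start; restart at dead-ends."""
    for _ in range(max_tries):
        state, depth = start, 0
        while depth < max_depth:
            if goal_p(state):
                return state
            children = successors(state)
            if not children:                     # dead-end: restart
                break
            state = random.choice(children)      # take ANY successor
            depth += 1
    return None                                  # too many restarts: stop

# Toy space: from n, step to n+1 or n+2; the goal is to land exactly on 10.
random.seed(1)
print(isamp(0, lambda n: n == 10, lambda n: [n + 1, n + 2] if n < 10 else []))
```

A walk that overshoots to 11 hits a dead-end and restarts; with many cheap restarts the goal is found almost surely.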
<p>This stochastic tree search can be readily adapted to other problem types.
<p>Eg#1: scheduling problems</p>
<img src="http://menzies.us/csx72/img/issamp.png">
<p>Eg#2: N-queens with the <a href="http://en.wikipedia.org/wiki/LURCH">lurch</a>
algorithm. From state "x", pick an out-arc at random. Follow it to some max depth.
Restart.</P>
<img src="http://menzies.us/csx72/img/lurch.png">


<p>Q: Why does ISAMP work so well?
<ul><li>A: Few solutions, large space between them. Complete search wastes a lot of
time if it lucks into a poor initial set of choices.
<li>See also stochastic search, below.
</ul>
<h2>Graph search</h2>
<p>Like tree search, but
with a *visited* list that checks whether some state has been reached before.
<p>
Visited list can take a lot of memory
<ul>
<li>Standard fix: some hashing scheme so that states get stored in a compact representation
<li>But this can make the search incomplete: different reached states may happen to hash to the same value.
</ul>
<p>A* search: standard game playing technique:
<ul>
<li>
Best-first search of graph with a *visited* list where states are sorted by "g+h"
<li>g(x)= Cost to reach state "x" from start 
<li>h(x)= Heuristic estimate of  cost to go from "x" to goal
<li>Important: h(x) must <em> under-estimate </em>cost from "x" to goal
<ul>
<li>E.g. straight line "as the bird flies"
distance to shop at the Target store is less than the
actual driving distance
<li>
Assuming under-estimates, then A* is optimal. 
<ul>
<li>
Let f(x)=g(x)+h(x). 
<li> Compare two solutions s1, s2
that go from "x" to goal.
<li> actual &gt; f(s1) due to the under-estimation assumption
<li>And, at termination (by definition) f(s1) &lt; f(s2)
<li> So actual &gt; f(s2) &gt; f(s1); i.e. using s1 was optimal
</ul>
If h(x) over-estimates then
<ul>
<li> 
actual &le; f(s1) 
<li>
At termination,  other, worse solution "s2" has 
f(s1) &lt; f(s2) (as before).
<li>
But we don't know if s2 is between s1 and actual.

</ul>
</ul>
</ul>
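<p>The A* loop can be sketched in Python. The 5x5 grid, the move generator, and the Manhattan-distance heuristic below are my own toy example; Manhattan distance never over-estimates on a grid, so A* is optimal here:

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: expand states in order of f = g + h, with a *visited* set.
    successors(state) yields (next_state, step_cost) pairs;
    h(state) must never over-estimate the true cost to the goal."""
    frontier = [(h(start), 0, start, [start])]        # (f, g, state, path)
    visited = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)   # cheapest f first
        if state == goal:
            return g, path
        if state in visited:
            continue
        visited.add(state)
        for nxt, cost in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

def moves(p):                                         # 4-connected 5x5 grid
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)   # an under-estimate
cost, path = a_star((0, 0), (4, 4), moves, manhattan)
print(cost)  # 8: the shortest path takes exactly eight unit steps
```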
<h2>Unordered searchers</h2>
<p>Strangely, order may not matter. Rather than struggle with lots of tricky decisions,
<ul>
<li>exploit speed of modern CPUs, CPU farms.
<li> try lots of possibilities, very quickly
</ul>
For example:
<ul>
<li>current :=  a random solution across the space
<li>If it's better than anything seen so far, then best := current
</ul>
Stochastic search:  randomly fiddle with current solution
<ul>
<li>Change part of the current solution at random
</ul>
Local search: replace random stabs with a little tinkering
<ul>
<li> Change that part of the solution that most improves score
</ul>
Some algorithms just do stochastic search (e.g. simulated annealing)
while others do both (e.g. MAXWALKSAT).
<h3>Simulated Annealing</h3>
<p>S. Kirkpatrick and C. D. Gelatt and M. P. Vecchi 
<a href="http://citeseer.ist.psu.edu/kirkpatrick83optimization.html">Optimization by Simulated Annealing</a>, Science, Vol. 220, No. 4598, 13 May 1983, pages 671-680.
<pre>
s := s0; e := E(s)                     // Initial state, energy.
sb := s; eb := e                       // Initial "best" solution
k := 0                                 // Energy evaluation count.
WHILE k < kmax and e > emax            // While time remains & not good enough:
  sn := neighbor(s)                   //   Pick some neighbor.
  en := E(sn)                          //   Compute its energy.
  IF    en < eb                        //   Is this a new best?
  THEN  sb := sn; eb := en             //     Yes, save it.
  FI
  IF    en < e OR random() < P(e, en, k/kmax) // Should we move to it?
  THEN  s := sn; e := en               //     Yes, change state.
  FI
  k := k + 1                           //   One more evaluation done                        
RETURN sb                              // Return the best solution found.
</pre>
<p>
Note the space requirements for SA: only enough RAM to hold three solutions (s, sn, sb). Very
good for old-fashioned machines.
<p>But what to use for the probability function "P"? A standard choice is:
<pre>
FUNCTION P(old,new,t) = e^((old-new)/t)
</pre>
Which, incidentally, looks like this:
<ul>
<li>
Initially: <img align=middle width=300 src="http://menzies.us/csx72/img/tdot1.png">
<li>
Subsequently: <img  align=middle width=300 src="http://menzies.us/csx72/img/tdot5.png">
<li>
Finally: <img align=middle width=300 src="http://menzies.us/csx72/img/t1.png">
</ul>
<p>That is, we move to the new solution
in two cases:
<ul>
<li> The sensible case: when "new" is better than "old"
<li>The crazy case: occasionally, even when "new" is worse than "old"
</ul>
<p>Note that such crazy jumps let us escape local minima.
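<p>Here is a runnable Python version of the pseudocode above. This is a sketch: the cooling schedule and the toy "minimize x squared" problem are my own assumptions, and, as is standard, better neighbors are always accepted while worse ones are accepted with a probability that shrinks as the run cools:

```python
import math, random

def simulated_annealing(s0, energy, neighbor, kmax=10000, emax=0.0):
    """Minimize `energy`; only three solutions ever held in RAM: s, sn, sb."""
    s, e = s0, energy(s0)
    sb, eb = s, e                          # best solution seen so far
    for k in range(kmax):
        if e <= emax:                      # good enough: stop early
            break
        t = max(1e-9, 1.0 - k / kmax)      # "temperature" cools toward zero
        sn = neighbor(s)                   # pick some neighbor
        en = energy(sn)                    # compute its energy
        if en < eb:
            sb, eb = sn, en                # new best? save it
        # Always step downhill; step uphill with probability e^((e-en)/t),
        # i.e. crazy jumps happen early, while the temperature is high.
        if en < e or random.random() < math.exp((e - en) / t):
            s, e = sn, en
    return sb, eb

# Toy problem: minimize x^2 over the integers, stepping +/- 1.
random.seed(42)
best, eb = simulated_annealing(50,
                               energy=lambda x: float(x * x),
                               neighbor=lambda x: x + random.choice((-1, 1)))
print(best, eb)
```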
<h3>MAXWALKSAT</h3>
<p>Kautz, H., Selman, B., & Jiang, Y.
<a href="http://citeseer.ist.psu.edu/168907.html">A general stochastic approach to solving  problems with hard and soft constraints</a>. In D. Gu, J. Du and P. Pardalos (Eds.), The satisfiability problem: Theory and applications, pages 573-586. New York, NY, 1997.
<p>MAXWALKSAT is a state-of-the-art stochastic search algorithm.
<pre>
FOR i = 1 to max-tries DO
  solution = random assignment
  FOR j = 1 to max-changes DO
    IF    score(solution) > threshold
    THEN  RETURN solution
    FI
    c = random part of solution
    IF    random() < p
    THEN  change a random setting in c
    ELSE  change the setting in c that maximizes score(solution)
    FI
RETURN failure, best solution found
</pre>
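<p>The same loop, as runnable Python. This is a sketch: the score function is a toy stand-in for "percent of constraints satisfied", and where the real algorithm restricts the greedy flip to the variables of one random clause, this sketch greedily flips the best variable overall:

```python
import random

def flip(solution, i):
    """Return a copy of `solution` with bit i flipped."""
    out = solution[:]
    out[i] = 1 - out[i]
    return out

def max_walk_sat(n_vars, score, max_tries=10, max_changes=1000, p=0.5):
    """MAXWALKSAT sketch: random restarts plus (random | greedy) flips."""
    for _ in range(max_tries):
        solution = [random.choice((0, 1)) for _ in range(n_vars)]
        for _ in range(max_changes):
            if score(solution) >= 1.0:            # over threshold: done
                return solution
            if random.random() < p:               # random walk step ...
                solution = flip(solution, random.randrange(n_vars))
            else:                                 # ... else greedy step
                best = max(range(n_vars),
                           key=lambda i: score(flip(solution, i)))
                solution = flip(solution, best)
    return None                                   # failure

# Toy score: the fraction of bits set to 1.
random.seed(7)
found = max_walk_sat(8, score=lambda s: sum(s) / len(s))
print(found)
```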
<p>This is a very successful algorithm. Here are some performance results for
WALKSAT (a simpler version of MAXWALKSAT) against a complete search
</p>
<img width=600 src="http://menzies.us/csx72/img/walksat.png">
<p>

<A HREF="http://menzies.us/csx72/img/latin.mov">
        <IMG ALIGN=right width=150 SRC="http://menzies.us/csx72/img/latin.png"></A>
You saw another example in the introduction to this subject.
The movie at right shows two AI algorithms trying to
        solve the "latin squares" problem: i.e. pick an initial pattern, then try to
        fill in the rest of the square with no two colors on the same row or column.
<ul>
        <li>The <em>deterministic</em> method is the kind of exhaustive theorem proving method
        used in the 1960s and 1970s that convinced people that "AI can never work".
        <li>The <em>stochastic</em> method is a 1990s-style AI algorithm that makes some
        initial guess, then refines that guess based on logic feedback.
        </ul>
   <p>     This stochastic
        local search kills the latin squares problem (and, incidentally, many other problems).
 


<br clear=all>

     </p>]]></description>
  </item>


  <item>
    <category rank="1000" >news</category>
     <id>139</id> 
     <title>
        Getting a head start on the subject... 
     </title>
     <pubdate secs="1200111136" around="Jan08">Fri Jan 11 20:12:16 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?139</link>
     <guid>http://menzies.us/csx72/?139</guid>
     <description><![CDATA[<p>
 


		  For those of you who want to get a head start on the subject:
<ul>
		  <li> The syllabus is on-line at 
		  <a href="http://menzies.us/csx72/?syllabus">
		  http://menzies.us/csx72/?syllabus</a>.

		  <li> The textbook is described at <a href="http://menzies.us/csx72/?2">
		  http://menzies.us/csx72/?2</a>.

		  <li> Lab1 (which we are doing in week1)  is at  
		  <a href="http://menzies.us/csx72/?133">
		  http://menzies.us/csx72/?133</a>.

		  <li> Project1  (due week4) is on-line at 
		  <a href="http://menzies.us/csx72/?11">
		  http://menzies.us/csx72/?11</a>.
</ul>
		  Note that:
<ul>
		  <li> There are no marks associated with Lab1; it is just some warm-up stuff. No drama.

		  <li> You should read textbook chapters 1, 2, and 3 before starting Lab1.

		  <li> You should complete Lab1 before starting Project1 (there's stuff in the lab you'll need for the project)
</ul>
     </p>]]></description>
  </item>

  <item>
    <category rank="1" >week1</category>
    <category rank="1000" >2read</category>
     <id>132</id> 
     <title>
		Week 1: reading	
     </title>
     <pubdate secs="1199906558" around="Jan08">Wed Jan  9 14:22:38 EST 2008</pubdate>
     <link>http://menzies.us/csx72/?132</link>
     <guid>http://menzies.us/csx72/?132</guid>
     <description><![CDATA[<p>
All students: 
<ul>
<li>
	An overview of  <a href="http://menzies.us/csx72/doc/intro/03aipride.pdf">AI's accomplishments</a>;
<li>
	Chapters one and two of Norvig's Paradigms of AI
</ul>
     </p>
	 <p>Also, for CS572 students:
	 <ul>
	 <li><a href="http://en.wikipedia.org/wiki/Simulated_annealing">Simulated annealing</a>
	 <li><a href="http://menzies.us/pdf/07casease.pdf">The Business Case for Software Engineering</a>
	 </ul>
	 </p>
	 ]]></description>
  </item>



	<item>
	<category rank="2">week1</category>
	<category rank="2">lecture</category>
	<category rank="1000">start</category>
	
	 <id>114</id> 
	 <title>
		What is AI?
	 </title>
	 <pubdate secs="1196526984" around="Dec07">Sat Dec  1 08:36:24 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?114</link>
	 <guid>http://menzies.us/csx72/?114</guid>
	 <description><![CDATA[


	<p>   
	<!--- img width=200 align=right src="http://www.productdose.com/images/custom/robots/robot_main.jpg"
	--->
	<img border=0 width=300 align=right src="http://cowshell.com/uploads/_sketches/robots01.jpg">

	AI is
	the study and design of <em>intelligent agents</em>
	where an intelligent agent is a system that 
	perceives its environment and takes actions 
	which maximizes its chances of success.
	</p>
	<p>
	<a href="http://www.aaai.org/AITopics/assets/PDF/AIMag02-02-001.pdf">Newell</a>
	(1982) characterized the actions of such a <em>knowledge level agent</em> as...

	<ul><em>
	... a search for appropriate <u>operators</u> that convert some <u>current state</u> to a <u>goal state</u>. Domain-specific
	knowledge is used to select the operators according to the <u>principle of rationality</u>;
	i.e. an intelligent agent will select an operator which its knowledge tells it will
	lead to the achievement of some of its goals.
	</em>
	</ul>
	If that sounds too pompous for you, try this instead: 
	<ul><em>
	intelligence means looking before you leap.
	</em></ul>
	</p>
	<p>(Actually, it probably also means looking after your leap, 
	so you can learn from the past
	to be more rational  in the future.)
	</p>
	<h3>Inhuman Rationality?</h3>
	<p>Note that Newell makes no commitments as to <em>how</em> the knowledge level is operationalized.
	Underneath the knowledge level there could be any number of substrates (biological, mechanical, a collection of wind-powered beer cans, whatever)
	that implement rationality.
	</p><p>
	Now you might object at this separation of "rationality" from "humanity".
	You might protest that the only thing that can be rational like a person is another person.
	And many people would agree with you.
	</p>
	<p>But I don't think that I am the only kind of thing that can think.
	That would be like saying that only birds can fly and that air planes, which don't flap their wings, don't "really"  fly.
	</p>
	<p>
	What I do think is that  there is some abstract notion of flying/thinking that is independent of birds/humans. Like 
	<a href="http://en.wikipedia.org/wiki/Spock">Spock</a> said:
	"Intelligence does not require bulk, Mr. Scott".
	</p>
	<p>
	Every computer scientist knows this to be true.
	Two generations of algorithms research has shown that there exist
	properties of computation that are independent of what processor the algorithm runs on, or the implementation language.
	Dijkstra once said "computer science is no more about computers than astronomy is 
	about telescopes"- and he could have been talking about AI.
	</p>
	<p>
	<A HREF="http://www.cs.ubc.ca/spider/pai/movies/beast.mpg">
	<IMG ALIGN=left width=150 SRC="http://www.cs.rutgers.edu/~dpai/images/beastClimb.JPG"></A>
	Not convinced?
	Well, try another example. Do you think that a robot could/should walk like a human? 
	(see movie, right, of Dinesh Pai's <a href="http://www.cs.ubc.ca/spider/pai/history.htm">
	Platonic Beast</a>).
	This little fellow walks by occasionally throwing a spare limb over
	the top of itself. Such a move would tear us apart, but it is the natural way
	to do it for that kind of walking thing.
	<br clear=all></p>
	<p>


	And here's another example:
	<ul>
	<li><A HREF="http://menzies.us/csx72/img/latin.mov">
	<IMG ALIGN=right width=150 SRC="http://menzies.us/csx72/img/latin.png"></A>The movie at right shows two AI algorithms trying to
	solve the "latin squares" problem: i.e. pick an initial pattern, then try to
	fill in the rest of the square with no two colors on the same row or column.
	<li>The <em>deterministic</em> method is the kind of exhaustive theorem proving method
	used in the 1960s and 1970s that convinced people that "AI can never work".
	<li>The <em>stochastic</em> method is a 1990s-style AI algorithm that makes some
	initial guess, then refines that guess based on logic feedback. 
	<li>
	This stochastic
	local search kills the latin squares problem (and, incidentally, many other problems).
	</ul>
	Now the point of this example is that you would not expect a human to think using stochastic search (too much 
	CPU twiddling). But for a computer, stochastic search is a useful inference
	method since each local twiddle can be done very quickly. 
	</p>
	<br clear=all>
	<p>So, once again, <em>how we best think</em> is a local decision, based on the properties
	of the thing doing the thinking. And just because humans do it one way, does not
	mean that that is the best way for AIs to do it.
	</p>
	<p>
	<img src="http://www.cs.iastate.edu/~honavar/images/ai.jpg" align=left>(Note that there is an opposing view to the above. The literature is full of claims that
	AI works <em>like people do</em>. For example in Edward Feigenbaum's knowledge transfer view- which I <a href="http://menzies.us/csx72/?119">don't agree with</a>-  building knowledge-based systems was like
	"mining the jewels in the
	expert's head"; i.e.  looking at the cogs and wheels in people's head and replicating them on a computer.
	While I agree that at the knowledge level, beer cans can think like
	the wet-ware between our ears,  I think we need to respect the substrate in order to select the  best method for implementing rationality. 
	</p><P>And
	whatever substrate we select, 
	some issues will be the same; e.g. 
	Newell's knowledge level insight and 
	issues relating to representation and  search.)
	<br clear=all>
	</p><p>
	Based on all this, I offer two predictions for the future.
	One, that we will see a growing number of rational  computers but, two,
	they are going to be aliens (i.e. won't work exactly  like human intelligence) with very different motivations, needs, and desires to
	us. Instead, the 21<sup>st</sup> century
	will see a menagerie of
	many different kinds of intelligence. Some you'll
	know about, like the book-buying assistants wired into Amazon.com that sometimes
	send you recommendations about what books to read. And some you won't even see-
	<ul>
	<li>
	like the Bayesian SPAM
	filter that  eats silly emails,  
	<li> like the power-grid specialist that spins up a coal-fired electrical
	plant since it has guessed that, in two hours' time, there will be a spike in power demand,
	<li>
	like the NSA data miners rushing SWAT teams to the airport because they have realized we're going to be hit
	with a bomb attack in 20 minutes 19,18,17,....
	</ul>
	Think of it as a jungle of AIs,  working  together, all living in their
	little ecological niches. And like any ecology, we'll learn that:
	<ul>
	<li> most can
	be ignored, 
	<li> some have to be avoided (don't
	step on the red ants- they sting)
	<li> while others are useful (if you tie up a flock of swans,
	you can fly to the moon).
	</ul>
	</p>
	<h3>But does it work?</h3>
	<p>
	But does this sound crazy to you? Too optimistic? 
	Where is the proof, you might demand, that this different-to-humans AI-approach
	is equal to (or better than) the human way?  
	</p>
	<p>
	Well, there's lots of proof.
	AI is no
	longer a bleeding-edge technology -- hyped by its proponents and
	mistrusted by the mainstream.  AI has achieved much:
	<ul>
	<li> AI programs
	have beaten 
	<a href="http://en.wikipedia.org/wiki/IBM_Deep_Blue">
	world chess grand masters</a>.
	Actually, I don't
	like this example- if I can't
	beat a world chess grand master,
	does that mean that I am not intelligent? Sigh. I've always suspected as much.
	<li> <a
	href="http://www.genetic-programming.com/humancompetitive.html">John
	Koza's list of 36 Human-Competitive Results</a> produced by AI methods
	(in his case, genetic programming).
	</ul>
	<p>But don't expect AI to be all flash and dazzle.
	In the 21st century, AI is not
	necessarily amazing. Rather, it's often <em>routine</em>. Evidence for
	AI technology's routine and dependable nature abounds. See for
	example,
	the list of applications in a special issue I edited
	<a href="http://menzies.us/pdf03/aipride.pdf">21st-Century AI - Proud, Not Smug</a>
	IEEE Intelligent Systems (May/June 2003).

	 </p>
	<p>
	Hopefully, that is enough motivation for you.   As Nils  Nilsson says:
	<ul>
	AI and computer science have already set about trying to fill [...] niches, and that is a worthy, if never-ending, pursuit. But the biggest prize, I think, is for the creation of an artificial intelligence as flexible as the biological ones. That will win it. Ignore the naysayers; go for it! -
	</ul>
	</p>

	]]></description>
	</item>
  <item>
    <category rank="1000" >week1</category>
    <category rank="1000" >lab</category>
     <id>133</id> 
     <title>
        Week 1 : lab
     </title>
     <pubdate secs="1199908121" around="Jan08">Wed Jan  9 11:48:41 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?133</link>
     <guid>http://menzies.us/csx72/?133</guid>
     <description><![CDATA[<p>
	This week we learn about an AI development environment (EMACS) and a little about an
	AI language (LISP), and how to run and test our code.</p>
	<p>Note that this is <em>NOT</em> a semantic exercise. You'll be looking at a lot
	of code in a language you may have never seen before (LISP).
	Don't try to understand the details- it's just the big picture of how to edit
	code and run test cases that is important here.
	</p>
<p>You have an hour to attempt the following 24 exercises. 
Don't hand anything in. But I will be asking you to perform for me a few
randomly selected items from this list.</p>
<p>If you can't complete these tasks in the lab,
please do so before next week. You'll need this work before you can start 
<a href="http://menzies.us/csx72/?11">Project1</a></p>
	<h2>Directory and Files</h2>
<p>To begin, you need to:
<ol>
<li> set up some directories and download some files.
	<pre>
mkdir -p $HOME/opt/lisp
cd $HOME/opt/lisp/
svn export http://unbox.org/wisp/var/timm/08/ai/src/week1 week1
</pre>
<p>If that works, you should see something like: </p>
<pre>
A    week1
A    week1/code.lisp
A    week1/eg.lisp
A    week1/make.lisp
A    week1/macros.lisp
A    week1/config.lisp
A    week1/lib.lisp
A    week1/demos.lisp
</pre>
</ol>
          <h2>Editor</h2>
		  <p>
		  <img align=right width=350 src="http://www.michael-prokop.at/computer/images/vi-emacs-final.png">
		  </p>
		  <p>Now, we need to set up your editor.
<ol start="2">
<li> Edit the file $HOME/.emacs and add the code
		  shown in <a href="http://menzies.us/csx72/?105">getting started with SLIME</a>.  Quit that editor.
</ol>
</p><p>Now, fire up emacs.
<pre>
cd $HOME/opt/lisp/week1
emacs  &
</pre>
		<p>
	 Show the instructor you can work that environment:
			<ol start="3"><li>Split the  screen (Cnt x-2)
<li>
			Move cursor from one screen to another.
			<li>Edit different files in each split (Cnt x-f).
			<li>
Un-split the screen (Cnt x-0)
			<li>
			Cut, copy, and paste text (look at the Edit menu)
			<li>
			Save files (Cnt x-s)
			<li>
			Write a buffer to a new file (Cnt x-w) 
			<li>
			Revert a buffer, throwing away all changes. (Look at the "File" menu).
			<li>
			Quit (Cnt X-C).
<li>Edit some code
<pre>
cd $HOME/opt/lisp/week1
emacs make.lisp &
</pre>
If that works, you should see something like:
<br/>
<img align=left border=1 src="http://menzies.us/csx72/img/slime1.png">
<br clear=all>
</ol>
</p>
<h2>Language</h2>
<p>Once you can edit code, you need to run an interpreter.
<ol start="13"><li>While  editing <em>make.lisp</em>,
 type "Meta-X slime <RETURN>" (where "Meta-x" means
press the Esc key and then type "x"). 
After a few seconds you should be looking at the 
LISP's "read-evaluate-print" (or REPL) prompt:
<pre>
; SLIME 2006-04-20
CL-USER> 
</pre>
</li>
<li>Try and reproduce the following screen.<p>
<img border=1 src="http://menzies.us/csx72/img/slime2.png">
<br clear=all>
</p>
<li>LISP functions do something then return the last thing they evaluate. So
why do we see "hello world" twice in the above screens?
<li>Split the screen (remember how?) and  get the REPL and make.lisp on the same
screen. Place the cursor in "make.lisp" and load that file into LISP (hint: 
<em>menu : SLIME : Compilation : Compile/Load File</em>).
<li>Edit "lib.lisp" and write your name at the top as a LISP comment 
<pre>
;;;; Tim Menzies
</pre>
<li>Reload all changed files. How? Well "make.lisp" knows all the files required
for this program.   That remake is called by the "(make)" function. Go to the REPL and type
<pre>
CL-USER> (make)
</pre>
</ol>

		  <h2>Systems</h2>
<p>The list of files at the top of "make.lisp" is pretty standard. You could find them in
any language:
<pre>
  (mapc 'make1
  '(eg      ; unit tests
    lib     ; generic stuff
    config  ; configuration
    macros  ; macros
    code    ; main code
    demos   ; stuff to show off
  ))
</pre>
<ol start="19">
<li>Look at  the list of unit tests at the end of "eg.lisp". Add a test that
checks if  "(+ 3 7)" returns 11. Run that test with
<pre>
CL-USER> (make)
CL-USER> (demo :demo)
</pre>
<li>In LISP, an association list is a list of pairs "((key1 . value1) (key2 . value2) etc)".
Working with such a list is a little tedious; e.g. "(assoc key list)"
finds the pair that contains "key";
and "(cdr (assoc key list))"
finds the value in the pair that contains "key";
and to increment some key's value (by one) there are special cases to handle, like
the key not being in the list. So I wrote some code to simplify talking to association lists.
 Look at the list of unit tests at the end of "lib.lisp". Run them and take
a guess at what is going on.
<pre>
CL-USER> (make) 
CL-USER> (demo :assoc)
</pre>
<li>Take a look at "config.lisp" and guess what is going on there. Look up 
the difference between 
<a href="http://www.lisp.org/HyperSpec/Body/mac_defparametercm_defvar.html">defvar
and defparameter</a>
and make a case for why you should, or should not, use defparameter versus defvar.
<li>Take a look at "macros.lisp" and guess what is going on there.
<pre>
CL-USER> (make)
CL-USER> (demo :o)
</pre>
<li> Take a look at "code.lisp" and write another function that reports what 
we have nothing to fear but.
<pre>
(defun we-have-nothing-to-fear-but ()
     "fear itself")
</pre>
Make those changes and check out what you have to fear.
<pre>
CL-USER> (make)
CL-USER> (we-have-nothing-to-fear-but)
</pre>
<li>Take a look at "demos.lisp" and add "we-have-nothing-to-fear-but" to the list of demos
in that file. Run them all.
<pre>
CL-USER> (make)
CL-USER> (demo :week1)
</pre>
</ol>
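<p>An association-list helper of the kind described above might look something like this. (This is only a sketch: "inc-assoc" is a made-up name, and the actual helpers in "lib.lisp" may well differ.)

```lisp
;; Sketch: bump the value stored under KEY in ALIST,
;; treating a missing key as zero. Returns the (possibly new) list.
(defun inc-assoc (key alist &optional (delta 1))
  (let ((pair (assoc key alist)))
    (if pair
        (progn (incf (cdr pair) delta)   ; key present: update in place
               alist)
        (cons (cons key delta) alist)))) ; key absent: add a new pair

;; CL-USER> (inc-assoc 'a (copy-alist '((a . 1) (b . 2))))
;; ((A . 2) (B . 2))
```

(Note the `copy-alist` in the example: `incf` mutates the pair it finds, and mutating a quoted literal list is bad form in Common Lisp.)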
<h2>Congratulations!</h2>
Now you can edit and run LISP code, write little unit tests, and combine them
into one large system test.

		  ]]></description>
  </item>
  <item>
    <category rank="1000">news</category>
     <id>128</id> 
     <title>
        5 more places available in cs572
     </title>
     <pubdate secs="1197033422" around="Dec07">Fri Dec  7 05:17:02 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?128</link>
     <guid>http://menzies.us/csx72/?128</guid>
     <description><![CDATA[<p>
 
          There are now 5 more spare slots in cs572. Enrol quickly- they are filling up fast!</p>]]></description>
  </item>

  <item>
    <category rank="1000">lisp</category>
    <category rank="1000">fun</category>
    <category rank="1000">image</category>
     <id>126</id> 
     <title>
        LISP is different
     </title>
     <pubdate secs="1196964694" around="Dec07">Thu Dec  6 10:11:34 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?126</link>
     <guid>http://menzies.us/csx72/?126</guid>
     <description><![CDATA[<p>
 
<img src="http://www.lisperati.com/different.jpg" >
<br clear=all>
     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >fun</category>
     <id>144</id> 
     <title>
        LISP = God's Language?
     </title>
     <pubdate secs="1200596728" around="Jan08">Thu Jan 17 11:05:28 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?144</link>
     <guid>http://menzies.us/csx72/?144</guid>
     <description><![CDATA[<p>
 
          <img src="http://imgs.xkcd.com/comics/lisp.jpg">

     </p>]]></description>
  </item>

	  <item>
		<category rank="1000">fun</category>
		 <id>122</id> 
		 <title>
			AI Movies
		 </title>
		 <pubdate secs="1196786309" around="Dec07">Tue Dec  4 08:38:29 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?122</link>
		 <guid>http://menzies.us/csx72/?122</guid>
		 <description><![CDATA[


		<center>
		<object  width="425" height="355"><param name="movie" value="http://www.youtube.com/v/hA4e239eyrg&rel=1"></param><param name="wmode" value="transparent"></param><embed src="http://www.youtube.com/v/hA4e239eyrg&rel=1" type="application/x-shockwave-flash" wmode="transparent" width="425" height="355"></embed></object>
		</center>
		<dl><dt><em>
		2001 - A Space Odyssey
		</em>
		</dt><dd>
	The only thing with emotions in the whole movie is the computer-
	it
	kills from fear, pleads for mercy when
		caught, and reverts to childhood at the very end 
		(dies singing "Daisy, Daisy, give me your answer, do").
		</dl> <dl><dt><em> Alien </em><dd>
		"I can't lie to you about your chances, but.... you do have my sympathies."
		</dl> <dl><dt><em>
	Battlestar Galactica (the 2003+ version) 
		</em></dt><dd>
			Humans and their robot creations blur and clash. Turns out that they hate us/themselves
			yet strive to become the thing they hate.
			Genocide follows (or is it suicide?). Both race for Earth. Who got here
				first? Are we all Cylons?
		</dl> <dl><dt><em>
			Bladerunner 
		</em><dd>
			"I'm not in the business, I am the business."
		</dl> <dl><dt><em>
					Dark Star</em><dd>
	<dl><dt>Bomb #20:</dt>
			<dd>	Why, that would mean... I really
						don't know what the outside universe
						is like at all, for certain.
						</dd>
	<dt>
									   Doolittle:</dt>
						<dd>That's it.</dd>
	<dt>
									   Bomb #20:</dt><dd>
						Intriguing.  I wish I had more time
						to discuss this matter.</dd>
	<dt>
									   Doolittle:</dt><dd>
						Why don't you have more time?</dd>
	<dt>
									   Bomb #20:</dt>
	<dd>
						Because I must detonate in seventy-
						five seconds.	
	</dd></dl>
	<center>
				<object width="425" height="355"><param name="movie" value="http://www.youtube.com/v/qjGRySVyTDk&rel=1"></param><param name="wmode" value="transparent"></param><embed src="http://www.youtube.com/v/qjGRySVyTDk&rel=1" type="application/x-shockwave-flash" wmode="transparent" width="425" height="355"></embed></object>	
</center>
		</dl> <dl><dt><em>
					Ghost in the Shell</em><dd>

"If the substance of life is information, transmitted through genes,
then society and culture are essentially immense information
transmission systems, and the city, a huge external memory storage
device."

		</dl> <dl><dt><em> The Hitchhikers Guide to the Galaxy </em><dd>
		"Life? Don't talk to me about life."
		</dl> <dl><dt><em> The Matrix </em><dd>
		Our machines judge us. We are sentenced to be light bulbs.
		</dl> <dl><dt><em> Metropolis (1927, not the new version) </em><dd>
		Robot love-
		you know it'll end badly
		</dl> <dl><dt><em>
	Star Trek - First Contact 
		</em></dt><dd>
		More robot love- this time with hot borg babes.  
		</dl> <dl><dt><em>
			Terminator 2 
		</em></dt><dd>

			Just because you are a demon killing machine (DKM), doesn't mean you can't be a loving father.
	</dl> <dl><dt><em>
			Terminator 3 
		</em></dt><dd>

			Just because you are a drop-dead beautiful DKM, doesn't mean you can be a good mother.
		</dl> <dl><dt><em>
			  Other movies
		</em></dt><dd>

			  Here's Amazon's links to their 
			  <a href="http://www.amazon.com/Best-AI-movies/lm/Q80I6RT9KHVM">best AI movies</a>.
	</dl>
	<br clear=all>
		 ]]></description>
	  </item>

	  <item>
		<category rank="1000">lisp</category>
		<category rank="1000">image</category>
		 <id>121</id> 
		 <title>
			Simple LISP code
		 </title>
		 <pubdate secs="1196738206" around="Dec07">Mon Dec  3 19:16:46 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?121</link>
		 <guid>http://menzies.us/csx72/?121</guid>
		 <description><![CDATA[<p>
	 
	<img src="http://www.newspiritcompany.com/imgs/simple_lisp_code.jpg">

		 </p>]]></description>
	  </item>


	  <item>
		<category rank="1000">lisp</category>
		 <id>120</id> 
		 <title>
				LISP links      </title>
		 <pubdate secs="1196737635" around="Dec07">Mon Dec  3 19:07:15 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?120</link>
		 <guid>http://menzies.us/csx72/?120</guid>
		 <description><![CDATA[<p>
	 
			  <a href="http://common-lisp.net/">Common LISP net</a>.
			  </p><p>
			  Peter Seibe's excellent text <a href="http://gigamonkeys.com/book/">Practical Common LISP</a>.
			  </p><p>Lots of <a href="http://www.apl.jhu.edu/~hall/lisp.html">good material</a> from Marty Hall.
	</p>
		 ]]></description>
	  </item>

	<item>
	<category rank="1000">news</category>
	<category rank="1000">lisp</category>
	 <id>118</id> 
	 <title>
		Norvig's support code not working

	 </title>
	 <pubdate secs="1196697827" around="Dec07">Mon Dec  3 08:03:47 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?118</link>
	 <guid>http://menzies.us/csx72/?118</guid>
	 <description><![CDATA[<p>

		  After much sweat, I have given up trying to run Norvig's code using his support tools.
		  </p>
		  <p>The code in the pages of the 
		  <a href="http://menzies.us/csx72/?2">textbook</a>
		  is fine- but the support tools seem to be written for
		  quirky older versions of LISP.
		  </p>
		  <p>
		  So now, to run Norvig's stuff, I carefully paste in the functions one-by-one and only those
		  from the pages of the 
		  <a href="http://menzies.us/csx72/?2">textbook</a>.
		  Slower, yes, but at least I understand the code that I load. 


	 </p>]]></description>
	</item>


	<item>
	<category rank="1000">news</category>
	 <id>117</id> 
	 <title>
			"Search" not working
	 </title>
	 <pubdate secs="1196697585" around="Dec07">Mon Dec  3 07:59:45 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?117</link>
	 <guid>http://menzies.us/csx72/?117</guid>
	 <description><![CDATA[<p>

		  The search box (on left) uses a Google API that seems to be mal-functioning. </p><p>
		  This is being investigated but, for now, the search box is busted.

	 </p>]]></description>
	</item>

	<item>
	<category rank="1000">news</category>
	 <id>115</id> 
	 <title>
		cs572 now full
	 </title>
	 <pubdate secs="1196632663" around="Dec07">Sun Dec  2 13:57:43 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?115</link>
	 <guid>http://menzies.us/csx72/?115</guid>
	 <description><![CDATA[<p>

		  Sorry- enrollments in CS572 are now all taken.

	 </p>]]></description>
	</item>


	<item>
	<category rank="1000">home</category>
	 <id>112</id> 
	 <title>
		Welcome to AI
	 </title>
	 <pubdate secs="1196523876" around="Dec07">Sat Dec  1 07:44:36 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?112</link>
	 <guid>http://menzies.us/csx72/?112</guid>
	 <description><![CDATA[<p>

		  <center><table >
		  <tr><td valign="middle">
	 <a href="http://menzies.us/csx72/?start"><img border=0
			src="http://healthcare.gatewaycc.edu/NR/rdonlyres/BC7A9D6B-9E71-42DA-A772-ED280B8A15DC/0/buttonStartNow.gif">
	 </td><td>
		  <img width=200 src="http://www.pauljomain.biz/Pics/Marvin%20w%20eye.jpg">
	</td></tr></table></center>
	<center>
	</center>
	 <br clear=all>]]></description>
	</item>


	<item>
	<category rank="1">start</category>
	<category rank="1000">fun</category>
	<category rank="1000">quotes</category>
	 <id>116</id> 
	 <title>
		Some quotes about AI
	 </title>
	 <pubdate secs="1196644594" around="Dec07">Sun Dec  2 17:16:34 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?116</link>
	 <guid>http://menzies.us/csx72/?116</guid>
	 <description><![CDATA[<p>

	<img width=250 align=right src="http://www.jeffbridges.com/images/Nov2005/quotes.gif">

	AI is the science of common sense. <br>- Claude Bornstein
	</p>
	<p><a href="http://en.wikipedia.org/wiki/Tesler%27s_Theorem">
	AI is whatever hasn't been done yet.</a><br>-  Larry Tesler    </p>

	<p>
	Computers are not intelligent. They only think they are.<br>
	- Anon
	</p>
	<p>
	AI is the art of making computers that behave 
	like the ones <a href="http://menzies.us/csx72/?122">in the movies</a>.<br>- Bill Bulko
	</p>
	<p>

	It is not my aim to surprise or shock 
	you -- but the simplest way I can
	summarize
	is to say that there are now in the world
	machines that think.
	<br>- Herbert Simon, 1957
	</p>


	]]></description>
	</item>



	<item>
	<category rank="1000">start</category>
	 <id>113</id> 
	 <title>
		AI Links
	 </title>
	 <pubdate secs="1196526774" around="Dec07">Sat Dec  1 08:32:54 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?113</link>
	 <guid>http://menzies.us/csx72/?113</guid>
	 <description><![CDATA[<p>
	<img width=200 align=right src="http://km.aifb.uni-karlsruhe.de/ws/msw2004/logo.jpg">
	<ul>          
	<li> <a href="http://www.c2i.ntu.edu.sg/AI+CI/Humor/AI_Jokes/AIKoans-DHillis.html">AI koans</a>
	<li> <a href="http://en.wikipedia.org/wiki/History_of_artificial_intelligence">AI history</a>
	<Li><a href="http://www.aaai.org/AITopics/html/current.html">AI in the news</a>
	<li>AI related <a href="http://www.aaai.org/Magazine/calendar.php">conferences</a>
	<li>The <a href="http://www.cs.usfca.edu/www.AlanTuring.net/turing_archive/index.html">Alan Turing</a> archive
	<li>15 <a href="http://groups.google.com/groups/dir?&sel=33583203&expand=1">AI-related  news groups</a>
	<li> AI <a href="http://www.faqs.org/faqs/ai-faq/general/part1/preamble.html">frequently asked questions</a>
	<li>The Google news RSS feed on <a href="feed://news.google.com/news?hl=en&ned=&q=artificial+intelligence&ie=UTF-8&output=rss&ned=:ePkh8BM9E0KxIxduRxLcDrBtGUKsWsyJ5dlC_Fq8Gak5qXkKSaVF6anFxUIcWmwFRfk5-elAKe6UxJJEhdzMvMy8dANWmLOMBF5xvHpl3uERs7GhdOWKgwFSACFZHds">AI</a>
	<li>Excellent set of links to <a href="http://pages.cs.wisc.edu/~dyer/cs540/courses.html">introductory AI subjects</a>
	<li><a href="http://pages.cs.wisc.edu/~dyer/cs540/links.html">Other AI links</a> (movies, resources, career planning)
	<li>American Association of AI's  excellent <a href="http://www.aaai.org/AITopics/html/overview.html">AI Topics' overview</a> page.
	<li>Chpt. 1  of Russell and Norvig textbook: <a href="http://www.cs.berkeley.edu/~russell/intro.html">"AI- a Modern Approach"</a>
	</ul>
	<br clear=all>
	 </p>]]></description>
	</item>

	<item>
	<category rank="1000">lecture</category>
	 <id>119</id> 
	 <title>
			What is knowledge?
	 </title>
	 <pubdate secs="1196706519" around="Dec07">Mon Dec  3 10:28:39 PST 2007</pubdate>
	 <link>http://menzies.us/csx72/?119</link>
	 <guid>http://menzies.us/csx72/?119</guid>
	 <description><![CDATA[<p>
	 <img align=right src="http://www.firstfoot.com/Great%20Scot/images/witchburning.jpg">
	 <em>(Warning: this article may contain heresies. </p><p>Also, it is quite dated since I wrote it in the mid-1990s.
	 So this is a passionate argument about  stuff most folks don't care about anymore.)</em></p>
	 <p>

		  Human "knowledge" is a set of context-sensitive, approximate and inaccurate
		   hypotheses that require continual testing.
			Compton argues that knowledge is a context-sensitive construct
		   [1]. Models may be inappropriate when used out of the context in which
		   they were elicited [2].
		 </p><p>
			Knowledge representation theorists stress that our KBs are
		   approximate surrogates of reality [3-5]; i.e. their accuracy is
		   doubtful. Contrast this view with Ed 
		   Feigenbaum 
		   who described knowledge
		   engineering as "mining the jewels in the expert's head" [6]; i.e. KBs
		   were representations of what experts actually have in their heads. The
		   changeover from the Feigenbaum "expertise transfer" view to the
		   modern "knowledge-modelling" view seems to have occurred in the early
		   nineties [7]. O'Hara notes that some KR theorists still make occasional
		   claims that their KR theory has some psychological basis. However, when
		   pressed, their public line is that representations are models/
		   surrogates only [8].
		 </p><p>
		  
			Popper argues that all knowledge is an hypothesis since nothing
		   can ever be ultimately proved; our currently believed ideas are merely
		   those that have survived active attempts to refute them [9]. Compton
		   describes knowledge acquisition (KA) cycles where "test" is the
		   dominant technique [10, 11]. Elsewhere we have argued that KE
		   methodologies based on testing can out-perform alternative, more
		   complicated, methodologies [12].
		 </p><p>
		   Our knowledge representation (KR) research is somewhat flawed if we
		   uncritically enshrine our knowledge bases (KB).
			Clancey argues the knowledge structures found during knowledge
		   acquisition (frames, rules, etc) are structures created on-the-fly in
		   response to the specifics of the situation in which they were elicited
		   (the example being studied, the experts used, etc); i.e. they have
		   little/no isomorphism with structures present in an expert's
		   information processing system. Clancey is silent on where these
		   structures come from (but hints that the substrate may be neural). [13]
		 </p>
			 <h3>References</h3>
			 <ol><li>
			   Compton, P.J. and R. Jansen, 
				<a href="http://www.cse.unsw.edu.au/~compton/publications/1989_EKAW.pdf">A philosophical basis for
			   knowledge acquisition</a>. Knowledge Acquisition, 1990. 2: p. 241-257.
			 </li><li> 
				Puccia, C.J. and R. Levins, Qualitative Modelling of Complex
			   Systems: An Introduction to Loop Analysis and Time Averaging. 1985,
			   Cambridge, Mass.: Harvard University Press. 259.
			 </li><li> 
			  
			   Davis, R., H. Shrobe, and P. Szolovits, What is a Knowledge
			   Representation? AI Magazine, 1993. (Spring): p. 17-33.
			 </li><li> 
			  
				Wielinga, B.J., A.T. Schreiber, and J.A. Breuker, KADS: a
			   modelling approach to knowledge engineering. Knowledge Acquisition,
			   1992. 4(1): p. 1-162.
			 </li><li> 
			  
				Bradshaw, J.M., K.M. Ford, and J. Adams-Webber. Knowledge
			   Representation of Knowledge Acquisition: A Three-Schemata Approach. in
			   6th AAAI-Sponsored Banff Knowledge Acquisition for Knowledge-Based
			   Systems Workshop, ,October 6-11 1991. 1991. Banff, Canada:
			 </li><li> 
			  
			   Feigenbaum, E. and P. McCorduck, The Fifth Generation. 1983,
			   New York: Addison-Wesley.
			 </li><li> 
			  
			   Gaines, B., AAAI 1992 Spring Symposium Series Reports:
			   Cognitive Aspects of Knowledge Acquisition, in AI Magazine. 1992, p.
			   24.
			 </li><li> 
			  
			   O'Hara, K. and S. N. AI Models as a Variety of Psychological
			   Explanation. in IJCAI. 1993. Chambery, France:
			 </li><li> 
			  
			   Popper, K.R., Conjectures and Refutations,. 1963, London:
			   Routledge and Kegan Paul.
			 </li><li> 
			  
				Compton, P., et al., Ripple-down-rules: Turning Knowledge
			   Acquisition into Knowledge Maintenance. Artificial Intelligence in
			   Medicine, 1992. 4: p. 47-59.
			 </li><li> 
			  
				Compton, P., et al. Ripple down rules: possibilities and
			   limitations. in 6th Banff AAAI Knowledge Acquisition for Knowledge
			   Based Systems. 1991. Banff, Canada:
			 </li><li> 
			  
				Menzies, T.J. and P. Compton. 
					<a href="http://menzies.us/pdf/banff94.pdf">Knowledge Acquisition for
			   Performance Systems; or: When can "tests" replace "tasks"?</a> in
			   Proceedings of the 8th AAAI-Sponsored Banff Knowledge Acquisition for
			   Knowledge-Based Systems Workshop. 1994 (in press). Banff, Canada:
			 </li><li> 
			  
				Clancey, W. A Boy Scout, Toto, and a Bird: How Situated
			   Cognition is Different from Situated Robotics. in NATO Workshop on
			   Emergence, Situatedness, Subsumption, and Symbol Grounding,. 1991.
			 </li></ol> 

		 </p>]]></description>
	  </item>


	 <item>
		<category rank="1000">lisp</category>
		 <id>108</id> 
		 <title>
				Good set of LISP references
		 </title>
		 <pubdate secs="1196217257" around="Nov07">Tue Nov 27 18:34:17 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?108</link>
		 <guid>http://menzies.us/csx72/?108</guid>
		 <description><![CDATA[<p>
	 
			  Lots of <a href="http://www.apl.jhu.edu/~hall/lisp.html">good material</a> from Marty Hall.

		 </p>]]></description>
	  </item>

	  <item>
		<category rank="1000">lisp</category>
		<category rank="1000">start</category>
		 <id>106</id> 
		 <title>
				Getting started with LISP
		 </title>
		 <pubdate secs="1195837784" around="Nov07">Fri Nov 23 09:09:44 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?106</link>
		 <guid>http://menzies.us/csx72/?106</guid>
		 <description><![CDATA[<p>
		<img src="http://www.metabang.com/unclog/elements/lisp-glossy.jpg" width=150 border=1></p><p>
		To get a working (freely available) LISP system, look for SBCL (best) or CLISP (ok)</p><p>
		Please don't use any SCHEME dialect. SCHEME is a really great language but it never ran as fast as SBCL</p><p>

			  Then try to run and understand all the examples in the <a href="http://menzies.us/csx72/?2">textbook</a>, Part I: Introduction to Common Lisp
			  <ul>
			  <li>Introduction to Lisp. 
			  <li>A Simple Lisp Program. 
				<li>Overview of Lisp  
				</ul>
				</p>
				<p>(By the way, understanding these examples is  your <a href="http://menzies.us/csx72/?11">project1 assignment</a> for class. So doing this
					won't waste any of your time.).
					</p>

				<p>Note: source code for the above can be found <a href="http://norvig.com/paip/examples.lisp">here</a>.
			

		 </p>]]></description>
	  </item>

	  <item>
		<category rank="1000">lisp</category>
		 <id>107</id> 
		 <title>
				Getting real funky with LISP
		 </title>
		 <pubdate secs="1195838096" around="Nov07">Fri Nov 23 09:14:56 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?107</link>
		 <guid>http://menzies.us/csx72/?107</guid>
		 <description><![CDATA[<p>
	<img width=150 src="http://us.st11.yimg.com/us.st.yimg.com/I/paulgraham_1976_1330075"></p><p> 
			  If you are feeling adventurous, try reading Graham's excellent text <a href="http://www.paulgraham.com/onlisptext.html">On Lisp</a>. </p><p>Graham is the guy responsible for the recent resurgence of LISP (he sold his LISP-based dot-com to Yahoo for $X0,000,000). So take careful note of all he says.

		 </p>]]></description>
	  </item>


  <item>
    <category rank="1" >emacs</category>
     <id>130</id> 
     <title>
        They use EMACS
     </title>
     <pubdate secs="1197346087" around="Dec07">Mon Dec 10 20:08:07 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?130</link>
     <guid>http://menzies.us/csx72/?130</guid>
     <description><![CDATA[<p>
 
          <center><img width=500 src="http://www.michael-prokop.at/computer/images/vi-emacs-final.png"></center>

     </p>]]></description>
  </item>


  <item>
    <category rank="1000" >lecture</category>
    <category rank="1000" >week3</category>
     <id>158</id> 
     <title>
        	Cognitive Science
     </title>
     <pubdate secs="1201033486" around="Jan08">Tue Jan 22 12:24:46 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?158</link>
     <guid>http://menzies.us/csx72/?158</guid>
     <description><![CDATA[<p>
 

<p>This subject is about inhuman AI, all the tricks that computers can use to be smart that 
humans may or may not use.

<p>Just to give the humans a little more equal time in this subject, today we're going to talk about humans and AI. The
field of <em>cognitive science</em> is devoted to discovering more about human intelligence using insights from
a range of other areas including:
<ul>
<li>neuro-physiology
<li>philosophy
<li>linguistics
<li>mathematics
<li>cognitive psychology
<li>AI
</ul>
<p>Brief notes on all these follow. Note that: <ul> <li>I used to be
up to date on this stuff but I have not really looked at this since
the early 1990s. So this talk may  be a little out of date.  <li>Note
for WVU csx72 students: this material is not examinable </ul>
<h2>Neuro-physiology</h2>
<p>Human brain cells are very different to computer chips. In your brain, there is:
<ul>
<li> More distributed processing;

<li>No explicit representation mechanism  stored at one bit location (no "grandmother cell" which, if it dies, all your knowledge of granny is lost). 

<li>Much more use of parallelism. Throw a pen at the lecturer (just
kidding). My hand can jerk out to catch that pen in the time required
for just a few neurons to fire. The idea that some sequential
algorithm has solved the trajectory problem using matrix mathematics
(lots of iterations over rows and columns) seems very unlikely.

<li>More use of a single structure (the neuron) used repeatedly (28 billion times)
</ul>

<ul><li>Not this menagerie of parts: <img width=300  src="http://www.ravirajtech.com/images/single_board_computer.jpg" align=middle>
<li>But one part, repeated many times:
<img align=middle src="http://media.nasaexplores.com/lessons/01-063/images/nervecell.gif">
</ul>


<p>
A nerve cell can have up to
1000 dendritic branches, making connections with tens of thousands of other cells.
Each of the 10<sup>11</sup> (one hundred billion) neurons has on
average 7,000 connections to other neurons. 
<p>It has been
estimated that the brain of a three-year-old child has about 10<sup>16</sup>
synapses (10 quadrillion). This number declines with age, stabilizing
by adulthood. Estimates vary for an adult, ranging from 10<sup>15</sup> to 5 x
10<sup>15</sup> synapses (1 to 5 quadrillion).

<p>
Just to say the obvious-
that's a BIG network. 
<p> Neuro-physiology is a very active field. The latest generation of MRI scanners allows detailed real-time
monitoring of brain activity while subjects perform cognitive tasks. 
<center>
<img src="http://www.nature.com/nrc/journal/v7/n3/images/nrc2073-f1.jpg">
</center>
<p>
This field shows great promise but, as yet, researchers are still
working on locomotion, pain perception, and vision, and have yet to rise to the level of model-based
reasoning.
<p> The field of 
<a href="http://en.wikipedia.org/wiki/Neural_networks">neural networks</a>
 originally  began as an experiment in exploiting massive
repetition of a single simple structure, running in parallel, to achieve cognition.
As the field evolved, it turned more into some curve fitting over non-linear functions
(and the <a href="http://www.autonlab.org/tutorials/neural.html">tools used to achieve that fit</a> have become less and less likely to have a biological correlate).
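<p>To make "massive repetition of a single simple structure" concrete, here is a sketch of one artificial neuron: a weighted sum of inputs pushed through a squashing function. (Illustrative only; the names and numbers are made up, and real networks wire many thousands of these together.)

```lisp
;; One artificial neuron: the weighted sum of its inputs, plus a bias,
;; squashed through the sigmoid function into the range (0,1).
;; A neural network is just many copies of this one structure.
(defun neuron (weights inputs bias)
  (let ((sum (+ bias (reduce #'+ (mapcar #'* weights inputs)))))
    (/ 1.0 (+ 1.0 (exp (- sum))))))

;; CL-USER> (neuron '(0.5 -0.3) '(1.0 2.0) 0.1)
;; 0.5
```

The "curve fitting" mentioned above is just the search for weights and biases that make a network of such units reproduce some target function.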

<p>For another example of AI research, initially inspired by a biological metaphor, see 
<a href="http://en.wikipedia.org/wiki/Genetic_algorithm">genetic algorithms</a>.

<h2>Linguistics</h2>
<p>Noam Chomsky is one of the towering figures of the 20th century. He's a linguist and a political commentator. Every few years he disappears, then re-emerges with a new book that redefines everything.
For example, a lot of computer science parsing theory comes from Chomsky's theory of language grammars.
<p>In AI circles, Chomsky is most famous for his argument that we
don't learn language. Rather, we are born with a universal grammar and,
as children grow up, all they are doing is filling in some empty
slots with the particulars of the local dialect.
<p>This must be so, argues Chomsky, since otherwise language acquisition would be impossible.
<ul><li>   

      Children are exposed to very little correctly formed language. When people speak, they constantly interrupt themselves, change their minds, make slips of the tongue and so on. Yet children manage to learn their language all the same. This claim is usually referred to as the Argument from Poverty of the Stimulus.
 
  <li> Children do not simply copy the language that they hear around
  them. They deduce rules from it, which they can then use to produce
  sentences that they have never heard before. They do not learn a
  repertoire of phrases and sayings, as the behaviourists believe, but
  a grammar that generates an infinity of new sentences.
</ul>
<p>
The implications are staggering. Somewhere in the wet-ware of the brain there is something like
the grammars we process in computer science. 
At its most optimistic, this also means that grammar-based languages
(like LISP, etc) have what it takes to reproduce human cognition.
We'll return to this below (when we talk about the "physical symbol system hypothesis").
<center>
<img src="http://www.manuelabadia.com/blog/content/binary/example_parseTree.PNG" >
</center>
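<p>As a toy illustration of "a grammar that generates an infinity of new sentences", here is a tiny rewrite-rule grammar and a generator for it. (A sketch only, with made-up rule names; the <a href="http://menzies.us/csx72/?2">textbook</a> develops this idea much more carefully.)

```lisp
;; A toy grammar: each rule maps a symbol to one or more expansions.
;; An expansion that is a list is rewritten further; an atom is a word.
(defparameter *toy-grammar*
  '((sentence    (noun-phrase verb-phrase))
    (noun-phrase (article noun))
    (verb-phrase (verb noun-phrase))
    (article     the a)
    (noun        man ball)
    (verb        hit took)))

(defun generate (symbol)
  "Expand SYMBOL into a random flat list of words."
  (let* ((rule   (assoc symbol *toy-grammar*))
         (choice (nth (random (length (rest rule))) (rest rule))))
    (if (listp choice)
        (mapcan #'generate choice)
        (list choice))))

;; CL-USER> (generate 'sentence)
;; (THE MAN TOOK A BALL)   ; for example
```

Six rules already generate sixteen distinct sentences; add recursion (e.g. nested noun phrases) and the set becomes infinite, which is Chomsky's point.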
<p>But is there really a "language" of thought? Or is that just chemicals
sloshing around the dendrites (under the hood) which we interpret as language?
<p>Well, there is evidence of some model-based manipulation by our wet
ware. In classic <a href="http://en.wikipedia.org/wiki/Mental_rotation">mental rotation experiments</a>, it was shown that the
time required to check if some object was a rotation of another was
linearly proportional to the angle of the rotation. It is as if some
brain box is reaching out to a sculpture of the thing we are looking at, then turning it around at some fixed rate.
<center>
<img src="http://upload.wikimedia.org/wikipedia/en/2/2b/MR_TMR.jpg">
</center>
<p>Anyway, if you ask a philosopher, "is it really neurons, or are there symbolic
models in between our ears?", they might answer: who cares? Whatever stance works best
is the right one.
<br clear=all>
<h2>Philosophy: part1 (we love AI)</h2>
<p>Daniel Dennett asks a simple question. Try to beat a chess-playing program. What are you going to do?
<ul>
<li>Assume a <em>physical stance</em> and work out the physics and chemistry of the device?
In this stance, you are
trying to work out which way the currents will flow through the computer's wires.
Good luck with that.
<li>Assume a <em>design stance</em>
and reason about the biology or engineering of the device: look for large functional blocks and reason about
the next move of the chess playing computer by thinking about the surge suppressors, the coolant system, etc.
Again, good luck with that.
<li>Assume an <em>intentional stance</em> (the level of software and minds) where we ascribe beliefs,
desires, intents/goals to a device, then act accordingly. For example, we might say "the program <em>wants</em> to
take my queen and <em>believes</em> that if it offers me a trap, I'll fall into it".
</ul>
<center>
<img src="http://www.comp.rgu.ac.uk/staff/sw/stuarts_papers/images/correspondence.gif">
</center>
<p>Which is the <em>right</em> stance? The answer is: it depends. What do you want to do? Stop being short-circuited by a loose wire?
You want the physical stance. Beat the program at chess? You want the intentional stance.
<p>Bottom line: a computer is not <em>just</em> "a machine". It is a
mix of things, some of which are best treated like any other
intelligence.
<p>Don't believe me? Well, pawn to king four and may the best stance win.
<p>(By the way,
for a good introduction to AI and philosophy, see <a href="http://en.wikipedia.org/wiki/The_Mind's_I">The Mind's I</a>.)

<h2>Philosophy: Part2 (AI? You crazy?)</h2>
<p>I think therefore I am. I don't think therefore...

<p>There used to be savage criticism from certain philosophers along the lines that AI was impossible.

 For example, John Searle is a smart guy.
His text <em>Speech Acts: An Essay in the Philosophy of Language. 1969</em> is
listed as <a href="http://home.comcast.net/~antaylor1/fiftymostcited.html">one of the most cited works of the 20<sup>th</sup> century</a>.
</p>


<p>In one of the most famous critiques of early AI, Searle invented the <em>Chinese Room</em>: an ELIZA-like AI that used simple pattern lookups
to react to user utterances.  Searle argued that this was nonsense- that such a system could never be said to be "really" intelligent.</p>
<center>
<a href="http://pzwart.wdka.hro.nl/mdr/research/fcramer/wordsmadeflesh/pics/chinese-room.png">
<img width=500 border=0 src="http://pzwart.wdka.hro.nl/mdr/research/fcramer/wordsmadeflesh/pics/chinese-room.png"></a>
</center>

<p>Looking back on it all, 27 years later, the whole debate seems wrong-headed. Of course ELIZA was the wrong model for intelligence-
no internal model that is refined during interaction, no background knowledge, no inference, no set of beliefs/desires/goals, etc.

<center>
<img src="http://www.biosurvey.ou.edu/oese/Straw_Man_Kit.JPG">
</center>
</p><p>
Searle's argument amounts to "not enough- do more". And we did. Searle's kind of rhetoric
(that AI will never work) fails in the face of AI's <a href="http://menzies.us/pdf03/aipride.pdf">many successes</a>.

     </p><p>
Here's some on-line resources on the topic:
<ul><li>Wikipedia's entry on the <a href="http://en.wikipedia.org/wiki/Chinese_room">Chinese Room</a>.
<li>The original article <a href="http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html">Minds, Brains, and Programs</a>
from The Behavioral and Brain Sciences, vol. 3. Copyright 1980
<li>A <a href="http://www.nybooks.com/articles/6542">spirited debate</a> about this idea between Searle and Daniel Dennett.
For my money, Searle's argument grows tired and stale against Dennett's insights.
</ul>
And here's some more general links:
<ul>
<li><a href="http://groups.google.com/group/comp.ai.philosophy/topics?gvc=2">Comp.ai.philosophy</a>
<li>A very old <a href="http://www.dontveter.com/caipfaq/index.html">FAQ</a> from
comp.ai.philosophy.
<li>Some more recent <a href="http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence"> AI &amp; philosophy</a> notes from
Wikipedia.
</ul>
</p>
<h2>Mathematics</h2>
<h3>Godel's Incompleteness Theorem</h3>
<P><img align=right src="http://twistedphysics.typepad.com/cocktail_party_physics/images/godel.jpg">
There is some mathematical support for Searle's pessimism. In 1930, the philosophical world was shaken to its foundations by
 a mathematical paper that proved:
<ul>
<li>
For any consistent, formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed. That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete.
<li>
For any formal recursively enumerable (i.e. effectively generated) theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.
</ul> 
<p>That is, formal systems have fundamental limits.
<p>So Godel's theorem gives us an absolute limit to what can be achieved
by "formal systems" (i.e.  the kinds of things we can write with a LISP program).
<p>Godel's theorem
<em>might</em> be used to argue against the "logical school" of
AI. If formal logics are so limited, then maybe we should ignore them
and try other procedural / functional representations instead:
<ul>
<li>This is a very old debate.
<li>The proceduralists lost.
<li>Logic turns out to be a very succinct way to describe an implementation.
And in that uniform view, impressive optimizations can be achieved
(e.g. <a href="http://www.uwtv.org/programs/displayevent.aspx?rID=3900&fID=810">Markov Logic</a>).
<li>Meanwhile, the proceduralists were left struggling to patch yet
another specific mechanism with one value in one specific domain. There may indeed be a neural net in my brain
that implements any number of <a href="http://www.edge.org/documents/ThirdCulture/p-Ch.8.html">procedural kludges</a>, but
it has had billions of years to evolve and patch (and patch again) those kludges. The experience of the 1990s is that we
can go much further with logical AI than was thought possible in the 1970s and 1980s.
</ul>

<h3>Cook and NP-Complete</h3>
<p>Godel's theorem is somewhat arcane. He showed that some things were unknowable, but he did not say
what those things are.
<p>Enter Steve Cook. In 1971, he showed that commonly
studied problems (e.g. boolean satisfiability) belong to a class of problems (the NP-complete problems) for which
every known complete solution takes exponential time to compute.
<p>An army of algorithms researchers have followed Cook's lead and now there are 
<a href="http://en.wikipedia.org/wiki/List_of_NP-complete_problems">vast catalogues</a>
of commonly studied problems for which there is no known fast (less than exponential time) and complete (no heuristic search)
solution.
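<p>To make that concrete, here is a minimal sketch, in LISP, of the only known kind of complete SAT procedure: try assignments until one works. (The clause encoding below is invented for illustration.) With n variables there are 2<sup>n</sup> assignments to try.

```lisp
;; A naive, complete SAT checker -- a sketch only; the clause encoding
;; here is invented for illustration.  A formula is a list of clauses;
;; a clause is a list of literals; a literal is a symbol or (not symbol).
(defun lit-true-p (lit assignment)
  "Is literal LIT true under ASSIGNMENT (an alist of var . boolean)?"
  (if (and (consp lit) (eq (first lit) 'not))
      (not (cdr (assoc (second lit) assignment)))
      (cdr (assoc lit assignment))))

(defun sat-p (vars clauses &optional assignment)
  "Return a satisfying assignment, or nil (assumes VARS is non-empty).
In the worst case this tries all 2^(length VARS) assignments."
  (if (null vars)
      (when (every (lambda (clause)
                     (some (lambda (lit) (lit-true-p lit assignment)) clause))
                   clauses)
        assignment)
      (or (sat-p (rest vars) clauses (acons (first vars) t   assignment))
          (sat-p (rest vars) clauses (acons (first vars) nil assignment)))))
```

For example, <code>(sat-p '(a b) '((a (not b)) ((not a))))</code> finds the assignment a=false, b=false. No known complete method does fundamentally better than this exponential search in the worst case.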
<h2> Psychology (part1)</h2>

<p>O.K., so formal systems can never be omniscient, but how good do you have to be to be "as smart as humans"?

<p>The answer is, sometimes, not very smart at all.
 The
cognitive psychology literature is full of examples where humans
repeatedly reason in characteristic sub-optimal ways (see the wonderful
Wikipedia page listing <a
href="http://en.wikipedia.org/wiki/List_of_cognitive_biases">35
decision-making biases, 28 biases in probability and belief, 20 social
biases, and 7 memory errors</a>). 

<h2>AI</h2>
<p>In fact, one of the early successes of AI was replicating not just some
human cognitive skills, but also some human cognitive failings.  In the 1970s, AI
researchers adopted the <em>physical symbol system hypothesis</em>:
<ul> A physical symbol system has the necessary and sufficient means
of general intelligent action.  </ul> Here, by <em>physical symbol
system</em> they mean <ul> the basic processes that a computer can
perform with symbols are to input them into memory, combine and
reorganize them into symbol structures, store such structures over
time, ... compare pairs of symbols for equality or inequality, and
"branch" (behave conditionally on the outcome of such tests) </ul>
(Note that  tacit in this hypothesis is Chomsky's language of thought and the
notion that computers can think like people if they can push around
symbols, just like the brain.)  
Rule-based programs designed around
this hypothesis could replicate not just feats of human comprehension,
but also human inadequacies in the face of (e.g.) limited short term
memory or immature long-term memory (see Expert and Novice Performance
in Solving Physics Problems, Science, 1980 Jun 20;208(4450):1335-1342
Larkin J, McDermott J, Simon DP, Simon HA).
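<p>To give the flavor of such rule-based programs, here is a toy match-act production system (the rule and fact formats below are invented for illustration, not taken from any of the systems above): LTM is a list of rules, STM is a list of facts, and each cycle fires every rule whose conditions all appear in STM.

```lisp
;; A toy match-act production system.  LTM: rules of the form
;; (cond1 cond2 ... => act1 act2 ...).  STM: a list of facts.
;; These physics-flavored rules are illustrative placeholders only.
(defparameter *ltm*
  '(((mass known) (accel known)  => (force known))
    ((force known) (force asked) => (problem solved))))

(defun fire (stm)
  "One match-act cycle: collect the new facts added by every rule
whose condition patterns are all present in STM."
  (loop for rule in *ltm*
        for conds = (subseq rule 0 (position '=> rule))
        for acts  = (rest (member '=> rule))
        when (and (every (lambda (c) (member c stm :test #'equal)) conds)
                  (notevery (lambda (a) (member a stm :test #'equal)) acts))
        append acts))

(defun run (stm)
  "Repeat the match-act cycle until no rule adds anything new."
  (let ((new (fire stm)))
    (if new (run (append stm new)) stm)))
```

Running <code>(run '((mass known) (accel known) (force asked)))</code> grows the STM until it contains <code>(problem solved)</code>: a tiny working memory plus pattern matching against long term memory, exactly in the spirit of the physical symbol system hypothesis.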

<h2> Psychology (part2)</h2>
<p>The AI work did not come in isolation. The physical symbol system hypothesis, for example, owed much to decades
of psychological research; in particular, to the cognitive psychology research that evolved as a reaction to  behaviorism
(from the early part of the 20th century). In its most extreme view, behaviorism denied all internal states
and allowed for only the objective study of externally observable behavior
(i.e. no mental life, no internal states; thought is covert speech).
<p>
Well, that flew for a few decades then it just ran out of steam.
After decades of trying to map human behavior into
(what seems now) trite stimulus response models, cognitive psychology made the obvious
remark that the same input yields different outputs from different people <em>because</em> of their internal models.
That is, intelligence is not just a reaction to the world. Rather, it is the careful construction and constant
review of a set of internal states of belief which we use to  decide how to best act next.

<h2>Conclusion</h2>
<p>Just because a representational system like LISP is limited does not mean that  it is useless:
<ul><li>
LISP can certainly represent the models of cognitive psychology
<li>
As to Godel's theorem:
I don't know the length of my 1000th hair above my right ear,
but I can still buy a house, write programs, balance my checkbook, etc. So Godel's theorem does not make me want to junk my LISP compiler and go off into
procedural neural net land.
<li>Cook showed that a  LISP interpreter can't implement a complete and fast solution to a wide range of problems.
But neither can people. And (using stochastic search) we 
can get pretty good solutions pretty fast, even from pretty big problems.
</ul>
<P>And anyway, if I am only trying to be as good as human intelligence, then 
sometimes I don't need to try too hard.

<p>So  please sleep easy tonight. And keep typing away at LISP.

     </p>]]></description>
  </item>

	  <item>
		<category rank="1000">lisp</category>
		<category rank="1000">emacs</category>
		<category rank="1000">start</category>
		 <id>105</id> 
		 <title>
				Getting started with SLIME
		 </title>
		 <pubdate secs="1195837543" around="Nov07">Fri Nov 23 09:05:43 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?105</link>
		 <guid>http://menzies.us/csx72/?105</guid>
		 <description><![CDATA[<p>
		<img  src="http://www.common-lisp.net/project/slime/images2/slime-small.png">
		</p><p>
			  S.L.I.M.E. = superior LISP interaction mode for emacs. </p><p>It is my recommendation  for writing, running, and debugging
			  LISP code (though some people prefer the <a href="http://bitfauna.com/projects/cusp/index.html">CUSP SBCL plugin</a> for ECLIPSE).
	</p>
	<p>
	If you want to get started on slime on a CSEE Linux machine, edit your $HOME/.emacs and add these lines.
	</p>
	<pre>
(setq inferior-lisp-program "/usr/bin/sbcl --noinform")
(add-to-list 'load-path "/usr/share/common-lisp/source/slime/") ;; this path is WVU CSEE specific
(setq slime-path "/usr/share/common-lisp/source/slime/")        ;; this path is WVU CSEE specific
(require 'slime)
(slime-autodoc-mode)
(slime-setup)
(add-hook 'lisp-mode-hook (lambda ()  
	(slime-mode t) 
	(local-set-key "\r" 'newline-and-indent)
	(setq lisp-indent-function 'common-lisp-indent-function)
	(setq indent-tabs-mode nil)))

(global-set-key "\C-cs" 'slime-selector)
</pre>
	<p>Then fire up emacs and type M-x slime. After that, any .lisp file you edit will have some cool LISP bindings (see
	http://common-lisp.net/project/slime/doc/html/Compilation.html#Compilation).
		 </p>]]></description>
	  </item>


	  <item>
		<category rank="1000">news</category>
		 <id>104</id> 
		 <title>
			cs472 now full
		 </title>
		 <pubdate secs="1195833685" around="Nov07">Fri Nov 23 08:01:25 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?104</link>
		 <guid>http://menzies.us/csx72/?104</guid>
		 <description><![CDATA[<p>
	 
		Sorry-	enrollments in CS472 have reached 25. This section of the subject is now full.</p>
	<p>Note that, at the time of this writing, places still exist in CS572.
		 </p>]]></description>
	  </item>

	  <item>
		<category rank="1000">news</category>
		 <id>103</id> 
		 <title>
			Web site created
		 </title>
		 <pubdate secs="1195833685" around="Nov07">Fri Nov 23 08:01:25 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?103</link>
		 <guid>http://menzies.us/csx72/?103</guid>
		 <description><![CDATA[<p>
	 
				Class web site now active. 
		
		 </p>]]></description>
	  </item>

	  <item>
		<category rank="1000">admin</category>
		 <id>102</id> 
		 <title>
				Contact us
		 </title>
		 <pubdate secs="1195831033" around="Nov07">Fri Nov 23 07:17:13 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?102</link>
		 <guid>http://menzies.us/csx72/?102</guid>
		 <description><![CDATA[
		 <p>
		<img align=right src="http://blogs.mercurynews.com/consumeractionline/wp-content/photos/old_phone.gif"> 
			Email class: <a href="mailto:2008@wisp.unbox.org">2008@wisp.unbox.org</a> (note: this is a moderated list)</p>
			 Email lecturer: "tim AT menzies DOT us"
			  </p><br clear=all>]]></description>
	  </item>
  <item>
    <category rank="1000" >project</category>
     <id>137</id> 
     <title>
        Cs472	Project: Planning and Dialogue Generation for Gaming
     </title>
     <pubdate secs="1200085267" around="Jan08">Fri Jan 11 13:01:07 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?137</link>
     <guid>http://menzies.us/csx72/?137</guid>
     <description><![CDATA[<p>
	 <h2>Notes</h2>
	 Project size: three to four  persons per group.
	 (Project 1 for cs472 is the same as for cs572.)
	<h2>Parts</h2>
		 <ol>
		 <li>Project 1: An <a href="http://menzies.us/csx72/?11">introduction to LISP</a> programming
		 <li>Project 2: <a href="http://menzies.us/csx72/?12">Some planning</a>
		 <li>Project 3: Some <a href="http://menzies.us/csx72/?13">dialogue generation
		 controlled by the  planner</a>
		 </ol>
 <h2>Background</h2>
		 <p>Games need graphics and models:
		 <ul><li>graphics to make it look exciting and give feedback; <li>
		 models to control what  the computer offers you.</ul></p>
		 <p>Other subjects deal with the graphics. Here, we deal with the model.</p>
		 		<p> 
			<img class=rthumb250  src="http://imagecache2.allposters.com/images/pic/151/GREEKWEDRPT~My-Big-Fat-Greek-Wedding-Posters.jpg">
		Our game concerns the ancient and stressful act of
		getting married.
Not everyone
		gets married and  some of us never should. It just does not work for some people:
		<ul>
		<li>

		Marriage is a fine institution but who wants to live in an
		institution?<br>
		-- Groucho Marx
		<li>
		Sometimes I wonder if men and women really suit each other. Perhaps they should live next door and just visit now and then.<br>
		-- Katharine Hepburn
		</ul>
		<p>
		Those of us who do get married need all the help we can get.
		<h2>The Wedding Planner</h2>
In this game, our AI agent will be like an on-line wedding planner, advising
a budget conscious couple about, well, everything.</p>
<p>Note that the goal 
of producing a "great wedding" is a
non-linear problem: produce
the best wedding possible at the least cost. Not everything is possible. Trade-offs
will have to be made. 
One problem in all this is the dependencies between decisions. For example,
<ul>
<li>
choice of guests may affect
choice of menu,
<li>
which affects choice of wine,
<li>which affects total cost,
<li>which
affects how many guests are invited,
<li>which affects who gets invited,
<li>which
affects menus, and around we go again.
</ul>
Your tool should report N different wedding plans and plot the cost/benefit
of each.
</p>
<p>Also note that the wedding planner and the ?happy couple do not
always share the same goals. The wedding planner may have left-over stock of some material
they want to sell to the couple- but the couple may not be convinced
that they need it.
<p>Note that your tool has to be an inference engine and a knowledge base.
Part of the art of AI programming is showing that the same inference engine
can run over  multiple knowledge bases. So pick any two of the following weddings
and try to model some of the rituals.
<h2>References</h2>
<ul>
<li>
Chinese weddings: see <a href="http://en.wikipedia.org/wiki/Wedding_Banquet">The Wedding banquet</a>
(Chinese wedding that gets a little... strange).
<li>
Indian weddings: see <a href="http://en.wikipedia.org/wiki/Monsoon_Wedding">Monsoon wedding</a> (huge, contemporary wedding).
<li>
British weddings: see <a href="http://en.wikipedia.org/wiki/Four_Weddings_and_a_Funeral">Four Weddings and a funeral</a>  (pick any one of the weddings and ignore the bit about the funeral)
<li>
Greek weddings: see <a href="http://en.wikipedia.org/wiki/My_Big_Fat_Greek_Wedding">My big fat 
Greek wedding</a> (before the wedding, don't forget all the courting).
</ul>
<p>Note: you won't be able to model all of any of these. To make the task manageable,
pick some part of the whole problem; e.g.
try to model one of banquets in the movie.
</p>]]></description>
  </item>

  <item>
    <category rank="1000" >project</category>
     <id>138</id> 
     <title>
        Cs572 class project: Model-based software engineering
     </title>
     <pubdate secs="1200087100" around="Jan08">Fri Jan 11 13:31:40 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?138</link>
     <guid>http://menzies.us/csx72/?138</guid>
     <description><![CDATA[<p>
	 <h2>Notes</h2>
	 Project size: two people per group.
	 (Project 1 for cs572 is the same as for cs472.)
	 <h2>Parts</h2>
	 <ul>
		 <li>Project 1: An <a href="http://menzies.us/csx72/?11">introduction to LISP</a> programming
	 <li>Project 2b: <a href="http://menzies.us/csx72/?16">Certifying the models</a>
	 <li>Project 3b: <a href="http://menzies.us/csx72/?14">Implementing the search engines</a>
	<li>Project 4b: <a href="http://menzies.us/csx72/?10">Experimental studies </a>
	 </ul>
	 <h2>Background</h2>
	 <p>
The STAR tool, discussed in class,
is a decision support aid for model-based software engineering.</p>
<p>
		  STAR has a search bias. It only explores
		  options using a very dumb search device called simulated annealing.
	<h2>Task</h2>
	<p>Your task is to check how different search biases change the results of that tool,
	measured in terms of:
	<ul>
	<li>Runtime
	<li>Size of solution
	<li>Median score  of the solution
	<li>Variance in the solution score
	<li>The stability of the solution over multiple runs.
	</ul>
	<p>For all the above except stability,
	lower numbers are better.
	<p>You will explore the above using simulated annealing
	and any  three of the following:
	<ul>
	<li>Keys</li>
	<li>Depth-first iterative deepening
	<li>Isamp
	<li>Beam search
	<li> LDS
	</ul>
	<p>(And, for extra credit, explore more than 3 including items not on this list.)</p>
	<h2>References</h2>
	 <ul><li><a 
	 	href="http://menzies.us/pdf/07casease.pdf">The Business 
		Case for Software Engineering</a>
	 <li><a href="http://menzies.us/pdf/08keys.pdf">The Keys paper</a>
	 <li><a href="http://en.wikipedia.org/wiki/Simulated_annealing">Simulated annealing</a>
	 <li>The <a href="http://menzies.us/csx72/doc/search/lds.pdf">LDS</a> paper (has some notes at the front about Isamp).
	</ul>
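<p>For orientation, here is a generic simulated-annealing skeleton (a sketch only, with invented names; STAR's own implementation, scoring function, and cooling schedule will differ):

```lisp
;; Generic simulated annealing: walk from SOLUTION to random neighbors,
;; always accepting improvements and sometimes (more often while the
;; temperature is high) accepting worse moves, to escape local minima.
;; Function and parameter names here are illustrative, not STAR's.
(defun anneal (solution score mutate &key (kmax 1000) (temp0 1.0))
  "Minimize SCORE, starting from SOLUTION, using MUTATE to propose neighbors."
  (let* ((best solution)  (best-score    (funcall score best))
         (current best)   (current-score best-score))
    (dotimes (k kmax best)
      (let* ((temp       (* temp0 (- 1 (/ k kmax))))  ; linear cooling
             (next       (funcall mutate current))
             (next-score (funcall score next)))
        ;; Accept improvements always; accept worse moves with
        ;; probability exp(-delta/temp) (exponent clamped for safety).
        (when (or (< next-score current-score)
                  (and (plusp temp)
                       (< (random 1.0)
                          (exp (max -50.0
                                    (/ (- current-score next-score) temp))))))
          (setf current next current-score next-score))
        (when (< current-score best-score)
          (setf best current best-score best-score))))))
```

The point of the project is then to swap this search driver for others (keys, beam search, LDS, ...) while keeping the same scoring function, and compare the results.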
	<h2>Reports</h2>
	<p>Reports: You will hand in a PDF (ten pages max)
	document in the format of the ACM  SIG proceedings:
	<a href="http://www.acm.org/sigs/publications/proceedings-templates">template</a>.
	</p>]]></description>
  </item>


	  <item>
		<category rank="1">project</category>
		 <id>11</id> 
		 <title>
			Project 1: LISP, Grammars</title>
		 <pubdate secs="1195789532" around="Nov07">Thu Nov 22 22:45:32 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?11</link>
		 <guid>http://menzies.us/csx72/?11</guid>
		 <description><![CDATA[
		 <p>
		 (Note: do not attempt this project till you have completed <a href="http://menzies.us/csx72/?133">Lab1</a>.)</p>
		 <h2>Part A:  A whole lotta LISP</h2>
		 <p>In Lab1, you were shown how to:
		 <ul>
		 <li>create a "make.lisp" file that controls a set of files
		 <li>write test cases using "(egs :topic)" and run test cases using (demo :topic)
			</ul>
		<p>Your task in this project is to apply that knowledge to show you understand Norvig's LISP code in chapters 1,2,3. For each chapter:
			<ul>
			<li>Create a separate directory "proj1/chX"
			<li>Create a different "make.lisp" file in each directory that only loads files in that directory
			<li>Enter in code from Norvig chapter X
			<li>Create one "(egs :X)" test suite for each second-level heading that you don't skip over.
			With each eg, write two lines explaining what is going on, and include the expected output.
			<li>Create a "demos.lisp" file that contains 
			"(egs :all)" that calls all the other "egs" (if you don't understand this bit, go back and look at the lab1 "demos.lisp" file).
			</ul>
			<p>Only do code from the following sections:
		<ul><li>
		1.1, 1.2, 1.4, 1.5, 1.6, 1.7, 2.2, 2.3, 2.5, 3.1, 3.3, 3.5, 3.6, 3.8, 3.10, 3.12, 3.14, 3.15, 3.19
		</ul>
<h2>Part B: A Little Bit of Grammar</h2>
<p>Modify the grammar on page 30 of Norvig to describe the sequence of cs/ee subjects required to get a cs/ee/ce degree at WVU.   
			<ul>
			<li>Create a new directory "proj1/degree"
			<li>Create a file "proj1/degree/make.lisp" file.
			<li>Install just enough of the chapter 2 code into this new directory so the grammar can run.
			<li>Create example code in "proj1/degree/demos.lisp#(egs :all)" which, if executed, shows three different student plans of study.
			<li>Comment your code.
			</ul>
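<p>To fix ideas, here is the shape such a grammar might take, in the style of Norvig's chapter 2 rule format (the subject names below are invented placeholders- use the real WVU catalogue, and reuse your own chapter 2 code):

```lisp
;; A sketch of a "degree grammar", after Norvig's page-30 rule format.
;; The subject names are placeholders, NOT the real WVU requirements.
(defparameter *degree-grammar*
  '((plan-of-study   -> (freshman-fall freshman-spring))
    (freshman-fall   -> (cs110 math155) (engr101 math153))
    (freshman-spring -> (cs111 math156))))

(defun rewrites (category)
  "Return the possible right-hand sides for CATEGORY (nil for a leaf)."
  (rest (rest (assoc category *degree-grammar*))))

(defun generate (phrase)
  "Expand PHRASE by recursively picking random right-hand sides."
  (let ((choices (rewrites phrase)))
    (cond ((listp phrase) (mapcan #'generate phrase))
          (choices (generate (elt choices (random (length choices)))))
          (t (list phrase)))))
```

Here <code>(generate 'plan-of-study)</code> returns one random plan of study as a flat list of subjects; calling it three times gives the three plans the project asks for.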
			<h2>What to hand in</h2>
			<p>Comment your code
			<p>Zip up the entire "proj1" directory to "proj1.zip"
		<p>Submit the zip to Ecampus.
<h2>How this code will be tested </h2>
<ol>
<li>
With your whole group in attendance...
<li>
I will unzip your zip
<li>
In each directory,
I will run "(load "make.lisp") (egs :all)" (in SLIME, on a CSEE LINUX box). 
<li>
For each member of the group in turn,
I will point to ten random parts of the code and ask them to explain it to me (no hints from other members of the team). I will expect you to know your code.
</ol>
<h2>Study hint</h2>
Any code in sections
		1.1, 1.2, 1.4, 1.5, 1.6, 1.7, 2.2, 2.3, 2.5, 3.1, 3.3, 3.5, 3.6, 3.8, 3.10, 3.12, 3.14, 3.15, 3.19
		may be asked about in the mid-session quiz.
			 ]]></description>
	  </item>

	  <item>
		<category rank="20">project</category>
		 <id>12</id> 
		 <title>
			Project 2a
		 </title>
		 <pubdate secs="1195789577" around="Nov07">Thu Nov 22 22:46:17 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?12</link>
		 <guid>http://menzies.us/csx72/?12</guid>
		 <description><![CDATA[
	<p> 
			See <a href="http://menzies.us/csx72/src/week5/proj2a.txt">
			http://menzies.us/csx72/src/week5/proj2a.txt</a>.

		 ]]></description>
	  </item>



	  <item>
		<category rank="30">project</category>
		 <id>16</id> 
		 <title>
			Project 2b
		 </title>
		 <pubdate secs="1195790417" around="Nov07">Thu Nov 22 23:00:17 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?16</link>
		 <guid>http://menzies.us/csx72/?16</guid>
		 <description><![CDATA[<p>

					See 
					<a href="http://menzies.us/csx72/doc/cs572/proj2.txt">
					http://menzies.us/csx72/doc/cs572/proj2.txt</a>

		 </p>]]></description>
	  </item>



	  <item>
		<category rank="40">project</category>
		 <id>13</id> 
		 <title>
			Project 3a
		 </title>
		 <pubdate secs="1195789609" around="Nov07">Thu Nov 22 22:46:49 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?13</link>
		 <guid>http://menzies.us/csx72/?13</guid>
		 <description><![CDATA[
		<p> 
			TBD
		</p>
		 ]]></description>
	  </item>



	  <item>
		<category rank="50">project</category>
		 <id>14</id> 
		 <title>
			Project 3b
		 </title>
		 <pubdate secs="1195789671" around="Nov07">Thu Nov 22 22:47:51 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?14</link>
		 <guid>http://menzies.us/csx72/?14</guid>
		 <description><![CDATA[
		<p> 
See 
<a href="http://menzies.us/csx72/doc/cs572/proj3b.txt">
http://menzies.us/csx72/doc/cs572/proj3b.txt</a>
		</p>
		 ]]></description>
	  </item>



	  <item>
		<category rank="60">project</category>
		 <id>10</id> 
		 <title>
			Project 4b
		 </title>
		 <pubdate secs="1195789814" around="Nov07">Thu Nov 22 22:50:14 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?10</link>
		 <guid>http://menzies.us/csx72/?10</guid>
		 <description><![CDATA[<p>
	 
			TBD

		 </p>]]></description>
	  </item>


	  <item>
		<category rank="1">syllabus</category>
		 <id>3</id> 
		 <title>
			Class objectives	
		 </title>
		 <pubdate secs="1195784601" around="Nov07">Thu Nov 22 21:23:21 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?3</link>
		 <guid>http://menzies.us/csx72/?3</guid>
		 <description><![CDATA[
	<p> 
	Upon successful completion of this course, students should have:
	</p>
	<ul>
	<li>An understanding of the basic theory of artificial intelligence (AI).
	<li>An understanding of the basic programming techniques of AI.
	<li>Ability to describe the practical issues associated with AI for real-world applications
	such as 
	gaming;
	or model-based software engineering;
	<li>An understanding of what it means to be a "knowledge worker" in the 21<sup>st</sup> century.
	<li>An understanding of how AI can augment traditional software engineering.
	</ul>
		 ]]></description>
	  </item>





	  <item>
		<category rank="1000">syllabus</category>
		 <id>5</id> 
		 <title>
			Professor
		 </title>
		 <pubdate secs="1195786068" around="Nov07">Thu Nov 22 21:47:48 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?5</link>
		 <guid>http://menzies.us/csx72/?5</guid>
		 <description><![CDATA[
	 
			<p>Dr. Tim Menzies Ph.D.<br>
		<a href="http://menzies.us">http://menzies.us</a><br>
		"tim AT menzies DOT us"</p>
		

		 ]]></description>
	  </item>



	  <item>
		<category rank="1000">syllabus</category>
		 <id>6</id> 
		 <title>
			Consultation times
		 </title>
		 <pubdate secs="1195786161" around="Nov07">Thu Nov 22 21:49:21 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?6</link>
		 <guid>http://menzies.us/csx72/?6</guid>
		 <description><![CDATA[
	 
			<p>Tuesday, 5:30pm, ESB room 841</p>

		 ]]></description>
	  </item>



	  <item>
		<category rank="3">syllabus</category>
		 <id>4</id>
		 <title>
			Where, When
		 </title>
		 <pubdate secs="1195785256" around="Nov07">Thu Nov 22 21:34:16 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?4</link>
		 <guid>http://menzies.us/csx72/?4</guid>
		 <description><![CDATA[
	 
	<dl>
	<dt>when:</dt>
	<dd>Tuesday 1830-2050</dd>
	<dt>where:</dt>
	<dd>Room 756, ESB</dd>
	<dt>university site:</dt>
	<dd>
	<a href="http://www.arc.wvu.edu/courses">http://www.arc.wvu.edu/courses</a>
	</dd>
	<dt>mailingList</dt>
	<dd> <a href="http://wisp.unbox.org/listinfo.cgi/2008-unbox.org"> http://wisp.unbox.org/listinfo.cgi/2008-unbox.org</a>
	<dt>Ecampus site</dt>
	<dd> <a href="https://ecampus.wvu.edu/webct/logon/163543761051">https://ecampus.wvu.edu/webct/logon/163543761051</a>
	</dd>
	</dl>
		 ]]></description>
	  </item>


	  <item>
		<category rank="1000">syllabus</category>
		<category rank="1000">lisp</category>
		 <id>2</id> 
		 <title>
			Textbook
		 </title>
		 <pubdate secs="1195783817" around="Nov07">Thu Nov 22 21:10:17 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?2</link>
		 <guid>http://menzies.us/csx72/?2</guid>
		 <description><![CDATA[
			<p>Peter Norvig's <a href="http://norvig.com/paip.html">Paradigms of AI</a>
			</p>
		<p>
			<img style="padding: 3px;" align=left border=0 width=150 src="http://images.bestwebbuys.com/muze/books/18/9781558601918.jpg"></p><p>
		<P>The following chapters will be thoroughly studied:
		<ul>1. Introduction to Lisp<br>
		2. A Simple Lisp Program<br>
		3. Overview of Lisp<br>
		4. GPS: The General Problem Solver<br>
		5. ELIZA: Dialog with a Machine <br>
		11. Logic Programming<br>
	14. Knowledge Representation and Reasoning <br>
	16. Expert Systems 
		</ul>
		<br clear=all>
		</p>
		<p>The following chapters will be studied, if time permits.
		<ul>
		6. Building Software Tools<br>
		7. STUDENT: Solving Algebra Word Problems <br>
		18. Search and the Game of Othello
			</ul>
		</p>
		<P>The following chapters will not be studied:
		<ul>
	8. Symbolic Mathematics: A Simplification Program<br>
	9. Efficiency Issues<br>
	10. Low Level Efficiency Issues <br>
	12. Compiling Logic Programs<br>
	13. Object-Oriented Programming <br>
	V: Advanced AI Programs<br>
	15. Symbolic Mathematics with Canonical Form <br>
	17. Line-Diagram Labeling by Constraint Satisfaction<br>
	19. Introduction to Natural Language<br>
	20. Unification Grammars<br>
	21. A Grammar of English<br>
	V: The Rest of Lisp<br>
	22. Scheme: An Uncommon Lisp<br>
	23. Compiling Lisp<br>
	24. ANSI Common Lisp<br>
	25. Troubleshooting
			</ul>
		</p>
		
		 ]]></description>
	  </item>


  <item>
    <category rank="1000" >syllabus</category>
     <id>134</id> 
     <title>
        Ecampus
     </title>
     <pubdate secs="1200068565" around="Jan08">Fri Jan 11 08:22:45 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?134</link>
     <guid>http://menzies.us/csx72/?134</guid>
     <description><![CDATA[<p>


          WVU's Ecampus tool will  be used for controlling group membership, submitting assignments, and distributing grades.
		  </p>
		  <p>This subject's Ecampus site is <a href="https://ecampus.wvu.edu/webct/logon/163543761051">https://ecampus.wvu.edu/webct/logon/163543761051</a>. Even though
		 the
		  name of the site is <em>CS-472 - 200801-CS-472-001</em>, it is 
		  still the site for cs472 <em>and </em>cs572.
     </p>]]></description>
  </item>


	  <item>
		<category rank="1000">syllabus</category>
		 <id>7</id> 
		 <title>
			Class mailing list
		 </title>
		 <pubdate secs="1195786358" around="Nov07">Thu Nov 22 21:52:38 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?7</link>
		 <guid>http://menzies.us/csx72/?7</guid>
		 <description><![CDATA[
	<p> 
	Students are required to subscribe to the class mailing list- this
	will be the official notification mechanism for class announcements.
	When you subscribe, please use a password that you are not using
	elsewhere.
	</p>
	<p>
	If you need to speak to me on a private matter (e.g. illness, a
	query regarding your mark, suspected plagiarism), please email me
	privately to "tim AT menzies DOT us".
	</p>
	<p>
	Web page (subscribe here): 
	<a href="http://wisp.unbox.org/listinfo.cgi/2008-unbox.org">http://wisp.unbox.org/listinfo.cgi/2008-unbox.org</a>
	</p>
	<p>
	To post messages: <a href="mailto:2008@wisp.unbox.org">2008@wisp.unbox.org</a>.
	</p>
	<p>
	List archives: 
	<a href="http://wisp.unbox.org/private.cgi/2008-unbox.org/">http://wisp.unbox.org/private.cgi/2008-unbox.org/</a>
	</p>
	<p>
	For more info on the mail list, email 
	<a href="mailto:2008-request@wisp.unbox.org">2008-request@wisp.unbox.org</a>
	with the subject line "help".
	</p>

		 ]]></description>
	  </item>

	  <item>
		<category rank="1000">syllabus</category>
		 <id>9</id> 
		 <title>
			Expected workload
		 </title>
		 <pubdate secs="1195787668" around="Nov07">Thu Nov 22 22:14:28 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?9</link>
		 <guid>http://menzies.us/csx72/?9</guid>
		 <description><![CDATA[
	<p> 
	This is a graduate level course, and the expected workload is relatively high. 
	You must be prepared to dedicate approximately 9 working hours a week to this class
	(excluding the time spent in the classroom).
	</p>
		 ]]></description>
	  </item>





  <item>
    <category rank="1000" >syllabus</category>
     <id>135</id> 
     <title>
        	Group Work
     </title>
     <pubdate secs="1200068794" around="Jan08">Fri Jan 11 08:26:34 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?135</link>
     <guid>http://menzies.us/csx72/?135</guid>
     <description><![CDATA[<p>
All students should now go to the <a href="https://ecampus.wvu.edu/webct/logon/163543761051">Ecampus</a> site and assign themselves to a group.
          <ul><li>Cs472 students will work in groups of size three (four at absolute max, if you get my permission first). 
		  <li>Cs572 students will  work in  groups of two.
		  </ul>
		  <h2>Important</h2>
<p>
		  Do not reveal your group id to other members of the class. After each assignment I will be
		  publicly commenting on each group- but only by their anonymous group id.
		  <p>Also, after each assignment submission I will be privately asking all students what was their percentage contribution to the assignment. If those numbers do not 
		  add up to 100%, I will investigate further and grades will be adjusted accordingly.
		  </p>

     ]]></description>
  </item>


	  <item>
		<category rank="1000">syllabus</category>
		<category rank="1000">project</category>
		 <id>8</id> 
		 <title>
			Grading
		 </title>
		 <pubdate secs="1195786740" around="Nov07">Thu Nov 22 21:59:00 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?8</link>
		 <guid>http://menzies.us/csx72/?8</guid>
		 <description><![CDATA[
	 <p>
			There are two grading schemes:
	</p>
		<ol type="A">
			<li>Exams, simpler assignments (building a conversation agent for a game);
			<li>No exams, elaborate assignments (building an expert system for software project management);
		</ol>
		<p>All CS472 students with a GPA below 3.5 must use scheme A. </p>
		<p>All advanced AI students (CS572) students must use scheme B.</p>
		<p>All other students get to choose,
	provided
		the lecturer gives permission.
		</p>
		<table border=1>
			<tr>
				<td>week</td><td>A (with exams)</td><td>B (with no exams)</td><td>Marks</td>
			</tr>
			<tr>
				<td>4</td><td><a href="http://menzies.us/csx72/?11">Project1</a></td><td><a href="http://menzies.us/csx72/?11">Project1</a></td><td>10</td>
			</tr>
			<tr>
				<td>7</td><td>Mid-term</td><td><a href="http://menzies.us/csx72/?16">Project2b</a></td><td>20</td>
			</tr>
			<tr>
				<td>10</td><td><a href="http://menzies.us/csx72/?12">Project2a</a></td><td><a href="http://menzies.us/csx72/?14">Project3b</a></td><td>20</td>
			</tr>
			<tr>
				<td>13</td><td><a href="http://menzies.us/csx72/?13">Project3a</a></td><td></td><td>20</td>
			</tr>
			<tr>
				<td>14</td><td></td><td><a href="http://menzies.us/csx72/?10">Project4b</a> (presentation)</td><td>20</td>
			</tr>
			<tr>
				<td>17</td><td>Final exam</td><td><a href="http://menzies.us/csx72/?10">Project 4b</a> (written submission)</td><td>30</td>
			</tr>
		</table>
	<p>
	Final marks:
	<ul>
	A+ =95; A =92; A- =90; <br>
	B+ =86; B=83; B- =80; <br>
	C+ =76; C=73; C- =70; <br>
	D+ =66; D=63; D- =60; <br>
	E+ =55; E=50; E- =45; <br>
	otherwise F
	</ul>
		 </p>
		 <p>
		 <h2>Late marks</h2>
		 <p>Assignments can be submitted up to three days after the due date, at a late penalty of 2.5 marks per day.</p>
		 <p>After 3 days late, assignments will get a grade of zero.</p> 
		 </p>
	]]></description>
	  </item>


	  <item>
		<category rank="1500">syllabus</category>
		 <id>100</id> 
		 <title>
			Academic Honesty
		 </title>
		 <pubdate secs="1195787807" around="Nov07">Thu Nov 22 22:16:47 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?10</link>
		 <guid>http://menzies.us/csx72/?10</guid>
		 <description><![CDATA[
	<p>
	Intellectual honesty is fundamental to 
	scholarship. Accordingly, I view plagiarism or cheating of any kind in academic work as among the most serious offenses that a student can commit.
	</p>
	<p>
	Students are encouraged to discuss class topics among themselves and to prepare their assignments in groups. However, each group should develop its reports with due consideration for academic honesty.
	</p>
	<p>
	While you are encouraged to search for additional references, you must not copy sentences, paragraphs, or figures from those sources unless you explicitly acknowledge the source (by quoting and/or referencing).
	</p>
	<p>
	Note that I will run phrases from each submitted term paper through a Web search engine, and compare them with my private collection of manuscripts.
	</p>
	<p>
	In cases of plagiarism or copyright infringement, the report will automatically be assigned a score of 0 points.
	</p>
		 ]]></description>
	  </item>



	  <item>
		<category rank="2000">syllabus</category>
		 <id>101</id> 
		 <title>
			Social Justice Statement
		 </title>
		 <pubdate secs="1195788066" around="Nov07">Thu Nov 22 22:21:06 EST 2007</pubdate>
		 <link>http://menzies.us/csx72/?10</link>
		 <guid>http://menzies.us/csx72/?10</guid>
		 <description><![CDATA[
	<p> 
	West Virginia University is committed to social justice. I concur with that commitment and expect to foster a nurturing learning environment based upon open communication, mutual respect, and non-discrimination. Our University does not discriminate on the basis of race, sex, age, disability, veteran status, religion, sexual orientation, color or national origin. Any suggestions as to how to further such a positive and open environment in this class will be appreciated and given serious consideration.
	</p>
	<p>
	If you are a person with a disability and anticipate needing any type of accommodation in order to participate in this class, please advise me and make appropriate arrangements with Disability Services (293-6700).
	</p>

		 ]]></description>
	  </item>


	  <item>
		<category rank="2000">emacs</category>
		 <id>111</id> 
		 <title>
		   Disabling text menus     </title>
		 <pubdate secs="1196422668" around="Nov07">Fri Nov 30 03:37:48 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?111</link>
		 <guid>http://menzies.us/csx72/?111</guid>
			 <description><![CDATA[<p>
		 
				  I get this irritating problem when ssh-ing into CSEE and running emacs via an xterm.
				  </p><p>If I make the window really wide then typing on the right hand side produces a pop-up of the text
				  menu buffer. Then I have to tap some keys to clear that buffer and get on with my work. Most annoying.</p>
				  <p>Today, I found a fix: using the EMACS <tt>defadvice</tt> command to
					ignore all calls to the function that pops up the text menu buffer:</p>
		<pre>; e.g. inside $HOME/.emacs
	(defadvice tmm-menubar  (around no-tmm-menu (x))
	  (if nil ad-do-it))

	(ad-activate 'tmm-menubar)
	(menu-bar-mode nil)</pre>
	]]></description>
	  </item>

	  <item>
		<category rank="1002">start</category>
		<category rank="1000">emacs</category>
		<category rank="1000">lisp</category>
		 <id>109</id> 
		 <title>
				Color themes in EMACS     </title>
		 <pubdate secs="1196310158" around="Nov07">Wed Nov 28 20:22:38 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?109</link>
		 <guid>http://menzies.us/csx72/?109</guid>
		 <description><![CDATA[<p>
	 Do you think emacs is boring to look at?</p><p>
			  Check out this <a href="http://www.cs.cmu.edu/~maverick/GNUEmacsColorThemeTest/index-el.html">EMACS color theme tester</a>.
			  </p><p>

			  If you think those screens look better than your current EMACS screen, then:
			  </p>
				<pre>
	mkdir $HOME/src/lisp
	cd $HOME/src/lisp
	wget http://download.gna.org/color-theme/color-theme-6.6.0.tar.gz
	tar xfvz color-theme-6.6.0.tar.gz
	cd $HOME
	</pre>
	<p>
	Edit $HOME/.emacs with some other editor; e.g. <em>nano .emacs</em>.
	</p><p>
	Add these lines:
	</p>
	<pre>
	(setq load-path (cons "~/src/lisp/color-theme-6.6.0" load-path))
	(require 'color-theme)
	(color-theme-initialize)
	(color-theme-hober)
	</pre>
	If it doesn't seem to work, try adding one more line:
	<pre>
	(require 'color-theme)
	(setq color-theme-is-global t)
	(color-theme-hober)
	</pre>
	<p>If the <em>hober</em> theme doesn't do it for you, try
	some other themes:
	<pre>
	M-x color-theme-select RET
	</pre>
		 </p>]]></description>
	  </item>

	 


	  <item>
		<category rank="1001">start</category>
		<category rank="1000">emacs</category>
		<category rank="1000">lisp</category>
		 <id>110</id> 
		 <title>
			Cool EMACS tricks</title>
		 <pubdate secs="1196312264" around="Nov07">Wed Nov 28 20:57:44 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?110</link>
		 <guid>http://menzies.us/csx72/?110</guid>
		 <description><![CDATA[<p>
	Here are some nice tricks to add to your $HOME/.emacs:
	<pre>
	(xterm-mouse-mode t)             ; make mouse work in text windows
	(transient-mark-mode t)          ; show incremental search results
	(setq scroll-step 1)             ; don't scroll in large jumps
	(setq require-final-newline   t) ; every file has at least one new line
	(setq inhibit-startup-message t) ; disable start up screen
	(global-font-lock-mode t)        ; enable syntax highlighting
	(line-number-mode t)             ; show line numbers in status line

	; show line numbers and time in status line
	(setq display-time-24hr-format nil)
	(display-time)     </pre>
	<p>See also <a href="http://menzies.us/csx72/?109">Color themes in EMACS</a>.
		 </p>]]></description>
	  </item>

  <item>
    <category rank="1000">fun</category>
     <id>127</id> 
     <title>
        Humans are dead
     </title>
     <pubdate secs="1196965627" around="Dec07">Thu Dec  6 10:27:07 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?127</link>
     <guid>http://menzies.us/csx72/?127</guid>
     <description><![CDATA[<p>
<center> 
<object width="425" height="355"><param name="movie" value="http://www.youtube.com/v/WGoi1MSGu64&rel=1&border=0"></param><param name="wmode" value="transparent"></param><embed src="http://www.youtube.com/v/WGoi1MSGu64&rel=1&border=0" type="application/x-shockwave-flash" wmode="transparent" width="425" height="355"></embed></object>
  </center>

     </p>]]></description>
  </item>

  <item>
    <category rank="1000">lisp</category>
    <category rank="1000">fun</category>
    <category rank="1000">quotes</category>
     <id>124</id> 
     <title>
        	LISP Quotes
     </title>
     <pubdate secs="1196904642" around="Dec07">Wed Dec  5 17:30:42 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?124</link>
     <guid>http://menzies.us/csx72/?124</guid>
     <description><![CDATA[<p>
          "...Please don't assume Lisp is only useful for Animation and Graphics, AI, Bioinformatics, B2B and E-Commerce, Data Mining, EDA/Semiconductor applications, Expert Systems, Finance, Intelligent Agents, Knowledge Management, Mechanical CAD, Modeling and Simulation, Natural Language, Optimization, Research, Risk Analysis, Scheduling, Telecom, and Web Authoring just because these are the only things they happened to list."
<br>
- <a href="http://interviews.slashdot.org/comments.pl?sid=23357&cid=2543265">Kent M. Pitman</a>
     </p>
	 
	<p>
	Lisp is the red pill.  <br>- John Fraser, comp.lang.lisp
	</p><p>
	Lisp isn't a language, it's a building material.  <br>- Alan Kay
	</p><p>
	Lisp is a programmable programming language.  <br>- John Foderaro, CACM, September 1991
	</p><p>
	Lisp is like a ball of mud - you can throw anything you want into it, and it's still Lisp.  <br>- Anonymous
	</p><p>
	LISP stands for: Lots of Insane Stupid Parentheses.  <br>- Anonymous
	</p><p>
	These are your father's parentheses. Elegant weapons, for a more... civilized age.  <br>- XKCD
	</p><p>
	Lisp has jokingly been called "the most intelligent way to misuse a computer". I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.  <br>- "The Humble Programmer", E. Dijkstra, CACM, vol. 15, n. 10, 1972
	</p><p>
	Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot.  <br>- Eric S. Raymond, "How to Become a Hacker".
	</p> <p>
	Any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp.  <br>-Philip Greenspun's Tenth Rule of Programming
	 
	</p><p>
	Java was, as Gosling says in the first Java white paper, designed for average programmers. It's a perfectly legitimate goal to design a language for average programmers. (Or for that matter for small children, like Logo.) But it is also a legitimate, and very different, goal to design a language for good programmers.  <br>-Paul Graham
	</p><p>
	In Lisp, if you want to do aspect-oriented programming, you just do a bunch of macros and you're there. In Java, you have to get Gregor Kiczales to go out and start a new company, taking months and years and try to get that to work. Lisp still has the advantage there, it's just a question of people wanting that.  <br>-Peter Norvig
	</p><p>
	Common Lisp people seem to behave in a way that is akin to the Borg: they study the various new things that people do with interest and then find that it was eminently doable in Common Lisp all along and that they can use these new techniques if they think they need them.  <br>- Erik Naggum
	</p>
	 ]]></description>
  </item>


  <item>
    <category rank="1">ignore</category>
     <id>123</id> 
     <title>
        	What are the "Ignore" pages?</title>
     <pubdate secs="1196898036" around="Dec07">Wed Dec  5 15:40:36 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?123</link>
     <guid>http://menzies.us/csx72/?123</guid>
     <description><![CDATA[<p>
<img align="right" src="http://teens.novita.org.au/library/ignore_them.gif">
          These pages store half-formed ideas, stuff under-construction, rough notes, etc.
<br clear=all>
     </p>]]></description>
  </item>

	<item>
		<category rank="2000">ignore</category>
		<category rank="1000">lisp</category>

		 <id>125</id> 
		 <title>
			Things I need to tell the students
		 </title>
		 <pubdate secs="1196898031" around="Dec07">Wed Dec  5 15:40:31 PST 2007</pubdate>
		 <link>http://menzies.us/csx72/?125</link>
		 <guid>http://menzies.us/csx72/?125</guid>
		 <description><![CDATA[<p>
	 
			  <ul>	
			  	<li>Local globals (the &amp;optional (*X* *X*)) idiom
			  <li>unit tests and eg.lisp
			  <li>(?x y) = pick any at random
			  	<li>!x pick and cache
					<li>the one global rule. slots at different life times
					<li>create a make file
	</ul>
		 </p>]]></description>
	  </item>

 <item>
    <category rank="1000">quotes</category>
    <category rank="1000">philosophy</category>
     <id>129</id> 
     <title>
         Donna Haraway's "Cyborg Manifesto" (excerpt)
     </title>
     <pubdate secs="1197089448" around="Dec07">Fri Dec  7 20:50:48 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?129</link>
     <guid>http://menzies.us/csx72/?129</guid>
     <description><![CDATA[<p>
 
From <a href="http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html">http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html</a>:
</p>
<ul>
<p>
"Our best machines are made of sunshine; they are all light and clean because they are nothing but signals, electromagnetic waves, a section of a spectrum, and these machines are eminently portable, mobile - a matter of immense human pain in Detroit and Singapore."
</p><p>
"People are nowhere near so fluid, being both material and opaque." </p><p>"Cyborgs are ether, quintessence."
</p>
</ul>
]]></description>
  </item>



  <item>
    <category rank="1000" >lecture</category>
    <category rank="1000" >philosophy</category>
     <id>131</id> 
     <title>
        	Chinese room
     </title>
     <pubdate secs="1197728771" around="Dec07">Sat Dec 15 06:26:11 PST 2007</pubdate>
     <link>http://menzies.us/csx72/?131</link>
     <guid>http://menzies.us/csx72/?131</guid>
     <description><![CDATA[<p>
<a href="http://pzwart.wdka.hro.nl/mdr/research/fcramer/wordsmadeflesh/pics/chinese-room.png"><img align=right width=500 border=0 src="http://pzwart.wdka.hro.nl/mdr/research/fcramer/wordsmadeflesh/pics/chinese-room.png"></a>
 
          John Searle is a smart guy.
His text <em>Speech Acts: An Essay in the Philosophy of Language. 1969</em> is
listed as <a href="http://home.comcast.net/~antaylor1/fiftymostcited.html">one of the most cited works of the 20<sup>th</sup> century</a>.
</p>
<p>In one of the most famous critiques of early AI, Searle invented the <em>Chinese Room</em>: an ELIZA-like AI that used simple pattern
lookups to react to user utterances.  Searle argued that this was nonsense: that such a system could never be said to be "really" intelligent.</p>
<p>Looking back on it all, 27 years later, the whole debate seems wrong-headed. Of course ELIZA was the wrong model for intelligence:
no internal model refined during interaction, no background knowledge, no inference, no set of beliefs/desires/goals, etc.
</p><p>
Searle's argument amounts to "not enough- do more". And we did. Searle's kind of rhetoric
(that AI will never work) fails in the face of AI's <a href="http://menzies.us/pdf03/aipride.pdf">many successes</a>.

     </p><p>
Here are some on-line resources on the topic:
<ul><li>Wikipedia's entry on the <a href="http://en.wikipedia.org/wiki/Chinese_room">Chinese Room</a>.
<li>The original article <a href="http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html">Minds, Brains, and Programs</a>
from The Behavioral and Brain Sciences, vol. 3. Copyright 1980.
<li>A <a href="http://www.nybooks.com/articles/6542">spirited debate</a> about this idea between Searle and Daniel Dennett.
For my money, Searle's argument grows tired and stale against Dennett's insights.
</ul>
And here are some more general links:
<ul>
<li><a href="http://groups.google.com/group/comp.ai.philosophy/topics?gvc=2">Comp.ai.philosophy</a>
<li>A very old <a href="http://www.dontveter.com/caipfaq/index.html">FAQ</a> from
comp.ai.philosophy.
<li>Some more recent <a href="http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence"> AI &amp; philosophy</a> notes from
Wikipedia.
</ul>
</p>
]]></description>
  </item>


	<copyright><![CDATA[
					<a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/"> <img  alt="Creative Commons License" style="padding-left: 10px; border-width:0" src="http://i.creativecommons.org/l/by-sa/3.0/88x31.png" /> </a>
	  <br clear="all">&copy; 2007, 2008<br>&nbsp;<a href="http://menzies.us">Tim Menzies</a>
					]]>
	</copyright>
	<navigation><![CDATA[
	 <a href="http://menzies.us/csx72/?home">Home</a>
	| <a href="http://menzies.us/csx72/?news">News</a>
	| <a href="http://menzies.us/csx72/?syllabus">Syllabus</a>
	| <a href="http://menzies.us/csx72/?project">Project</a> <br>
	 <a href="http://menzies.us/csx72/?lecture">Lectures</a>
	| <a href="http://menzies.us/csx72/?lisp">LISP</a>
	| <a href="http://menzies.us/csx72/?emacs">EMACS</a>
	| <a href="http://menzies.us/csx72/?fun">Fun</a><br>
	 <a href="http://menzies.us/csx72/?113">Links</a>
	| <a href="http://menzies.us/csx72/sitemap.php">Site map</a>
	| <a href="http://menzies.us/csx72/?102">Contact </a><br>
	 <a href="http://menzies.us/csx72/nova.php">NOVA</a>
	]]>
	</navigation>

  <item>
    <category rank="1000" >week1</category>
    <category rank="1000" >lecture</category>
     <id>136</id> 
     <title>
        The business case for software engineering
     </title>
     <pubdate secs="1200084143" around="Jan08">Fri Jan 11 12:42:23 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?136</link>
     <guid>http://menzies.us/csx72/?136</guid>
     <description><![CDATA[<p>
 
          <div style="width:425px;text-align:left" id="__ss_161266"><object style="margin:0px" width="425" height="355"><param name="movie" value="http://static.slideshare.net/swf/ssplayer2.swf?doc=the-business-case-for-automated-software-engineering-1194728053795142-2"/><param name="allowFullScreen" value="true"/><param name="allowScriptAccess" value="always"/><embed src="http://static.slideshare.net/swf/ssplayer2.swf?doc=the-business-case-for-automated-software-engineering-1194728053795142-2" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="355"></embed></object><div style="font-size:11px;font-family:tahoma,arial;height:26px;padding-top:2px;"><a href="http://www.slideshare.net/?src=embed"><img src="http://static.slideshare.net/swf/logo_embd.png" style="border:0px none;margin-bottom:-5px" alt="SlideShare"/></a> | <a href="http://www.slideshare.net/timmenzies/the-business-case-for-automated-software-engineering" title="View 'The business case for automated software engineering ' on SlideShare">View</a> | <a href="http://www.slideshare.net/upload">Upload your own</a></div></div>

     </p>]]></description>
  </item>

  <item>
    <category rank="1000" >week1</category>
    <category rank="1000" >review</category>
     <id>150</id> 
     <title>
        Week 1 : review
     </title>
     <pubdate secs="1200596847" around="Jan08">Thu Jan 17 11:07:27 PST 2008</pubdate>
     <link>http://menzies.us/csx72/?150</link>
     <guid>http://menzies.us/csx72/?150</guid>
     <description><![CDATA[
 
          <ol>
		  <li>Planes fly without flapping their wings. The Platonic beast walks using more than two legs.
		  	<ol type="a"><li>In what sense do planes, or don't they, "really" fly?
				<li>In what sense does the Platonic beast, or doesn't it, "really" walk?
				<li>Offer a definition of flying that includes airplanes and birds. What other things can this definition apply to?
				<li>Offer an abstract description of walking that includes humans and the Platonic beast. What other things can the definition apply to?
				</ol>
		  <li>The Knowledge level
		  <ol type="a"><li>What is Newell's abstract description of an intelligence (hint: knowledge-level agent, principle of rationality)?
		  <li>You want to get to downtown Morgantown. What are the states between here and there? 
				<li>What operators are available to you to get to downtown?
				<li>What knowledge could you use to decide which operators to apply?
				</ol>
		<li>Simulated annealing
			<ol type="a"><li>Write down the <a href="http://en.wikipedia.org/wiki/Simulated_annealing#Saving_the_best_solution_seen">pseudo-code for simulated 
			annealing</a>. Make sure your code has line numbers.
			<li>Explain the following, using a paragraph or two of English and line numbers into your pseudo-code:
						a simulated annealing search is <ul><li>uninformed, <li>incomplete,
							<li>stochastic, <li>a local search, <li>best suited to non-linear problems, <li>uses no restarts, <li>and uses very little memory.
							</ul>
				<li>Why was using very little memory so important when simulated annealing was invented (1953)?
				<li>What is local search, and why might it be useful in a search problem?
				<li>What are restarts, and why might restarts be useful in a search problem?
				</ol>
	</ol>
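For the simulated-annealing questions above, here is a minimal sketch of the save-the-best-solution variant (shown in Python for concreteness, without the line numbers the exam answer would need; the linear cooling schedule and the toy (x-3)<sup>2</sup> problem are illustrative choices, not the required answer):

```python
import math
import random

def simulated_anneal(energy, neighbor, init, kmax=1000, t0=1.0):
    """Minimize energy(s), remembering the best solution ever seen."""
    s = best = init
    e = ebest = energy(s)
    for k in range(kmax):
        t = t0 * (1 - k / kmax)        # cooling: temperature falls toward zero
        sn = neighbor(s)               # local search: one small random change
        en = energy(sn)
        # always accept a better move; sometimes accept a worse one
        if en < e or random.random() < math.exp((e - en) / max(t, 1e-9)):
            s, e = sn, en
        if e < ebest:                  # save the best solution seen so far
            best, ebest = s, e
    return best, ebest

# toy demo: minimize (x - 3)^2 starting from x = 0
random.seed(1)
best, ebest = simulated_anneal(
    energy=lambda x: (x - 3) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    init=0.0)
```

Note that only the current and best-so-far solutions are kept, which is the sense in which the algorithm uses very little memory.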
     ]]></description>
  </item>
