University of Washington
Department of Computer Science and Engineering

CSE473–Artificial Intelligence I

Autumn 1997

 

Midterm Exam Solution

February 13, 1998


Question and point summary:

 

Number  Topic                 Possible  Actual
1       Term definitions         20
2       LISP programming         25
3       Adversarial Search       25
4       State-space search       30
5       Planning                 30

 

 

 

Term definitions (Answer 5 of the following; clearly mark exactly 5 you want graded; 4 points each, 20 points total)

 

Define the following italicized terms. Use only the space provided. Answers that continue to the back or onto other pages will not be graded!

 

  1. The first agent we studied in class was purely reactive, whereas an agent based on planning or search is entirely deliberative.

    A reactive agent decides what action to take next based entirely on whatever information its sensors tell it immediately about the world. Reactive agents generally have little or no memory and little or no ability to predict the consequences of their actions. A deliberative agent, on the other hand, is able to predict the effects of what it might do, and uses that information to generate "good" plans of action. A purely deliberative agent is generally assumed to have no sensing ability, so it generates its solution plans ahead of time then executes them without any feedback.

     

     

  2. A* search generates an optimal solution provided it is given an admissible heuristic.

    A search heuristic is a function that takes a state and predicts how close it is to a goal (solution) state. In the case of optimizing search, the heuristic estimates the cost from the current state to a solution as well. An admissible heuristic never over-estimates the cost from the current state to a solution. A* search is a best-first optimizing search technique which uses a heuristic ranking function of the form
    f'(s) = g(s) + h'(s), where g(s) is the (actual) minimum cost of moving from the initial state to s, and h'(s) is the (estimated) cost of moving from s to a solution. If h'(s) is admissible, that is if h'(s) <= h(s) for all s, then the first solution that A* search finds will be a minimum-cost solution.
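As an illustration (not part of the exam answer), here is a minimal A* sketch in Python on a hypothetical four-node graph A, B, C, D. The heuristic h below is admissible because it never exceeds the true remaining cost (the true costs to D are 4, 3, 1, 0):

```python
import heapq

# Hypothetical graph: node -> list of (successor, edge cost).
graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 5)],
    'C': [('D', 1)],
    'D': [],
}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}  # admissible: h(s) <= true cost to D

def a_star(start, goal):
    # Frontier entries are (f, g, state, path) with f = g + h'(state).
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:
            return g, path            # first goal popped is minimum-cost
        if s in best_g and best_g[s] <= g:
            continue                  # already expanded more cheaply
        best_g[s] = g
        for nxt, cost in graph[s]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None
```

Here A* returns the cost-4 path A-B-C-D rather than the shorter but costlier path A-C-D.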

     

     

  3. Many people assert that the Turing test is the only legitimate test of machine intelligence.

     

    The Turing test is one proposed solution to the question of deciding whether some system is intelligent. The test, proposed in a paper by Turing written in 1950, involves a human investigator with two teletype consoles. The first is connected to a person, the second to the system being tested. The investigator has some interval of time to engage both systems in unrestricted conversation. At the end of the time a judgment is made as to which is human. If the investigator(s) cannot reliably tell, then the system is judged to be intelligent.

    The most important premise of the test is that intelligence should be evaluated according to the system's IO behavior only. It does not matter how it formulates its conversation; all that matters is that the conversation appear intelligent. Other interesting aspects are: the domain of conversation is unrestricted (thus ruling out the idea of "special-purpose intelligence"), and the communication channel is typed text (thus ruling out judgments based on appearance, spoken word, and facial expressions).

     

     

  4. Iterative deepening search is an asymptotically optimal, blind search method.

    Iterative deepening is a blind search method in that there is no heuristic ranking of intermediate states. Its space usage is linear in the size of the solution, which is comparable to depth-first search (which is optimal in its space usage). Its time usage is proportional to the amount of time required by depth-first or breadth-first search. Technically speaking, an asymptotically optimal time requirement means that as the solution length becomes arbitrarily long, the amount of time required to find a solution in the worst case, using iterative deepening, is within a constant factor of the time required to find it using breadth-first search.
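A minimal sketch of the method (my own illustration, not part of the exam answer), assuming a successors function that enumerates a state's children:

```python
# Iterative deepening: repeated depth-limited depth-first search with
# limits 0, 1, 2, ...  Space stays linear in the current depth limit.
def depth_limited(state, goal, successors, limit, path):
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        found = depth_limited(nxt, goal, successors, limit - 1, path + [nxt])
        if found is not None:
            return found
    return None

def iterative_deepening(start, goal, successors, max_depth=20):
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit, [start])
        if result is not None:
            return result   # shortest path, as with breadth-first search
    return None
```

For example, on the infinite binary tree where node n has children 2n and 2n+1, searching for 5 from 1 finds the depth-2 path [1, 2, 5] without ever exceeding depth 2.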

     

     

     

     

     

  5. Breadth-first search is a complete search procedure although depth-first search is not.

    Breadth-first search is guaranteed to find a solution in finite time, provided that there is a finite-length solution (and that each state has only finitely many successors). Depth-first search can spend an infinite amount of time exploring one section of the search graph even though there is no solution in that branch and there is a solution in another branch.

     

  6. The SNLP planning algorithm is sound, complete, and systematic.

    Soundness means that when the algorithm terminates with a successful plan, that plan is indeed a solution to the planning problem (i.e. it actually would achieve a goal state if it were executed in the initial state). Completeness means that if a solution plan exists, the algorithm will eventually find it and recognize it as a solution. Systematicity means that the algorithm will never explore the same partial plan (node in the search graph) more than once.

     

  7. Most search routines are designed to solve a satisficing problem or an optimizing problem, but not both.

    A satisficing search stops when it finds any solution (any path that leads to a goal state). An optimizing search is defined in terms of an objective function defined over solutions, and finds the solution that maximizes the value of that function.

     

  8. Means-ends analysis is a planning technique somewhere between progression and regression.

 

Progression is searching from the initial state, trying to find a goal state. Regression is searching back from the set of goal states, trying to find an initial state. Means-ends analysis is a heuristic method whereby a regression search is begun, but the choice of next operator to apply (the "means") is made by analyzing the difference between the initial state and the current regressed state(s). In other words, an operator is selected on the basis of its ability to reduce the difference between the current progressed state and the current regressed state.

 

 

 

 

 

LISP Programming (25 points)

 

Recall that the first agent got its sensing information in the form of object lists. Here is an example of an object list with four members:

 

(setf *sample* '(((kind glass) (position 0) (state broken))
                 ((kind rock) (position 4) (color green))
                 ((kind rock) (position 5) (color red))
                 ((kind truck) (position 6))))

 

Write the following functions in Lisp

 

  1. A function count-objects, which takes an object-list as input and returns the number of objects in the list.
     (count-objects *sample*) => 4

  2. A function count-color, which takes an object-list and a symbol (a color name) as input, and returns the number of objects in the list that have that color.
     (count-color *sample* 'green) => 1
     (count-color *sample* 'blue) => 0

  3. A function object-position, which takes an object-list and a symbol (a kind) as input, and returns the position of an object of that kind, or NIL if there is no such object.
     (object-position *sample* 'truck) => 6
     (object-position *sample* 'rock) => 5    ;; or 4 would be OK too
     (object-position *sample* 'fuel-drum) => NIL

     

     

  4. EXTRA CREDIT. Suppose the truck is always in the last available position at the object-list's location, and positions are numbered from 0. Write a function empty-position, which takes an object-list as input and returns a position number with nothing currently in it, or NIL if there is no empty position.
     (empty-position *sample*) => 1

 

 

 

Answer this question entirely on the next page.

 

Answer to Lisp Question Here

 

;;; These first two are helper functions for the solution

 

;;; Get the property value for this object & property, e.g.

;;; (object-property (first *sample*) 'position) => 0

 

(defun object-property (object property-name)
  (second (assoc property-name object)))

 

;;; Return a list of all the objects that have this property value

;;; (find-objects *sample* 'kind 'rock)

;;; => (((kind rock) (position 4) (color green))

;;; ((kind rock) (position 5) (color red)))

 

(defun find-objects (object-list prop-name prop-value)
  (remove-if-not
   #'(lambda (object)
       (eq (object-property object prop-name) prop-value))
   object-list))

 

;;;***************************************

 

(defun count-objects (object-list)
  (length object-list))

 

(defun count-color (object-list color)
  (length (find-objects object-list 'color color)))

 

(defun object-position (object-list kind)
  (let ((matching-objects (find-objects object-list 'kind kind)))
    (cond
     ((null matching-objects) NIL)
     (T (object-property (first matching-objects) 'position)))))

 

(defun empty-position (object-list)
  (let* ((positions (mapcar #'(lambda (object)
                                (object-property object 'position))
                            object-list))
         (max-position (apply #'max positions)))  ; SORT is destructive, so use MAX
    (do ((cp 0 (+ cp 1))
         (empty NIL))
        ((or (> cp max-position) empty)
         empty)
      (when (not (member cp positions :test '=))
        (setf empty cp)))))
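As a sanity check on the logic above, here is a Python transcription (my own, not part of the exam solution; names mirror the Lisp, and objects are lists of (property, value) pairs as in *sample*):

```python
# Mirror of the Lisp *sample* object list.
SAMPLE = [
    [('kind', 'glass'), ('position', 0), ('state', 'broken')],
    [('kind', 'rock'), ('position', 4), ('color', 'green')],
    [('kind', 'rock'), ('position', 5), ('color', 'red')],
    [('kind', 'truck'), ('position', 6)],
]

def object_property(obj, name):
    return dict(obj).get(name)          # like (second (assoc name obj))

def find_objects(object_list, name, value):
    return [o for o in object_list if object_property(o, name) == value]

def count_objects(object_list):
    return len(object_list)

def count_color(object_list, color):
    return len(find_objects(object_list, 'color', color))

def object_position(object_list, kind):
    matches = find_objects(object_list, 'kind', kind)
    return object_property(matches[0], 'position') if matches else None

def empty_position(object_list):
    positions = {object_property(o, 'position') for o in object_list}
    for p in range(max(positions)):     # check slots below the last occupied one
        if p not in positions:
            return p
    return None
```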

 

 

 

 

Adversarial Search (25 points)

 

The game of "nim" is defined as follows. There are two players, and initially a stack of five pennies. The players alternate turns, and on each turn a player must take either one, two, or three pennies from the stack. The player who takes the last penny loses.

 

Generate the tree for the first move of this game, suggest a state evaluation function, and apply it to this tree of depth 1. Using alpha/beta pruning, which of the subtrees must be explored further?

 

 

 

Let us begin with the one-move game tree:

[figure omitted: the one-move game tree, with states 4, 3, and 2 under the 5/MAX root]

The rest of the answer depends on the heuristic evaluation function. Suppose naively we have the following evaluation for a state, when it is MIN's move:

 

v(5) = 10

v(4) = 5

v(3) = 3

v(2) = 1

v(1) = 100

 

At this point we want to establish as good an α value as possible at the top-level 5/MAX node, so we would expand the subtree at state 4 next:

 

[figure omitted: the tree after expanding the 4/MIN subtree]

At the MAX node the heuristic would be the negative of its MIN value. At this point expanding the 1 leaf would lead to a "hard value" of -∞ (for MAX losing), and we would set the value of the 4/MIN node to -∞ as well. At this point we could prune the 3/MAX node and the 2/MAX node, because they would not raise the value of the 4/MIN node. On the other hand, we would have to continue to expand the 3/MIN and 2/MIN nodes to see if their β values could be raised above the current best candidate of -∞.
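The analysis above uses heuristic values at a depth-1 cutoff; since this game is tiny, an exhaustive search can confirm the exact values. A minimal negamax sketch (my own illustration, without pruning, for clarity; +1 means the player to move wins, -1 means a loss):

```python
def nim_value(n):
    # Negamax value of nim with n pennies for the player to move;
    # taking the last penny loses.
    if n == 0:
        return 1   # the previous player just took the last penny and lost
    return max(-nim_value(n - take) for take in (1, 2, 3) if take <= n)
```

Running it gives nim_value(5) == -1: a stack of five pennies is a loss for the player to move, consistent with the pessimistic value the pruning analysis assigns to the root.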

 

 

 

 

 

 

 

State Space Search (30 points) - You need to answer only one of this question and the next

 

The "water jug problem" can be stated as follows: you are given two jugs, one that can hold 4 gallons of water and the other that can hold 3 gallons of water. You also have a pump that can be used to fill either jug with water, and you can empty the contents of either jug at any time. Your goal is to get exactly 2 gallons of water into the four-gallon jug.

 

a. (20 points) State this problem in terms of state-space search. Describe the state, the move-generator, the goal checker. (You can do this in Lisp if you want, or in some other computer language, or in pseudo-code, or in English as long as you are precise.) Suggest a heuristic for guiding the search for a solution.

 

b. (10 points) Suppose that it costs $5 every time the pump is used, $2 every time you fill the four-gallon jug, and $1 every time you use the three-gallon jug. Describe how to find the lowest-cost solution to the problem.

 

 

 

State and action descriptions appear on the next page. These definitions also contain the action cost information for part b.

 

Here's a simple heuristic:

rank(state) ≡ if( state.B = 2,
                  0,
                  abs( state.B + state.S - 2))

where state.S is the level in the small jug and state.B is the level in the big jug.

 

For part b, the state already contains total cost information, so we can apply A* search to get the lowest-cost solution provided we have an admissible heuristic estimating the lowest cost to a solution.

 

In formulating an admissible heuristic, about all we could say for sure is that if the total amount of water is less than 2, then the pump will have to be used (at a cost of 5), and unless the large jug contains exactly 2 gallons, we will have to use the small jug (at a cost of 1).

 

h'(state) = if(state.B + state.S < 2, 5, 0) +
            if(state.B != 2, 1, 0)
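For concreteness, here is a uniform-cost (Dijkstra-style) search for part b. The exact cost model is ambiguous in the problem statement, so the per-action costs below are one assumed reading (pump = $5 per fill, plus $2 whenever the big jug participates in an action and $1 whenever the small jug does); the state, move generator, and goal test match part a:

```python
import heapq

CAP_B, CAP_S = 4, 3   # big and small jug capacities

def moves(b, s):
    # Yield (action name, next state, cost) under the assumed cost model.
    yield 'fill-B', (CAP_B, s), 5 + 2        # pump + big jug
    yield 'fill-S', (b, CAP_S), 5 + 1        # pump + small jug
    yield 'empty-B', (0, s), 2
    yield 'empty-S', (b, 0), 1
    t = min(b, CAP_S - s)                    # pour big into small
    yield 'pour-B-S', (b - t, s + t), 2 + 1
    t = min(s, CAP_B - b)                    # pour small into big
    yield 'pour-S-B', (b + t, s - t), 2 + 1

def cheapest_plan(start=(0, 0), goal_b=2):
    frontier = [(0, start, [])]
    seen = set()
    while frontier:
        cost, (b, s), plan = heapq.heappop(frontier)
        if b == goal_b:
            return cost, plan                # first goal popped is cheapest
        if (b, s) in seen:
            continue
        seen.add((b, s))
        for name, nxt, c in moves(b, s):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, plan + [name]))
    return None
```

Under these assumed costs the cheapest plan fills the small jug twice and pours into the big one, for a total of $23. Adding the admissible h' above to the priority would turn this into the A* search described in the text.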

 

 

Planning (30 points) - You need to answer only one of this question and the previous one

 

Consider writing a planner that will solve the following block-moving

problem:

 

[figure omitted: initial and goal configurations for the block-moving problem]

There are three operators available:

 

 

The gripper can only be holding one thing at a time, and there can be at most one block at a position at a time.

 

 

 

 

 

  1. Write the operator definitions for pickup, putdown, and move-gripper.

  2. Demonstrate a regression search for a solution: show the goal and two regressions, one that fails, and one that succeeds.

  3. Demonstrate a plan-space search for a solution: show the initial plan and three refinements: one that links to the initial state, one that involves adding an operator, and one that involves resolving a threat.

 

Answer this question entirely on the next page.

 

 

Answer to Planning Question Here

 

 

Propositions:

bp(?b, ?p)   block ?b is at position ?p
pe(?p)       position ?p is empty
gp(?p)       the gripper is at position ?p
gh(?b)       the gripper is holding block ?b
ge           the gripper is empty (holding nothing)


Operators:

 

pickup(?b, ?p)

Pre: bp(?b, ?p), ge, gp(?p)

Effects: -ge, -bp(?b, ?p), +gh(?b), +pe(?p)

 

putdown(?b, ?p)

Pre: gh(?b), gp(?p), pe(?p)

Effects: -pe(?p), -gh(?b), +bp(?b, ?p), +ge

 

move-gripper(?p1, ?p2)

Pre: gp(?p1)

Effects: -gp(?p1), +gp(?p2)
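The regression step used below can be sketched concretely. This is my own illustration with an assumed representation (propositions as tuples, e.g. ('gp', 1) for gp(1)); block A and positions 1 and 4 are hypothetical stand-ins, since the original figure is not reproduced here:

```python
def regress(goal, pre, add, delete):
    # Regressing goal set G through an operator with preconditions P,
    # add list A, and delete list D yields (G - A) | P.
    # The regression fails if the operator deletes a goal proposition.
    if goal & set(delete):
        return None
    return (goal - set(add)) | set(pre)

# Regress {gp(4), bp(A, 4)} through move-gripper(1, 4):
# Pre gp(1); Effects -gp(1), +gp(4).
goal = {('gp', 4), ('bp', 'A', 4)}
sub = regress(goal, pre=[('gp', 1)], add=[('gp', 4)], delete=[('gp', 1)])
# sub is {('gp', 1), ('bp', 'A', 4)}: the block condition persists, and the
# gripper-position goal is replaced by the operator's precondition.
```

A failed regression looks the same except that the operator deletes something in the goal, e.g. regressing a goal containing gp(1) through move-gripper(1, 4) returns None.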

 

Regression:

 

Here's an example of a failed regression and a successful regression. The last action in the plan must be move-gripper(?x, 4), so to show a failure we regress through an action that appears in a solution plan but cannot be the last action.

 

[figures omitted: the failed regression and the successful regression]

POCL Planning:

 

Here is an example that shows the initial and final actions, linking to the initial state, adding a couple of steps, and an irreconcilable threat.

[figure omitted: POCL plan refinements]