Uninformed, Informed, and Adversarial Search; Knowledge Representation and Automated Planning

1. Uninformed, Informed, and Adversarial Search

(a) Consider the following grid representing the states and transitions in a search problem. States are labelled with letters. An agent can move between states provided the two states are adjacent and not blocked by a wall (the grey squares). Diagonal movement is not permitted. Each move to an adjacent square costs 1 resource. The grid also includes a tunnel that allows the agent to travel from one side to the other. An agent in state C can use the tunnel to travel to state D at a cost of 1 resource.

If S is the start state and G is the goal state, determine the order in which states are expanded, as well as the path returned, for each of the following search methods. For greedy search, include the h-value of each expanded state. For A* search, include the f-value of each expanded state. To estimate the distance from any state to the goal, the Manhattan distance heuristic should be used. Assume that ties are resolved so that states appearing earlier in alphabetical order are expanded first, and that no state is expanded more than once.

i) Depth first search

States expanded:

Goal path: (2)

ii) Breadth first search

States expanded:

Goal path: (2)

iii) Greedy search

States expanded (with h values):

Goal path: (3)

iv) A* search

States expanded (with f values):

Goal path: (3)
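The A* procedure asked for above can be sketched in Python. The grid figure from the question is not reproduced here, so the coordinates and adjacency below are a made-up toy example; the mechanics (Manhattan heuristic, f = g + h, alphabetical tie-breaking, expanding each state at most once) are the ones the question specifies.

```python
import heapq

# Hypothetical toy grid (NOT the grid in the figure): each state is a
# letter mapped to (row, col), and each move between neighbours costs 1.
coords = {'S': (0, 0), 'A': (0, 1), 'B': (1, 1), 'G': (1, 2)}
adjacency = {'S': ['A'], 'A': ['S', 'B'], 'B': ['A', 'G'], 'G': ['B']}

def manhattan(s, goal):
    (r1, c1), (r2, c2) = coords[s], coords[goal]
    return abs(r1 - r2) + abs(c1 - c2)

def a_star(start, goal):
    # Frontier entries are (f, state, g, path); sorting on (f, state)
    # breaks f-value ties alphabetically, as the question requires.
    frontier = [(manhattan(start, goal), start, 0, [start])]
    closed, expanded = set(), []
    while frontier:
        f, state, g, path = heapq.heappop(frontier)
        if state in closed:          # each state is expanded at most once
            continue
        closed.add(state)
        expanded.append((state, f))  # record the f-value at expansion time
        if state == goal:
            return expanded, path
        for nxt in adjacency[state]:
            if nxt not in closed:
                g2 = g + 1           # every move costs 1 resource
                f2 = g2 + manhattan(nxt, goal)
                heapq.heappush(frontier, (f2, nxt, g2, path + [nxt]))
    return expanded, None

expanded, path = a_star('S', 'G')
print(expanded)  # [('S', 3), ('A', 3), ('B', 3), ('G', 3)]
print(path)      # ['S', 'A', 'B', 'G']
```

Dropping the g term from the priority turns the same loop into greedy search; replacing the heap with a FIFO or LIFO queue gives breadth-first and depth-first search respectively.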

v) Assume the Manhattan heuristic is used for all states except for C which has a heuristic estimate of 4 (due to the tunnel) and D which has a heuristic estimate of 6 (to compensate for taking the tunnel in reverse). Is the heuristic admissible? Explain why or why not.
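The admissibility test in v) can be checked mechanically: a heuristic is admissible iff it never overestimates the optimal cost-to-go h*(s). The three-state graph below is a hypothetical stand-in for the question's grid, with a cost-1 "tunnel" edge between C and D, chosen only to illustrate the check.

```python
from heapq import heappush, heappop

def true_costs(adjacency, goal):
    # Dijkstra run backwards from the goal: dist[s] is the optimal
    # cost h*(s) from s to the goal (edges are assumed symmetric).
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, s = heappop(frontier)
        if d > dist[s]:
            continue
        for cost, nxt in adjacency[s]:
            if d + cost < dist.get(nxt, float('inf')):
                dist[nxt] = d + cost
                heappush(frontier, (d + cost, nxt))
    return dist

def is_admissible(h, adjacency, goal):
    # Admissible iff h(s) <= h*(s) for every state s.
    hstar = true_costs(adjacency, goal)
    return all(h[s] <= hstar[s] for s in adjacency)

# Hypothetical graph with a cost-1 "tunnel" C<->D (edges are (cost, state)):
adjacency = {'C': [(1, 'D'), (1, 'G')], 'D': [(1, 'C')], 'G': [(1, 'C')]}
print(is_admissible({'C': 1, 'D': 2, 'G': 0}, adjacency, 'G'))  # True
print(is_admissible({'C': 1, 'D': 6, 'G': 0}, adjacency, 'G'))  # False: 6 > h*(D) = 2
```

The second call shows why inflating a heuristic value "to compensate" for a shortcut can destroy admissibility: the estimate exceeds the true remaining cost.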

(b) Consider the following game tree. Upward facing triangles represent the game moves for the maximising player. Downward facing triangles represent the game moves for the minimising player. Squares represent terminal nodes. The maximising player moves first.

i) Assuming both players play optimally, what is the minimax value for the game tree? Which move should the maximising player make? (2)

ii) Apply the alpha-beta pruning algorithm to the game tree and list the values that alpha-beta assigns to each of the nodes A, B, C, D, E, and F (if they exist). (2)

iii) List the nodes that are pruned by alpha-beta pruning. (2)

iv) Assume that each player in a 2-player game takes two turns in a row before the other player plays. What would the game tree look like in this case? Can alpha-beta pruning still be used? Explain why or why not.
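The mechanics behind i)–iii) can be sketched in a few lines. The game tree from the figure is not reproduced here, so the two-ply tree below (a max root over three min nodes) is a hypothetical example; the function returns the minimax value and the subtrees cut off by pruning.

```python
def alphabeta(node, maximising, alpha=float('-inf'), beta=float('inf'), pruned=None):
    # node is either a number (terminal value) or a list of child nodes.
    if pruned is None:
        pruned = []
    if isinstance(node, (int, float)):
        return node, pruned
    best = float('-inf') if maximising else float('inf')
    for i, child in enumerate(node):
        value, _ = alphabeta(child, not maximising, alpha, beta, pruned)
        if maximising:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:                # cutoff: no later sibling can matter
            pruned.extend(node[i + 1:])  # record the pruned subtrees
            break
    return best, pruned

# Hypothetical tree: at the middle min node, seeing the leaf 2 (below
# the root's alpha of 3) prunes the remaining leaves 4 and 6.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value, pruned = alphabeta(tree, True)
print(value)   # 3
print(pruned)  # [4, 6]
```

Alpha-beta's correctness relies only on the cutoff condition alpha >= beta, not on players strictly alternating, which is worth keeping in mind for question iv).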

2. Knowledge Representation and Automated Planning

Consider the following description of a planning domain. A castle contains three rooms: a dungeon, a hall, and a tower. Locked doors prevent movement between the rooms. Buttons are located in each of the three rooms. Pressing the button in the dungeon opens the door to the hall, pressing the button in the hall opens the door to the tower, and pressing the button in the tower opens the door to exit the castle. A treasure chest is located in the tower. A castle robot is able to operate the buttons and move to another room provided the door to that room is open. The robot can also carry the treasure chest but requires both of its hands to do so.

A floorplan of the castle showing the location of the rooms, the locked doors (D), the buttons (b1, b2, b3), and the chest is given below:

Assume that the domain contains four locations (dungeon, hall, tower, outside), three buttons (b1, b2, b3), and the chest. The robot starts in the dungeon and has the following three actions available to it:

• move(X,Y): move from location X to location Y

• pickup(X,Y,Z,W): pick up X from location Y using hands Z and W

• press(X,Y): press button X in location Y

Answer the following questions using PDDL notation where necessary.
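As a reminder of the notation, a PDDL action definition has the general shape below. The predicate names (at, door-open) and the body are illustrative assumptions only, not the required answer to part (b):

```pddl
;; Illustrative PDDL syntax only: a generic move action over
;; hypothetical predicates (at ?loc) and (door-open ?from ?to).
(:action move
  :parameters (?from ?to)
  :precondition (and (at ?from) (door-open ?from ?to))
  :effect (and (at ?to) (not (at ?from))))
```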

(a) List and briefly describe the properties (predicates) that are needed to model this planning domain, including their parameters. (2)

(b) Write PDDL actions for move, pickup, and press. (9)

(c) Define the initial state that models the castle scenario. (3)

(d) The robot would like to exit the castle with the treasure. State the goal that describes this planning problem.

(e) Give a plan that achieves the goal (d) from the initial state (c). (3)

(f) Assume you are tasked with preparing a PEAS description for the castle robot. Briefly state the "P" and "S" parts of your PEAS description.