Notes on Search Algorithms
The description of most of the search algorithms in these notes is taken from
J. Pearl, "Heuristics", Addison-Wesley, 1984.
Important issues about Search Algorithms
We will address:
 how to write search algorithms. In particular we will examine:
 the data structure to keep unexplored nodes. We use a
queue (often called a list in many AI books) called OPEN.
 expansion of a node (or generation of its successors).
All the successors of a node can be generated at once (method
most commonly used) or they could be generated one at a time
either in a systematic way or in a random way.
The number of successors is called the branching factor.
 strategies for selecting which node to expand next. Different
algorithms result from different choices (e.g. depth-first
when successor nodes are added at the beginning of the queue,
breadth-first when successor nodes are added at the end of the
queue, etc.),
 test for goal. We will assume the existence of a predicate
that applied to a state will return true or false.
 bookkeeping (keeping track of visited nodes, keeping track of
path to goal, etc).
Keeping track of visited nodes is usually done by keeping
them in a queue (or, better, a hash table) called CLOSED.
This prevents getting trapped into loops or repeating work
but results in a large space complexity. This is (most often)
necessary when optimal solutions are sought, but can be (mostly)
avoided in other cases.
Keeping track of the path followed is not always necessary
(when the problem is to find a goal state and knowing how to get
there is not important).
 properties of search algorithms and the solutions they find:
 Termination: the computation is guaranteed to
terminate, no matter how large the search space is.
 Completeness: an algorithm is complete if it terminates with
a solution when one exists.
 Admissibility: an algorithm is admissible if it is
guaranteed to return an optimal solution whenever a solution exists.
 Space complexity and Time complexity: how the size of the
memory and the time needed to run the algorithm grows depending
on branching factor, depth of solution, number of nodes, etc.
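The points above can be sketched in code. The following is a minimal illustrative sketch (the toy graph is hypothetical, not from these notes) of a generic search driven by the OPEN queue: the only difference between depth-first and breadth-first is whether successors are added at the beginning or at the end of OPEN. For simplicity the goal test is done when a node is removed from OPEN, which is one of the variants discussed above.

```python
from collections import deque

def generic_search(start, successors, is_goal, strategy):
    """Generic search: OPEN holds the frontier, CLOSED the visited nodes.
    strategy='dfs' puts successors at the beginning of OPEN (depth-first);
    strategy='bfs' puts them at the end (breadth-first)."""
    open_q = deque([start])          # the OPEN queue
    closed = set()                   # the CLOSED set (visited nodes)
    while open_q:
        n = open_q.popleft()
        if is_goal(n):
            return n
        if n in closed:              # skip repeated nodes
            continue
        closed.add(n)
        for s in successors(n):
            if s not in closed:
                if strategy == 'dfs':
                    open_q.appendleft(s)   # front of OPEN -> depth-first
                else:
                    open_q.append(s)       # end of OPEN -> breadth-first
    return None

# Hypothetical toy graph: A -> B, C; B -> D
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
```

With this graph, both strategies find the goal D; they differ only in the order in which nodes are expanded.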
Let's briefly examine the properties of some commonly used uninformed
search algorithms.
Depth-First Search
 Termination:
 guaranteed for a finite space if repeated nodes are checked.
Guaranteed when a depth bound is used.
Not guaranteed otherwise.
 Completeness:
 not guaranteed. Guaranteed only if the search space
is finite (exhaustive search) and repeated nodes are checked.
 Admissibility:
 not guaranteed.

Breadth-First Search
 Termination:
 guaranteed for finite space. Guaranteed when a solution exists.
 Completeness:
 guaranteed.
 Admissibility:
 the algorithm will always find the shortest path
(it might not be the optimal path, if arcs have different costs).

Depth-First Iterative-Deepening Search
 Termination:
 guaranteed for finite space. Guaranteed when a solution exists.
 Completeness:
 guaranteed.
 Admissibility:
 the algorithm will always find the shortest path
(it might not be the optimal path, if arcs have different costs).


 classes of search algorithms. We can classify search algorithms along
multiple dimensions. Here are some of the most common:
 uninformed (depth-first, breadth-first, uniform cost,
depth-limited, iterative deepening) versus informed (greedy, A*,
IDA*, SMA*)
 local (greedy, hill-climbing) versus global (uniform cost, A*, etc)
 systematic (depth-first, A*, etc) versus stochastic
(simulated annealing, genetic algorithms)
Description of an Uninformed Search Algorithm:
Depth-First Search with depth bound
1. Put the start node on OPEN.
2. until OPEN is empty or a goal node has been reached
2.1 remove the topmost node from OPEN and put it on
CLOSED. Call this node n.
2.2 if the depth of n is less than the depth bound
then
2.2.1 expand n, generating all successors (if any).
2.2.2 if any of these successors is already on
the traversal path discard it
2.2.3 if any of these successors is a goal node,
exit
2.2.4 put the successors on top of OPEN and
provide for each a pointer back to n
else
2.2.5 clean up CLOSED.
3. If a goal node is found,
then exit with the solution obtained by tracing back
through its pointers
else announce failure.
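The steps above can be sketched as follows. This is an illustrative rendering, not the exact bookkeeping: the path carried with each OPEN entry stands in both for the back-pointers of step 2.2.4 and for a CLOSED queue pruned to the current traversal path, and the goal test is done when a node is removed from OPEN rather than at generation time (step 2.2.3). The toy graph is hypothetical.

```python
def dfs_depth_bound(start, successors, is_goal, bound):
    """Depth-first search with a depth bound (steps 1-3 above).
    Each OPEN entry carries its depth and the path from the start node."""
    open_stack = [(start, 0, [start])]
    while open_stack:
        n, depth, path = open_stack.pop()        # step 2.1: topmost node
        if is_goal(n):
            return path                          # step 3: trace back the path
        if depth < bound:                        # step 2.2
            for s in successors(n):              # step 2.2.1: expand n
                if s in path:                    # step 2.2.2: already on path
                    continue
                open_stack.append((s, depth + 1, path + [s]))
    return None                                  # step 3: announce failure

# Hypothetical toy graph: the goal D sits at depth 2
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
```

With a depth bound of 2 the goal is found; with a depth bound of 1 the search terminates without a solution, as discussed in the notes below.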
Notes:

Nodes that have been expanded (i.e. their successors have been generated)
are called closed; nodes that were generated
and are awaiting expansion are called open. Two queues, CLOSED and OPEN,
keep track of these two sets of nodes.

The depth of a node in a graph is defined recursively as 1 + the depth of its
shallowest parent. The start node has depth 0.

If the depth bound is less than the solution depth the algorithm
terminates without finding a solution. If the depth bound is greater, the
algorithm might find a non-optimal solution. This can be remedied by
completing the search to the depth of the last solution found and then
returning the best solution.

To save memory CLOSED needs to be cleaned up by keeping in it only the
nodes on the current traversal path.
Description of an Informed Search Algorithm
Best-First Search
The term best-first search is used in different ways by different
authors. The Russell and Norvig textbook calls best-first search
any algorithm that sorts the
nodes according to some function and selects the best each time.
Judea Pearl has a somewhat different definition of best-first, which
is given here. The major difference is in the specification of how to
handle nodes already seen (Russell and Norvig do not specify what to do)
and in when the goal test is done (Russell and Norvig, in Graph-Search, check
for the goal only on the first element of OPEN;
Pearl checks as soon as the successors of a node are generated).
1. Put the start node S on OPEN.
2. until OPEN is empty or a goal node has been reached
2.1 remove from OPEN a node at which f is minimum (break
ties arbitrarily) and put it on CLOSED. Call this
node n.
2.2 expand n, generating all successors (if any).
2.3 if any of the successors of n is a goal node, exit.
2.4 for each successor n' of n
2.4.1 calculate f(n')
2.4.2 if n' was neither on OPEN nor CLOSED, add it to
OPEN. Attach the cost f(n') to it.
2.4.3 if n' was already on OPEN or on CLOSED compare
the new value f(n') with the previously assigned
value f(n'). If the old value is lower discard
the newly generated node. If the new value is
lower, substitute the new node for the old node.
If the old node was on CLOSED move it back to
OPEN.
3. If a goal node is found,
then exit with the solution obtained by tracing back
through pointers
else announce failure.
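The steps above can be sketched as follows, under the simplifying assumption that f is a function of the node alone (in general, as in A*, f depends on the path used to reach the node). The example graph and f values are hypothetical.

```python
def best_first(start, successors, is_goal, f):
    """Best-first search in the formulation above: select the OPEN node
    with minimum f (step 2.1), test successors for the goal as soon as
    they are generated (step 2.3), and keep only the lowest-f copy of a
    node seen on OPEN or CLOSED (steps 2.4.2-2.4.3)."""
    open_f = {start: f(start)}       # node -> f value, nodes on OPEN
    closed_f = {}                    # node -> f value, nodes on CLOSED
    parent = {start: None}           # back-pointers for tracing the path
    while open_f:
        n = min(open_f, key=open_f.get)          # step 2.1: minimum f
        closed_f[n] = open_f.pop(n)              # move n to CLOSED
        for s in successors(n):                  # step 2.2: expand n
            if is_goal(s):                       # step 2.3: goal test
                parent[s] = n
                path = [s]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            fs = f(s)                            # step 2.4.1
            old = open_f.get(s, closed_f.get(s))
            if old is None or fs < old:          # steps 2.4.2-2.4.3
                closed_f.pop(s, None)            # if on CLOSED, move back
                open_f[s] = fs
                parent[s] = n
    return None                                  # step 3: announce failure

# Hypothetical example: two paths to the goal G; C has the lower f
graph = {'A': ['B', 'C'], 'B': ['G'], 'C': ['G'], 'G': []}
f_values = {'A': 0, 'B': 2, 'C': 1, 'G': 0}
```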
Notes:

Step 2.3 requires checking if any of the children of the node just expanded
is a goal node. This allows terminating as soon as a goal is found, but
the solution returned might have a larger value of f than the optimal
solution.
To guarantee an optimal solution we need to use an f function that
underestimates the cost (as done in A*) and to test for the goal after a node
has been selected for expansion.

The computation required to do the tests in step 2.4.3 can be
substantial.
We could eliminate this step, at the cost of having to maintain
duplicate descriptions of identical nodes.

If a lower value has been found for a node n' that had already been visited
(step 2.4.3) and the old node was on CLOSED, instead of moving it
back to OPEN we could leave it on CLOSED, redirect the pointer to its
parent node (so it will point to the new parent on the shorter path), and
recompute the f values of its descendants. This requires keeping
track of descendants of nodes and not only of their parent (so it makes
the program more complex to write), but it saves re-expanding nodes that have
already been expanded (so it can save time if expansion is an expensive
process).

In general there are no constraints on the function f(n). Best-first
always selects for expansion the node with the smallest f value, so f
values must be smaller for better nodes. Also, best-first discards
multiple paths leading to the same node and keeps the path with the
smallest f value.
 Properties of Best-First
 Completeness:

guaranteed on finite search spaces
 Admissibility:
 not guaranteed
 Time Complexity:

depends on the accuracy of the heuristic function. See more later under A*.
 Space Complexity:
 O(B^d) where B = branching factor, and d = solution depth
A*
1. Put the start node s on OPEN. Attach to s the cost
g(s) = 0. Initialize CLOSED to the empty queue.
2. until OPEN is empty or a goal node has been reached
2.1 remove from OPEN a node at which f is minimum
(break ties arbitrarily) and put it on CLOSED.
Call this node n.
2.2 if n is a goal node, exit.
2.3 expand n, generating all successors (if any).
2.4 for each successor n' of n
2.4.1 calculate f(n')= g(n') + h(n')
= g(n) + c(n,n') + h(n').
2.4.2 if n' was neither on OPEN nor CLOSED, add
it to OPEN. Attach the cost f(n') to it.
2.4.3 if n' was already on OPEN or on CLOSED
direct its pointers along the path yielding
the lowest g(n') and keep the lowest f(n').
2.4.4 if n' required pointer adjustment and was
found on CLOSED, move it back to OPEN.
3. If a goal node is found,
then exit with the solution obtained by tracing back
through its pointers
else announce failure.
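The steps above can be sketched as follows. Here successors(n) is assumed to yield (successor, arc cost) pairs; pointer redirection in step 2.4.3 is done by comparing g values, and step 2.2's goal test happens when a node is selected for expansion. The example graph and heuristic values are hypothetical.

```python
def a_star(start, successors, is_goal, h):
    """A* following steps 1-3 above: f(n) = g(n) + h(n), goal test when
    a node is selected for expansion (step 2.2), parent pointers
    redirected along the path with the lowest g (steps 2.4.3-2.4.4)."""
    g = {start: 0}                   # g(s) = 0
    parent = {start: None}
    open_set = {start}
    closed = set()
    while open_set:
        # step 2.1: remove from OPEN a node at which f = g + h is minimum
        n = min(open_set, key=lambda x: g[x] + h(x))
        open_set.remove(n)
        closed.add(n)
        if is_goal(n):                           # step 2.2
            path = [n]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1], g[n]
        for s, c in successors(n):               # steps 2.3-2.4
            tentative = g[n] + c                 # g(n') = g(n) + c(n, n')
            if s not in g or tentative < g[s]:   # lower g: redirect pointer
                g[s] = tentative
                parent[s] = n
                closed.discard(s)                # step 2.4.4: reopen if closed
                open_set.add(s)
    return None                                  # step 3: announce failure

# Hypothetical weighted graph and admissible heuristic values
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('G', 5)],
         'C': [('G', 1)], 'G': []}
h_values = {'A': 2, 'B': 2, 'C': 1, 'G': 0}
```

On this graph A* returns the optimal path A, B, C, G of cost 3, even though a direct but more expensive arc to G exists.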
Notes:

A* is a special case of bestfirst search where:
 The cost f(n) of a node is computed as f(n) = g(n) + h(n), where
g(n) is the cost of the current path from s to n, and
h(n) is an estimate of the cost of the path from n to goal

The g cost of a node n is computed recursively by adding the g cost of
its parent node m to the cost of the arc from the parent m to the node n
g(n) = g(m) + c(m,n).
We assume arc costs to be positive.
The start node has a cost g(s) = 0.

h(n) is an underestimate of the cost of the optimal path from n to goal.
If we call h*(n) the cost of the optimal path
forall n, h(n) <= h*(n).
This implies that a goal node has a cost h(goal) = 0

The check for the goal node (step 2.2) must be done after the node has
been selected for expansion to guarantee an optimal solution.

The redirection of the parent pointer is done using only the g value of
the node (not the f value). This guarantees that the path used to any
node is the best path.

When the heuristic used is monotone, A* never reopens nodes from CLOSED.
This means that whenever a node is put into CLOSED, A* has already found
the optimal path to it.

If forall n h(n) = 0 the algorithm A* is the same as Branch & Bound (or
Uniform Cost).
If forall n h(n) = 0 and forall n, n' c(n,n') = 1 the algorithm A* is the same
as Breadth-First Search.
 Properties of A*
 Completeness:
 guaranteed even on infinite graphs, provided that the branching factor
is finite and that there is some positive constant delta such that
every operator costs at least delta.
 Admissibility:
 guaranteed given an admissible heuristic (i.e. h(n) is an
underestimate of the cost of the optimal path from n to the goal for every
node)
 Time Complexity:
 depends on the accuracy of the heuristic function.
If the heuristic
function is exact, A* runs in linear time; if the heuristic function is
useless, the time complexity is exponential (as for uniform cost).
 Space Complexity:
 O(B^d)
Iterative-Deepening A* (IDA*)
IDA* is similar to Depth-First Iterative-Deepening, the difference
being the cutoff criterion. A
path is cut off when its total cost f(n) exceeds a cost threshold. IDA*
starts with a threshold equal to f(s) (which is equal to h(s) since g(s)
= 0). Each iteration is a depth-first
search, cutting off a branch when its f(n) value exceeds the threshold.
If a solution is found, the algorithm terminates. Otherwise, the
threshold is increased to the minimum f value that exceeded the previous
threshold and another depth-first search is started from scratch. This
continues until a solution is found.
 Time Complexity:
 depends on the accuracy of the heuristic function. If the heuristic
function is exact, IDA* runs in linear time; if the heuristic function is
useless, the time complexity is exponential (as for uniform cost).
 Space Complexity:
 O(d)
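The IDA* loop described above can be sketched as follows, assuming successors(n) yields (successor, arc cost) pairs and h is the heuristic; the example graph and heuristic values are hypothetical.

```python
def ida_star(start, successors, is_goal, h):
    """IDA* as described above: repeated depth-first searches, cutting
    off a path when f = g + h exceeds the threshold; the threshold
    starts at h(start) and grows to the smallest f that exceeded it."""
    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return f, None                   # cut off; report exceeding f
        if is_goal(node):
            return f, list(path)
        smallest = float('inf')              # min f that exceeded threshold
        for s, c in successors(node):
            if s in path:                    # avoid loops on current path
                continue
            path.append(s)
            exceeded, found = dfs(s, g + c, threshold, path)
            path.pop()
            if found is not None:
                return exceeded, found
            smallest = min(smallest, exceeded)
        return smallest, None

    threshold = h(start)                     # f(s) = h(s) since g(s) = 0
    while threshold < float('inf'):
        exceeded, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        threshold = exceeded                 # start over with new threshold
    return None

# Hypothetical weighted graph and heuristic values
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('G', 5)],
         'C': [('G', 1)], 'G': []}
h_values = {'A': 2, 'B': 2, 'C': 1, 'G': 0}
```

Note that only the current path is stored, which is what gives IDA* its O(d) space complexity.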
Heuristic functions

A heuristic function h(n) that underestimates the cost
of getting from n to goal is said to be admissible.

A heuristic function is said to be monotone if for all pairs
of nodes n and
n' (with n' being a successor of n) the following inequality is satisfied:
h(n) <= c(n,n') + h(n').
Examples of heuristic functions for the 8-puzzle
In the 8-puzzle problem 8 differently numbered tiles are fitted into 9
spaces on a 3x3 grid. One space has no tile (we call it blank) so that
any tile adjacent to the empty space can be moved to occupy it by sliding
from its original position.
For instance, here is an initial and a goal state:

    initial state        goal state
      2  8  1              1  2  3
      4     6              8     4
      7  5  3              7  6  5
Here are some different heuristic functions:

h1(n) = number of misplaced tiles

h2(n) = P(n) + 2 * R(n)
where P(n) is the sum of the Manhattan distances that each tile is from
home,
and R(n) is the number of direct tile reversals (when two adjacent tiles
must be exchanged to be in the order of the goal).

h3(n) = P(n) + 3 * S(n)
where P(n) is the sum of the Manhattan distances that each tile is from
home,
and S(n) is a sequence score obtained by counting 1 for the central
tile (if not blank)
and counting 2 for each non-blank tile on the board perimeter that is
not followed (in clockwise order) by its proper successor.
To compute S(n) the blank tile does not count.
This heuristic is not admissible.
Here is an example of how to compute these heuristics for the
initial configuration shown above.
The value of R(n) is 0 because there are no direct tile reversals.
P(n) =  1   (2 is 1 position away from goal position)
      + 2   (8 is 2 positions away from goal position)
      + 2   (1 is 2 positions away from goal position)
      + 2   (6 is 2 positions away from goal position)
      + 2   (3 is 2 positions away from goal position)
      + 1   (5 is 1 position away from goal position)
      + 0   (7 is 0 positions away from goal position)
      + 2   (4 is 2 positions away from goal position)
      = 12

S(n) =  2   (2 not followed by its successor)
      + 0   (8 followed by its successor)
      + 2   (1 not followed by its successor)
      + 2   (6 not followed by its successor)
      + 2   (3 not followed by its successor)
      + 2   (5 not followed by its successor)
      + 2   (7 not followed by its successor)
      + 2   (4 not followed by its successor)
      = 14
The blank tile does not count even if it is not in its goal position.
In this case the center tile is blank, so we do not add 1 to S(n) for it.
If we
had the blank tile on the border it would not affect the sequence score,
because to compute the sequence score you look only at the numbered
tiles.
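The two distance-based heuristics above, h1 and the Manhattan-distance term P(n), can be sketched as follows. The board encoding (a row-by-row tuple with 0 standing for the blank) is an assumption of this sketch. For the initial and goal states shown earlier, manhattan returns 12, agreeing with the P(n) computation above.

```python
def misplaced(state, goal):
    """h1(n): number of misplaced tiles (the blank, encoded as 0, is
    not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """P(n): sum of the Manhattan distances of each tile from its home
    square; boards are row-by-row tuples with 0 for the blank."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)                 # home square of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# The initial and goal states shown above, row by row (0 = blank)
initial = (2, 8, 1, 4, 0, 6, 7, 5, 3)
goal    = (1, 2, 3, 8, 0, 4, 7, 6, 5)
```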