Kingsbury Hall Room W236

Department of Computer Science

University of New Hampshire

33 Academic Way

Durham, NH 03824

wilt~at~cs~dot~unh~dot~edu

UPDATE: I have graduated, and my new website is here.

I am a graduate student in computer science at the University of New Hampshire. I work with Professor Wheeler Ruml. My CV can be downloaded here.

I have served as a teaching assistant for algorithms, artificial intelligence, data structures, and Weaving the Web.

I am currently researching both optimal and suboptimal heuristic search.

I spent the summer of 2012 at Kiva Systems, where I worked on path planning and multiagent path planning.

Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS-14), 2014.

Multiagent path planning is important in a variety of fields, ranging
from games to robotics and warehouse management. Although centralized
control in the joint action space can provide optimal plans, this
often is computationally infeasible. Decoupled planning is much more
scalable. Traditional decoupled approaches perform a unit-centric
decomposition: for instance, they replace a multiagent search with a
series of single-agent searches, one for each mobile unit.
We introduce an orthogonal, significantly different approach,
following a spatial decomposition that partitions a map into
high-contention bottleneck areas and low-contention areas. Local
agents called controllers are each in charge of one local area,
routing mobile units within it. Knowledge is distributed across the
map: each controller can observe only the state of its own area.
Adjacent controllers can communicate to negotiate the transfer
of mobile units. We evaluate our implemented algorithm, SDP, on real
game maps with a mixture of larger areas and narrow, bottleneck
gateways. The results demonstrate that spatially distributed planning
can have substantial benefits in terms of makespan quality and
computation speed.
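
To make the decomposition concrete, here is a small Python sketch of
spatially distributed routing. It is illustrative only, not the SDP
implementation evaluated in the paper: regions are sets of grid cells,
each pair of adjacent regions is assumed to share one gateway cell
(owned by the destination-side region), the goal's region is assumed
adjacent, and routes are assumed to exist.

    from collections import deque

    class Controller:
        """Owns one region of the map and routes units only inside it."""
        def __init__(self, cells, gateways):
            self.cells = cells        # set of (x, y) cells in this region
            self.gateways = gateways  # {adjacent region name: gateway cell}

        def local_route(self, start, goal):
            """BFS restricted to this region (the goal may be a gateway
            cell owned by the neighboring region)."""
            parent, frontier = {start: None}, deque([start])
            while frontier:
                cur = frontier.popleft()
                if cur == goal:
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = parent[cur]
                    return path[::-1]
                x, y = cur
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (nxt == goal or nxt in self.cells) and nxt not in parent:
                        parent[nxt] = cur
                        frontier.append(nxt)
            return None

    def route(start, goal, controllers, region_of):
        """Hop from region to region; each controller plans only its part."""
        path = [start]
        while region_of[path[-1]] != region_of[goal]:
            ctrl = controllers[region_of[path[-1]]]
            # Simplified negotiation: the goal's region is assumed adjacent,
            # so the unit is handed off at the shared gateway cell.
            gate = ctrl.gateways[region_of[goal]]
            path += ctrl.local_route(path[-1], gate)[1:]
        path += controllers[region_of[goal]].local_route(path[-1], goal)[1:]
        return path

The point of the structure is that each controller's search touches
only its own cells, so congestion in a bottleneck stays a local problem.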

Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence (AAAI-13), 2013.

Although the heuristic search algorithm A* is well known to be
optimally efficient, this result explicitly assumes forward
search. Bidirectional search has long held promise for surpassing
A*'s efficiency, and many varieties have been proposed, but it has
proven difficult to achieve robust performance across multiple
domains in practice. We introduce a simple bidirectional search
technique called Incremental KKAdd that judiciously performs backward
search to improve the accuracy of the forward heuristic function for
any search algorithm. We integrate this technique with A*, assess its
theoretical properties, and empirically evaluate its performance
across seven benchmark domains. In the best case, it yields a factor
of six reduction in node expansions and CPU time compared to A*, and
in the worst case, its overhead is provably bounded by a user-supplied
parameter, such as 1%. Viewing performance across all domains, it
also surpasses previously proposed bidirectional search
algorithms. These results indicate that Incremental KKAdd is a robust
way to leverage bidirectional search in practice.

The presentation associated with this paper can be downloaded
here.
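
The core idea can be sketched in a few lines of Python: a budgeted
backward Dijkstra search from the goal yields exact goal distances for
the nodes it expands, and the smallest heuristic error observed on its
frontier is an admissible additive correction for every other node,
assuming a consistent heuristic. This is a rough illustration of the
KKAdd idea, not the paper's exact Incremental KKAdd algorithm; `pred`,
`cost`, and `h` are assumed to be supplied by the problem.

    import heapq, itertools

    def kkadd_heuristic(goal, pred, cost, h, budget):
        """Backward Dijkstra from `goal`, expanding at most `budget` nodes.
        Returns h2 with h(n) <= h2(n) <= h*(n), assuming h is consistent."""
        tick = itertools.count()           # heap tie-breaker
        dist = {goal: 0.0}                 # tentative backward distances
        exact = {}                         # proven true distances to the goal
        heap = [(0.0, next(tick), goal)]
        while heap and len(exact) < budget:
            d, _, n = heapq.heappop(heap)
            if n in exact:
                continue                   # stale heap entry
            exact[n] = d
            for m in pred(n):              # m precedes n in the forward graph
                nd = d + cost(m, n)
                if nd < dist.get(m, float("inf")):
                    dist[m] = nd
                    heapq.heappush(heap, (nd, next(tick), m))
        # Any solution path from an unexpanded node crosses the backward
        # frontier, so the smallest error observed there is a safe addition.
        frontier = [m for m in dist if m not in exact]
        eps = min((dist[m] - h(m) for m in frontier), default=float("inf"))

        def h2(n):
            # Exact if the backward search reached n; otherwise corrected.
            return exact.get(n, h(n) + eps)
        return h2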

Fifth Annual Symposium on Combinatorial Search, 2012.

Weighted A* is the most popular satisficing algorithm for heuristic
search. Although there is no formal guarantee that increasing the
weight on the heuristic cost-to-go estimate will decrease search
time, it is commonly assumed that increasing the weight leads to
faster searches, and that greedy search will provide the fastest
search of all. As we show, however, in some domains, increasing the
weight slows down the search.
This has an important consequence for the scaling behavior of
Weighted A*: increasing the weight ad infinitum speeds up the
search only if greedy search is effective. We examine several
plausible hypotheses as to why greedy search would sometimes expand
more nodes than A* and show that each of the simple explanations has
flaws. Our contribution is to show that greedy search is fast if
and only if there is a strong correlation between h*(n), the true
cost-to-go, and d*(n), the true distance-to-go, or if the heuristic
is extremely accurate.
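
For reference, weighted A* simply orders the open list on
f(n) = g(n) + w * h(n), so w = 1 gives A* and large w approaches
greedy search on h. A minimal Python sketch, not tied to any
particular domain from the paper:

    import heapq, itertools

    def weighted_astar(start, succ, cost, h, goal_test, w):
        """Best-first search on f(n) = g(n) + w * h(n)."""
        tick = itertools.count()  # tie-breaker so the heap never compares nodes
        open_list = [(w * h(start), next(tick), 0.0, start, [start])]
        best_g = {start: 0.0}     # duplicate detection: cheapest g seen so far
        while open_list:
            f, _, g, n, path = heapq.heappop(open_list)
            if goal_test(n):
                return path, g
            for m in succ(n):
                g2 = g + cost(n, m)
                if g2 < best_g.get(m, float("inf")):
                    best_g[m] = g2
                    heapq.heappush(open_list,
                                   (g2 + w * h(m), next(tick), g2, m, path + [m]))
        return None, float("inf")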

Proceedings of the Twenty-Second International Conference on Automated Planning and Scheduling (ICAPS-12), 2012.

There has been much interest recently in problems that combine
high-level task planning with low-level motion planning. In this
paper, we present a problem of this kind that arises in multi-vehicle
mission planning. It tightly integrates task allocation and
scheduling (who will do what, and when) with path planning (how each
task will actually be performed). It extends classical vehicle
routing in that the time and cost of executing a set of high-level
tasks can vary significantly according to the low-level paths
selected. It extends classical motion planning in that each path must
minimize cost while also respecting temporal constraints, including
those imposed by the agent's other tasks and the tasks assigned to
other agents. Furthermore, the problem is a subtask within an
interactive system and therefore must operate within severe time
constraints. We present an approach to the problem based on a
combination of tabu search, linear programming, and heuristic
search. We evaluate our planner on representative problem instances
and find that its performance meets the demanding requirements of our
application. These results demonstrate how integrating multiple
diverse techniques can successfully solve challenging real-world
planning problems that are beyond the reach of any single method.
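
As a rough illustration of the decomposition (an outer local search
over task allocations whose moves are evaluated by an inner planner),
here is a simplified tabu-search skeleton in Python. It is not the
paper's planner: `path_cost` stands in for the low-level path-planning
and scheduling machinery, and the neighborhood and tabu rules are
deliberately minimal.

    import random

    def evaluate(assignment, path_cost):
        """Cost of an allocation: each vehicle executes its task list,
        paying whatever the inner path planner/scheduler reports."""
        return sum(path_cost(v, tasks) for v, tasks in assignment.items())

    def tabu_search(assignment, path_cost, iters=1000, tabu_len=10):
        """Outer search over allocations; moves shift one task between
        vehicles, and a short tabu list blocks immediate reversals."""
        cur = {v: list(ts) for v, ts in assignment.items()}
        best = {v: list(ts) for v, ts in cur.items()}
        best_cost = evaluate(cur, path_cost)
        tabu = []
        if len(cur) < 2:
            return best, best_cost
        for _ in range(iters):
            donors = [v for v, ts in cur.items() if ts]
            if not donors:
                break
            a = random.choice(donors)
            b = random.choice([v for v in cur if v != a])
            task = random.choice(cur[a])
            if (task, b) in tabu:      # recently moved away from b: skip
                continue
            cur[a].remove(task)
            cur[b].append(task)
            tabu.append((task, a))     # forbid moving it straight back to a
            tabu[:] = tabu[-tabu_len:]
            c = evaluate(cur, path_cost)
            if c < best_cost:
                best = {v: list(ts) for v, ts in cur.items()}
                best_cost = c
        return best, best_cost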

Fourth Annual Symposium on Combinatorial Search, 2011.

In many domains, different actions have different costs. In this
paper, we show that various kinds of best-first search algorithms
are sensitive to the ratio between the lowest and highest operator
costs. First, we take common benchmark domains and show that when
we increase the ratio of operator costs, the number of node
expansions required to find a solution increases. Second, we
provide a theoretical analysis showing one reason this phenomenon
occurs. We also discuss additional domain features that can cause
this increased difficulty. Third, we show that searching using
distance-to-go estimates can significantly ameliorate this problem.
Our analysis takes an important step toward understanding algorithm
performance in the presence of differing costs. This research
direction will likely only grow in importance as heuristic search is
deployed to solve real-world problems.
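
The remedy mentioned above can be sketched directly: order the search
on a distance-to-go estimate d(n), which counts remaining actions and
is therefore insensitive to the spread of operator costs. A minimal
Python sketch of such a "speedy" search, with `succ`, `cost`, and `d`
assumed supplied by the domain:

    import heapq, itertools

    def speedy_search(start, succ, cost, d, goal_test):
        """Greedy best-first on d(n); cost is tracked but not optimized."""
        tick = itertools.count()  # tie-breaker so nodes are never compared
        open_list = [(d(start), next(tick), 0.0, start)]
        seen = {start}
        while open_list:
            _, _, g, n = heapq.heappop(open_list)
            if goal_test(n):
                return n, g
            for m in succ(n):
                if m not in seen:
                    seen.add(m)
                    heapq.heappush(open_list, (d(m), next(tick), g + cost(n, m), m))
        return None, float("inf")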

Third Annual Symposium on Combinatorial Search, 2010.

We discuss the relationships between three approaches to greedy
heuristic search: best-first search, hill-climbing search, and beam
search. We consider the design decisions within each family and
point out their oft-overlooked similarities. We consider the
following best-first searches: weighted A*, greedy search, A*
epsilon, window A*, and multi-state commitment search. For
hill-climbing algorithms, we consider enforced hill climbing and
LSS-LRTA*. We also consider a variety of beam searches, including
BULB. We show how to best configure beam search in order to
maximize robustness. An empirical analysis on six standard
benchmarks reveals that beam search and best-first search have
remarkably similar performance, and outperform hill-climbing
approaches in terms of both time to solution and solution quality.
Of these, beam search is preferable for very large problems, and
best-first search is better on problems where the goal cannot be
reached from all states.

The poster associated with this paper can be downloaded
here.
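
For concreteness, here is a compact Python sketch of the beam search
family compared in the paper: keep only the best `width` children at
each depth layer under the heuristic h, with no backtracking. It is
illustrative only.

    def beam_search(start, succ, h, goal_test, width, max_depth=10000):
        """Keep the best `width` nodes per depth layer under h."""
        if goal_test(start):
            return start
        layer, seen = [start], {start}
        for _ in range(max_depth):
            children = []
            for n in layer:
                for m in succ(n):
                    if goal_test(m):
                        return m
                    if m not in seen:
                        seen.add(m)
                        children.append(m)
            if not children:
                return None        # the beam emptied out: search fails
            layer = sorted(children, key=h)[:width]
        return None

Pruning to `width` nodes is what bounds memory, and it is also why
plain beam search is incomplete: a goal reachable only through a
pruned node is lost, which is what variants like BULB address.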

Technical Report 10-07

Many algorithms are capable of solving the shortest-path problem,
and for a specific problem it is not clear which of them should be
used. Based upon an empirical evaluation of six benchmark domains,
we create a decision tree that considers domain features and
approximate time/memory budget constraints to decide which algorithm
should be used for a domain with known attributes and a given
time/memory budget. The
decision tree also helps identify open questions regarding what
information is needed to predict how well a given algorithm will
perform.
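
To show the kind of procedure such a decision tree encodes, here is a
purely hypothetical Python sketch; the branch order and choices below
are invented for illustration and are not the report's actual tree.

    def choose_algorithm(need_optimal, memory_ample, time_ample, good_d_estimate):
        """Hypothetical illustration only: not the report's actual tree."""
        if need_optimal:
            return "A*" if memory_ample else "IDA*"
        if not memory_ample:
            return "beam search"
        if not time_ample and good_d_estimate:
            return "speedy search (greedy on distance-to-go)"
        return "weighted A*" if time_ample else "greedy best-first search"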