## nextmv Docs

# How Hop Hops

Hop searches for solutions using Decision Diagram (DD) techniques. This section provides a basic introduction to these mechanics. It is not intended to be exhaustive. It merely imparts a working knowledge of some of the concepts underlying Hop's algorithms. For the theoretical foundations of optimization using DDs, see the excellent book on the subject:

David Bergman, Andre A. Cire, Willem-Jan Van Hoeve, and John Hooker. Decision Diagrams for Optimization. Springer, 2016.

## Decision Diagrams

Hop provides solvers for maximizing or minimizing an objective, and satisfying a set of constraints. At a high level, Hop reads JSON input describing the state of the system and constructs actors based on that state. It then uses DD techniques to explore future states for improving solutions.

In order to leverage these DD techniques, Hop requires the modeler to define a few things:

- State definition
- Transition function
- Feasibility test

This state must implement two methods: `Feasible` and `Next`. `Feasible` should return true if a state is actionable, meaning it can be implemented in the real world. What this means depends on the model. For example, in a routing problem, feasibility may imply that each of a set of customer requests has been planned for. In the knapsack problem discussed below, it means that the knapsack's capacity is not exceeded.

`Next` should return an `Expander`. An `Expander` is able to return states that we're interested in reaching from a given state. For a routing problem, `Next` might attempt to insert a request into an existing set of routes anywhere it could make sense to do so. For the knapsack problem, `Next` merely considers whether to include or exclude another item.

If we are maximizing or minimizing some objective, our model also needs a `Value` method to tell Hop the value of each state. An example should make this all clear.
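To make the contract concrete, here is a minimal sketch of a knapsack state with these three methods. This is illustrative Python with invented names, not Hop's actual Go API; it only mirrors the roles of `Feasible`, `Next`, and `Value` described above.

```python
# Illustrative sketch only: the names and structure here are assumptions that
# mirror the Feasible/Next/Value roles described above, not Hop's real Go API.

class KnapsackState:
    """One node in the decision diagram: items 0..depth-1 have been decided."""

    def __init__(self, items, capacity, depth=0, packed=()):
        self.items = items        # list of (value, weight) pairs
        self.capacity = capacity  # total weight we can carry
        self.depth = depth        # how many items have been decided
        self.packed = packed      # indices of the items we chose to carry

    def feasible(self):
        # Actionable iff the packed items fit within the capacity.
        return sum(self.items[i][1] for i in self.packed) <= self.capacity

    def value(self):
        # The objective: total value of the packed items.
        return sum(self.items[i][0] for i in self.packed)

    def next(self):
        # The "expander": child states reachable from this one, i.e.
        # exclude or include the next undecided item.
        if self.depth == len(self.items):
            return []
        exclude = KnapsackState(self.items, self.capacity,
                                self.depth + 1, self.packed)
        include = KnapsackState(self.items, self.capacity,
                                self.depth + 1, self.packed + (self.depth,))
        return [exclude, include]


# The root state R from the walkthrough below: nothing packed yet.
items = [(200, 25), (150, 10), (225, 20)]  # (value, weight) for items 1..3
root = KnapsackState(items, capacity=50)
print(root.feasible(), root.value())  # True 0
```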

## How Hop Fits In

Seamless integration is arguably one of the most valuable features of Hop or Dash and guides many of our design decisions. Hop models often make decisions using real-time data. A Hop runner loads input data from JSON, along with options that impact its execution. The runner provides these data and options to Hop's solver. Hop outputs a recommended future state, such as a set of routes for couriers. A separate service (machine or human) typically receives the output JSON and uses it to make decisions.

## Example: Knapsack Problems

Consider the 0-1 knapsack problem. We have a set of items 1 through n. Each item has a value and a weight. We can carry a total weight of 50. We would like to carry the maximum possible value within our weight limit.

Let's say our input data looks like this.

Item | Value | Weight |
---|---|---|
1 | 200 | 25 |
2 | 150 | 10 |
3 | 225 | 20 |

To get started, we need a root state. We aren't carrying anything to begin with, so our root state `R` is an empty set of items. Its weight and value are both 0. We also call this the root "layer" and say it has a depth of 0. It is a feasible knapsack because we haven't exceeded the capacity, but it isn't a very useful one either.

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R` | 0 | - | 0 | 0 | Yes |

Once we construct this layer, we examine `R`. `R` is our first feasible solution to the knapsack, so we mark it as the "incumbent solution."

Let's consider packing each item in order 1 through n. For each item, we have two options: leave it out or pack it. States `R0` and `R1` correspond to those two options for item 1. Together they form layer 1 of our diagram.

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R0` | 1 | - | 0 | 0 | Yes |
`R1` | 1 | 1 | 200 | 25 | Yes |

Both `R0` and `R1` have remaining capacity in the knapsack, so they are feasible. We order these by descending value and determine that `R1` is a better solution than our current incumbent, `R`. We save `R1` as our new incumbent and skip `R0` as it is worse than `R1`.

Now we decide what to do about item 2 in the same manner. Isn't it nice having computers to do this sort of work for us?

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R00` | 2 | - | 0 | 0 | Yes |
`R01` | 2 | 2 | 150 | 10 | Yes |
`R10` | 2 | 1 | 200 | 25 | Yes |
`R11` | 2 | 1,2 | 350 | 35 | Yes |

Again, we examine the feasible solutions in this layer. We find that `R11` improves on the value of our current incumbent, `R1`. We save `R11` as our new incumbent and skip the other solutions.

Deciding whether or not to include item 3 generates a layer with 8 states in it. Each layer has twice the number of states (the "width") of the layer before it. This is called "expanding" the next layer. If you're thinking that our layers are going to get hopelessly big after only a small number of items, you're right!
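The doubling is easy to verify in a few lines. The sketch below (plain Python, not Hop) expands every layer exhaustively for the three-item instance and records each layer's width:

```python
# Exhaustive expansion: every state branches into "exclude" and "include",
# so layer k of the exact diagram contains 2**k states. Plain Python, not Hop.

items = [(200, 25), (150, 10), (225, 20)]  # (value, weight) for items 1..3

layer = [(0, 0)]          # layer 0 holds only the root R: value 0, weight 0
widths = [len(layer)]
for value, weight in items:
    layer = [child
             for v, w in layer
             for child in ((v, w), (v + value, w + weight))]
    widths.append(len(layer))

print(widths)  # [1, 2, 4, 8]
```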

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R000` | 3 | - | 0 | 0 | Yes |
`R001` | 3 | 3 | 225 | 20 | Yes |
`R010` | 3 | 2 | 150 | 10 | Yes |
`R011` | 3 | 2,3 | 375 | 30 | Yes |
`R100` | 3 | 1 | 200 | 25 | Yes |
`R101` | 3 | 1,3 | 425 | 45 | Yes |
`R110` | 3 | 1,2 | 350 | 35 | Yes |
`R111` | 3 | 1,2,3 | 575 | 55 | No |

Something interesting happens when constructing layer 3. State `R111` exceeds our weight limit. Thus, we remove it from consideration as an infeasible state. Of the remaining states, `R101` is the best and improves upon our incumbent, `R11`.

There aren't any more states to generate, so we choose `R101` as our optimal solution. It provides the most value without exceeding our weight limit.
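You can confirm the walkthrough by brute force: enumerate every subset of items, drop the infeasible ones, and keep the best. A small self-contained check (plain Python, not Hop):

```python
# Brute-force verification of the knapsack walkthrough: the best feasible
# subset is {1, 3} with value 425, matching state R101.
from itertools import combinations

items = {1: (200, 25), 2: (150, 10), 3: (225, 20)}  # item: (value, weight)
capacity = 50

best_value, best_items = 0, ()
for r in range(len(items) + 1):
    for subset in combinations(items, r):
        weight = sum(items[i][1] for i in subset)
        if weight > capacity:          # e.g. {1, 2, 3} weighs 55: infeasible
            continue
        value = sum(items[i][0] for i in subset)
        if value > best_value:
            best_value, best_items = value, subset

print(best_items, best_value)  # (1, 3) 425
```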

### Diagram Expansion

The exercise we just went through constructed what is called an "exact" decision diagram. That means it represents the entire state space of our problem, as shown in the following graph. Note that we leave `R111` off the graph since it isn't feasible.

```
                ___________ R ___________
               /                         \
          ___ R0 ___                 _ R1 _
         /          \               /      \
       R00          R01           R10      R11
      /   \        /   \         /   \      |
   R000   R001  R010   R011   R100   R101  R110
```

Obviously it becomes impossible to generate exact diagrams for problems of any reasonable size. Luckily, we have techniques to deal with this issue. We mentioned that `Next` returns an `Expander`. We have two different types of `Expanders`: `Eager` and `Lazy`.

`Eager` generates all of its child states at once, while `Lazy` can be configured to generate only a subset of them, up to an expansion limit. States that could generate more children than the limit allows are deferred, so they can later be used to generate even more states until they are exhausted. If our expansion limit is set to 2, then even if a node could generate 10 child nodes, we will only consider the first 2 child nodes and defer the rest for a subsequent pass.
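The eager/lazy distinction can be sketched with a closure over an iterator: each call hands out at most `limit` children and keeps the remainder for a later pass. This is illustrative Python only; Hop's `Eager` and `Lazy` expanders are Go types.

```python
# A lazy expander hands out at most `limit` children per call and defers the
# rest; an eager expander would return the whole list at once. Illustrative
# Python only -- not Hop's actual Expander implementation.

def lazy_expander(children, limit):
    it = iter(children)
    def expand():
        batch = []
        for child in it:
            batch.append(child)
            if len(batch) == limit:
                break
        return batch  # empty once every child has been handed out
    return expand

expand = lazy_expander([f"child{i}" for i in range(10)], limit=2)
print(expand())  # ['child0', 'child1']  first pass
print(expand())  # ['child2', 'child3']  deferred children, next pass
```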

### Diagram Restriction

The other technique Hop uses is called diagram restriction. Say we use a "value" restrictor that keeps the best states of each layer and defers the rest. Let's set our maximum diagram width to 2. We want to maximize total value, so the value restrictor keeps the two states from each layer with the highest values. Our restricted decision diagram looks like this.

```
         R
        / \
      R0   R1
          /  \
        R10   R11
         |     |
       R101   R110
```

This diagram is much smaller, and in this case it manages to find the optimal state for our problem. The other states aren't actually gone; we're just going to look at them later. We focused our search on a promising section of the state space, and it paid off.
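A value restrictor is essentially sort-and-truncate applied per layer. The self-contained sketch below (illustrative Python, not Hop's API) rebuilds this width-2 restricted diagram and recovers the same sequence of incumbent values:

```python
# Width-2 restricted diagram for the knapsack instance on this page.
# A state is a (value, weight) pair; each layer is expanded, filtered for
# feasibility, sorted by descending value, and truncated to `width` states.

items = [(200, 25), (150, 10), (225, 20)]  # (value, weight) for items 1..3
capacity, width = 50, 2

layer, incumbent = [(0, 0)], 0             # root R is the first incumbent
incumbents = [incumbent]
for value, weight in items:
    layer = [(v + dv, w + dw)
             for v, w in layer
             for dv, dw in ((0, 0), (value, weight))]      # expand
    layer = [(v, w) for v, w in layer if w <= capacity]    # feasibility test
    layer.sort(reverse=True)
    layer = layer[:width]                                  # value restriction
    if layer[0][0] > incumbent:
        incumbent = layer[0][0]
        incumbents.append(incumbent)

print(incumbents)  # [0, 200, 350, 425]
```

The printed values match the incumbent sequence `R`, `R1`, `R11`, `R101` from the walkthrough.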

In constructing this restricted diagram, we find a series of improving, incumbent (best known) solutions to consider: `R`, `R1`, `R11`, and `R101`. Each improves on the last solution. We then continue searching the rest of the state space. If we find a better feasible state, we'll use that as a new incumbent.

## Hop Search Mechanics

We've covered two of the techniques Hop uses to search the state space of a decision model: expansion and restriction. There are more parts to Hop's search algorithm, which you can read about in the package docs. Not all of these are required, and Hop makes them pluggable so the modeler can exploit domain knowledge about a problem's structure. The more Hop understands about a problem, the more efficient it can be at finding good solutions and reducing the size of the state space.

Briefly, Hop uses the following procedure to explore any state.

1. *Expand* a deeper layer using each state's `Next` method.
2. *Reduce* states that are dominated by other states.
3. *Restrict* the states for further exploration and defer the rest.

Hop repeats this procedure for each layer. By default, Hop uses a small restriction width to find feasible solutions quickly. Settings like width can be easily overridden using command line options or environment variables. Search components like restrictors can be plugged directly into the solver options. Hop provides interfaces to follow for custom implementations, if so desired.
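The three steps can be sketched as a single loop over layers. Everything here is illustrative Python with invented names; Hop's real plug-in points are Go interfaces described in its package docs. Dominance uses the natural knapsack rule: a state is dominated when another state has at least as much value for no more weight.

```python
# One restricted pass over the knapsack instance from this page.
# A state is (value, weight, depth), where `depth` items have been decided.

items = [(200, 25), (150, 10), (225, 20)]  # (value, weight) for items 1..3
capacity, width = 50, 2

def expand(s):
    # Child states: exclude or include the next undecided item.
    v, w, d = s
    if d == len(items):
        return []
    dv, dw = items[d]
    return [(v, w, d + 1), (v + dv, w + dw, d + 1)]

def dominated(a, b):
    # b dominates a: at least as much value for no more weight.
    return b[0] >= a[0] and b[1] <= a[1] and a != b

def search(root, width, depth):
    incumbent, layer = root, [root]
    for _ in range(depth):
        children = [c for s in layer for c in expand(s)]            # Expand
        children = [c for c in children
                    if not any(dominated(c, o) for o in children)]  # Reduce
        children = [c for c in children if c[1] <= capacity]        # feasibility
        layer = sorted(children, reverse=True)[:width]              # Restrict
        if layer and layer[0][0] > incumbent[0]:
            incumbent = layer[0]  # best feasible state so far
    return incumbent

print(search((0, 0, 0), width, depth=len(items)))  # (425, 45, 3)
```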