
Deployment

Overview of deploying your custom app.

This page details pre-built runners available with Nextmv and how to use them for deployment.

Currently, we offer several pre-built runner options:

  • CLI Runner
  • Tests Runner
  • Lambda Runner
  • HTTP Runner

All runners are configurable through environment variables or command-line flags.

We are always growing our toolkit based on customer needs. If you don't see your deployment pattern, just ask us.

CLI runner

The simplest, and possibly most useful, runner is the command line interface (CLI) runner. It reads input data from standard input and writes results to standard output.

To use the CLI runner, you need a main function that calls cli.Run. cli.Run takes a function which receives the decoded input and Hop solver or Dash simulation options, and returns a solver or simulator along with any applicable error.

For a Hop model, we assume an input struct is defined, along with a state function that creates a root state for the minimization solver.

package main

import (
    "github.com/nextmv-io/code/hop/run/cli"
    "github.com/nextmv-io/code/hop/solve"
)

func main() {
    cli.Run(
        func(in input, opt solve.Options) (solve.Solver, error) {
            return solve.Minimize(state(in), opt), nil
        },
    )
}

Once we go build our decision model, we end up with a single binary artifact. That artifact encapsulates our decision as an atomic unit, which we can easily plug into virtually any software stack.

Let's say our binary is called decide. We can ask it to make decisions by piping in JSON data. It helpfully writes JSON back to standard output once it finds feasible states.

cat input.json | ./decide | jq

Alternatively, we can specify an input and/or output file via the -hop.runner.input.path and -hop.runner.output.path flags:

./decide -hop.runner.input.path input.json -hop.runner.output.path output.json

In Dash, we assume the input is a slice of structs that we can add to the simulation as actors. See the Dash examples for the full source.

package main

import (
    "time"

    "github.com/nextmv-io/code/dash/run/cli"
    "github.com/nextmv-io/code/dash/sim"
)

func main() {
    cli.Run(
        func(input []*a, opt sim.Options) (sim.Simulator, error) {
            simulator := sim.New(opt)

            // now is the reference time for relative actor start offsets.
            now := time.Now()
            for _, a := range input {
                start := now.Add(time.Duration(a.start) * time.Minute)
                simulator.Add(start, a)
            }

            return simulator, nil
        },
    )
}

Once we go build our simulator, we end up with a single binary artifact. That artifact encapsulates our simulator as an atomic unit, which we can use in a similar manner to a Hop model.

Let's say our binary is called simulate. We can feed it actor data via standard input, and it will output the simulation data in JSON format to standard output.

cat input.json | ./simulate | jq

As with Hop, we can opt to specify an input and/or output file via the -dash.runner.input.path and -dash.runner.output.path flags:

./simulate -dash.runner.input.path input.json \
           -dash.runner.output.path output.json

CLI runner options

The CLI runner supports both command line flags and environment variables. The former override the latter if both are provided, which is useful when testing the effect different parameters have on output, for example. An environment variable like HOP_RUNNER_OUTPUT_QUIET has the associated flag -hop.runner.output.quiet.

The following variables and flags are specific to the CLI runner. Streaming output prints new improving solutions to standard output as they are discovered. Setting a CPU profile location runs a model with CPU profiling enabled and writes out a profile file.

Environment Variable        Default
HOP_RUNNER_OUTPUT_STREAM    null
HOP_RUNNER_PROFILE_CPU      null

The CLI runner provides help when passed a -h flag.

HTTP Runner

The Hop HTTP runner reads input data from HTTP POST requests. It can be configured to use TLS for security. It accepts both environment variables and command line options.

package main

import (
    "github.com/nextmv-io/code/hop/run/http"
    "github.com/nextmv-io/code/hop/solve"
)

func main() {
    http.Run(
        func(in input, opt solve.Options) (solve.Solver, error) {
            return solve.Minimize(state(in), opt), nil
        },
    )
}

HTTP Runner Options

The following environment variables and command line flags are specific to the HTTP runner.

Environment Variable           Default
HOP_RUNNER_HTTP_ADDRESS        null
HOP_RUNNER_HTTP_CERTIFICATE    null
HOP_RUNNER_HTTP_KEY            null

The HTTP runner provides help when passed a -h flag.

AWS Lambda Runner

Running models and simulations in serverless environments is supported through the AWS Lambda runner. This runner only accepts configuration through environment variables set on a Lambda function.

In Hop:

package main

import (
    "github.com/nextmv-io/code/extend/hop/run/aws/lambda"
    "github.com/nextmv-io/code/hop/solve"
)

func main() {
    lambda.Run(
        func(in input, opt solve.Options) (solve.Solver, error) {
            return solve.Minimize(state(in), opt), nil
        },
    )
}

and in Dash:

package main

import (
    "time"

    "github.com/nextmv-io/code/extend/dash/run/aws/lambda"
    "github.com/nextmv-io/code/dash/sim"
)

func main() {
    lambda.Run(
        func(input []*a, opt sim.Options) (sim.Simulator, error) {
            simulator := sim.New(opt)

            // now is the reference time for relative actor start offsets.
            now := time.Now()
            for _, a := range input {
                start := now.Add(time.Duration(a.start) * time.Minute)
                simulator.Add(start, a)
            }

            return simulator, nil
        },
    )
}

To deploy a Hop model or Dash simulator to Lambda through the AWS console, build the model and then zip it. Upload that zip file into a Lambda function using the Go 1.x runtime, with the handler set to the name of the binary. Make sure you cross-compile so the binary runs in Lambda's Linux environment regardless of the machine it is built on:

GOARCH=amd64 GOOS=linux go build
zip [model name].zip [model name]

AWS Lambda Runner and S3 Trigger

To trigger a Lambda Hop model with S3 events, switch the runner to s3:

package main

import (
    "github.com/nextmv-io/code/extend/hop/run/aws/lambda/s3"
    "github.com/nextmv-io/code/hop/solve"
)

func main() {
    s3.Run(
        func(in input, opt solve.Options) (solve.Solver, error) {
            return solve.Minimize(state(in), opt), nil
        },
    )
}

In the AWS Console, create two S3 buckets: one for input files and one for output files. Bucket names must be globally unique, so we recommend a format such as [model name]-[model version]-[input|output]. Create these buckets in the same AWS region as your Lambda function. All other settings can be left at their defaults for now.

Navigate to the Lambda AWS service and create a new Lambda function. Set the runtime to Go 1.x and toggle "Create a new role with basic Lambda permissions". Once the Lambda function is created, we recommend increasing its memory to the maximum value to get consistent behavior when running models. To increase memory, you can follow the steps described here.

After the Lambda has been created, we will continue to configure it.

  1. First, use Function Code > Actions > Upload a .zip file to upload your [model name].zip.

  2. Add the S3 input bucket as a trigger in the "Designer" section. Make sure that the Event type is set to "All object create events".

  3. In the "Basic settings" menu set the Handler to match the [model name]. We also recommend setting the memory and runtime limits to 3000 MB and 5 minutes. These values can be updated as needed later on.

  4. We recommend setting runtime limits on your model while it is in production and limiting the size of the model response; you can read more about this in deployment best practices. The Lambda function requires the following environment variables to be set:

    • HOP_RUNNER_INPUT_PATH = [model name]-[model version]-input
    • HOP_RUNNER_OUTPUT_PATH = [model name]-[model version]-output
    • While verifying that your Lambda deployment works as intended, we encourage you to set HOP_SOLVER_LIMITS_SOLUTIONS = 1. In production, however, we advise using HOP_SOLVER_LIMITS_DURATION instead.
  5. Our last step is to grant the Lambda function's execution role permission to read from and write to the corresponding S3 buckets. You can do this by clicking on the role under Permissions, which takes you to the IAM console. Select the policy, then add the following to the Statement array in the policy JSON:

        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::model-input/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::model-output/*"
        }
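For orientation, a complete policy document with those statements in place might look like the following. The bucket names model-input and model-output are placeholders from the snippet above; substitute your own bucket names, and keep any statements the role's policy already contains:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::model-input/*"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::model-output/*"
        }
    ]
}
```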

Upload a JSON input file (e.g. foo.json) into the input bucket.

The Lambda function should write a corresponding foo.json file to the output bucket. Errors and other messages, including start and end times, are written to CloudWatch.

Test Runner

The hop/run/tests package provides a CLI runner which, given a directory of input files and a directory of output fixture files, runs once per input/fixture pair. It compares the model output for each input file with the corresponding output fixture, printing a human-readable diff when they do not match. If the fixture data is a subset of the model output for the given input file, ok is printed and the runner moves on to the next pair. This continues until all pairs are processed or a comparison fails.

It is important to make sure when using output fixtures that the same runner and solver configuration is used as when creating them. For example, if a test fixture was created with HOP_RUNNER_OUTPUT_SOLUTIONS set to all, and the test runner is run with the same variable set to last, the run will fail.

A basic test runner is shown below. We assume the example code is in a file advent/main.go.

package main

import (
    "maze.of.twisty.passages/xyzzy/plugh"
    "github.com/nextmv-io/code/hop/run/tests"
    "github.com/nextmv-io/code/hop/solve"
)

func main() {
    tests.Run(
        func(in plugh.Input, opt solve.Options) (solve.Solver, error) {
            root := plugh.New(in)
            return solve.Minimize(root, opt), nil
        },
    )
}

Models are built into binaries which read input from files in the input directory at the path defined by -hop.runner.input.path and compare their output to fixture files at the path defined by -hop.runner.output.path.

$ cd advent
...
$ go build
...

This creates a single binary, advent, which can be called as follows:

$ ./advent -hop.runner.input.path tests/input \
       -hop.runner.output.path tests/output

The test runner will run once for each file in tests/input and compare the resulting model output with the output fixture file in tests/output with the same name. For example, if tests/input contains three files

  1. tests/input/test1.json
  2. tests/input/test2.json
  3. tests/input/test3.json

the test runner will compare its model output for each input file to the fixture file which matches its name. Therefore,

  1. tests/input/test1.json output is compared to tests/output/test1.json
  2. tests/input/test2.json output is compared to tests/output/test2.json
  3. tests/input/test3.json output is compared to tests/output/test3.json

If all model outputs match their corresponding fixture files for the provided input values, the runner exits with status code 0. Otherwise, a diff message is displayed and the runner exits with status code 1.

Runner and solver options are provided through environment variables and command-line flags.

Cross-compile binaries for runners

Models are typically built as binary artifacts, which means a model may not be portable across architectures.

We can use Go's cross-compilation support to build models and simulators that run on other architectures without hassle. The command below, for example, builds a binary and zips it. The result can be deployed directly to Lambda, irrespective of the machine it is built on.

GOARCH=amd64 GOOS=linux go build
zip [model name].zip [model name]
