Experiments

Technical reference for experimentation through the Nextmv CLI.

Experiments are a beta feature and are not intended for production use. To try out experiments with Nextmv, you must have a Nextmv account and be signed up for a production trial or paid account. For questions, please contact Nextmv support.

Experiments test the performance of your models by running one or more models and comparing the results. Experimentation is a key part of developing a good solver, and the Nextmv CLI provides a suite of commands to help you manage your experiments. This initial set of commands focuses on creating input sets for experiments and on creating and running batch experiments, which compare the results of running an input set against a number of application instances (when feasible). Experiment results are then aggregated and returned to the user.

The experimentation functionality builds on top of applications and is always associated with an app. You can create an app using the nextmv app create command. For more information about applications, see the how-to guide.

One popular use case is to test a new instance of an app against the current production app on a standardized set of inputs.

The main command for experimentation is nextmv experiment. You can use the --help flag to get a list of available commands.
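For example, to see the available experimentation subcommands and their flags:

```shell
# List subcommands and flags for the experiment command suite
nextmv experiment --help
```
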

From the CLI (or console) you can create input sets and start new experiments. Experiments are run on Nextmv Cloud and accessible via the Nextmv console.

Available Commands

| Command | Description |
| --- | --- |
| batch | A subsuite of commands to start batch experiments |
| input-set | A subsuite of commands to manage input sets |

Batch

Batch experiments are used to run a set of inputs against a number of application instances on Nextmv Cloud. The results are aggregated and made available to the user in the Nextmv console.

| Command | Description | Flags |
| --- | --- | --- |
| start | Start a batch experiment run. | --app-id, --experiment-id, --description, --name, --input-set-id, --instance-ids |
| result | Get the result of a batch experiment run. | --app-id, --experiment-id, --output |

start

```shell
nextmv experiment batch start [flags]
```
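A typical invocation might look like the following. The app, experiment, input set, and instance IDs are hypothetical placeholders, and the exact format for passing multiple values to --instance-ids is an assumption; check the command's --help output:

```shell
# Start a batch experiment comparing two instances on an input set
# (all IDs below are placeholder values)
nextmv experiment batch start \
  --app-id routing-app \
  --experiment-id latest-vs-prod \
  --name "Latest vs. production" \
  --description "Compare the latest instance against production" \
  --input-set-id standard-inputs \
  --instance-ids latest,production
```
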

result

```shell
nextmv experiment batch result [flags]
```
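Once the experiment finishes, its aggregated result can be fetched with a call along these lines. The IDs are placeholders, and writing the result to a file via --output is an assumption about that flag's behavior:

```shell
# Fetch the result of a batch experiment run (placeholder IDs)
nextmv experiment batch result \
  --app-id routing-app \
  --experiment-id latest-vs-prod \
  --output result.json
```
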

Input Set

Input sets define named collections of inputs for a batch experiment that can be reused across experiments.

| Command | Description | Flags |
| --- | --- | --- |
| create | Create an input set from historic runs. | --app-id, --name, --description, --input-set-id, --instance-id, --start-time, --end-time, --run-ids, --limit |
| list | List input sets. | --app-id |
| get | Show an input set. | --app-id, --input-set-id |
| update | Update an input set. | --app-id, --input-set-id, --name, --description |

create

```shell
nextmv experiment input-set create [flags]
```
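For example, an input set could be assembled from a window of historic production runs. All values below are placeholders, and the RFC 3339 timestamp format for --start-time and --end-time is an assumption:

```shell
# Create an input set from historic runs of one instance
# (IDs, times, and limit are placeholder values)
nextmv experiment input-set create \
  --app-id routing-app \
  --input-set-id standard-inputs \
  --name "Standard inputs" \
  --description "Production runs from the first week of January" \
  --instance-id production \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-08T00:00:00Z \
  --limit 20
```
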

list

```shell
nextmv experiment input-set list [flags]
```
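For example, to list all input sets for an app (the app ID is a placeholder):

```shell
# List all input sets belonging to an app
nextmv experiment input-set list --app-id routing-app
```
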

get

```shell
nextmv experiment input-set get [flags]
```
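For example, to show the details of a single input set (both IDs are placeholders):

```shell
# Show one input set by its ID
nextmv experiment input-set get \
  --app-id routing-app \
  --input-set-id standard-inputs
```
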

update

```shell
nextmv experiment input-set update [flags]
```
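For example, an input set's name or description can be changed after creation. The IDs and new name are placeholder values:

```shell
# Rename an existing input set (placeholder values)
nextmv experiment input-set update \
  --app-id routing-app \
  --input-set-id standard-inputs \
  --name "Standard inputs (renamed)"
```
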

Review the Results

After running an experiment from the CLI, navigate to the Nextmv console to view the results of your experiment comparing the models. Note that once you navigate to your experiment, you may need to refresh the page to view results.

Within the Nextmv console, you'll find your experiment under the Experiments section. The results of each batch experiment are broken out into the following metric comparisons: Value, Total Run Duration, and Elapsed, as well as any custom metrics, if defined. These metrics correspond to the statistics section of your output.json files and are used to compare the performance of the models.
