It is possible to execute runs outside of the Nextmv Cloud environment and just send the results (input and output) back to the platform. This can be useful if you want to do production runs on your own infrastructure while still using the Nextmv Cloud platform for monitoring and collaboration on non-production models.
The general principle is that you can send a result as part of a run request to the Nextmv Cloud API. This result can be a success or a failure, and it can include the output of the run or an error message.
External runs can further be attached to an existing batch experiment using the `batch_experiment_id` field. They can also trigger shadow tests, but do not work with switchback tests.
Using the Python SDK
The Python SDK has a built-in function for tracking runs that takes care of the file uploading prerequisites for you. See the Python SDK docs on tracking runs for how to use these built-in methods.
Using the HTTP API
You can track runs using the API directly. It’s a five-step process to track a run:
- Create a unique upload ID and URL to use for your run’s input file.
- Upload your input to this upload ID.
- Create a unique upload ID and URL for your output file.
- Upload your output to this upload ID.
- Add your external run using the two upload IDs in place of your input and output.
Each of these steps is described below.
As a prerequisite, you will have to know how to make authenticated requests to the Nextmv Cloud API. You can find more information on how to authenticate your requests in the Cloud API section.
1. Get unique upload ID and URL for your input
Note that the returned upload ID and URL are valid only for 15 minutes. However, you can always request new ones if needed.
Use the `/runs/uploadurl` endpoint to request a presigned URL to upload your input file to. This returns a unique upload ID and URL that can be used for uploading. Again, note that the URL and ID are only valid for 15 minutes.
Retrieve a unique upload URL and ID for uploading files.
In the cURL example below, note the two placeholders for your app ID and API key. The extra commands in the code snippet are for convenience so you can just copy and paste the snippets in each step (outside of adding your app ID and API key as mentioned prior).
Also note the use of the `jq` utility; if you do not have `jq` installed, you can copy just the first part of the snippet (the `response` request) and read the upload ID and URL from the printed response.
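A sketch of this request is shown below. The exact path prefix (`v1/applications/...`) and the response field names (`upload_id`, `upload_url`) should be verified against the Cloud API reference.

```bash
# Request a presigned upload URL and ID for the input file.
response=$(curl -sS -X POST \
  "https://api.cloud.nextmv.io/v1/applications/<YOUR APP ID HERE>/runs/uploadurl" \
  -H "Authorization: Bearer <YOUR API KEY HERE>")
echo "$response"

# Convenience: capture the upload ID and URL for the following steps.
upload_id=$(echo "$response" | jq -r '.upload_id')
upload_url=$(echo "$response" | jq -r '.upload_url')
```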
2. Upload your input file
You can use whatever method you would like to upload your input file to the upload URL returned from step 1. An example using `curl` is given below.
Note the placeholder for pointing to your input file. For example, if you are running the command from the directory where your input file lives and that file is named `run-input.json`, you would replace `<PATH TO YOUR INPUT FILE HERE>` with `./run-input.json`.
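A minimal sketch, assuming the presigned URL accepts a plain HTTP PUT of the file body (which is what `curl --upload-file` performs):

```bash
# Upload the input file to the presigned URL from step 1.
curl -sS --upload-file <PATH TO YOUR INPUT FILE HERE> "$upload_url"
```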
Once you run that command (a successful response returns nothing), your input file will be uploaded and ready for reference via the unique ID from step 1. In steps 3 and 4 below, you will just repeat steps 1 and 2, but for your run’s output file rather than the input file.
3. Get unique upload ID and URL for your output
Repeat the call from step 1 to the `/runs/uploadurl` endpoint to get a unique upload ID and URL to use for your run's output.
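For example, reusing the request from step 1 but storing the results in separate variables so both IDs are available in step 5:

```bash
# Same request as step 1; keep the output file's ID and URL separate.
response=$(curl -sS -X POST \
  "https://api.cloud.nextmv.io/v1/applications/<YOUR APP ID HERE>/runs/uploadurl" \
  -H "Authorization: Bearer <YOUR API KEY HERE>")
output_upload_id=$(echo "$response" | jq -r '.upload_id')
output_upload_url=$(echo "$response" | jq -r '.upload_url')
```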
4. Upload your output file
Repeat the process in step 2 to upload your output file.
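Continuing the sketch from the previous steps:

```bash
# Upload the output file to the presigned URL from step 3.
curl -sS --upload-file <PATH TO YOUR OUTPUT FILE HERE> "$output_upload_url"
```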
5. Track your external run
At this point you have uploaded your run’s input and output files and have a unique ID for both. Now, you just need to record a new run in the system using these unique IDs.
To track an external run, you add the run using the `/runs` endpoint like any other run, but in this case you assign not only an input to the run, but an output as well. In addition, you mark the run's status and, optionally, its duration, since the platform does not actually execute the run.
Create a new application run.
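A sketch of this request is shown below. The `upload_id` property and the `failed` status value appear elsewhere in this guide; the `output_upload_id` and `duration` field names, the `succeeded` status value, and the duration unit are assumptions, so check the Cloud API reference for the exact schema.

```bash
# Record the external run, referencing both upload IDs.
# output_upload_id, duration (assumed to be milliseconds), and the
# "succeeded" status value are assumptions; verify against the API reference.
curl -sS -X POST \
  "https://api.cloud.nextmv.io/v1/applications/<YOUR APP ID HERE>/runs" \
  -H "Authorization: Bearer <YOUR API KEY HERE>" \
  -H "Content-Type: application/json" \
  -d "{
    \"upload_id\": \"$upload_id\",
    \"output_upload_id\": \"$output_upload_id\",
    \"status\": \"succeeded\",
    \"duration\": 1500
  }"
```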
Note that the `upload_id` property is for the input upload ID, even though it is not explicitly marked with the word input.
After running the curl command above, you should be able to go to your app’s run history and interact with your tracked run like any other run.
Recording a failed run
The example above recorded a successful run. To record a failed run, set the `status` to `failed` and include an error message. An example of the JSON payload for a tracked run you want to mark as failed is shown below.
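A sketch of such a payload; the `error_message` key is an assumption, so verify the exact key against the Cloud API reference.

```json
{
  "upload_id": "<YOUR INPUT UPLOAD ID HERE>",
  "status": "failed",
  "error_message": "<YOUR ERROR MESSAGE HERE>"
}
```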
Adding logs to a run
Use the `error_upload_id` property in the JSON payload to add logs to your tracked run. You will need to first upload your logs file in the same way you upload your input and output (see steps 1–4 above), then add the unique ID for the logs to the `error_upload_id` property (see the example payload below).
The uploaded logs file must be in `utf-8` format.
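A sketch of a payload for a successful run with logs attached; as above, the `output_upload_id` field name and the `succeeded` status value are assumptions.

```json
{
  "upload_id": "<YOUR INPUT UPLOAD ID HERE>",
  "output_upload_id": "<YOUR OUTPUT UPLOAD ID HERE>",
  "error_upload_id": "<YOUR LOGS UPLOAD ID HERE>",
  "status": "succeeded"
}
```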
Note that you can use the `error_upload_id` property to upload logs for successful runs as well. The “error” part of the property key is related to how apps, by convention, write logs to `stderr`.