CLI Assets upload

Describes how to use the Tensorleap CLI to upload models and code integrations to the platform

The Tensorleap CLI is used to upload codebases and models to the Tensorleap platform.

Uploading code only

To upload a code integration to the platform, run the following from the root of the repository (where leap.yaml is located):

leap code push

If leap.yaml fields are invalid, or if this is the first integration of the code and some fields are missing, the CLI interactively asks you to either choose an existing code integration on the platform to update or create a new one.

If "Build Dynamic dependencies" was selected on the settings page in the platform, the first action on code push and requirements.txt upload is the creation of a virtual environment. This might take some time on first creation, but subsequent uploads with the same requirements will reuse the existing virtual environment. Otherwise, a default environment is used.
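
The requirements.txt is a standard pip requirements file listing the packages your code needs; the package names and versions below are illustrative only:

numpy==1.26.4
pandas>=2.0
opencv-python-headless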

Next, the code is uploaded to the platform. The server then tries to run the preprocess function, fetch an input, fetch a ground truth (GT), and compute all of the metadata for the first index of the dataset.

If this runs successfully, the CLI shows:

INFO Code parsed successfully

Otherwise, the CLI prints the stack trace of the error it encountered. For a more complete log, log in to the platform and review the runs & processes menu.
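
As a rough illustration, the server-side check is equivalent to running something like the following locally. The function names below are placeholders for your own integration functions, not the Tensorleap API:

def preprocess():
    # Placeholder: return the list of samples the dataset exposes
    return [{'image': [0.0, 0.1, 0.2], 'label': 1, 'source': 'train'}]

def input_encoder(sample):
    return sample['image']

def gt_encoder(sample):
    return sample['label']

metadata_functions = {'source': lambda sample: sample['source']}

samples = preprocess()
first = samples[0]
model_input = input_encoder(first)   # the input for index 0
ground_truth = gt_encoder(first)     # the GT for index 0
metadata = {name: fn(first) for name, fn in metadata_functions.items()}
print('Code parsed successfully (local sanity check)')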

Uploading a codebase that requires a secret

To access cloud storage, or some other service that requires a secret within your code, you can use the Secret Manager. After uploading a secret to the platform, you can associate it with your current integration by running:

leap secrets set

This interactively lets you choose which secret you would like to use with this code. Once set, the secret can be used within the code by referencing:

import os
auth_secret_string = os.environ['AUTH_SECRET']

This environment variable is set automatically once the codebase is uploaded and used within the Tensorleap server. For the local integration test, we recommend setting an environment variable with the same name in your local environment for consistency.
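
For example, a minimal local setup (the secret value below is a placeholder):

import os

# Locally, set the same variable name the platform injects, either in your
# shell (export AUTH_SECRET=...) or directly in the test script:
os.environ.setdefault('AUTH_SECRET', 'local-test-secret')  # placeholder value
auth_secret_string = os.environ['AUTH_SECRET']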

Uploading a model

To upload a model to the platform, run the following from the root of the repository (where leap.yaml is located):

leap models import PATH-TO-MODEL
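
PATH-TO-MODEL points to a saved model file on disk. As a minimal, hypothetical sketch, a Keras model could be saved like this (check the platform documentation for the full list of supported model formats):

import tensorflow as tf

# Build (or load) your model, then save it to the path passed to the CLI.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.save('model.h5')  # then run: leap models import model.h5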

If leap.yaml fields are invalid, or if this is the first integration of the code and some fields are missing, the CLI interactively asks you to choose an existing project or create a new one. It also interactively asks for a model name.

The CLI then uploads the model to the Tensorleap server and runs it through the Tensorleap model analysis pipeline. At the end of a successful import of the model to the platform, it prints:

INFO Successfully imported model

If you encounter any errors during model upload, please contact the Tensorleap team.

Uploading both code and a model

To upload both a model and a codebase, run the following from the root of the repository (where leap.yaml is located):

leap projects push PATH-TO-MODEL

This first uploads your code and then uploads the model.

If both succeed and a leap_mapping.yaml is present, it also tries to create the corresponding UI connections. In that case it prints:

INFO mapping was applied successfully with no validation errors
