Quickstart using CLI

Prerequisites

Before you begin, log in, create a project, and create a dataset instance.

These steps are described on the Setup page.

Installing Leap CLI

Install the leapcli package using the following command:

pip install leapcli

Once it is installed, the leap command will be available.

Leap Init

Once the leap CLI is installed, you can initialize your project to be synced with the Tensorleap platform.

From the target project path, run the following command to initialize Tensorleap within the project:

leap init PROJECT_NAME DATASET_NAME

Replace the arguments with the corresponding values:

Args

  • PROJECT_NAME – the name of the project

  • DATASET_NAME – the name of the Dataset Instance

All the parameters are stored in the .tensorleap/config.toml file and can be changed if needed.
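As a rough sketch, the configuration file maps each argument to a key. Note that the key names below are illustrative assumptions, not a documented schema; inspect the .tensorleap/config.toml that leap init generates in your project for the actual layout:

```toml
# Illustrative sketch only - the exact keys written by `leap init`
# may differ; check your generated .tensorleap/config.toml.
projectName = "PROJECT_NAME"
datasetName = "DATASET_NAME"
```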

Leap Login

To access the platform, run the leap login command from within the initialized project path:

leap login [API_ID] [API_KEY] [ORIGIN]

Dataset Integration

For Tensorleap to read the data and feed it to the model for training or evaluation, we must provide a Dataset Script. This script defines the preprocessing function, the input and ground-truth encoders, and the metadata functions.

The script is located at .tensorleap/dataset.py.

Note that this script must be self-contained and must not use external project dependencies.

This script will later be synced with the Dataset Instance set by the DATASET_NAME property above.

More info can be found on the Dataset Script page. A sample script can be found in the MNIST Guide.

Additional examples can be found in the Tensorleap Examples repository.
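To illustrate the expected shape of such a script, here is a minimal, self-contained sketch using fabricated MNIST-like data. The function names and signatures are illustrative assumptions only; the actual binding API that Tensorleap expects is documented on the Dataset Script page:

```python
import numpy as np

# Illustrative sketch only: the real dataset script registers these functions
# through the Tensorleap SDK (see the Dataset Script page for the exact API).
# The data here is randomly generated for demonstration purposes.

def preprocess():
    # Load (or here, fabricate) the data once and return train/validation splits.
    images = np.random.rand(100, 28, 28, 1).astype(np.float32)
    labels = np.random.randint(0, 10, size=100)
    train = {"images": images[:80], "labels": labels[:80]}
    val = {"images": images[80:], "labels": labels[80:]}
    return [train, val]

def input_encoder(idx, split):
    # Return a single input sample as a numpy array.
    return split["images"][idx]

def gt_encoder(idx, split):
    # Return the ground truth as a one-hot vector.
    one_hot = np.zeros(10, dtype=np.float32)
    one_hot[split["labels"][idx]] = 1.0
    return one_hot

def metadata_label(idx, split):
    # Per-sample metadata used for filtering and analysis in the platform.
    return int(split["labels"][idx])
```

Keeping the script free of imports from the parent project (only numpy and similar standalone dependencies) is what makes it self-contained, as required above.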

Model Integration

The model integration script is located at .tensorleap/model.py. An example script:

from pathlib import Path
from myproject.model import build_model # import from the parent project

def leap_save_model(target_file_path: Path):
    # Load your model
    model = build_model()
    # Save it to the path supplied as an argument (has a .h5 suffix)
    model.save(target_file_path)

The leap_save_model function is called automatically by the CLI when pushing a model to the Tensorleap platform. Its purpose is to prepare the model and save it at the provided target_file_path.

A sample model script can be found in the MNIST Guide.

Additional examples can be found in the Tensorleap Examples repository.

Validation

You can validate the dataset and model scripts locally using the following command:

leap check --all

This command validates the scripts and their synchronization with the Tensorleap platform.

Synchronization

Once everything is validated, you can push the dataset script and model to the Tensorleap platform for evaluation, training, and analysis.

To push the dataset script, use the following command from the project path:

leap push --dataset

To push the model to the project, use the following command:

leap push --model

You can also set the branch name, description, and model name:

leap push --model [--branch-name=<BRANCH_NAME>] [--description=<DESCRIPTION>]
          [--model-name=<MODEL_NAME>]

What's Next?

You can now log in to the Tensorleap platform UI and find the imported model within the project, and the dataset within the Resources Management view.

Follow these steps to prepare the model for analysis:

  • Open up the current project (defined by the PROJECT_NAME above). More info at Open a Project.

  • In the Network view, set the Dataset Block to point to the DATASET_NAME Dataset Instance, and connect it to the first layer.

  • Add the ground truth, loss and optimizer blocks, and connect them to the end of the network. More info at Dataset Block Setup and Loss and Optimizer.

  • The model is now ready for training or evaluation.

  • After the process is done, the model is ready for analysis. More info at Analysis.
