Quickstart using CLI
Prior steps are logging in, creating a project, and creating a dataset instance. More information about these steps can be found on the Setup page.
Install leapcli using the following command:
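A minimal sketch, assuming leapcli is published as a standard Python package installable with pip:

```bash
# Install the Tensorleap CLI (package name assumed to be leapcli)
pip install leapcli
```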
Once installed successfully, the leap command will be available.
Once the leap CLI is installed, you can initialize your project to be synced with the Tensorleap platform.
In the target project path, run the following command to initialize Tensorleap within the project:
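A sketch of the init command, assuming PROJECT_NAME and DATASET_NAME are passed as positional arguments; confirm the exact syntax with leap init --help:

```bash
# Initialize Tensorleap in the current project directory
# (argument form assumed; see `leap init --help` for the exact syntax)
leap init PROJECT_NAME DATASET_NAME
```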
Replace the arguments with their corresponding values:
Args | Description
---|---
PROJECT_NAME | The name of the project
DATASET_NAME | The name of the Dataset Instance
All the parameters are stored in the .tensorleap/config.toml file and can be changed if needed.
To access the platform, we must run the leap login command within the initialized project path. The API_ID, API_KEY and the ORIGIN, along with the full command, can easily be found by clicking the button within the Resources Management view. Run it as such:
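A sketch of the login command, assuming the credentials are passed as arguments; the full command with your actual API_ID, API_KEY, and ORIGIN values is shown in the Resources Management view:

```bash
# Authenticate the CLI against your Tensorleap instance
# (argument form assumed; copy the exact command from the Resources Management view)
leap login API_ID API_KEY ORIGIN
```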
For Tensorleap to read the data and feed it to the model for training or evaluation, we must provide a Dataset Script. This script defines the preprocessing function, input/ground_truth encoders, and metadata functions.
The script should be placed in the .tensorleap/dataset.py file.
Note that this script must be self-contained and must not use external project dependencies!
This script will later be synced with the Dataset Instance set by the DATASET_NAME property above.
More info can be found at the Dataset Script page. A sample script can be found at the MNIST Guide.
Additional examples can be found at the Tensorleap Examples repository.
The model integration script is located at .tensorleap/model.py. An example script is shown below:
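A minimal sketch of what .tensorleap/model.py might look like, assuming a Keras model; only the leap_save_model entry point and its target_file_path argument come from the text, while the framework and architecture are illustrative:

```python
from tensorflow import keras


def build_model() -> keras.Model:
    # Illustrative architecture; replace with your own model definition.
    return keras.Sequential([
        keras.layers.Input(shape=(28, 28, 1)),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])


def leap_save_model(target_file_path):
    # Called automatically by the CLI when pushing a model:
    # prepare the model and store it at the provided target_file_path.
    model = build_model()
    model.save(target_file_path)
```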
The leap_save_model function is automatically called by the CLI when pushing a model to the Tensorleap platform. Its purpose is to prepare the model and store it at the provided target_file_path.
You can validate the dataset and model scripts locally by using the following command:
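A sketch, assuming the validation subcommand is leap check (confirm with leap --help):

```bash
# Validate .tensorleap/dataset.py and .tensorleap/model.py locally
leap check
```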
This command validates the scripts together with their synchronization with the Tensorleap platform.
Once everything is validated, you can push the dataset script and model to the Tensorleap platform for further evaluation/training/analysis.
To push the dataset script, use the following command from the project path:
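A hypothetical sketch; the flag for pushing only the dataset script is assumed here and should be confirmed with leap push --help:

```bash
# Push only the dataset script (flag name assumed)
leap push --dataset
```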
To push the model to the project, use the following command:
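A sketch of the model push, assuming the bare form of the command; the CLI calls leap_save_model from .tensorleap/model.py to produce the model file it uploads:

```bash
# Push the model; leap_save_model() is invoked to serialize it first
leap push
```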
You can also set the branch-name, description, and model-name, as such:
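The same push with the optional metadata, assuming the flags are spelled exactly as named above; the values are placeholders:

```bash
leap push \
  --branch-name my-branch \
  --description "Initial model pushed from the CLI" \
  --model-name my-model
```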
You can now log in to the Tensorleap platform UI, find the imported model within the project, and find the dataset within the Resources Management view.
You can follow the next steps to prepare the model for analysis:
1. Open up the current project (defined by the PROJECT_NAME above). More info at Open a Project.
2. Find the imported model on the Versions view, hover your cursor over the view and click on the right to Open Commit.
3. On the Network view, set the Dataset Block to point to the DATASET_NAME Dataset Instance, and connect it to the first layer.
4. Add the ground truth, loss and optimizer blocks, and connect them to the end of the network. More info at Dataset Block Setup and Loss and Optimizer.
5. Save the model version by clicking the button and setting the Revision Name. This adds the new version to the Versions view. More info at Save a Version.
6. The model is now ready for training or evaluation: for an already trained model, click from the top bar to run inference on the data and collect metrics; to re-train or train from scratch in the Tensorleap platform, click from the top bar to train the model. More info at Evaluate / Train Model.
7. After the process is done, the model is ready for analysis. More info at Analysis.