Quickstart using Leap Hub
How to quickly load an off-the-shelf model & dataset using Leap Hub
Please ensure all prerequisites are fulfilled before going through the Leap Hub integration.
In this section we will go over the steps needed to push an existing hub project into your local Tensorleap installation. At the end of this section, you will have a model and data integrated into the Tensorleap platform.
Choosing a hub project
To choose a hub model, we recommend visiting our GitHub space and reviewing the different repositories. Each repository is a Tensorleap integration for a given model and dataset.
If this is your first time using Tensorleap, we recommend starting with the MNIST use case. It does not require downloading any datasets, which makes for a straightforward integration.
Otherwise, we recommend browsing the hub for a repo whose data and model resemble your organization's use case.
Setting up the repo
Clone the repo to the machine on which the CLI is installed.
Create a virtual environment for the repo and install the repo's requirements, as shown below.
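As a sketch, a typical setup looks like the following; the repository URL here is a placeholder, so substitute the URL of the hub repo you chose:

```bash
# Clone the chosen hub repo (placeholder URL -- use the actual
# repository URL from the Tensorleap GitHub space)
git clone https://github.com/tensorleap/<hub-repo>.git
cd <hub-repo>

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install the repo's requirements
pip install -r requirements.txt
```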
Integration test
From within the repo, run leap_integration.py and ensure the script terminates successfully.
This file serves as a local integration test that checks the validity of the Tensorleap integration, the dataset structure, and the dataloader. Resolve any issues in running this script before moving on to the next steps, to ensure a smooth integration of the dataset and model.
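With the virtual environment active, the test is run from the repo root:

```bash
# Run the local integration test; it should finish without errors
python leap_integration.py
```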
Integrating the codebase and model into the platform
To upload your model and code, run the push command from the repository root.
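The exact syntax can vary between CLI versions; a minimal sketch, assuming the CLI exposes a leap projects push subcommand that takes the model file as an argument, might look like:

```bash
# Push the code and model to the platform from the repository root.
# The subcommand name and argument are assumptions -- verify the
# exact syntax with `leap --help` before running.
leap projects push path/to/model.onnx
```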
You will then be prompted to provide:
Project name (choose to create a new one)
Model path
Model name
A valid upload prints confirmation logs in the terminal. Once these appear, the model and dataset are integrated into the platform.
Data validation and evaluation in the platform
To review your project and run the data through the uploaded model:
Log in to your Tensorleap app
Click on the project you've just uploaded

Click the Evaluate button to open the Evaluation panel, then click Evaluate again to start the evaluation process and run inference on your data with the uploaded model.

Wait for the evaluation process to complete. To follow its progress, you can open the "Runs and Processes" panel.

Once the evaluation is complete, the integration is finished. You can now open the Dashboard tab and start analyzing the model and the dataset.
