Tensorleap Integration
Integrating your own model and dataset into the platform and reviewing the development process
This page outlines the different elements of the Tensorleap integration and reviews the integration flow and development cycle within the platform.
Prerequisites
Leap CLI is installed and authenticated
A model in .onnx or .h5 format
A valid code integration that instructs Tensorleap how to load the dataset and how to parse and visualize its different elements (a minimal sketch appears after this list)
Tensorleap configuration YAML files:
A leap.yaml file that contains the setup of your integration.
(Optional) a requirements.txt file if non-default requirements are needed
An integration test script that exercises the code Tensorleap will run during model analysis (loss, metrics, visualizers, and metadata)
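For reference, below is a minimal sketch of what such a code integration might look like for an image-classification dataset. The data-loading helper (load_data), array keys, and names ('image', 'classes', 'label') are hypothetical placeholders, and the code-loader import paths and registration calls (set_preprocess, set_input, set_ground_truth, set_metadata, add_prediction) follow Tensorleap's public examples but may differ between versions, so treat this as a sketch rather than a drop-in integration.

```python
# leap_binder.py -- minimal sketch of a Tensorleap code integration.
# NOTE: load_data() and all names ('image', 'classes', 'label') are hypothetical
# placeholders; the code-loader calls below follow Tensorleap's public examples
# and may differ by version.
from typing import List

import numpy as np
from code_loader import leap_binder
from code_loader.contract.datasetclasses import PreprocessResponse


def load_data():
    # Hypothetical helper: return (train_x, train_y, val_x, val_y) from wherever
    # your data lives (local cache, GCS/S3 bucket, etc.).
    raise NotImplementedError


def preprocess_func() -> List[PreprocessResponse]:
    # Runs once; returns one PreprocessResponse per subset (train / validation).
    train_x, train_y, val_x, val_y = load_data()
    train = PreprocessResponse(length=len(train_x), data={'images': train_x, 'labels': train_y})
    val = PreprocessResponse(length=len(val_x), data={'images': val_x, 'labels': val_y})
    return [train, val]


def input_encoder(idx: int, preprocess: PreprocessResponse) -> np.ndarray:
    # Returns a single model-ready input sample.
    return preprocess.data['images'][idx].astype('float32')


def gt_encoder(idx: int, preprocess: PreprocessResponse) -> np.ndarray:
    # Returns the ground truth for the same sample.
    return preprocess.data['labels'][idx].astype('float32')


def metadata_label(idx: int, preprocess: PreprocessResponse) -> int:
    # Per-sample metadata used for slicing and filtering in the platform.
    return int(preprocess.data['labels'][idx].argmax())


# Register everything on the binder so Tensorleap knows what code to run.
leap_binder.set_preprocess(function=preprocess_func)
leap_binder.set_input(function=input_encoder, name='image')
leap_binder.set_ground_truth(function=gt_encoder, name='classes')
leap_binder.set_metadata(function=metadata_label, name='label')
leap_binder.add_prediction(name='classes', labels=['cat', 'dog'])
```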
Expected Folder Structure
The expected file structure for a Tensorleap code integration is the following:
my_leap_project/
├── ...
├── leap_binder.py
├── leap.yaml
├── integration_test.py
Here, leap_binder.py contains the integration script.
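The integration_test.py file is a plain Python script you run locally before pushing, as a sanity check that the integration code works end to end. The sketch below assumes the hypothetical function names from the leap_binder.py sketch above (preprocess_func, input_encoder, gt_encoder); adapt it to whatever your binder actually defines.

```python
# integration_test.py -- local sanity check for the code integration, run before pushing.
# Assumes the hypothetical function names from the leap_binder.py sketch above.
import numpy as np

from leap_binder import preprocess_func, input_encoder, gt_encoder


def check_integration() -> None:
    responses = preprocess_func()
    assert len(responses) > 0, 'preprocess returned no subsets'
    for response in responses:
        # Encode the first sample of each subset and run basic sanity checks.
        x = input_encoder(0, response)
        y = gt_encoder(0, response)
        assert not np.isnan(x).any(), 'input contains NaNs'
        print(f'subset ok: {response.length} samples, input {x.shape}, gt {y.shape}')


if __name__ == '__main__':
    check_integration()
    print('integration test passed')
```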
The Tensorleap Integration Flow
Start from an existing repo on our Leap Hub, or use the CLI to add template Tensorleap integration files to your existing repository.
Make sure your data is accessible to the Tensorleap Server.
Create a basic integration script. This can include an input encoder, a GT encoder, a loss, and a basic visualizer for the input, prediction, and GT (a visualizer sketch follows this list).
Verify the validity of your integration locally using an integration test.
Use the CLI to push the code and models to the platform.
Validate the assets after they are uploaded to the server.
Use the platform to evaluate and process the data with your model.
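As an illustration of the basic visualizer mentioned in the integration-script step above, the sketch below converts a model-ready input tensor back into a displayable image. The LeapImage wrapper, LeapDataType enum, and set_visualizer signature are taken from Tensorleap's public code-loader examples and may differ between versions; the function and visualizer names are placeholders.

```python
# Sketch of a basic input visualizer, registered alongside the encoders in leap_binder.py.
# LeapImage / LeapDataType / set_visualizer follow public code-loader examples and may
# differ between versions; 'image_visualizer' is a placeholder name.
import numpy as np
from code_loader import leap_binder
from code_loader.contract.enums import LeapDataType
from code_loader.contract.visualizer_classes import LeapImage


def image_visualizer(image: np.ndarray) -> LeapImage:
    # Convert the normalized float tensor back to a displayable uint8 image.
    return LeapImage((image * 255).astype(np.uint8))


leap_binder.set_visualizer(function=image_visualizer,
                           name='image_visualizer',
                           visualizer_type=LeapDataType.Image)
```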