Core Concepts
This page describes the three core concepts that the Tensorleap Platform uses: the model, the code integration, and the mapping.
Tensorleap operates around three core building blocks:
The Model – the neural network you want to analyze
The Code Integration – custom Python functions that define how your data is loaded and processed
The Mapping – a configuration that tells Tensorleap how to wire your code and model into the platform’s graph-based analysis engine
Together, these define everything Tensorleap needs to run explainability, validation, and debugging pipelines.
This page introduces each concept, how it’s represented in your local environment, and how it appears inside the Tensorleap platform.
The Model
What is a Model in Tensorleap?
A model in Tensorleap is a trained neural network used for inference and analysis. It’s the computational backbone that transforms input data into predictions.
How the Model Is Used in the Platform
Once uploaded, the model is attached to a version of a project. The platform uses the model during evaluation processes, combining it with your integration code and mapping to compute predictions, metrics, and insights.
Tensorleap supports models in .onnx or .h5 format.
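Models trained in common frameworks can be exported to one of these formats before upload. As a quick illustration, the sketch below exports a PyTorch model to ONNX; the architecture and input shape are placeholders for your own model.

```python
# Export a trained PyTorch model to ONNX so it can be uploaded to Tensorleap.
# The architecture and input shape below are placeholders for your own model.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in for your trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one example-shaped input for tracing
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
```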
How the Model Connects to the Other Components
The model depends on your integration script, which defines how its inputs and expected outputs are prepared. The mapping then connects these code interfaces to nodes in the platform's analysis graph, enabling rich visualization and debugging tools.
The Code Integration
What Is the Code Integration?
The code integration is a collection of Python scripts (or a complete repo) that defines how your dataset is processed and interpreted. It’s similar in purpose to a PyTorch Dataset: it loads and prepares data for your model, but also supports additional functionality like visualizations and custom metrics. Each component is defined using decorators such as @tensorleap_input_encoder or @tensorleap_custom_visualizer.
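As a minimal sketch, an integration script might define a preprocess function and an input encoder like the ones below. The import paths and exact decorator signatures are assumptions based on the Tensorleap code-loader SDK and may differ between SDK versions; the random arrays stand in for a real dataset.

```python
# Minimal sketch of an integration script. Import paths and decorator signatures
# are assumptions based on the Tensorleap code-loader SDK and may differ by version;
# the random arrays below stand in for a real dataset.
from typing import List

import numpy as np
from code_loader.contract.datasetclasses import PreprocessResponse
from code_loader.inner_leap_binder.leapbinder_decorators import (
    tensorleap_preprocess,
    tensorleap_input_encoder,
)


@tensorleap_preprocess()
def preprocess_func() -> List[PreprocessResponse]:
    # Load (or point to) your dataset splits; one response per split.
    train = np.random.rand(100, 28, 28, 1).astype(np.float32)
    val = np.random.rand(20, 28, 28, 1).astype(np.float32)
    return [
        PreprocessResponse(length=len(train), data={"images": train}),
        PreprocessResponse(length=len(val), data={"images": val}),
    ]


@tensorleap_input_encoder("image")
def input_encoder(idx: int, sample: PreprocessResponse) -> np.ndarray:
    # Return a single, model-ready sample for the given index.
    return sample.data["images"][idx]
```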
How the Code Integration Is Used in the Platform
Your script is uploaded to the platform and executed during evaluation processes. Tensorleap calls the functions you've defined (e.g., preprocess, encoders, visualizers) to transform and interpret data throughout the analysis graph.
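For example, a custom visualizer is one such function the platform can call while building the analysis graph. The sketch below is illustrative only: the LeapImage container, the LeapDataType enum, and the decorator's exact signature are assumptions based on the code-loader SDK and should be verified against your installed version.

```python
# Sketch of a custom visualizer the platform can call during analysis.
# LeapImage, LeapDataType, and the decorator signature are assumptions based on
# the Tensorleap code-loader SDK; verify them against your installed version.
import numpy as np
from code_loader.contract.enums import LeapDataType
from code_loader.contract.visualizer_classes import LeapImage
from code_loader.inner_leap_binder.leapbinder_decorators import tensorleap_custom_visualizer


@tensorleap_custom_visualizer("image_visualizer", LeapDataType.Image)
def image_visualizer(image: np.ndarray) -> LeapImage:
    # Scale a float image in [0, 1] to displayable 8-bit pixels.
    return LeapImage((np.clip(image, 0.0, 1.0) * 255).astype(np.uint8))
```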
How the Code Integration Connects to the Other Components
The code integration acts as a bridge between your raw dataset and model. The mapping tells the platform how each function (e.g., your input encoder) connects to specific nodes in the platform’s graph, enabling explainability and evaluation.
The Mapping
What Is the Mapping?
The mapping defines how the interfaces in your integration code connect to your model. For example, if you define a visualizer using @tensorleap_custom_visualizer, the mapping tells the platform whether it should visualize the input, output, or ground truth.
This configuration is written in the leap_mapping.yaml file.
How the Mapping Is Used in the Platform
When a process runs in the platform, the mapping tells Tensorleap how to connect your code functions to the model’s input and output layers. This ensures that the platform executes your integration correctly and interprets its outputs.
How the Mapping Connects to the Other Components
The mapping links your integration script to your model, specifying how each decorated function relates to the model’s computation. It makes your integration usable by the platform.