Core Concepts

This page describes the core concepts that the Tensorleap Platform utilizes: the model, the integration code, and the integration test.

Tensorleap operates around two core building blocks:

  • Model – the neural network you want to analyze

  • Code – which contains two parts:

    • The integration code: Custom Python functions that load your data and define how the data is processed (metrics, visualizations, etc.)

    • The integration test: a Python inference script that defines how your integration code connects to your model. It is also used to test your code locally before uploading it to the platform

Together, the model-code pair defines everything Tensorleap needs to run explainability, validation, and debugging pipelines.

This page introduces each concept, how it’s represented in your local environment, and how it appears inside the Tensorleap platform.

Model

What is a Model in Tensorleap?

A model in Tensorleap is a trained neural network used for inference and analysis. It’s the computational backbone that transforms input data into predictions.

How the Model Is Used in the Platform

Once uploaded, the model is saved as a version. The platform uses the model during evaluation processes, combining it with your integration code as defined in the integration test to compute predictions, metrics, and insights.

Tensorleap supports models in .onnx or .h5 format.

How the Model Connects to the Other Components

The model is powered by your integration script, which defines how inputs and outputs are prepared. The integration test then instructs the platform on how to connect these code interfaces to nodes in the platform's analysis graph, enabling rich visualization and debugging tools.

Code

The Integration Code

What Is the Integration Code in Tensorleap?

The integration code is a collection of Python scripts (or a complete repo) that defines how your dataset is processed and interpreted. It’s similar in purpose to a PyTorch Dataset: it loads and prepares data for your model, but also supports additional functionality like visualizations and custom metrics. Each component is defined using decorators such as @tensorleap_input_encoder or @tensorleap_custom_visualizer.
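As a rough sketch, an integration script registers each component with a decorator. The decorator names @tensorleap_input_encoder and @tensorleap_custom_visualizer come from the description above, but the stand-in decorator implementation, the toy dataset, and the function signatures below are illustrative assumptions, not the real Tensorleap SDK:

```python
# Illustrative sketch only: stand-ins for the Tensorleap decorators.
# In a real project these would be imported from the Tensorleap package;
# here they are faked so the example is self-contained.
def _registry_decorator(registry):
    def register(name):
        def wrap(fn):
            registry[name] = fn  # record the function under its node name
            return fn
        return wrap
    return register

INPUT_ENCODERS, VISUALIZERS = {}, {}
tensorleap_input_encoder = _registry_decorator(INPUT_ENCODERS)
tensorleap_custom_visualizer = _registry_decorator(VISUALIZERS)

def preprocess():
    """Load the dataset; returns a list of samples (hypothetical toy data)."""
    return [{"pixels": [0.1, 0.5, 0.9], "label": 1},
            {"pixels": [0.3, 0.2, 0.8], "label": 0}]

@tensorleap_input_encoder("image")
def input_encoder(idx, samples):
    """Turn one sample into the model's input tensor (here, a plain list)."""
    return samples[idx]["pixels"]

@tensorleap_custom_visualizer("image_viz")
def image_visualizer(tensor):
    """Map a tensor to something human-viewable (here, a formatted string)."""
    return "pixels: " + ", ".join(f"{v:.1f}" for v in tensor)

samples = preprocess()
print(image_visualizer(INPUT_ENCODERS["image"](0, samples)))
```

The point of the decorator pattern is that the platform never imports your functions by name: it discovers them through the registry, then calls them at the graph nodes specified by the integration test.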

How the Integration Code Is Used in the Platform

Your script is uploaded to the platform and executed during evaluation processes. Tensorleap calls the functions you've defined (e.g., preprocess, encoders, visualizers) to transform and interpret data throughout the analysis graph.

How the Integration Code Connects to the Other Components

The integration code acts as a bridge between your raw dataset and your model. The integration test tells the platform how each function (e.g., your input encoder) connects to specific nodes in the platform’s graph, enabling explainability and evaluation.

The Integration Test

What Is the Integration Test?

The integration test serves two purposes:

(1) It provides a quick way to test locally whether your integration with Tensorleap works.

(2) It defines how the interfaces in your integration code connect to your model. For example, if you define an image visualizer using @tensorleap_custom_visualizer, the integration test tells the platform whether it should visualize the input, the output, or the ground truth.
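Both purposes can be sketched in a few lines. Everything below is a hypothetical illustration, not Tensorleap's actual API: the MAPPING dict stands in for the real connection syntax, and dummy_model stands in for your uploaded .onnx/.h5 network:

```python
# Illustrative sketch only: what an integration test conceptually does.

def dummy_model(pixels):
    """Stand-in for the real network: score = mean of the input values."""
    return sum(pixels) / len(pixels)

def input_encoder(sample):
    return sample["pixels"]

def gt_encoder(sample):
    return sample["label"]

# (1) Declare how the code interfaces connect to the model / analysis graph.
MAPPING = {
    "model_input": input_encoder,       # feeds the model's input layer
    "ground_truth": gt_encoder,         # compared against the model's output
    "image_viz_source": "model_input",  # the visualizer shows the input node
}

# (2) Run a quick local inference to verify the wiring before uploading.
sample = {"pixels": [0.1, 0.5, 0.9], "label": 1}
prediction = dummy_model(MAPPING["model_input"](sample))
assert 0.0 <= prediction <= 1.0, "model output out of expected range"
print(f"prediction={prediction:.2f}, ground truth={MAPPING['ground_truth'](sample)}")
```

Running the script locally catches wiring mistakes (wrong tensor shape, encoder attached to the wrong node) before anything is uploaded, which is exactly the quick-test purpose described above.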

How the Integration Test Is Used in the Platform

When a process runs in the platform, the integration test tells Tensorleap how to connect your code functions to the model’s input and output layers. This ensures that the platform executes your integration correctly and interprets its outputs.

How the Integration Test Connects to the Other Components

The integration test links your integration script to your model, specifying how each decorated function relates to the model’s computation. It is what makes your integration usable by the platform.

For more information on the integration test’s structure, syntax, and goals, see this guide.
