Model Integration
In this section, we will set up our classification model by either importing or building it.
The project name, `IMDB`, is set with the `leap init` command, as discussed in Dataset Integration above. The project page contains the Versions, Network and Dashboard views. To toggle them, click the corresponding buttons at the top.

We'll start by pointing the model's Dataset Block to our `imdb` dataset, which updates the block with the relevant input. In the Network view, click the Dataset Block to display the Dataset Details panel on the right, then click Connect Dataset and select the `imdb` dataset from the list.

Importing a Model
Tensorleap can import models saved by TensorFlow (PB/H5/JSON_TF2) and PyTorch (ONNX). More information can be found at Import Model.
In this sample code, a simple dense model is created using TensorFlow and then saved as a file named `imdb-dense.h5`:

```python
import tensorflow as tf

MAX_FEATURES = 10000
SEQUENCE_LENGTH = 250

vectorized_inputs = tf.keras.Input(shape=(SEQUENCE_LENGTH,), dtype="int64")
x = tf.keras.layers.Embedding(MAX_FEATURES + 1, SEQUENCE_LENGTH)(vectorized_inputs)
x = tf.keras.layers.Dropout(0.2)(x)
x = tf.keras.layers.Dense(28, activation='relu')(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
output = tf.keras.layers.Dense(2, activation='softmax')(x)

model = tf.keras.Model(inputs=vectorized_inputs, outputs=output)
model.save('imdb-dense.h5')
```
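The model's input is a fixed-length vector of `SEQUENCE_LENGTH` token ids. As a rough illustration of the padding/truncation a text-vectorization step applies before data reaches this input (plain Python; the `pad_or_truncate` helper is purely illustrative — the actual preprocessing lives in the dataset script):

```python
SEQUENCE_LENGTH = 250  # must match the model's input length

def pad_or_truncate(token_ids, length=SEQUENCE_LENGTH, pad_id=0):
    """Clip long sequences and right-pad short ones with pad_id."""
    clipped = token_ids[:length]
    return clipped + [pad_id] * (length - len(clipped))

short = pad_or_truncate([5, 12, 7])          # padded up to 250 entries
long_seq = pad_or_truncate(list(range(300))) # clipped down to 250 entries
print(len(short), len(long_seq))
```

Either way, every sample arrives at the `Input` layer with the same shape, which is what lets the graph compute static output shapes for every block.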
For your convenience, you can also download this file here: `imdb-dense.h5` (10 MB, binary).
1. In the Network view, click the Import Model button to open the Import Model panel.
2. Set the Revision Name to `imdb-dense` and the Model Name to `pre-trained`.
3. For File Type, use `H5_TF2` and select `imdb-dense.h5` in the Upload File field.
4. Click the Import button.
5. Once completed, the imported model, `imdb-dense`, is added to the Versions view. Position your cursor over that version and click Open Commit.
6. Back in the Network view, set the Dataset Block to point to the `imdb` Dataset Instance, and connect it to the first layer.

Importing a Model and Dataset Block Setup
Building a Model

In this section, we will add layers, update their properties, and connect them together to form the model.
The model that we will build is based on a small dense neural network described below:
| Layer | Properties |
| --- | --- |
| Embedding | input_dim=10001, output_dim=250 |
| Dropout | rate=0.2 |
| Dense | units=28, activation="relu" |
| GlobalAveragePooling1D | |
| Dropout | rate=0.2 |
| Dense | units=2, activation="softmax" |
1. Right-click the canvas and add each layer listed in the table above.
2. Update the corresponding properties in the Layer Properties view on the right. More info at Layer Properties.
3. Connect the Dataset to the first layer, and then connect all the layers in order. More info at Connections.
All steps are illustrated below.
Add Layers and Connect Them
CLI

Copy the `imdb_dense.py` file to the `imdb` folder we created in Dataset Integration. This file defines the dense model, and can be downloaded here: `imdb_dense.py` (580 B, text).
Edit the `.tensorleap/model.py` file and point it to the model defined in `imdb_dense.py`:

```python
from pathlib import Path
from imdb_dense import build_model

def leap_save_model(target_file_path: Path):
    # Load your model
    model = build_model()
    # Save it to the path supplied as an argument (has a .h5 suffix)
    model.save(target_file_path)
```
To validate the correctness of our code, run the following command:

```shell
leap check --all
```

If done correctly, there should be no errors, and we can move forward and push the model using the following command:

```shell
leap push --model --description=imdb-dense --model-name=pre-trained
```
Once completed, open the UI, where you should see the imported model, `imdb-dense`, in the Versions view.

- Position your cursor over that version and click Open Commit on the right.
- Back in the Network view, set the Dataset Block to point to the `imdb` Dataset Instance, and connect it to the first layer.
Once all layers have been connected to each other, the model should look like this:

Connected Layers and Dataset
Each block shows the calculated output shape affected by the preceding layers.
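These shapes can also be traced by hand. A quick sketch of the shape propagation through the layers above (plain Python, illustrative only — Tensorleap computes these shapes automatically):

```python
SEQUENCE_LENGTH = 250
EMBED_DIM = 250  # the Embedding layer's output_dim

shapes = []
shape = (SEQUENCE_LENGTH,)            # Dataset block output: 250 token ids
shapes.append(("input", shape))
shape = shape + (EMBED_DIM,)          # Embedding maps each id to a 250-dim vector
shapes.append(("Embedding", shape))   # Dropout leaves the shape unchanged
shape = shape[:-1] + (28,)            # Dense(28) acts on the last axis only
shapes.append(("Dense", shape))
shape = (shape[-1],)                  # GlobalAveragePooling1D averages over the time axis
shapes.append(("GlobalAveragePooling1D", shape))
shape = (2,)                          # Dense(2) produces the two class scores
shapes.append(("Dense_out", shape))

for name, s in shapes:
    print(name, s)
```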
In this section, we will set the Categorical Crossentropy loss function and connect it to both the dataset's ground truth and the last layer in our model. We'll then add an Adam Optimizer block and connect it to the loss block. For more information, see Loss and Optimizer.
After completing this section, our model will be ready for training.
1. Right-click and add the following:
   - Loss -> CategoricalCrossentropy
   - GroundTruth, set to `Ground Truth - classes`
   - Optimizer -> Adam
2. Connect the last Dense layer and the GroundTruth to the CategoricalCrossentropy block. Additionally, connect the Loss block to the Adam optimizer.
All steps are illustrated below.

Add Loss and Optimizer
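Conceptually, the CategoricalCrossentropy block compares the model's softmax output with the one-hot ground truth. A minimal sketch of the computation it performs on a single sample (plain Python, illustrative only):

```python
import math

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Cross-entropy between a one-hot label and a probability vector."""
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

# Ground truth is class 1; the model assigns it probability 0.8
loss = categorical_crossentropy([0.0, 1.0], [0.2, 0.8])
print(round(loss, 4))  # 0.2231, i.e. -log(0.8)
```

The closer the predicted probability of the true class gets to 1, the closer the loss gets to 0, which is the quantity the Adam optimizer drives down during training.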
Visualizers define how to visualize tensors within the model graph. For more info, see Visualizers.
There must be at least one visualizer connected to the model's input for analysis. To add the Visualizer for the input, follow the steps below.
1. Right-click and choose Visualizer to add it.
2. Click the Visualizer node to open the Visualizer Details on the right.
3. Choose `text_from_token` from the Selected Visualizer list.
4. Connect the Dataset node output to the input of the Visualizer node.
Additional visualizers will be connected to the prediction and ground truth.
To visualize the model's prediction output, follow the steps below:
1. Right-click and choose Visualizer to add it.
2. Click the Visualizer node to open the Visualizer Details on the right.
3. Choose `HorizontalBar` from the Selected Visualizer list.
4. Connect the last Dense layer output to the input of the Visualizer node.
5. Repeat the steps, and connect the second visualizer to the GroundTruth node's output.
Great! Your first version is ready to be saved.

Full Model with Dataset, Layers, Loss and an Optimizer
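Conceptually, the `HorizontalBar` visualizer renders the two softmax scores as labeled bars. A toy sketch of the idea (plain Python; the `horizontal_bar` function and the negative/positive labels are illustrative assumptions — Tensorleap's built-in visualizer does this internally):

```python
def horizontal_bar(scores, labels=("negative", "positive"), width=20):
    """Render class scores as labeled ASCII bars (toy stand-in for the UI widget)."""
    lines = []
    for label, score in zip(labels, scores):
        bar = "#" * round(score * width)  # bar length proportional to the score
        lines.append(f"{label:>8} | {bar} {score:.2f}")
    return "\n".join(lines)

print(horizontal_bar([0.2, 0.8]))
```

Connected to the last Dense layer it shows the prediction; connected to the GroundTruth node it shows the one-hot label, making the two easy to compare side by side.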
Click the Save button and set the Revision Name to `dense-nn` (Dense Neural Network). This adds the new version to the Versions view. For more information, see Save a Version.
The newly-saved version appears on the Versions view
Tensorleap can import trained and untrained models. In our case, the model was created from scratch and needs to be trained.
To train the model, click the Evaluate/Train button on the top bar. Set the Batch Size to `32` and the Number of Epochs to `5`, then click Train. For more information, see Evaluate/Train Model.

Train Model Dialog
Once training begins, you can start tracking metrics in real-time.
Add a Line Dashlet, set its name to `Loss`, and turn on Split series by subset to separate the training and validation metrics.
Add a Dashboard and a Loss Dashlet
Next, add another Line Dashlet for the accuracy. Follow the previous steps: set the Dashlet Name to `Accuracy`, set the Y-Axis to `metrics.Accuracy`, and don't forget to turn on Split series by subset.
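Conceptually, Split series by subset groups each logged metric point by its subset so that training and validation appear as separate lines on the same dashlet. A toy sketch of that grouping (plain Python; the `split_by_subset` helper is illustrative, not Tensorleap's API):

```python
from collections import defaultdict

def split_by_subset(points):
    """Group (subset, step, value) metric points into one series per subset."""
    series = defaultdict(list)
    for subset, step, value in points:
        series[subset].append((step, value))
    return dict(series)

points = [("training", 1, 0.69), ("validation", 1, 0.68),
          ("training", 2, 0.55), ("validation", 2, 0.58)]
print(split_by_subset(points))
```

Without the toggle, all points would collapse into a single line, hiding any gap between training and validation performance.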
From the Dashboard view on the right, select `tl_default_metrics` from the list at the top. This opens the Metrics dashboard. As training progresses, you should see loss values declining and accuracy values increasing. When training completed, the model had reached an accuracy of 89% by the 3rd epoch.

Loss vs Batch

Accuracy vs Batch - Reaching 89% Accuracy

Tensorleap provides a host of innovative tools for model analysis and debugging. It tracks how each sample flows through each layer and how each learned feature flows through the model. It also stores metrics in a big dataset to analyze the model's response to data.
We've discussed integrating and training our model. It's now time to analyze it. The next part of this tutorial will demonstrate various analyses of the model and data.