Versions
Save versions and models, run analyses, and export models from Tensorleap
The Versions view is where you track all the saved versions and models of your project. From here, you can save versions and export models.
Click the Versions button on the top left to open the Versions view. Once open, you can fix the view in place by clicking the pin icon.



The Versions view is on the left
The Versions view lists all versions of your project. Expanding each version shows a list of its related models.

Models under the master version

To view the analysis results for a model, toggle the buttons to the left of the model. The stars to the right of the buttons light up, indicating that the results are running.
Things to keep in mind when working with versions and models:
- You can save a version of a network after creating a new one or editing an existing one (see Save a Version).
- Models are saved under versions. You need to expand a version to see the models under that version.
- To start or continue training a specific version, position your cursor over that version and click the train button on the right (see Evaluate / Train Model).
The demo video below shows the results of the analysis performed on a model.

Select a model and show analysis
Once you complete laying out a new network or making changes to an existing one, you can save it as a new version.
To save a network:
- 1.Click the save button on the left side of the Network view.
- 2.On the Versions view, enter a revision name.
- 3.Click Save to add the new version to the list in the Versions view.

Saving a new version
By default, when you create a new project, Tensorleap also creates the first version within the master branch. If you want to override this initial version, mark the Override Current Version checkbox when saving a network. If not selected, a new version is added.
When training a network, you will be required to name the model (see Evaluate / Train Model). The model appears under the source network, where you can export it out of Tensorleap into a common standard format.

Models of each Version
To export a trained model out of Tensorleap:
- 1.With the Versions view open, search for the model under your version.
- 2.Hover your mouse over the model, then click the export button on the right to open the Export Model window.
- 3.On the Export Model window, select the format in which the model will be saved.
- 4.Click the export button to start the export process.
- 5.The job appears on the list to the right with status set to Pending. A notification message also appears briefly on your screen.

Exporting a model
- 6.Once Tensorleap completes compiling the file, the status is set to Finished.
If the status is still Pending after some time, you may need to refresh the page to see it change to Finished.
- 7.Click the download button to save the file to your computer.


Downloading an exported model
The JSON format serializes the model layers, their properties and connectivity. It does not hold the state (weights) of the model.
Below is a code snippet for loading the exported JSON file:
```python
import tensorflow as tf

json_data = open('exported_model.json').read()
model = tf.keras.models.model_from_json(json_data)
```
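Because the exported JSON is plain text, its architecture can be inspected with the standard `json` module even on a machine without TensorFlow installed. The sketch below uses an illustrative stand-in for the file's contents; the exact keys in a real export follow Keras's `to_json()` layout and can vary across versions:

```python
import json

# Illustrative stand-in for an exported file. Keras's to_json() output is a
# nested dict with a "config" entry whose "layers" list describes each layer.
# (Key names here mirror typical Keras output but are not taken from a real
# Tensorleap export.)
sample = {
    "class_name": "Sequential",
    "config": {
        "name": "model",
        "layers": [
            {"class_name": "Dense", "config": {"name": "dense", "units": 8}},
            {"class_name": "Dense", "config": {"name": "out", "units": 1}},
        ],
    },
}
json_data = json.dumps(sample)

# List the layer names without loading the model into a framework.
arch = json.loads(json_data)
layer_names = [layer["config"]["name"] for layer in arch["config"]["layers"]]
print(layer_names)
```

This can be handy for a quick sanity check of the exported architecture before loading it into Keras.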
The h5 format serializes the model and the model state (weights) as a single h5 file. Below is a code snippet for loading the exported h5 file:

```python
import tensorflow as tf

model = tf.keras.models.load_model('exported_model.h5')
```
The onnx format is commonly used in PyTorch for serializing the model's layers and state (weights). Below is a code snippet for loading the exported onnx file in PyTorch:

```python
import onnx

# Load the ONNX model
model = onnx.load("exported_model.onnx")
```
This format uses the TensorFlow 2 SavedModel format and exports a folder with files containing the serialized model layers and state. The serialized data is stored in a folder with this directory structure:

```
assets/ (folder)
variables/ (folder)
saved_model.pb
```
When downloading the SavedModel format from Tensorleap, the exported folder is contained within a compressed .gz file. To load the exported model, you must extract the .gz file into a folder. One way to do this is by using tar:

```shell
tar -xf exported_model.tar.gz
```
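If tar is unavailable, the archive can also be extracted with Python's standard-library tarfile module. The sketch below is self-contained: it first builds a stand-in archive mirroring the SavedModel layout (the file names are illustrative), then extracts it:

```python
import tarfile
import tempfile
from pathlib import Path

# Build a stand-in archive mirroring the exported SavedModel layout
# (names are illustrative, not a real Tensorleap export).
workdir = Path(tempfile.mkdtemp())
model_dir = workdir / "exported_model"
(model_dir / "variables").mkdir(parents=True)
(model_dir / "assets").mkdir()
(model_dir / "saved_model.pb").write_bytes(b"")

archive = workdir / "exported_model.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(model_dir, arcname="exported_model")

# Extract the archive, as `tar -xf` would.
extract_dir = workdir / "extracted"
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(extract_dir)

print(sorted(p.name for p in (extract_dir / "exported_model").iterdir()))
```

In practice only the last `tarfile.open(..., "r:gz")` / `extractall(...)` step is needed on the downloaded file.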
Below is a code snippet to load the model from the extracted folder:
```python
import tensorflow as tf

extracted_model_path = '/path/to/extracted_model'
model = tf.keras.models.load_model(extracted_model_path)
```