Integration test

How to locally test your Tensorleap integration

The purpose and structure of a local integration test

Before uploading code and models to the platform, it is recommended to validate the integration script using a local integration test.

The purpose of this test is to locally simulate the data flow that occurs in the Tensorleap platform and verify that:

  • All of the inputs, outputs, metrics, losses & ground truths:

    • Are parsed successfully

    • Have valid values

  • Visualizations are as expected

The integration test is created by writing a simple Python script (leap_custom_test.py) that would:

  1. Load the .onnx or .h5 model

  2. Initialize the preprocess responses

  3. Iterate over sample IDs or indices, and for each sample:

    1. Call the Input Encoder to get the input of sample i

    2. Call the model on the input to get predictions

    3. Call the Ground Truth Encoder to get the ground truth of sample i

    4. Use the above to call the loss, metrics, and metadata.

    5. Use the above to call the visualizers.

    6. Use Tensorleap's built-in methods to render the visualizer results locally and review what is expected to appear in the platform

Since all of the Tensorleap decorators perform strong run-time validation of the expected types and shapes, debugging this file in an IDE will quickly point you to integration issues.
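In addition to the built-in validation, it can help to fail fast with a few explicit checks of your own inside the test loop. The helper below is only an illustrative sketch and assumes the MNIST shapes used in the example that follows (a 28x28x1 input image and a 10-class one-hot ground truth); adjust the shapes and dtypes to your own data:

import numpy as np

def assert_sample_valid(image: np.ndarray, gt: np.ndarray) -> None:
    # The input encoder is expected to return a single, unbatched sample
    assert image.shape == (28, 28, 1), f"unexpected input shape {image.shape}"
    # The ground truth encoder is expected to return a one-hot vector over 10 classes
    assert gt.shape == (10,), f"unexpected ground truth shape {gt.shape}"
    # Values coming out of preprocessing should be finite (no NaN / inf)
    assert np.isfinite(image).all(), "input contains NaN or inf values"
    assert np.isfinite(gt).all(), "ground truth contains NaN or inf values"

Calling such a helper right after the encoders makes shape problems surface before the model, metrics, or visualizers are involved.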

An integration test example

Many integration tests can be found in the Leap Hub GitHub space. Here, we share a basic integration test for the MNIST dataset. For a reference of the corresponding MNIST Tensorleap integration, please check this repo.

from leap_binder import (input_encoder, preprocess_func_leap, gt_encoder,
                         combined_bar, leap_binder, metrics, image_visualizer)
import tensorflow as tf
import os
import numpy as np
from code_loader.helpers import visualize


def check_custom_test():
    check_generic = True  # run leap_binder's generic integrity check before the custom tests
    plot_vis = True  # set to False to skip plotting the visualizer outputs
    if check_generic:
        leap_binder.check()
    print("started custom tests")

    # load the model
    dir_path = os.path.dirname(os.path.abspath(__file__))
    model_path = 'model/model.h5'
    cnn = tf.keras.models.load_model(os.path.join(dir_path, model_path))

    responses = preprocess_func_leap()
    for subset in responses:  # train, val
        for idx in range(3):  # analyze first 3 images
            # get input and gt
            image = input_encoder(idx, subset)
            gt = gt_encoder(idx, subset)

            # add batch to input & gt
            concat = np.expand_dims(image, axis=0)
            batch_gt = np.expand_dims(gt, axis=0)

            # infer model
            y_pred = cnn([concat])

            # get inputs & outputs (no batch)
            both_vis = combined_bar(y_pred.numpy(), batch_gt)
            img_vis = image_visualizer(concat)

            # plot inputs & outputs
            if plot_vis:
                visualize(both_vis)
                visualize(img_vis)

            # print metrics
            metric_result = metrics(y_pred.numpy())
            print(metric_result)

            # print metadata
            for metadata_handler in leap_binder.setup_container.metadata:
                curr_metadata = metadata_handler.function(idx, subset)
                print(f"Metadata {metadata_handler.name}: {curr_metadata}")

    print("finish tests")


if __name__ == '__main__':
    check_custom_test()

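The test is a plain Python script, so the simplest way to execute it is from the command line at the repository root (assuming the model file and the leap_binder module are in place):

python leap_custom_test.py

Alternatively, run the same script from your IDE with breakpoints set inside the encoders, metrics, or visualizers to inspect intermediate shapes and values.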