Integration test
How to instruct Tensorleap on which code to run during model analysis and locally test your integration
The integration test is a mandatory part of the Tensorleap integration and must be implemented for the platform to work properly.
The purpose and structure of a local integration test
Before uploading code and models to the platform, an integration test should be created and run locally. The purpose of this test is to: (1) instruct Tensorleap on which code needs to be executed during model analysis (loss, metrics, visualizers & metadata), and (2) simulate, locally, the data flow that would occur in the Tensorleap platform, ensuring that:
All of the inputs, outputs, metrics, loss & ground truths:
Are parsed successfully
Have valid values
Visualizations are as expected
This integration test can be run or debugged locally on your machine in an appropriate Python environment (see example). Since all of Tensorleap's decorators include strict runtime validation of expected types and shapes, running the test locally will quickly surface any existing integration issues.
The integration test decorators
To run the integration test, two interfaces should be implemented: tensorleap_load_model and tensorleap_integration_test.
@tensorleap_load_model
Supported model formats are .h5 or .onnx
This decorator wraps a function that loads an .onnx or .h5 model. It should return the model, which is later used in @tensorleap_integration_test for inference. The decorator receives a list of PredictionTypeHandler objects whose length should match the number of model outputs (a full description is provided in the load_model decorator page).
import os

import onnxruntime

from code_loader.contract.datasetclasses import PredictionTypeHandler
from code_loader.inner_leap_binder.leapbinder_decorators import tensorleap_load_model, tensorleap_integration_test
from leap_binder import input_image  # the decorated input encoder defined in leap_binder

# Define model outputs
prediction_type1 = PredictionTypeHandler('depth', ['high', 'low'], channel_dim=1)

# Load model
@tensorleap_load_model([prediction_type1])
def load_model():
    dir_path = os.path.dirname(os.path.abspath(__file__))
    model_path = 'models/GLPN_Kitti.onnx'
    sess = onnxruntime.InferenceSession(os.path.join(dir_path, model_path))
    return sess

# Instruct Tensorleap on how to infer the model
@tensorleap_integration_test()
def integration_test(idx, subset):
    sess = load_model()
    # inputs
    x = input_image(idx, subset)
    # model
    input_name_1 = sess.get_inputs()[0].name
    pred = sess.run(None, {input_name_1: x})[0]
    ...
@tensorleap_integration_test
This decorator wraps a function that instructs Tensorleap on what code needs to be run during model analysis. This function:
Receives a PreprocessResponse and an idx as inputs
Calls the @tensorleap_load_model wrapped function to load the .onnx or .h5 model
Does the following (see the sketch after this list):
Calls the input encoder to get the input of sample idx
Calls the (decorated) model on the input to get predictions
Calls the ground truth encoder to get the ground truth of sample idx
Uses the above to call the visualizers
(optional) Uses Tensorleap's built-in methods to visualize the visualizers' results locally and review the expected result in the platform
(optional) Prints whatever values are needed for debugging and integrity purposes (metadata, metrics, loss, etc.)
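A condensed sketch of these steps, assuming the decorated encoder, visualizer, and loss functions (input_encoder, gt_encoder, image_visualizer, loss_fn here are placeholder names) are defined in your leap_binder and load_model is wrapped with @tensorleap_load_model as shown above; the MNIST example at the end of this page is the full working reference:
from code_loader.plot_functions.visualize import visualize
from code_loader.inner_leap_binder.leapbinder_decorators import tensorleap_integration_test
from leap_binder import input_encoder, gt_encoder, image_visualizer, loss_fn  # placeholder names

@tensorleap_integration_test()
def integration_test(idx, subset):
    x = input_encoder(idx, subset)   # 1. input of sample idx
    model = load_model()             # load the @tensorleap_load_model-wrapped model
    pred = model([x])                # 2. model predictions
    gt = gt_encoder(idx, subset)     # 3. ground truth of sample idx
    visualize(image_visualizer(x))   # 4. call visualizers and view them locally
    print(loss_fn(gt, pred))         # 5. optional debugging prints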
Only decorated functions that are called within tensorleap_integration_test are utilized in the Tensorleap analysis. Any decorated function that is defined in the code but not called within the integration test will not be executed in the platform.
To capture the connectivity of the different interfaces used in the integration test, Tensorleap passes pointers that map the connections between the model and the decorators. For example, this is what instructs the platform whether an image visualizer defined in the script should visualize the input or the output of the model.
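For instance, reusing names from the MNIST example below, the tensor passed to each decorated visualizer is what wires it to the corresponding node (a sketch of the mechanism, not additional API):
img_vis = image_visualizer(image)    # pointer: model input -> image visualizer
bar_vis = combined_bar(y_pred, gt)   # pointers: model output + ground truth -> bar visualizer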
To support correct registration and tracking of these pointers, the integration test function should adhere to a specific format:
Only functions decorated with Tensorleap decorators should be called within the code. Any Python logic (pre- & post-processing, input and output manipulation) is placed within these functions.
The model is loaded using the tensorleap_load_model decorator.
Visualizing and viewing outputs can only be done using the built-in visualize function and the Python built-in print function.
The integration test should not contain any of the following outside of decorated functions (see the sketch below):
Arithmetic performed on model inputs and outputs
Usage of external libraries: NumPy, Pandas, etc.
Indexing of anything other than the model prediction. This includes metadata_result['specific_key'], input_result[3], etc.
Adding or removing a batch dimension from an input, an output, or the return value of any of the decorators.
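A short sketch contrasting both cases; metadata_predicted_class is a hypothetical decorated helper, not part of the Tensorleap API:
import numpy as np

# INCORRECT - raw logic placed directly in the integration test body,
# which the platform cannot track:
pred_class = np.argmax(y_pred[0])           # arithmetic + indexing outside a decorator
meta_val = metadata_result['specific_key']  # indexing a decorator's return value
image = image[None, ...]                    # adding a batch dimension

# CORRECT - wrap the same logic in a Tensorleap-decorated function
# (defined in leap_binder) and simply call it from the test:
pred_class_meta = metadata_predicted_class(y_pred)  # hypothetical decorated helper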
An integration test example
Many integration tests can be found in the Leap Hub GitHub space. Here, we share a basic integration test for the MNIST dataset. For the corresponding MNIST Tensorleap integration, please check this repo.
import os

import tensorflow as tf

from code_loader.contract.datasetclasses import PredictionTypeHandler
from code_loader.plot_functions.visualize import visualize
from code_loader.inner_leap_binder.leapbinder_decorators import tensorleap_load_model, tensorleap_integration_test
from mnist.config import CONFIG
from leap_binder import (input_encoder, preprocess_func_leap, gt_encoder,
                         combined_bar, metrics, image_visualizer, categorical_crossentropy_loss,
                         metadata_sample_index, metadata_one_hot_digit, metadata_euclidean_distance_from_class_centroid)

prediction_type1 = PredictionTypeHandler('classes', CONFIG['LABELS'])

@tensorleap_load_model([prediction_type1])
def load_model():
    dir_path = os.path.dirname(os.path.abspath(__file__))
    model_path = 'model/model.h5'
    cnn = tf.keras.models.load_model(os.path.join(dir_path, model_path))
    return cnn

@tensorleap_integration_test()
def integration_test(idx, subset):
    # Get input and GT
    image = input_encoder(idx, subset)
    gt = gt_encoder(idx, subset)

    # Load model and infer
    cnn = load_model()
    y_pred = cnn([image])

    # Visualize the inputs and outputs of the model
    horizontal_bar_vis = combined_bar(y_pred, gt)
    img_vis = image_visualizer(image)
    visualize(img_vis)
    visualize(horizontal_bar_vis)

    # Compute metrics and loss
    metric_res = metrics(y_pred)
    loss_res = categorical_crossentropy_loss(gt, y_pred)
    print(metric_res)
    print(loss_res)

    # Compute metadata
    m1 = metadata_sample_index(idx, subset)
    m2 = metadata_one_hot_digit(idx, subset)
    m3 = metadata_euclidean_distance_from_class_centroid(idx, subset)
    print(m1)
    print(m2)
    print(m3)
    # The user can return any values needed here

if __name__ == '__main__':
    num_samples_to_test = 3
    train, val = preprocess_func_leap()
    for i in range(num_samples_to_test):
        integration_test(i, train)
        integration_test(i, val)