Compose tests that continuously track your model performance

Test Overview

Tests in Tensorleap are composed in the tests panel, which appears once you open the dashboard view at the top. From this panel, you can view your current tests and add new tests to measure your model's performance, optionally on specific populations.


Tests in Tensorleap are defined by an operator, a metric, and a subset of the dataset (selected via configured filters). For the given population, Tensorleap checks whether the condition holds on average; if it does, the test passes. This way, for example, a test can be added to measure the loss on a subset of samples of a specific class, size, or type.
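The semantics described above can be sketched in code. This is a minimal illustration, not the Tensorleap API: `run_test`, its parameters, and the sample structure are all hypothetical names chosen for clarity.

```python
# Hypothetical sketch of the test semantics (illustrative names, not the
# actual Tensorleap API): a test pairs a metric with a comparison operator
# and a threshold, evaluated on average over a filtered subset of samples.
import operator

OPERATORS = {"<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def run_test(samples, metric, op, threshold, sample_filter=None):
    """Return True if the metric's mean over the filtered population satisfies the condition."""
    population = [s for s in samples if sample_filter is None or sample_filter(s)]
    if not population:
        return True  # assumption: an empty population passes trivially
    mean_score = sum(metric(s) for s in population) / len(population)
    return OPERATORS[op](mean_score, threshold)

# Example: require the average loss on the "cat" class to stay below 0.5
samples = [
    {"label": "cat", "loss": 0.3},
    {"label": "dog", "loss": 0.9},
    {"label": "cat", "loss": 0.4},
]
passed = run_test(samples, metric=lambda s: s["loss"], op="<", threshold=0.5,
                  sample_filter=lambda s: s["label"] == "cat")
```

Here the filter narrows the population to the "cat" samples, whose mean loss (0.35) satisfies the `< 0.5` condition, so the test passes; without the filter, the overall mean (≈0.53) would fail it.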

Adding a new test

  1. Press the + button at the top

  2. Add a test name

  3. (Optional) Add a filter to apply the test only to a subset of the samples in the dataset

  4. Add a tested metric

  5. Add the condition for which the test passes

  6. Click the save (disk) icon at the top to finalize the test

Analyzing test results

The number of failed tests, out of the total number of tests, is listed separately for each model version.

Each new model version selected in the left panel is automatically added to the test analysis, allowing easy comparison of test results across versions.

Each model version that fails a test is marked with a red icon on that test. Opening the test shows the exact score each version achieved, along with the number of samples that passed the test within the selected population.
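The per-version failure summary described above can be sketched as a small aggregation. This is an illustrative example only; the `summarize` helper and the result structure are hypothetical, not part of the Tensorleap product.

```python
# Illustrative sketch (not the Tensorleap API) of comparing test results
# across model versions: count failed tests per version, mirroring the
# per-version failure summary shown in the tests panel.
def summarize(results):
    """results maps version -> {test_name: passed_bool}; returns version -> (failed, total)."""
    return {
        version: (sum(1 for passed in tests.values() if not passed), len(tests))
        for version, tests in results.items()
    }

results = {
    "v1": {"loss_on_cats": True, "accuracy_overall": False},
    "v2": {"loss_on_cats": True, "accuracy_overall": True},
}
summary = summarize(results)  # v1 fails 1 of 2 tests, v2 fails none
```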
