
Tests

Compose tests that continuously track your model performance


Last updated 2 years ago


Test Overview

Tests in Tensorleap are composed via the tests panel, which appears when you click the dashboard view at the top. From this panel you can view your current tests and add new ones to measure your model's performance, either overall or on specific populations.

Tests

A test in Tensorleap is defined by a metric, an operator, and a subset of the dataset (selected by configured filters). For the given population, Tensorleap checks whether the condition holds for the average metric value; if it does, the test passes. In this way, for example, you can add a test that bounds the loss on a subset of samples of a specific class, size, or type.
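Conceptually, the evaluation above can be sketched as a small function: filter the population, average the metric, and apply the condition. This is a minimal illustrative sketch, not Tensorleap's actual implementation; the function and field names (`run_test`, `sample_filter`, `"loss"`, `"class"`) are hypothetical.

```python
from statistics import mean

def run_test(samples, metric, condition, sample_filter=None):
    """Sketch of a Tensorleap-style test: apply the metric to each sample
    in the (optionally filtered) population, then check whether the
    condition holds for the average score."""
    population = [s for s in samples if sample_filter is None or sample_filter(s)]
    avg_score = mean(metric(s) for s in population)
    return condition(avg_score)  # True means the test passed

# Hypothetical example: "average loss on samples of class 'cat' is below 0.5"
samples = [
    {"class": "cat", "loss": 0.3},
    {"class": "cat", "loss": 0.4},
    {"class": "dog", "loss": 0.9},
]
passed = run_test(
    samples,
    metric=lambda s: s["loss"],
    condition=lambda avg: avg < 0.5,
    sample_filter=lambda s: s["class"] == "cat",
)
# The filtered average loss is (0.3 + 0.4) / 2 = 0.35, so the test passes.
```

Note that the filter changes the outcome: over the unfiltered population the average loss is 0.53, which would fail the same condition. This is why scoping a test to a population lets you track regressions on a specific slice of the data.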

Adding a new test

  1. Press the + button at the top

  2. Add a test name

  3. (Optional) Add a filter to apply the test only to a subset of the samples in the dataset

  4. Add a metric to test

  5. Add the condition under which the test passes

  6. Click the save (disk) icon at the top to finalize the test

Analyzing test results

For each model version, the number of failed tests out of the total number of tests is listed separately.

Each new model version selected in the left panel is automatically added to the test analysis, allowing easy comparison of test results across versions.

Each model version that fails a test is marked with an indicative red icon on that test. Opening the test shows the exact score of each version, along with the number of samples within the selected population that passed the test.

Figure: Adding a new test.
Figure: Tests panel showing two tests: 'Regression Loss On Big Objects' and 'Loss vs Model'.