Evaluate a Model
Learn how to evaluate models in the platform
Once the model and code integration are uploaded and the mapping is configured, we can start a model evaluation.
The Evaluate Model operation runs inference on the dataset through the model and collects performance metrics and metadata for each sample. It also runs the Tensorleap explainability engine over the extracted latent space to find issues within the model and dataset and to provide insights on how to diagnose and correct them. When it finishes, the collected data is presented on the Dashboard.
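Conceptually, the per-sample collection can be pictured as a plain loop over the dataset. The sketch below is only an illustration of that idea, with a hypothetical model_fn, dataset layout, and metric; it does not represent Tensorleap's actual implementation or its explainability engine.

```python
import numpy as np

def evaluate(model_fn, dataset):
    """Conceptual per-sample evaluation.

    model_fn: callable mapping an input array to a prediction array (hypothetical).
    dataset:  iterable of (sample_id, x, y) tuples (hypothetical layout).
    """
    results = []
    for sample_id, x, y in dataset:
        pred = model_fn(x)
        results.append({
            "sample_id": sample_id,
            "mse": float(np.mean((pred - y) ** 2)),          # per-sample metric
            "metadata": {"input_mean": float(np.mean(x))},   # per-sample metadata
        })
    return results
```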
More than one evaluation can be performed on a model, since the same model can be configured to use different code integration scripts, which affect data loading and metric computations.
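For example, two code integration scripts could compute different per-sample metrics for the same model's predictions. The sketch below is hypothetical: the function names are illustrative only and are not the Tensorleap SDK's registration API.

```python
import numpy as np

# Hypothetical per-sample metrics that two alternative code integration
# scripts might provide for the same model.
def mse_metric(prediction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean squared error for a single sample (e.g. a regression setup)."""
    return float(np.mean((prediction - ground_truth) ** 2))

def top1_accuracy_metric(prediction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Top-1 accuracy for a single sample, assuming one-hot ground truth."""
    return float(np.argmax(prediction) == np.argmax(ground_truth))
```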

To run an evaluation:
1. Click the Evaluate button at the top to open the Evaluate Model dialog. The dialog properties are described below.
2. Select a Model Version.
3. Provide a Model Run Name.
4. (Optional) Provide a description of the run.
5. Set the Batch Size.
6. Click Evaluate.
Evaluation Options
Skip Metric Estimation - Skips some of Tensorleap's metric computations. This is relevant for unlabeled data and can speed up inference.
Add to Dashboard - Adds the evaluated model to the models selected for analysis in the Dashboard panel.