Advanced Metrics
As the model generalizes various characteristics in the data, we will see that samples with similar metadata cluster together in the similarity map. Furthermore, we can use additional sample metadata to identify correlations between these characteristics and the model's performance.
In this section, we'll add custom metadata to our dataset and inspect such correlations using the Metrics Dashboard.
Add Custom Metadata
As an example, we'll add the Euclidean Distance from Class Centroid metadata.
First, the preprocessing function preprocess_func must calculate the average image for each class and store it in the leap_binder cache container. Then, the metadata function calculates each sample's Euclidean distance from its class average. This metric can help us analyze the model's performance on samples that are relatively distinct from their class average.
Dataset Script
In the Resources Management view, click the mnist dataset and add the code below to its script. Note that the centroid computation is added to the end of our preprocessing function preprocess_func().
def calc_classes_centroid(preprocess: PreprocessResponse) -> dict:
    # Calculate the average image per class over the pixels.
    # Returns a dictionary: key: class label, value: 28x28 average image.
    avg_images_dict = {}
    data_X = preprocess.data['images']
    data_Y = preprocess.data['labels']
    for label in LABELS:
        inputs_label = data_X[np.equal(np.argmax(data_Y, axis=1), int(label))]
        avg_images_dict[label] = np.mean(inputs_label, axis=0)
    return avg_images_dict


def preprocess_func() -> List[PreprocessResponse]:
    ...
    # Cache the per-class centroids so metadata functions can reuse them.
    leap_binder.cache_container["classes_avg_images"] = calc_classes_centroid(train)
    response = [train, val, test]
    return response


def metadata_euclidean_distance_from_class_centroid(
        idx: int, preprocess: Union[PreprocessResponse, list]) -> float:
    # Calculate the Euclidean distance of the sample from its class centroid.
    sample_input = preprocess.data['images'][idx]
    label = preprocess.data['labels'][idx]
    label = str(np.argmax(label))
    class_average_image = leap_binder.cache_container["classes_avg_images"][label]
    return float(np.linalg.norm(class_average_image - sample_input))


leap_binder.set_metadata(function=metadata_euclidean_distance_from_class_centroid,
                         metadata_type=DatasetMetadataType.float,
                         name='euclidean_diff_from_class_centroid')
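Before saving, you can optionally sanity-check the centroid-distance computation outside of Tensorleap. The snippet below is a minimal, standalone sketch using the raw Keras MNIST arrays; it mirrors calc_classes_centroid() and the metadata function, but the exact preprocessing in your dataset script (normalization, reshaping) may differ.

    # Standalone sanity check (illustrative only, not part of the dataset script).
    import numpy as np
    from tensorflow.keras.datasets import mnist

    (train_x, train_y), _ = mnist.load_data()
    train_x = train_x / 255.0  # assumption: the script scales pixels to [0, 1]

    # Average image per class, analogous to calc_classes_centroid()
    avg_images = {label: train_x[train_y == label].mean(axis=0) for label in range(10)}

    # Euclidean distance of a few samples from their class centroid
    for idx in range(5):
        dist = np.linalg.norm(avg_images[int(train_y[idx])] - train_x[idx])
        print(f"sample {idx} (class {train_y[idx]}): distance from centroid = {dist:.2f}")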
For convenience, you can find the full script below:
Once you add the code to the script, save the Dataset Instance.
Dataset Block
After updating and saving the script, our dataset block needs to be updated. To do so, follow these steps:
1. Open the MNIST project.
2. From the Versions view, position your cursor over the model revision and click Open Commit.
3. On the Dataset Block in the Network view, click the Update button. More info at Script Version.
4. To save the version with the updated dataset block, click the save button and set the Revision Name to cnn-extra. More info at Versions.
5. To train the updated model, click the train option on the top bar, set the Number of Epochs to 10, and start the training. More info at Evaluate/Train Model.
6. Under the cnn-extra revision in the Versions view, click to display the new version's metrics on the dashboard.
Add Custom Dashlets
In this section, you will add custom Dashlets that use the added metadata.
Open the mnist Dashboard that was created in the Model Integration step and follow the steps below.
Loss by Sample
1. To add a dashlet, click the add dashlet button at the top right.
2. Choose the Table type Dashlet from the options on the left side of the Dashlet.
3. Set the Dashlet Name to Sample Loss.
4. Under Metrics, add a field and set metrics.loss with average aggregation.
5. Under Metadata, add these fields:
   - sample_identity.index
   - dataset_slice.keyword
6. Close the dashlet options panel to fully view the table.
Centroid Distance vs Loss
1. To add a dashlet, click the add dashlet button at the top right. The Bar dashlet option should be the first to open up.
2. Set the X-Axis to metadata.euclidean_diff_from_class_centroid.
3. Set the Interval to 1.
4. Turn on the Split series by subset and the Show only last epoch options.
5. Close the dashlet options panel to fully view the chart.
Dashboard
You can reposition and resize each dashlet within the dashboard. Here is the final layout:

Metrics Analysis
In this section, we will investigate the metrics within our custom dashboard.
First, let's focus on the Centroid Distance vs Loss visualization we created:

The visualization above displays the average loss per Euclidean-distance bucket. It reveals a strong correlation between distance and loss: samples far from their class centroid tend to have higher losses.
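If you collect the per-sample losses together with this metadata outside the dashboard, you can quantify the correlation directly. The sketch below is illustrative only; the results.csv file and its loss and euclidean_diff_from_class_centroid columns are assumptions, not an export produced by Tensorleap.

    # Illustrative check of the distance/loss correlation (hypothetical CSV input).
    import numpy as np
    import pandas as pd

    df = pd.read_csv("results.csv")
    distance = df["euclidean_diff_from_class_centroid"].to_numpy()
    loss = df["loss"].to_numpy()

    # Pearson correlation between centroid distance and loss
    corr = np.corrcoef(distance, loss)[0, 1]
    print(f"Pearson correlation (distance vs. loss): {corr:.3f}")

    # Average loss per distance bucket (interval of 1), mirroring the bar dashlet
    buckets = np.floor(distance).astype(int)
    for b in np.unique(buckets):
        print(f"bucket {b}-{b + 1}: mean loss = {loss[buckets == b].mean():.3f}")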
Sample Analysis
In the Sample Loss table in our dashboard, we see two samples that fall into that high-distance bucket, one of them with a very high loss. Let's run a Sample Analysis on that sample:
1. Select Analyzer from the drop-down at the top of the Dashboard view.
2. Click to add an analysis and choose Analyze Sample.
3. Set Dataset Slice to Validation and set the Sample Index to the sample index found in the Sample Loss table: 6754.
4. Run the analysis.
From the Sample Analysis above, we get the following results:


From the results, we see that the model confuses this sample (the digit 8) with the digit 0. We can also see that this sample was written with a thick marker, causing a high Euclidean distance from the class average.
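To reproduce this observation outside of Tensorleap, you could plot the sample next to its class centroid. The sketch below uses the raw Keras MNIST test split; note that the tutorial's validation slice and indexing may not match this split exactly, so the image displayed for index 6754 is an assumption.

    # Illustrative only: compare a digit to its class centroid with matplotlib.
    import numpy as np
    import matplotlib.pyplot as plt
    from tensorflow.keras.datasets import mnist

    (train_x, train_y), (test_x, test_y) = mnist.load_data()
    train_x, test_x = train_x / 255.0, test_x / 255.0

    idx = 6754  # index used in the Sample Analysis above (indexing may differ locally)
    sample, label = test_x[idx], test_y[idx]

    # Class centroid computed from the training set, as in calc_classes_centroid()
    centroid = train_x[train_y == label].mean(axis=0)

    fig, axes = plt.subplots(1, 3, figsize=(9, 3))
    titles = [f"sample {idx} (class {label})", "class centroid", "absolute difference"]
    for ax, img, title in zip(axes, [sample, centroid, np.abs(sample - centroid)], titles):
        ax.imshow(img, cmap="gray")
        ax.set_title(title)
        ax.axis("off")
    plt.show()

    print("Euclidean distance from class centroid:", np.linalg.norm(centroid - sample))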
Conclusion
This section concludes our tutorial on the MNIST dataset.
For another tutorial on performing model analysis using Tensorleap, see the next section, which builds a classifier model to predict positive and negative reviews using the IMDB movie review dataset.
Ready for more? Go to the IMDB Guide.