Advanced Metrics

As the model generalizes various characteristics in the data, samples with similar metadata will cluster together in the similarity map. Furthermore, we can use additional sample metadata to identify correlations between various characteristics and the model's performance.

In this section, we'll add custom metadata to our dataset and inspect such correlations using the Metrics dashboard.

Add Custom Metadata

As an example, we will be adding the following metadata:

  • Length - the number of words in a sample.

  • Score - the IMDB score the user gave the target movie.

These metadata functions calculate and return the length and score, respectively, of each sample in the IMDB dataset. We will add them to our Integration Script. For more information, see Metadata Function.

Integration Script

In the Resources Management view, click the imdb dataset and add the code below to its script.

Code snippet

def score_metadata(idx, preprocess: PreprocessResponse) -> int:
    # The file name encodes the user score after the underscore (e.g. '200_8.txt'),
    # so split on '_' and strip the extension to recover it as an int.
    return int(preprocess.data['df']['paths'][idx].split("_")[1].split(".")[0])

leap_binder.set_metadata(function=score_metadata, metadata_type=DatasetMetadataType.int, name='score')
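The Length metadata can be registered the same way. The sketch below is illustrative, not the tutorial's full script: the `'comment'` column name and the simple word count are assumptions, so adjust them to match your preprocessing.

```python
# Hypothetical sketch of a Length metadata function: counts the words in a
# sample's raw text. The 'comment' column name is an assumption; adjust it
# to whatever your preprocessing actually stores.
def length_metadata(idx, preprocess) -> int:
    text = preprocess.data['df']['comment'][idx]
    return len(text.split())

# Registration would mirror score_metadata (commented out so the sketch
# stands alone):
# leap_binder.set_metadata(function=length_metadata,
#                          metadata_type=DatasetMetadataType.int, name='length')

# Minimal stand-in for PreprocessResponse, just to demonstrate the call:
class _FakePreprocess:
    def __init__(self, data):
        self.data = data

demo = _FakePreprocess({'df': {'comment': ['a surprisingly good movie overall']}})
demo_length = length_metadata(0, demo)  # 5 words
```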

For convenience, you can find the full script with additional metadata below:

Full Script (expandable)

Once you have added the code to the script, save the Dataset.

Dataset Block

After updating and saving the script, our dataset block needs to be updated. To do so, follow these steps:

  1. Open the IMDB project.

  2. From the Versions view, position your cursor over the dense-nn model revision and click Open Commit.

  3. On the Dataset Block in the Network view, click the Update button. More info at Script Version.

  4. Save the version with the updated dataset block, setting the Revision Name to dense-nn-extra. More info at Versions.

  5. Train the updated model from the top bar, setting the Number of Epochs to 10. More info at Evaluate/Train Model.

  6. In the Versions view, select the dense-nn-extra revision to display the new version's metrics on the dashboard.

Repeat steps 2-6 above for the imdb_cnn model we imported earlier in the Model Perception Analysis section of this tutorial, using imdb_cnn-extra as the Revision Name.

Add Custom Dashlets

In this section, you will add custom Dashlets with the added metadata.

Open the imdb Dashboard that was created in the Model Integration step and follow the steps below.

Loss by Sample

  1. Add a new dashlet from the top right of the dashboard.

  2. Choose the Table dashlet type by clicking it on the left side of the dashlet.

  3. Set the Dashlet Name to Sample Loss.

  4. Under Metrics add a field and set metrics.loss with average aggregation.

  5. Under Metadata add these fields:

    • sample_identity.index

    • dataset_slice.keyword

  6. Close the dashlet options panel to fully view the table.

Loss vs Score

  1. Add another dashlet from the top right of the dashboard. The Bar dashlet type should be the first to open.

  2. Set the X-Axis to metadata.score.

  3. Set the Interval to 1.

  4. Turn on the Split series by subset and the Show only last epoch options.

  5. Close the dashlet options panel to fully view the chart.
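To see what this chart computes, here is a rough stand-alone equivalent of the Bar dashlet's aggregation: the average metrics.loss per metadata.score bucket with an interval of 1, over last-epoch results. The (score, loss) pairs below are made up for illustration.

```python
from collections import defaultdict

# Made-up (score, loss) pairs standing in for last-epoch sample results.
samples = [(7, 0.42), (7, 0.38), (8, 0.25), (8, 0.31), (9, 0.12)]

# Bucket scores with an interval of 1 and average the loss per bucket,
# which is roughly what the Bar dashlet plots.
buckets = defaultdict(list)
for score, loss in samples:
    buckets[score // 1].append(loss)  # interval = 1 -> one bucket per score

avg_loss_by_score = {s: sum(v) / len(v) for s, v in sorted(buckets.items())}
```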

Dashboard

You can reposition and resize each dashlet within the dashboard. Here is the final layout:

Custom Dashboard and Dashlets

Conclusion

This section concludes our tutorial on the IMDB dataset.

We also have another tutorial on building and training a classification model using the MNIST database. If you haven't gone through it yet, see our MNIST Guide.

You can also check out reference documentation for the Tensorleap UI and Command Line Interface (CLI) in Reference.
