# Model Perception Analysis

To perform this analysis, go to the **Dashboard** view on the right, and select `Analyzer` from the list at the top.

## Population Exploration Analysis

The Tensorleap platform tracks how each learned feature, within each layer, responds to each sample. From that information, it constructs a vector that captures how the model ***perceives*** each sample. This allows the platform to create a similarity map between samples as they are interpreted by the model. A more intuitive explanation is that similar samples would activate similar learned features within the model.

{% hint style="info" %}
For more information on model evaluation and training, see [**Evaluate/Train Model**](https://docs.tensorleap.ai/user-interface/project/menu-bar/evaluate-a-model).
{% endhint %}

**Population Exploration Analysis** takes these vectors and runs dimensionality reduction algorithms to visualize the similarities in the UI and provide insights into how various samples are perceived by the model.
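Conceptually, the reduction step can be sketched as a minimal PCA over per-sample activation vectors. The vectors below are synthetic stand-ins, and the shapes and names are illustrative only, not Tensorleap's internals:

```python
import numpy as np

# Synthetic stand-ins for per-sample "perception" vectors: one row per sample,
# built from the model's learned-feature responses (here just two random clusters,
# e.g. negative vs. positive reviews).
rng = np.random.default_rng(0)
negatives = rng.normal(loc=0.0, scale=0.1, size=(50, 128))
positives = rng.normal(loc=1.0, scale=0.1, size=(50, 128))
activations = np.vstack([negatives, positives])

def reduce_to_2d(x):
    # A minimal PCA: center the data, then project onto the top-2
    # principal directions obtained from the SVD.
    centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

coords = reduce_to_2d(activations)  # (100, 2) points for a 2-D similarity map
```

Samples that activate similar learned features land close together in the 2-D map, which is why label clusters emerge in the dashboard.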

In this section, we will perform a **Population Exploration** analysis. For more information, see [**Population Exploration**](https://docs.tensorleap.ai/user-interface/project/dashboards/dashlets/sample-analysis#population-exploration).

After each epoch in the training process, a **Population Exploration** analysis is performed automatically, the results of which can be found under the **Analysis** panel in the **Dashboard** view.

Each dot represents a sample (hover over it to see the sample's preview). By default, the dot's **size** and **color** represent the sample's loss, but this can be changed to fit your preferences. For example, we can change the **color** to represent the ground truth label - `metadata_gt`. By doing that, we can easily see the clusters formed for each label. In addition, we can see which label clusters are perceived as similar to one another, and which failing samples are not located within the right cluster.

![Color Dots by Ground Truth](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FbRbc5qIGGwTegVbH6qnc%2Fimdb-pe-color.gif?alt=media\&token=329d8e71-5d85-4c69-bb73-61901b5a0cb9)

In the default view, the dot **size** represents the loss (error). A large dot is highly correlated with a failed prediction for that sample.
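To see why, recall how the loss behaves. The sketch below computes a minimal two-class categorical cross-entropy; the probabilities are made up for illustration:

```python
import numpy as np

# Categorical cross-entropy for one sample: -log of the probability the
# model assigned to the true class. A confident wrong prediction is
# penalized far more heavily than a confident correct one.
def cross_entropy(probs, true_class):
    return -float(np.log(probs[true_class]))

probs = np.array([0.05, 0.95])                         # model says "positive" with 95% confidence
loss_if_positive = cross_entropy(probs, true_class=1)  # ground truth: positive (small loss)
loss_if_negative = cross_entropy(probs, true_class=0)  # ground truth: negative (large loss)
```

Failed predictions therefore produce the largest losses, which render as the largest dots.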

#### Mislabeled Sample

Exploring samples with high loss reveals possible ambiguities and mislabeling. Once a sample dot is clicked, its details are displayed on the right. In the example below, we see a high-loss sample located within the red cluster, which corresponds to `positive` reviews. This sample's ground truth is marked as `negative`, but reading the review shows it is clearly positive.

![Mis-labeled Sample](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2F5kEsW2GvloJijkOH4Wpl%2Fimage.png?alt=media\&token=99af9f6f-2f84-4e3d-b4e1-2c8f27e3ad2f) ![Focus on Positive Review marked as Negative](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2F8YbBXZUOqGy5bYcZy9XN%2Fimage.png?alt=media\&token=9e1656aa-bb4d-40f3-b654-fa8313fe0aad)

#### Failing Samples

Below is another interesting sample, which is labeled as `positive` but predicted as `negative`.

![Sample Ground Truth vs Prediction](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FtW80C0UPteayf47EAOD9%2Fimage.png?alt=media\&token=920d0f2d-8569-4322-a6a7-b46a86f576c8)

{% hint style="info" %}
Note that due to the randomness of the initial weights, your model could converge to a different state, thus rendering different results.
{% endhint %}

## Sample Analysis <a href="#mnist.sample.analysis" id="mnist.sample.analysis"></a>

Analysis of a sample returns results from a variety of explainability algorithms. Details about these algorithms can be found at [**Sample Analysis**](https://docs.tensorleap.ai/user-interface/project/dashboards/dashlets/sample-analysis#sample-analysis).

**Click** a **sample dot** to select it, then click on <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FXOM3KQF4ubAg3uGY9tzU%2Fimage.png?alt=media&#x26;token=82a8dd2c-8a85-4ada-86d5-d4ce7a1623c3" alt="" data-size="line"> at the bottom right to analyze the selected sample.

By examining the most informative features contributing to the prediction or to the loss function, we can rank the features with the greatest impact and generate a heat map highlighting the areas of the sample that activate those features. These areas contribute the most to the prediction or to the loss.
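One simple way to approximate such a token-level heat map is occlusion: mask each token in turn and measure how the output score drops. This is a hedged sketch, not Tensorleap's actual algorithm, and the toy model, vocabulary, and weights below are all made up:

```python
import numpy as np

# Toy "model": score = weights . mean(token embeddings), a hypothetical
# stand-in for the real network, just to illustrate occlusion attribution.
rng = np.random.default_rng(1)
vocab = {"boring": 0, "plot": 1, "but": 2, "great": 3, "acting": 4}
emb = rng.normal(size=(len(vocab), 8))
w = rng.normal(size=8)

def score(token_ids, occlude=None):
    vecs = emb[token_ids].copy()
    if occlude is not None:
        vecs[occlude] = 0.0        # zero out one token's embedding
    return float(w @ vecs.mean(axis=0))

tokens = [vocab[t] for t in ["boring", "plot", "but", "great", "acting"]]
base = score(tokens)
# Importance of each token = score drop when that token is occluded.
importance = [base - score(tokens, occlude=i) for i in range(len(tokens))]
```

Tokens with the largest (absolute) importance would be colored most strongly in the heat map.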

![Heat Map for Negative (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FUMvDiiuEd18olTnnodFT%2Fimage.png?alt=media\&token=b3e66575-8484-45f5-88b9-2bee0cd03418) ![Ground Truth (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FD29mOS73AefPGXSj7ncu%2Fimage.png?alt=media\&token=01d0e834-2696-4d78-8605-03b348669807) ![Prediction (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2Fda6d6AxFfb2mtzRtv4cp%2Fimage.png?alt=media\&token=46ccccfe-d815-4f4f-84d7-dc55a7224453)

Another sample with high loss is about a vampire movie. :vampire:

![Sample with High Loss](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FgPuHGLMP3Jvysjlge4F0%2Fimage.png?alt=media\&token=3ecab64e-d025-4010-b5d3-b9fd7b449b22)

After selecting the sample, scroll down to the sample's Details panel on the right and click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FXOM3KQF4ubAg3uGY9tzU%2Fimage.png?alt=media&#x26;token=82a8dd2c-8a85-4ada-86d5-d4ce7a1623c3" alt="" data-size="line">.

Once the analysis completes, we can explore the model's response to the sample. In the example, we can see that the model's prediction was `negative`, even though the ground truth is `positive`.

![Prediction (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FEOZVeXu9JNClQiq4wYrK%2Fimage.png?alt=media\&token=b9175429-ca99-43b4-a109-f6d84e9dc279) ![Ground Truth (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FlQN4SCPg4roQyGJFfJW0%2Fimage.png?alt=media\&token=93582c8c-f487-44f6-bd52-ef27e9c4243b)

Clicking **feature\_map** on the right shows a heat map correlated with each output, indicating which areas of the input had a high impact on that output.

![Heat map for Positive Output (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FRw7vhDkkWb2nUmJpHC7A%2Fimage.png?alt=media\&token=5806f95c-2ace-4654-ab6b-15df9c097169)

![Heat-Map for Negative Output (click-to-zoom)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FJWPHGf0IaDfto7JJT2v3%2Fimage.png?alt=media\&token=aa36bf98-2c01-46c9-ae5b-3eb1e3c928e4)

From the analysis above, we can see which words receive high attention for each output.

A few insights we might get about our model:

* It gives high attention to isolated words rather than to sentences or neighboring words. This might point to limited contextual understanding.
* It gives high attention to positive words, e.g., `enjoyable`, `definitely`, `cool`, and `likeable`.
* It gives high significance to negative words, e.g., `disappointed`, `unfortunately`, `bad`, and `slow`.
* It has a bias against vampire movies: there is a lot of `negative` attention on words like `vampire`, `vampires`, and `Dracula`.

### Comparison to CNN

Until now, we have used a model with **Dense** layers. Another approach is to build a model based on convolutional layers, i.e., a CNN (Convolutional Neural Network).

The motivation is that, in some cases, convolutional layers capture the spatial connections between words better.
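The intuition can be sketched with a bare-bones 1-D convolution over a sequence of word vectors: each output position mixes a window of neighboring words, which is how a `Conv1D` layer can pick up local phrases such as "not good". The shapes below are illustrative and unrelated to the actual model:

```python
import numpy as np

def conv1d(x, kernel):
    # x: (seq_len, dim) word vectors; kernel: (width, dim, out_channels).
    # Each output row is a weighted mix of `width` neighboring word vectors.
    width = kernel.shape[0]
    return np.stack([
        np.einsum("wd,wdo->o", x[i:i + width], kernel)
        for i in range(x.shape[0] - width + 1)
    ])  # -> (seq_len - width + 1, out_channels)

rng = np.random.default_rng(0)
seq = rng.normal(size=(10, 16))   # 10 tokens, 16-d embeddings
k = rng.normal(size=(3, 16, 4))   # window of 3 words, 4 filters
features = conv1d(seq, k)         # shape (8, 4)
```

A Dense layer over a flattened sequence has no such locality prior; the convolutional window is what lets the model react to word neighborhoods rather than isolated tokens.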

#### Importing the CNN

The model presented in this section is based on the `tensorleap_conv_model()` found in Tensorleap's [**Examples Repository**](https://github.com/tensorleap/tensorleap/tree/master/examples), specifically [**model\_infer.py**](https://github.com/tensorleap/tensorleap/blob/master/examples/imdb/imdb/imdb/model_infer.py).

Although we could build and push the model the same way we handled the dense model, this time we will use **Import Model**.

For your convenience, you can download the model below.

{% file src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FXahdVt2n3sgLYBzPylKK%2Fimdb_cnn.h5?alt=media&token=ed7e3e14-b70f-4035-9d70-bbe2dd169e6f" %}

After downloading the serialized model, follow these steps to import it:

1. Click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FZbix52umVdtURiu1YUQL%2Fimage.png?alt=media&#x26;token=3cfb8dbb-9446-45e9-a2a9-b84daab9aa09" alt="" data-size="line"> on the left side of the **Network** view.
2. Set **Revision Name** to `mnist-cnn`, **Model Name** to `mnist-cnn-imported`, and **File Type** to `H5_TF2`. Select the downloaded model for upload and click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FffzGMCnMcV7KxuxwDe57%2Fimage.png?alt=media&#x26;token=53a5ce9f-fb3b-498e-9022-9b013985487d" alt="" data-size="line">.
3. After the import is finished, the added version appears on the left, and you should see the model with the **Conv1D** layers in the **Network** view.
4. Click the **Dataset Block**, then in the **Dataset Details** panel, connect it to the `imdb` dataset instance. Then connect the block to the first **Embedding** layer.
5. Near the last layer, right-click on the background, select **GroundTruth**, then click the added block and select `Ground Truth - sentiment`.
6. Right-click on the background and select **Loss**->**CategoricalCrossentropy**, and **Optimizer**->**Adam**.
7. Connect the **GroundTruth** and the last **Dense** layer to the **CategoricalCrossentropy** block, and connect that block to the **Adam** optimizer block.
8. Click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FUvrp3BL2kCpVF03znV5h%2Fimage.png?alt=media&#x26;token=78bd6c33-f1a5-4ca8-9f09-635cd772d0bf" alt="" data-size="line"> on the **Network** view, select the **Override Current Version** checkbox and click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FVqcAjLBpBihZfjQ61LQQ%2Fimage.png?alt=media&#x26;token=abf954a3-9670-4912-b246-67768e585481" alt="" data-size="line">.
9. Add the **Visualizers** as was previously described in the [**Model Integration**](https://docs.tensorleap.ai/guides/full-guides/imdb-guide/model-integration).

The image below shows the CNN model after completing the steps above.

![Loss, Optimizer and Visualizers](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FjwW44GMOGFnJfYcdT1dy%2Fimage.png?alt=media\&token=116a7657-21d8-435f-bf04-ea9393028686)

#### Sample Analysis (CNN)

In this section, we will run a sample analysis on the imported CNN model.

Follow these steps:

1. On the **Versions** view, expand the `mnist-cnn` version and click the <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FOmrnfFlmKOCi2f4jWfK3%2Fimage.png?alt=media&#x26;token=3f96cebc-2529-4f38-945a-d73722e81cfb" alt="" data-size="line"> button to add it to the **Dashboard** view.
2. Click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FU7PN1bycpAHai196sKE1%2Fimage.png?alt=media&#x26;token=bcce2473-1f35-46ce-81a2-fbc410251e64" alt="" data-size="line">, choose `mnist-cnn-imported` from the right, and for **Selected Subset**, select `Validation`. This step collects metrics about the data in order to perform the analysis.
3. In the **Dashboard** view, click <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FIrdyKuiHO2PhUaqICGxc%2Fimage.png?alt=media&#x26;token=93edf9c4-4426-4098-a82f-36600f50d8dd" alt="" data-size="line">, then <img src="https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FWERRlKkTwY9q6PZzcev4%2Fimage.png?alt=media&#x26;token=1cdc3e33-3efb-440b-8327-7542d51122c4" alt="" data-size="line">.
4. Set **Dataset Slice** to `Validation` and **Sample Index** to `235`, then click **Analyze**.

The images below show the resulting heat maps for `positive` and `negative` reviews.

![Heat map for Positive Output (CNN)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2Fhhd4pmoSiXYxRqwH2hKn%2Fimage.png?alt=media\&token=0e511f71-020c-4247-ae61-83978119f801)

![Heat map for Negative Output (CNN)](https://3509361326-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F9UXeOlFqlw8pl79U2HGU%2Fuploads%2FTZVr1UwG4t4vu9rhuh5S%2Fimage.png?alt=media\&token=da769952-baad-44e1-b533-790fd507b136)

The CNN model pays particular attention to sentences and neighboring words. This enables the model to perceive additional context in each review.

## Up Next - Advanced Metrics

In this section, we demonstrated how Tensorleap performs population exploration analysis and sample analysis using both dense and CNN models.

Next, we will add custom metadata to help us find more correlations in our samples and model. When you're ready, go to [**Advanced Metrics**](https://docs.tensorleap.ai/guides/full-guides/imdb-guide/advanced-metrics).
