Domain Gap Analysis
This page describes how to characterize a domain gap and minimize it within the platform.
What Is Domain Gap and Why It Matters
A domain gap occurs when a model performs differently across distinct data distributions — for example, between synthetic and real-world data, or between data captured by different hardware. These performance drops often stem from subtle but impactful changes in the input space, such as lighting, compression, or scene content.
Understanding that a gap exists is not enough. To address it effectively, teams need to understand why it exists — in terms of quantifiable, input-level differences that drive the model's behavior.
This enables targeted mitigation strategies, like collecting specific new data, refining labeling, or applying domain adaptation techniques in a controlled way.
How Tensorleap Helps Analyze and Mitigate Domain Gap
1. Measure Domain Gap in Latent Space
Tensorleap computes the distance between domains in the model’s latent space. This allows users to:
Quantify whether two domains are semantically different from the model’s perspective
Rapidly test hypotheses by applying changes and re-measuring the latent shift
Identify specific concepts that are missing or distorted in one domain

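To make this concrete, here is a minimal sketch of how a latent-space distance between two domains can be quantified, assuming you have already extracted per-sample embeddings for each domain as NumPy arrays (the names latents_a and latents_b, the centroid-distance and MMD metrics, and the random data are illustrative assumptions, not Tensorleap's internal computation):

```python
import numpy as np

def centroid_distance(latents_a: np.ndarray, latents_b: np.ndarray) -> float:
    """Euclidean distance between the mean embeddings of two domains."""
    return float(np.linalg.norm(latents_a.mean(axis=0) - latents_b.mean(axis=0)))

def mmd_rbf(latents_a: np.ndarray, latents_b: np.ndarray, gamma: float = 1.0) -> float:
    """Squared Maximum Mean Discrepancy with an RBF kernel.

    A larger value suggests the two domains occupy different regions of the
    latent space from the model's perspective.
    """
    def rbf(x, y):
        # Pairwise squared Euclidean distances, then RBF kernel.
        d2 = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2 * x @ y.T
        return np.exp(-gamma * d2)

    k_aa = rbf(latents_a, latents_a).mean()
    k_bb = rbf(latents_b, latents_b).mean()
    k_ab = rbf(latents_a, latents_b).mean()
    return float(k_aa + k_bb - 2 * k_ab)

# Hypothetical usage: latents_a / latents_b hold per-sample embeddings
# (shape [n_samples, latent_dim]) from the two domains being compared.
latents_a = np.random.default_rng(0).normal(0.0, 1.0, size=(200, 64))  # e.g. real data
latents_b = np.random.default_rng(1).normal(0.5, 1.0, size=(200, 64))  # e.g. synthetic data
print("centroid distance:", centroid_distance(latents_a, latents_b))
print("MMD (RBF):", mmd_rbf(latents_a, latents_b, gamma=0.05))
```

Re-running a measurement like this after each intervention (new data, relabeling, domain adaptation) is what makes hypothesis testing fast: the latent shift either shrinks or it does not.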
2. Explore Metadata Distributions Between Domains
Tensorleap surfaces metadata differences across domains — whether manually added (e.g. “ISP type”) or automatically derived.
This helps uncover input-level shifts that might explain the gap:
Exposure or lighting distributions
Device-specific compression patterns
Scene or location types

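The sketch below shows one way to compare metadata distributions between two domains, assuming per-sample metadata is available as a table with a domain column; the column names (exposure, isp_type) and the small inline dataset are placeholders for illustration only:

```python
import pandas as pd
from scipy import stats

# Hypothetical per-sample metadata; column names and values are illustrative.
df = pd.DataFrame({
    "domain":   ["real"] * 4 + ["synthetic"] * 4,
    "exposure": [0.21, 0.35, 0.18, 0.40, 0.55, 0.62, 0.48, 0.70],
    "isp_type": ["A", "A", "B", "A", "B", "B", "B", "B"],
})

real = df[df["domain"] == "real"]
synth = df[df["domain"] == "synthetic"]

# Numeric metadata: a two-sample Kolmogorov-Smirnov test flags
# distribution shifts such as exposure/lighting differences.
ks_stat, p_value = stats.ks_2samp(real["exposure"], synth["exposure"])
print(f"exposure shift: KS={ks_stat:.2f}, p={p_value:.3f}")

# Categorical metadata: compare per-domain frequencies, e.g. ISP type
# or device-specific compression settings.
freq = (
    df.groupby("domain")["isp_type"]
      .value_counts(normalize=True)
      .unstack(fill_value=0.0)
)
print(freq)
```

Metadata fields with the largest distribution shifts are natural candidates for explaining the gap and for targeting new data collection.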
3. Use Heatmaps to Understand Domain-Specific Focus
By comparing input-level heatmaps across domains, Tensorleap reveals:
What features the model focuses on in each domain
Whether the model relies on domain-specific artifacts or irrelevant cues
Potential blind spots or missing context in one domain

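As a rough illustration of this kind of comparison, the sketch below averages per-sample heatmaps for each domain and highlights where the model's focus diverges; it assumes you have exported explanation heatmaps as arrays of shape [n_samples, H, W] (the array names and random stand-in data are assumptions, not a Tensorleap API):

```python
import numpy as np

def mean_heatmap(heatmaps: np.ndarray) -> np.ndarray:
    """Average per-sample heatmaps (shape [n_samples, H, W]) into one map,
    normalized to [0, 1] so domains are comparable."""
    avg = heatmaps.mean(axis=0)
    value_range = avg.max() - avg.min()
    return (avg - avg.min()) / value_range if value_range > 0 else np.zeros_like(avg)

# Hypothetical per-sample heatmap tensors for each domain.
rng = np.random.default_rng(0)
heatmaps_real = rng.random((100, 32, 32))
heatmaps_synth = rng.random((100, 32, 32)) ** 2  # stand-in for a different focus pattern

focus_real = mean_heatmap(heatmaps_real)
focus_synth = mean_heatmap(heatmaps_synth)

# Regions with a large absolute difference indicate domain-specific focus:
# cues the model attends to in one domain but not the other.
diff = np.abs(focus_real - focus_synth)
rows, cols = np.unravel_index(np.argsort(diff, axis=None)[-5:], diff.shape)
print("largest focus shifts at (row, col):", list(zip(rows.tolist(), cols.tolist())))
```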
Domain Gap Walkthrough
Coming Soon