SemanticLens Documentation¶
An open-source PyTorch library for interpreting and validating large vision models.
Read the paper in Nature Machine Intelligence (open access).
Overview¶
SemanticLens is a universal framework for explaining and validating large vision models. While deep learning models are powerful, their internal workings are often a “black box,” making them difficult to trust and debug. SemanticLens addresses this by mapping the internal components of a model (like neurons or filters) into the rich, semantic space of a foundation model (e.g., CLIP or SigLIP).
This allows you to “translate” what the model is doing into a human-understandable format, enabling you to search, analyze, and audit its internal representations.
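The core idea can be illustrated with a minimal, self-contained sketch in plain PyTorch. Random tensors stand in for a real model and foundation model, and all names here are illustrative, not part of the SemanticLens API: each component (e.g., a neuron) is represented by the foundation-model embeddings of its most-activating images, and a text query is matched against those representations by cosine similarity.

```python
import torch

torch.manual_seed(0)

# Hypothetical setup: 8 components (e.g., neurons), each characterized by the
# foundation-model embeddings of its 5 most-activating images (dim 512).
n_components, n_examples, dim = 8, 5, 512
component_embs = torch.randn(n_components, n_examples, dim)

# Aggregate each component to a single semantic vector and normalize it.
component_vecs = torch.nn.functional.normalize(component_embs.mean(1), dim=-1)

# "Text probing": embed a query (a random stand-in for a real CLIP text
# embedding) and rank components by cosine similarity to it.
query = torch.nn.functional.normalize(torch.randn(dim), dim=-1)
scores = component_vecs @ query
ranking = scores.argsort(descending=True)
print("components ranked by similarity to query:", ranking.tolist())
```

Because both components and queries live in the same embedding space, searching, comparing, and auditing internal representations reduces to simple vector operations like the dot product above.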
Key Features¶
- 🔍 Component Analysis: identify and visualize what individual neurons and layers have learned
- 📚 Text Probing: search model internals using natural-language queries
- 🌄 Image Probing: search model internals using example images as queries
- 📊 Quantitative Metrics: measure the clarity, polysemanticity, and redundancy of learned concepts
- 🧠 Foundation Model Integration: built-in support for CLIP, SigLIP, and other vision-language models
- 🎯 Multiple Visualization Strategies: from activation maximization to attribution-based analysis
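To give an intuition for one of these metrics: a component is polysemantic when the images that activate it do not agree semantically. The sketch below scores this as one minus the mean pairwise cosine similarity of a component's per-example embeddings. This is an illustrative formulation, not necessarily the exact definition used by the library.

```python
import torch

torch.manual_seed(0)

def polysemanticity(embs: torch.Tensor) -> torch.Tensor:
    """Illustrative score: 1 - mean pairwise cosine similarity of each
    component's per-example embeddings. embs: [components, examples, dim]."""
    normed = torch.nn.functional.normalize(embs, dim=-1)
    sims = normed @ normed.transpose(1, 2)   # [components, examples, examples]
    n = embs.shape[1]
    off_diag = sims.sum(dim=(1, 2)) - n      # drop the n self-similarities
    mean_sim = off_diag / (n * (n - 1))
    return 1.0 - mean_sim

# A "clear" component: all example embeddings point in nearly one direction.
base = torch.randn(1, 1, 64)
clear = base + 0.01 * torch.randn(1, 10, 64)
# A "polysemantic" component: example embeddings point in unrelated directions.
mixed = torch.randn(1, 10, 64)

print(polysemanticity(clear).item())  # near 0
print(polysemanticity(mixed).item())  # substantially larger
```

A low score thus indicates a component with one coherent concept, while a high score flags a component that mixes several.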
Quick Start¶
Install SemanticLens:
pip install semanticlens
Basic usage:
import semanticlens as sl

...  # dataset and model setup

# Initialization
cv = sl.component_visualization.ActivationComponentVisualizer(
    model,
    dataset_model,
    dataset_fm,
    layer_names=layer_names,
    device=device,
    cache_dir=cache_dir,
)
fm = sl.foundation_models.OpenClip(url="RN50", pretrained="openai", device=device)
lens = sl.Lens(fm, device=device)

# Semantic embedding: map each component into the foundation model's space
concept_db = lens.compute_concept_db(cv, batch_size=128, num_workers=8)
# Aggregate per-example embeddings into one vector per component
aggregated_cpt_db = {k: v.mean(1) for k, v in concept_db.items()}

# Analysis
polysemanticity_scores = lens.eval_polysemanticity(concept_db)
search_results = lens.text_probing(["cats", "dogs"], aggregated_cpt_db)
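Text probing yields similarity scores between each query and each component. Assuming (for illustration only; check the API reference for the actual return type) that the scores form a tensor of shape [queries, components], a ranked shortlist per query can be produced like this:

```python
import torch

torch.manual_seed(0)

# Hypothetical scores: 2 queries ("cats", "dogs") x 6 components.
search_results = torch.randn(2, 6)
queries = ["cats", "dogs"]

topk = 3
for q, scores in zip(queries, search_results):
    values, indices = scores.topk(topk)
    print(f"{q}: top components {indices.tolist()} (scores {values.tolist()})")
```

The top-ranked component indices are then natural candidates for closer inspection with the visualization tools above.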
Try It Out¶
Want to explore SemanticLens without installing anything locally? We provide a hosted interactive demo where you can experiment with the system right in your browser:
🚀 Live Demo: https://semanticlens.hhi-research-insights.de/
The demo includes:
- A selection of state-of-the-art vision models, already integrated and preprocessed.
- An intuitive interface for probing representations, exploring concept embeddings, and visualizing results.
- The same core functionality exposed by the modules in this documentation, so you can get a feel for how SemanticLens works before diving into the API.
Use this demo as a lightweight way to experiment with model interpretability and get hands-on experience with SemanticLens—no setup, GPUs, or expensive compute required.
Citation¶
If you use SemanticLens in your research, please cite our paper:
@article{dreyer_mechanistic_2025,
  title     = {Mechanistic understanding and validation of large {AI} models with {SemanticLens}},
  author    = {Dreyer, Maximilian and Berend, Jim and Labarta, Tobias and Vielhaben, Johanna and Wiegand, Thomas and Lapuschkin, Sebastian and Samek, Wojciech},
  journal   = {Nature Machine Intelligence},
  publisher = {Nature Publishing Group},
  year      = {2025},
  month     = aug,
  pages     = {1--14},
  issn      = {2522-5839},
  doi       = {10.1038/s42256-025-01084-w},
  url       = {https://www.nature.com/articles/s42256-025-01084-w},
  language  = {en},
}