Built for Discovery
A codebase designed for researchers, not just users. Modify freely, inspect deeply, and publish without paying for the privilege.
No "Research License" Required
Some frameworks charge separate fees for "research" or "science" licenses just so you can build on the code without releasing your own work. Libre-YOLO doesn't.
Typical "Science Licenses"
- Pay to keep research private
- Fees for commercial R&D use
- Complex license tiers
- Legal review for publication
Libre-YOLO (MIT)
- Investigate freely, keep it private
- Zero fees for any use case
- One simple license: MIT
- Publish without legal concerns
See Inside the Black Box
Built-in tools for interpretability and explainability. No external dependencies, no complex setup—just flags and function calls.
Feature Map Extraction
One flag. That's all it takes to save intermediate activations from every layer. Perfect for understanding what your model "sees" at each stage.
- Backbone, neck, and head layers
- Automatic organization by layer type
- NumPy arrays for easy analysis
from libreyolo import LIBREYOLO
model = LIBREYOLO("yolov11m.pt")
# One flag to save all feature maps
results = model.predict(
    "image.jpg",
    save_feature_maps=True  # That's it!
)
# Feature maps saved to runs/detect/exp/feature_maps/
# Organized by layer: backbone/, neck/, head/
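Because the saved activations are plain NumPy arrays, you can pull them straight into an analysis script. A minimal sketch; the exact filename below is an assumption based on the directory layout above:

import numpy as np

# Load one saved activation map (illustrative path and filename)
fmap = np.load("runs/detect/exp/feature_maps/backbone/stage2.npy")
print(fmap.shape)               # e.g. (channels, height, width)
print(fmap.mean(), fmap.std())  # quick sanity statistics

Explainability Toolbox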
Native implementations of GradCAM, saliency maps, and SHAP explanations. No extra packages to pip install, no version conflicts to fight.
from libreyolo import LIBREYOLO
from libreyolo.explain import GradCAM, SaliencyMap, SHAP
model = LIBREYOLO("yolov11m.pt")
# GradCAM visualization
gradcam = GradCAM(model, target_layer="backbone.layer4")
heatmap = gradcam.generate("image.jpg", class_idx=0)
gradcam.visualize(heatmap, save="gradcam_person.png")
# Saliency maps
saliency = SaliencyMap(model)
saliency.compute("image.jpg", save="saliency.png")
# SHAP explanations
shap_explainer = SHAP(model)
shap_explainer.explain("image.jpg", save="shap_values.png")

Attention Visualization
For transformer-based architectures, extract and visualize attention weights to understand which regions the model attends to.
- Per-head attention maps
- Cross-attention & self-attention
- Publication-ready visualizations
from libreyolo import LIBREYOLO
model = LIBREYOLO("yolov11m.pt")
# Extract attention weights (for transformer-based models)
results = model.predict(
    "image.jpg",
    extract_attention=True
)
# Access attention maps per head
for layer, attn in results[0].attention_maps.items():
    print(f"{layer}: {attn.shape}")  # [heads, H, W]
# Built-in visualization
results[0].plot_attention_heads(save="attention_heads.png")

One File Per Model
No more hunting through labyrinthine file structures. Each model architecture lives in a single, self-contained file.
Typical Framework Structure
standard_framework/
├── nn/
│ ├── modules/
│ │ ├── block.py # 500+ lines
│ │ ├── conv.py # Where is C2f?
│ │ ├── head.py # Multiple classes
│ │ └── transformer.py
│ └── tasks.py # 1000+ lines
├── models/
│ ├── yolo/
│ │ ├── detect/
│ │ │ ├── train.py
│ │ │ └── val.py
│ │ └── model.py
│ └── ...
└── # Good luck finding what you need

Fragmented across dozens of files. Imports from everywhere. Modifying one thing breaks another.
Libre-YOLO Structure
libreyolo/
├── models/
│ ├── yolov8.py # Complete v8 model
│ ├── yolov11.py # Complete v11 model
│ └── yolov12.py # Complete v12 model
├── notebooks/
│ ├── yolov8.ipynb # Interactive v8
│ ├── yolov11.ipynb # Interactive v11
│ └── explainability.ipynb
├── explain/
│ └── toolbox.py # All XAI methods
└── # Find anything in seconds

Each model is self-contained. Modify YOLOv11 without touching v8. Jupyter notebooks for interactive exploration.
Python Files
One .py file per architecture. Everything you need to understand or modify in one place.
Jupyter Notebooks
Interactive notebooks for each model. Run cell-by-cell, visualize outputs, experiment freely.
Easy to Fork
Want to create YOLOv11-Custom? Copy one file, modify, done. No dependency spaghetti.
Designed for Modification
Want to test a new backbone? Experiment with a custom attention mechanism? Add a novel loss function? The clean architecture makes modifications straightforward.
Modify Without Fear
- Isolated Changes
Modify YOLOv11 without affecting YOLOv8. Each model is independent.
- Clear Inheritance
Simple class hierarchy. Override what you need, inherit the rest.
- No License Barriers
Your modifications stay private. No fees to keep your research confidential.
For researchers: Publish papers using modified Libre-YOLO architectures. No "science license" needed. Just cite the project and publish freely.
# yolov11_custom.py - Your modified architecture
from libreyolo.models.yolov11 import YOLOv11

class YOLOv11Custom(YOLOv11):
    def __init__(self):
        super().__init__()
        # Swap out the backbone (MyCustomBackbone is a placeholder
        # for your own module)
        self.backbone = MyCustomBackbone()

    def forward(self, x):
        # Add your custom logic
        features = self.backbone(x)
        # Maybe add a custom attention layer?
        # (define self.custom_attention in __init__ first)
        features = self.custom_attention(features)
        return self.head(self.neck(features))
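    # Hypothetical extension: override the loss to add a novel term.
    # The hook name compute_loss is an assumption for illustration,
    # not a documented Libre-YOLO method - check the base class for
    # the actual entry point.
    def compute_loss(self, preds, targets):
        loss = super().compute_loss(preds, targets)
        # e.g. add a small L1 activity penalty on the raw predictions
        return loss + 1e-4 * preds.abs().mean()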
# That's it. Train it:
model = YOLOv11Custom()
model.train(data="my_dataset.yaml")

Start Your Research Today
No license fees. No paywalls. No restrictions on keeping your work private. Just install and start exploring.
$ pip install libreyolo
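After installing, a first run takes three lines, reusing the same predict API shown throughout this page:

from libreyolo import LIBREYOLO

model = LIBREYOLO("yolov11m.pt")  # pretrained checkpoint, as in the examples above
results = model.predict("image.jpg")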