ULTRALYTICS YOLO
Built from the ground up for edge and low-power devices, Ultralytics YOLO26 sets a new standard for real-time vision AI, delivering up to 43% faster CPU inference with a cleaner, simpler architecture.
Explore how Ultralytics YOLO models work directly in your browser.

Real-time performance on devices without GPUs, purpose-built for edge and constrained environments.

1. End-to-end, NMS-free inference: predictions are generated directly, with no post-processing step, for lower latency and simpler deployment.
2. DFL removal: removing Distribution Focal Loss (DFL) simplifies exports and broadens edge device compatibility.
3. MuSGD optimizer: a hybrid of SGD and Muon inspired by LLM training advances, delivering more stable training and faster convergence.
4. Deploy anywhere: runs efficiently on CPUs, GPUs, and edge hardware, with export to 17+ formats.
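To make the NMS-free point concrete, here is a minimal pure-Python sketch of greedy non-maximum suppression, the post-processing pass that conventional detectors run after the network and that YOLO26's end-to-end design eliminates. This is an illustrative sketch, not Ultralytics code; boxes are `[x1, y1, x2, y2]` lists with a parallel score list.

```python
# Greedy NMS: the post-processing step YOLO26 removes.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop rivals that overlap it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
print(kept)  # → [0, 2]: the two overlapping boxes collapse to one
```

An NMS-free model skips this loop entirely, which is why end-to-end prediction both lowers latency and simplifies export: there is no suppression step to reimplement on each deployment target.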

Real-time vision AI on resource-constrained devices, without sacrificing accuracy.

Detect beyond fixed categories using text prompts, visual prompts, or prompt-free inference across 4,585 classes.

YOLO26 follows the same familiar interface as YOLOv8 and YOLO11, so there is no steep learning curve.

Dedicated support channels, active forums, and regular updates keep you moving forward.

Flexible options for academic, open-source, and commercial use under AGPL-3.0 and Enterprise licenses.
YOLO26 removes DFL for simpler export, eliminates NMS for faster end-to-end inference, improves small-object accuracy with ProgLoss + STAL, introduces the MuSGD optimizer for more stable training, and delivers up to 43% faster CPU inference.
The nano (n) variant is ideal for edge and CPU-constrained devices. The small (s) and medium (m) variants offer a strong balance of speed and accuracy for most applications. The large (l) and extra-large (x) variants deliver maximum accuracy for demanding workloads.
Object detection, instance segmentation, image classification, pose estimation, and oriented object detection, all in a single unified model family.
Yes. YOLO26 follows the same interface as YOLOv8 and YOLO11, so migrating is straightforward. Simply swap in your YOLO26 model weights.
YOLO26 supports export to TensorRT, ONNX, CoreML, TFLite, and OpenVINO, covering the most common edge deployment targets. The NMS-free architecture means fewer integration headaches and lower latency out of the box.
From annotation to deployment, build vision AI solutions that scale with you.