Deploy Ultralytics YOLO on Intel for high-performance inference

Ultralytics partners with Intel to deliver high-performance inference, harnessing the power of Intel CPUs, GPUs, and NPUs.

About Intel

Intel (Nasdaq: INTC) is an industry leader creating world-changing technology that enables global progress and enriches lives. Intel continuously advances semiconductor design and manufacturing to help address its customers' greatest challenges, embedding intelligence in the cloud, network, edge, and every computing device to transform business and society.


OpenVINO™ is an open-source toolkit that accelerates AI inference with lower latency and higher throughput while maintaining accuracy and optimizing hardware use. It streamlines AI development and deep learning integration across computer vision, large language models, and generative AI.

Why choose Intel for YOLO?

Deploy Ultralytics YOLO models with unmatched performance and efficiency

Optimized for Ultralytics YOLO

Maximum throughput, minimal latency across Intel's full device lineup.

Edge-native performance

Edge-ready YOLO inference with FP32, FP16, and INT8 support. No accuracy trade-offs required.

Real-time inference

Sub-10ms inference across all major YOLO tasks, verified on Intel CPUs, GPUs, and NPUs.

Lower cost of ownership

Run inference on existing Intel silicon. Lower costs without compromising accuracy.

Easy integration

Up and running in minutes with the Ultralytics Python package or CLI. Same API, same workflow.

Future-proof

Always up to date with the latest YOLO models and Intel hardware. No pipeline rework required.

Technical integration

Seamless integration between Ultralytics models and Intel hardware

Model performance on Intel silicon

See how Ultralytics YOLO models perform across Intel CPUs, GPUs, and NPUs.

Deploy on Intel hardware

Python:

from ultralytics import YOLO

# Load a YOLO26n PyTorch model
model = YOLO("yolo26n.pt")

# Export the model
model.export(format="openvino")  # creates 'yolo26n_openvino_model/'

# Load the exported OpenVINO model
ov_model = YOLO("yolo26n_openvino_model/")

# Run inference
results = ov_model("https://ultralytics.com/images/bus.jpg")

# Run inference with specified device, available devices: ["intel:gpu", "intel:npu", "intel:cpu"]
results = ov_model("https://ultralytics.com/images/bus.jpg", device="intel:gpu")

CLI:

# Export a YOLO26n PyTorch model to OpenVINO format
yolo export model=yolo26n.pt format=openvino  # creates 'yolo26n_openvino_model/'

# Run inference with the exported model
yolo predict model=yolo26n_openvino_model source='https://ultralytics.com/images/bus.jpg'

# Run inference with specified device, available devices: ["intel:gpu", "intel:npu", "intel:cpu"]
yolo predict model=yolo26n_openvino_model source='https://ultralytics.com/images/bus.jpg' device="intel:gpu"
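To verify latency claims on your own hardware, a small timing helper can wrap any inference callable, including the exported OpenVINO model above. This is an illustrative sketch, not part of the Ultralytics API; the `time_inference` name and its interface are assumptions.

```python
import time
from statistics import mean


def time_inference(infer, *args, warmup=3, runs=20, **kwargs):
    """Time an inference callable; return (mean_ms, min_ms) over `runs` calls."""
    for _ in range(warmup):
        # Warm-up calls absorb one-time costs (model compilation, cache fills)
        infer(*args, **kwargs)
    times_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(*args, **kwargs)
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return mean(times_ms), min(times_ms)
```

Usage with the exported model might look like `mean_ms, min_ms = time_inference(ov_model, "bus.jpg", device="intel:cpu")`, repeating the measurement per device string to compare CPU, GPU, and NPU latency.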

Become an Ultralytics partner

Join our partner ecosystem and unlock new opportunities to deliver cutting-edge AI solutions