Learn how to export Ultralytics YOLO models, such as Ultralytics YOLO11, using the ONNX integration for cross-platform deployment across various hardware.
When AI solutions first started gaining attention, most models were deployed on powerful servers in controlled environments. However, as technology has advanced, deployment has expanded far beyond the data center.
Today, AI models run on everything from cloud servers and desktops to smartphones and edge devices. This shift supports faster processing, offline functionality, and smarter systems that operate closer to where the data is generated.
One area where this is especially clear is computer vision - a branch of AI that enables machines to interpret visual data. It is being used to drive applications like facial recognition, autonomous driving, and real-time video analysis. As these use cases grow, so does the need for models that can run smoothly across diverse hardware and platforms.
But deploying computer vision models across such a range of targets isn’t always simple. Devices differ in hardware, operating systems, and supported frameworks, making flexibility and compatibility essential.
That’s why having the option to export computer vision models like Ultralytics YOLO11 to different formats is key. For instance, the ONNX (Open Neural Network Exchange) integration supported by Ultralytics provides a practical way to bridge the gap between training and deployment. ONNX is an open format that makes models framework-agnostic and deployment-ready across platforms.
In this article, we’ll take a closer look at the ONNX integration supported by Ultralytics and explore how you can export your YOLO11 model for flexible, cross-platform deployment.
The Open Neural Network Exchange is an open-source project that defines a standard format for machine learning models. Originally developed by Microsoft and Facebook, it allows developers to train a model in one framework, like PyTorch, and run it in another, such as TensorFlow. This makes AI development more flexible, collaborative, and accessible, especially in fields like computer vision.
ONNX provides a common set of operators and a unified file format, making it easier to move models between different tools, frameworks, runtimes, and compilers. Normally, a model trained in one framework isn’t easily compatible with another - but with ONNX, you can export your model once and deploy it almost anywhere: on CPUs (Central Processing Units), GPUs (Graphics Processing Units), mobile devices, or edge hardware.
Also, ONNX Runtime is a high-performance inference engine developed specifically to run models in the ONNX format. It’s designed to make ONNX models run faster and more efficiently across a wide range of platforms - including servers, mobile devices, and edge hardware. ONNX Runtime is compatible with popular frameworks like PyTorch, TensorFlow, TensorFlow Lite, and scikit-learn, making it easy to integrate into different workflows and deploy models wherever they’re needed.
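For instance, once a YOLO11 model has been exported to ONNX, it can be loaded and run directly with ONNX Runtime’s Python API. The snippet below is a minimal sketch - the input tensor name and the 640x640 input shape are assumptions based on typical YOLO ONNX exports, so check your own model’s metadata before relying on them.

import numpy as np
import onnxruntime as ort

# Load the exported model with ONNX Runtime
session = ort.InferenceSession("yolo11n.onnx")
input_name = session.get_inputs()[0].name  # typically "images" for YOLO exports (assumption)

# Run inference on a dummy 640x640 RGB image in NCHW float32 format
dummy_image = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy_image})
print(outputs[0].shape)  # raw prediction tensor; post-processing (e.g. NMS) is still needed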
Before we discuss how to export YOLO11 to ONNX format, let’s check out some key features of the ONNX model format.
Whether you're switching between tools, deploying to different devices, or upgrading systems, ONNX helps keep everything running smoothly. Here’s what makes the ONNX model format unique:
Exporting Ultralytics YOLO models like Ultralytics YOLO11 to ONNX format is straightforward and can be done in a few steps.
To get started, install the Ultralytics Python package with a package manager like pip by running the command "pip install ultralytics" in your command prompt or terminal.
With the Ultralytics package, you can easily train, test, fine-tune, export, and deploy models for various computer vision tasks - making the whole process faster and more efficient. While installing it, if you encounter any difficulties, you can refer to the Common Issues guide for solutions and tips.
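If you’d like to confirm the installation, the package includes a small built-in environment check you can run from Python - a quick, optional step:

import ultralytics

# Print the installed version and basic environment details
ultralytics.checks()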
Once the Ultralytics package is installed, you can load and export the YOLO11 model to ONNX format using the code below. This example loads a pre-trained YOLO11 model (yolo11n.pt) and exports it as an ONNX file (yolo11n.onnx), making it ready for deployment across different platforms and devices.
from ultralytics import YOLO
model = YOLO("yolo11n.pt")  # Load the pre-trained YOLO11n model
model.export(format="onnx")  # Export the model to yolo11n.onnx
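The export method also accepts optional arguments if you need more control over the resulting ONNX file. The values below are only illustrative - adjust them to match your deployment target:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Optional export settings: input resolution, dynamic input shapes, and graph simplification
model.export(format="onnx", imgsz=640, dynamic=True, simplify=True)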
After converting your model to ONNX format, you can deploy it on a variety of platforms.
The example below shows how to load the exported YOLO11 model (yolo11n.onnx) and run inference with it. Inference simply means using the trained model to make predictions on new data. In this case, we’ll use the URL of an image of a bus to test the model.
onnx_model = YOLO("yolo11n.onnx")  # Load the exported ONNX model
results = onnx_model("https://ultralytics.com/images/bus.jpg", save=True)  # Run inference and save the annotated output
When you run this code, the annotated output image is saved in the runs/detect/predict folder.
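Beyond the saved image, the results object returned by the model can also be inspected in code. Here’s a minimal sketch that prints each detection from the snippet above:

# Loop over the detections returned by the ONNX model
for box in results[0].boxes:
    class_id = int(box.cls)  # predicted class index
    confidence = float(box.conf)  # confidence score
    print(results[0].names[class_id], round(confidence, 2), box.xyxy.tolist())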
The Ultralytics Python package supports exporting models to several formats, including TorchScript, CoreML, TensorRT, and ONNX. So, why choose ONNX?
What makes ONNX stand out is that it’s a framework-agnostic format. While many other export formats are tied to specific tools or environments, ONNX uses a standardized format and a shared set of operators. This makes it highly portable, hardware-friendly, and ideal for cross-platform deployment - whether you're working with cloud servers, mobile apps, or edge devices.
Here are some reasons why the ONNX integration could be the ideal choice for your YOLO11 projects:
Next, let’s explore some real-world applications where YOLO11 can be deployed with the help of the ONNX integration.
In busy warehouses, it’s difficult to keep an eye on every product and package at all times. Computer vision systems can help workers find products on shelves and get insights such as the number and type of products in stock. Such systems can help businesses automatically manage their vast inventory and save warehouse workers a lot of time.
Specifically, in smart warehouses, YOLO11 models exported to ONNX can be used to identify and count items in real time using cameras and edge devices. The exported model can help scan shelves or pallets to detect stock levels, missing items, or empty spots. Since exporting to ONNX makes the model lightweight and efficient, it can run directly on small edge devices, such as smart cameras, removing the need for expensive servers or constant cloud access.
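As a rough sketch of what such a stock check could look like in code - the image path is a hypothetical placeholder, and a real system would use a model trained on the warehouse’s own product classes:

from collections import Counter

from ultralytics import YOLO

# Load the exported ONNX model and run it on a frame from a shelf camera (path is hypothetical)
model = YOLO("yolo11n.onnx")
results = model("shelf_camera_frame.jpg")

# Tally detections per class to get a simple item count
counts = Counter(results[0].names[int(cls)] for cls in results[0].boxes.cls)
print(counts)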
Hospitals all around the world create large amounts of waste every day, from used gloves and syringes to single-use or contaminated surgical tools like scissors and scalpels. In fact, research shows that hospitals produce around 5 million tons of waste every year, which works out to about 29 pounds of waste per bed per day.
Sorting such waste properly is essential for hygiene, safety, and following regulations. With YOLO11 models exported in ONNX format, hospitals can automate and monitor waste disposal in real time.
For instance, cameras placed near waste bins in areas like operating rooms or hallways can monitor items as they are discarded. A custom YOLO11 model, trained to recognize different types of medical waste, can analyze the footage and identify what’s being thrown away. If an item ends up in the wrong bin, like a used syringe in regular trash, the system can be set up to immediately alert staff with a light or sound, helping to prevent contamination and ensure compliance.
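A simplified sketch of that workflow is shown below - the dataset YAML, the class name, and the alert logic are hypothetical placeholders, not a ready-made solution:

from ultralytics import YOLO

# Fine-tune YOLO11 on a labeled medical-waste dataset (the YAML path is hypothetical)
model = YOLO("yolo11n.pt")
model.train(data="medical_waste.yaml", epochs=100, imgsz=640)

# Export the trained model to ONNX for deployment on an edge device
onnx_path = model.export(format="onnx")

# Run the exported model on a frame from a bin camera and flag misplaced sharps
waste_model = YOLO(onnx_path)
results = waste_model("bin_camera_frame.jpg")
for cls in results[0].boxes.cls:
    if results[0].names[int(cls)] == "syringe":
        print("Alert: sharps detected in a general-waste bin")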
Knowing the right time to harvest crops can have a big impact on both the quality of the produce and the overall productivity of a farm. Traditionally, farmers rely on experience and manual inspections - but with recent advances in technology, that’s starting to change.
Now, with computer vision innovations like YOLO11, exported in ONNX format, farmers can bring automation and precision into the field. By using drones or cameras mounted on tractors or poles, farmers can capture images of their crops (such as tomatoes, apples, or wheat). YOLO11 can then be used to detect key indicators like color, size, and the distribution of crops. Based on this information, farmers can determine whether the crops are ready to harvest, still maturing, or already past their peak.
While ONNX offers numerous benefits, such as portability, cross-platform compatibility, and framework interoperability, there are some limitations to keep in mind:
Exporting Ultralytics YOLO11 to ONNX makes it easy to take a trained computer vision model and deploy it almost anywhere - whether that’s on a laptop, mobile device, or even a compact smart camera. With the ONNX integration, you’re not tied to a single framework or platform, giving you the flexibility to run your model in the environment that suits your application best.
This makes the transition from training to real-world deployment faster and more efficient. Whether you're tracking inventory in a warehouse or ensuring hospital waste is disposed of correctly, this setup helps systems run more smoothly, reduces errors, and saves valuable time.
Want to learn more about computer vision and AI? Explore our GitHub repository, connect with our community, and check out our licensing options to jumpstart your computer vision project. If you're exploring innovations like AI in manufacturing and computer vision in the automotive industry, visit our solutions pages to discover more.