Discover how ONNX improves the portability and interoperability of AI models, enabling seamless deployment of Ultralytics YOLO models across diverse platforms.
In the rapidly evolving field of artificial intelligence (AI) and machine learning (ML), moving models between different tools and platforms efficiently is crucial. ONNX (Open Neural Network Exchange) addresses this challenge by providing an open-source format designed specifically for AI models. It acts as a universal translator, allowing developers to train a model in one framework, like PyTorch, and then deploy it using another framework or inference engine, such as TensorFlow or specialized runtimes like ONNX Runtime. This interoperability streamlines the path from research to production, fostering collaboration and flexibility within the AI ecosystem. ONNX was initially developed by Facebook AI Research and Microsoft Research and is now a thriving community project.
The core value of ONNX lies in promoting portability and interoperability within the AI development lifecycle. Instead of being locked into a specific framework's ecosystem, developers can leverage ONNX to move models freely between different tools and hardware platforms. By defining a common set of operators (the building blocks of neural networks) and a standard file format (.onnx), ONNX ensures that a model's structure and learned parameters (weights) are represented consistently. This is particularly beneficial for users of Ultralytics YOLO models, as Ultralytics provides straightforward methods for exporting models to ONNX format. This export capability allows users to take models like YOLOv8 or the latest YOLO11 and deploy them on a wide variety of hardware and software platforms, often utilizing optimized inference engines for enhanced performance and hardware acceleration.
ONNX achieves interoperability through several key technical features:
ONNX serves as a crucial bridge between model training environments and diverse deployment targets. Here are two concrete examples:
It's important to distinguish ONNX from related terms:
Framework-native formats: formats such as PyTorch's .pt/.pth or TensorFlow's SavedModel are native to their respective frameworks. ONNX acts as an intermediary, allowing conversion between these formats or deployment via a common runtime. TorchScript is another format for PyTorch model serialization, sometimes used as an alternative or precursor to ONNX export.

In summary, ONNX is a vital standard for ensuring flexibility and interoperability in the machine learning operations (MLOps) pipeline, enabling developers to choose the best tools for training and deployment without being constrained by framework limitations. Platforms like Ultralytics HUB leverage such formats to simplify the journey from model development to real-world application.