See how easy it is to bring Ultralytics YOLO11 to Apple devices with CoreML and enable fast offline computer vision tasks for real-time iOS apps.

With Apple introducing features like Apple Intelligence, it’s clear that on-device AI is becoming a central part of how we use our phones. For developers, this shift means users are adopting iOS apps that use capabilities like computer vision to deliver smarter, more responsive experiences.
Computer vision is a type of artificial intelligence (AI) that enables computers to understand and analyze visual information, such as images or videos. On mobile devices, it can be used in real time to detect, classify, and interact with objects through the phone’s camera. Vision AI models like Ultralytics YOLO11 can be custom-trained to recognize specific objects, depending on your app’s needs.
However, YOLO11 isn’t set up to run on iOS right out of the box. To deploy YOLO11 on iPhones or other Apple devices, particularly for offline use, it needs to be converted into a format optimized for Apple’s ecosystem.
This is exactly the kind of problem CoreML was built to solve. CoreML is Apple’s machine learning framework, built to run models locally and integrate seamlessly into iOS and macOS applications. The CoreML integration, supported by Ultralytics, makes it easy to export your model for local deployment on iPhones.
In this article, we’ll take a closer look at how to export your YOLO11 model to the CoreML format. We’ll also explore real-time use cases that show the advantages of running computer vision models directly on iOS devices. Let’s get started!
CoreML is a machine learning (ML) framework developed by Apple that makes it possible for developers to integrate trained ML models directly into apps across Apple’s ecosystem, including iOS (iPhone and iPad), macOS (Mac), watchOS (Apple Watch), and tvOS (Apple TV). It is designed to make machine learning accessible and efficient on Apple devices by enabling models to run directly on-device, without requiring an internet connection.
At the core of CoreML is a unified model format that supports a wide range of AI tasks such as image classification, object detection, speech recognition, and natural language processing. The framework is optimized to make the most of Apple’s hardware, using the CPU (central processing unit), GPU (graphics processing unit), and ANE (Apple Neural Engine) to execute models quickly and efficiently.
CoreML supports a variety of model types and is compatible with popular machine learning libraries, including TensorFlow, PyTorch, scikit-learn, XGBoost, and LibSVM. This makes it easier for developers to bring advanced ML capabilities to everyday apps while ensuring they run smoothly across Apple devices.
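To make that compatibility concrete, here is a rough sketch of what converting a PyTorch model to CoreML looks like with Apple’s coremltools package. It uses a small torchvision model purely as a stand-in for any PyTorch network; the Ultralytics exporter covered later wraps this kind of conversion for you, so you won’t normally write it by hand.

import torch
import torchvision
import coremltools as ct

# Trace a small torchvision model as a stand-in for any PyTorch network.
torch_model = torchvision.models.mobilenet_v3_small(weights=None).eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)

# Convert the traced model and save it as a CoreML package.
coreml_model = ct.convert(traced_model, inputs=[ct.TensorType(shape=example_input.shape)])
coreml_model.save("mobilenet_v3_small.mlpackage")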
Here are some of the key features that make CoreML a reliable tool for integrating AI into Apple apps:

- On-device execution: models run locally, so predictions work offline and user data never has to leave the device.
- Hardware optimization: CoreML automatically uses the CPU, GPU, and Apple Neural Engine to run models quickly and efficiently.
- A unified model format: one format covers tasks such as image classification, object detection, speech recognition, and natural language processing.
- Broad framework compatibility: models built with libraries like TensorFlow, PyTorch, scikit-learn, XGBoost, and LibSVM can be converted to CoreML.
Now that we have a better understanding of the CoreML framework, let’s walk through how to use the CoreML integration supported by Ultralytics to export a YOLO11 model to the CoreML format.
To access the integration features provided by Ultralytics, start by installing the Ultralytics Python package. It’s a lightweight, easy-to-use library that simplifies tasks such as training, evaluating, predicting, and exporting Ultralytics YOLO models.
You can install the Ultralytics Python package by running “pip install ultralytics” in your command terminal. If you're using an environment like Jupyter Notebook or Google Colab, include an exclamation mark (!) before the command: “!pip install ultralytics”.
If you run into any issues during installation or while exporting to CoreML, check the official Ultralytics documentation or the Common Issues guide for help.
Once the package is successfully installed, you’re ready to load a YOLO11 model and convert it to CoreML format.
If you’re not sure which pre-trained YOLO11 model to use, you can explore the range of models supported by Ultralytics. Each one offers a different balance of speed, size, and accuracy, and you can pick the best fit for your project. You can also use a custom-trained YOLO11 model if you’ve trained one on your own dataset.
In the code snippet below, a pre-trained YOLO11 model file named "yolo11n.pt" is used. During the export process, it is converted into a CoreML package called "yolo11n.mlpackage."
The "yolo11n" model is the nano version, optimized for speed and low resource usage. Depending on your project’s needs, you can also choose other model sizes such as "s" for small, "m" for medium, "l" for large, or "x" for extra-large. Each version offers a different balance between performance and accuracy.
from ultralytics import YOLO

# Load the pre-trained YOLO11 nano model
model = YOLO("yolo11n.pt")
# Export it to CoreML; this creates "yolo11n.mlpackage" in the working directory
model.export(format="coreml")
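The exporter also accepts optional arguments that can be useful for mobile deployment, such as the input image size and reduced-precision weights. The exact set of supported options can vary between Ultralytics releases, so treat the snippet below as a sketch and check the export documentation for your installed version.

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(
    format="coreml",
    imgsz=640,   # input resolution the exported model will expect
    half=True,   # FP16 weights for a smaller, faster package
    nms=True,    # bundle non-maximum suppression into the CoreML model
)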
After exporting to the CoreML format, YOLO11 can be easily integrated into iOS applications, enabling real-time computer vision tasks like object detection on devices such as iPhones, iPads, and Macs.
For example, the code snippet below demonstrates how to load the exported CoreML model and perform inference. Inference is the process of using a trained model to make predictions on new data. In this case, the model analyzes an image of a family playing with a ball.
from ultralytics import YOLO
# Load the exported CoreML model and run inference; save=True writes the annotated image
coreml_model = YOLO("yolo11n.mlpackage")
results = coreml_model("https://images.pexels.com/photos/4933841/pexels-photo-4933841.jpeg", save=True)
After running the code, the output image will be saved in the "runs/detect/predict" folder.
Exporting YOLO11 to CoreML brings the flexibility to build diverse computer vision applications that can run efficiently on iPhones, iPads, and Macs. Next, let's look at some real-world scenarios where this integration can be especially useful.
Augmented reality (AR) blends digital content with the real world by overlaying virtual elements onto live camera views. It's becoming a key part of mobile gaming, creating more interactive and immersive experiences.
With YOLO11 exported to the CoreML format, iOS developers can build AR games that recognize real-world objects like benches, trees, or signs using the phone’s camera. The game can then overlay virtual items, such as coins, clues, or creatures, on top of these objects to enhance the player’s surroundings.
Behind the scenes, this works using object detection and object tracking. YOLO11 detects and identifies objects in real time, while tracking keeps those objects in view as the camera moves, making sure the virtual elements stay aligned with the real world. A rough sketch of this loop is shown below.
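The Python snippet below illustrates that detection-plus-tracking loop using the Ultralytics tracking API on a live camera feed, printing each tracked object with its coordinates. It is a sketch of the logic only; an iOS app would run the exported CoreML model through Apple’s frameworks and feed the coordinates to its AR layer instead of printing them.

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# stream=True yields results frame by frame; source=0 reads the default camera.
for result in model.track(source=0, stream=True):
    for box in result.boxes:
        track_id = int(box.id) if box.id is not None else -1
        label = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        # An AR layer could anchor a virtual item to these coordinates.
        print(f"Track {track_id}: {label} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")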
Players can point their phones, explore their environment, and interact with what they see to collect items or complete quick challenges. All of this can run directly on the device without needing an internet connection, making the experience smooth and engaging.
Automatic Number Plate Recognition (ANPR) is a computer vision application used to detect and read vehicle license plates. It’s commonly used in security, traffic monitoring, and access control systems. With CoreML and models like YOLO11, ANPR can now run efficiently on iOS devices.
Having an ANPR app on your iPhone can be especially useful in security-focused environments. For example, it can help teams quickly determine whether a vehicle entering a restricted area is authorized or not.
Such an app can use a Vision AI model such as YOLO11, integrated through CoreML, to detect vehicles and locate their license plates in real time using the device's camera. Once a plate is detected, Optical Character Recognition (OCR) technology can read the license number. The app can then compare this number against a local or cloud-based database to verify access or flag unauthorized vehicles.
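A simplified sketch of that flow is shown below. It assumes a hypothetical YOLO11 model custom-trained to detect license plates ("plate_detector.pt") and a placeholder read_plate_text() function standing in for whatever OCR step you choose; both are illustrative names, not part of the Ultralytics API.

from ultralytics import YOLO

plate_detector = YOLO("plate_detector.pt")  # hypothetical custom-trained weights

def read_plate_text(plate_crop):
    # Placeholder: swap in a real OCR model or library here.
    return "UNKNOWN"

results = plate_detector("parking_entrance.jpg")  # hypothetical camera frame
for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        plate_crop = result.orig_img[y1:y2, x1:x2]  # crop the detected plate region
        plate_number = read_plate_text(plate_crop)
        # Compare plate_number against an allow-list to grant access or raise a flag.
        print(plate_number)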
AI has had a huge impact on accessibility, helping break down barriers for people with visual impairments. With tools like CoreML and computer vision models such as YOLO11, developers can build iOS apps that describe the world around users in real time, making everyday tasks easier and more independent.
For example, a visually impaired person can point their iPhone camera at their surroundings. The app uses object detection to recognize key elements, like vehicles, people, or street signs, and narrates what it sees. This can help in situations like navigating a busy street or making sense of an emergency.
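The core of such a feature can be sketched in a few lines of Python: run detection on a frame and turn the detections into a short spoken-style description. The image name here is a placeholder, and on an iPhone the narration itself would come from a system speech API rather than a print statement.

from collections import Counter
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
results = model("street_scene.jpg")  # placeholder for a live camera frame

# Count detected objects by class name and build a simple description.
labels = [model.names[int(box.cls)] for box in results[0].boxes]
counts = Counter(labels)
description = ", ".join(f"{n} {name}{'s' if n > 1 else ''}" for name, n in counts.items())
print(f"I can see: {description or 'nothing recognizable'}")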
Exporting YOLO11 to the CoreML format creates new opportunities for real-time applications, including offline object detection on iOS devices. From agriculture and security to accessibility, this combination allows developers to build smart, efficient, and privacy-focused apps that run entirely on the device.
With just a few simple steps, you can convert your YOLO11 model and add reliable computer vision features to iPhones. Best of all, it works without needing an internet connection. Overall, the CoreML integration brings the power of advanced AI to everyday mobile apps, making them faster, more responsive, and ready to run anywhere.
Curious to learn more about AI? Explore our GitHub repository, connect with our community, and check out our licensing options to jumpstart your computer vision project. Find out how innovations like AI in retail and computer vision in logistics are shaping the future on our solutions pages.