Join us as we revisit Ultralytics' experience at Embedded World North America 2025. Find out how Ultralytics YOLO models are advancing embedded vision.

Earlier this year, in March, we joined the embedded community in Nuremberg for the Embedded World exhibition and conference. There, we saw how Ultralytics YOLO models are being used in real embedded vision applications across many industries.
The innovations on display made it clear how quickly on-device AI is growing and how widely Ultralytics YOLO models are being adopted. Last week, continuing that momentum, we traveled to Anaheim, California, for Embedded World North America 2025, held from November 4th to 6th.
The event brought together engineers, hardware makers, software developers, tech enthusiasts, and researchers who are building the next generation of intelligent embedded devices. It was a great opportunity to connect with the embedded community in the United States and to see how Ultralytics YOLO models are being used in practical, production-focused environments.
With that in mind, let’s explore some of the highlights from our time at Embedded World North America 2025. Let’s get started!
Before heading to Anaheim, Francesco Mattioli, our Lead Partnership Engineer, and Nuvola Ladi, our Digital Content Manager, traveled to Santa Clara in the heart of Silicon Valley to visit NVIDIA's offices. The visit offered a relaxed and open space to foster collaboration and catch up with familiar faces.

They toured the campus and had lunch together. It was a warm and energizing way to begin the week, helping set the tone for our time in Anaheim.
To set the stage, let’s take a quick look back at our experience in Nuremberg earlier this year before diving into what happened last week.
In March, the Embedded World Expo floor was filled with inspiring examples of embedded vision in action. In particular, we saw a lot of teams using Ultralytics YOLO models to power real-time applications on edge hardware.
From compact development boards to more advanced computing modules, Ultralytics YOLO models served as go-to models for benchmarking performance, showcasing new hardware capabilities, and demonstrating practical embedded AI workflows. For instance, we saw Ultralytics YOLO models running on hardware from companies such as D-Robotics, Infineon Technologies, Seeed Studio, and M5Stack, each taking a different approach to bringing efficient computer vision to the edge.
Similar to the event in March, Embedded World North America brought the embedded community together again at the Anaheim Convention Center in California. The three-day gathering included keynotes from Dario Freddi of SECO, who spoke on the evolution of edge AI acceleration, and Joe Fabbre of Green Hills Software, who highlighted the growing complexity of embedded software development.
Across the conference sessions and expo floor, there was a strong focus on embedded vision and practical AI deployment. Many teams were exploring how to run computer vision reliably in real environments where efficiency, power use, and response time are crucial. This made it a great environment to see how Ultralytics YOLO models are being used as approachable and reliable solutions for on-device AI.

As you go through this recap, you might be wondering why embedded AI has become such a big focus lately. A lot of devices today need to understand what they are seeing and make decisions right on the spot, without sending data back to the cloud.
This is becoming important in places like factories, warehouses, hospitals, consumer products, and even small robots and sensors. When AI can run directly on the device, everything becomes faster and more reliable, and sensitive data stays local.
Because of this, embedded vision is growing quickly and gaining a lot of attention across the industry. Many teams are looking for models that are efficient, easy to deploy, and flexible enough to run on different kinds of hardware.
Specifically, Ultralytics YOLO models have become a popular choice because they offer high accuracy while remaining lightweight enough to run on resource-constrained devices. They also integrate smoothly with a wide range of hardware platforms, which makes it easier for developers to prototype, test, and deploy real embedded AI solutions.
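To make that concrete, here is a minimal sketch of what this developer workflow typically looks like with the Ultralytics Python package. The checkpoint name, sample image, and export target below are illustrative assumptions rather than details from the event:

```python
from ultralytics import YOLO

# Load a small pretrained detection model (illustrative checkpoint name)
model = YOLO("yolo11n.pt")

# Run inference on a sample image to sanity-check the model locally
results = model("bus.jpg")
results[0].show()  # visualize the detected objects

# Export to ONNX, a common interchange format for embedded runtimes
model.export(format="onnx")
```

The same few lines cover both prototyping on a laptop and preparing a model for an edge runtime, which is a big part of why these models move so easily between hardware platforms.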
On November 5th, Francesco Mattioli and Nuvola Ladi arrived at Embedded World North America and spent the day exploring the expo floor. It was exciting to see how many teams are using Ultralytics YOLO models as part of their embedded vision workflows.
From early-stage prototypes to production-ready systems, Ultralytics YOLO models were present in a wide range of solutions across robotics, automation, smart devices, and industrial systems. These applications benefit from running AI directly on the device, which keeps response times fast, improves reliability, and removes the need to constantly send data to the cloud.

Francesco shared, “Our experience at Embedded World really drove home that Ultralytics YOLO models are absolutely ready for prime time in the industry. It was genuinely impressive to walk around and see just how many different types of devices can now run these models smoothly. We're talking about everything from tiny microcontrollers that fit in the palm of your hand all the way up to powerful GPU setups. The versatility is pretty incredible: whether you're working with resource-constrained edge devices or have access to serious computing power, these models just work. It's exciting to see this kind of flexibility becoming a reality, making computer vision accessible across such a broad spectrum of hardware.”
One of the key highlights of the day was our visit to the STMicroelectronics booth in Hall C, Level 1, Booth 4015. The booth featured an Edge AI Ecosystem Partner Wall that showcased a range of companies working together to make on-device AI more accessible and practical across different types of hardware.

It was great to see Ultralytics included and recognized as part of that ecosystem. The wall featured a demo of Ultralytics YOLO models running real-time object detection, highlighting how our models can operate efficiently on STMicroelectronics hardware that is designed for low-power edge devices.
We also had the opportunity to talk with the STMicroelectronics team about how developers are using Ultralytics YOLO models on their microcontrollers (MCUs) and microprocessors (MPUs), and how performance, memory efficiency, and deployment workflows continue to improve.
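As a rough illustration of how such a deployment might begin, the sketch below exports a compact model to int8-quantized TFLite, a common starting point for memory-constrained, low-power targets. The checkpoint name, input size, and quantization settings are assumptions for illustration; getting the result onto a specific MCU or MPU would then go through the vendor's own toolchain:

```python
from ultralytics import YOLO

# Start from a compact pretrained model (illustrative checkpoint name)
model = YOLO("yolo11n.pt")

# Export an int8-quantized TFLite model at a reduced input resolution,
# trading some accuracy for a much smaller memory and compute footprint
model.export(format="tflite", int8=True, imgsz=320)
```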
The next day, we continued meeting with companies, developers, and partners across the expo. This time, the focus was less on demos and more on conversations. We heard firsthand how teams are using Ultralytics YOLO models in projects ranging from robotics and automated inspection systems to smart consumer devices and medical tools.
Many of these teams shared feedback, lessons learned, and ideas for what they would like to see next in embedded vision. We also had the chance to reconnect with members of our open-source community who were attending the event.
It was encouraging to see how people are experimenting, adapting, and extending Ultralytics YOLO models to meet practical needs in real environments. These conversations highlighted how important ease of use, flexibility, and strong documentation continue to be as more teams bring AI directly onto devices.
Our time at Embedded World North America 2025 highlighted just how quickly embedded AI is advancing and how widely Ultralytics YOLO models are being used in real products and workflows. It was inspiring to connect with teams who are actively bringing computer vision to the edge, and to see these applications making an impact in areas like robotics, automation, healthcare, and smart devices.
Thank you to everyone who took the time to share their work, insights, and feedback with us throughout the event. We’re excited to continue supporting the embedded AI community and to keep making high-performance, accessible computer vision possible on devices of all sizes.
Join our global community and visit our GitHub repository to learn more about computer vision. Explore our solutions pages to discover innovations such as AI in agriculture and computer vision in retail. Check out our licensing options and get started with building your own computer vision model.