
Ultralytics' key highlights from YOLO Vision 2025!

Abirami Vina

5 min read

September 29, 2025

Join us for a recap of Ultralytics’ biggest event of the year, showcasing the Ultralytics YOLO26 launch, inspiring panels, and key community highlights.

The AI and computer vision community came together on September 25th for YOLO Vision 2025 (YV25), Ultralytics’ annual hybrid Vision AI event. Hosted in London at The Pelligon and streamed worldwide, the event welcomed a diverse group of researchers, engineers, and AI enthusiasts to share ideas and learn about the latest innovations, such as Ultralytics YOLO26.

Now in its fourth year, the event continues to grow in reach and impact. The YV25 live stream has already captured over 6,800 views, generated more than 49,000 impressions, and gathered nearly 2,000 hours of watch time.

YV25 kicked off with an opening note from our host Oisin Lunny, who set the tone for the day by encouraging attendees to connect, share, and make the most of the event. As he put it, “YOLO Vision 2025 is the conference that unites the open-source vision AI community to focus on data, machine learning, and computer vision advancements.”

In this article, we’ll recap the key highlights from YOLO Vision 2025, including the product launch, keynote talks, a panel, live demos, and the community moments that made the day special. Let’s get started!

Going from a single GPU to a $30M Series A

Leading up to the event, there was a lot of excitement surrounding the new product launch, and Glenn Jocher, our Founder and CEO, started the day by building on that energy. 

He shared the journey of Ultralytics, recalling how in 2020, he was running experiments on a single 1080 Ti plugged into his MacBook, a setup that is obsolete today. From those modest beginnings, Ultralytics has grown into a global community with billions of daily inferences powered by YOLO models.

Glenn also spoke about Ultralytics recently closing a $30 million Series A funding round. He explained how this investment will power the next stage of growth by allowing the company to scale the team, expand research, and secure the computing resources needed to continue pushing the boundaries of computer vision. 

Ultralytics YOLO26: A better, faster, smaller YOLO model

Glenn went on to announce two new efforts from Ultralytics. The first is Ultralytics YOLO26, the latest model in the Ultralytics YOLO family, designed to be smaller, faster, and more efficient while achieving even greater accuracy. The second is the Ultralytics Platform, a new end-to-end SaaS workspace that combines data, training, deployment, and monitoring to make building computer vision solutions easier than ever; it is expected to launch in the near future.

Fig 1. Glenn Jocher announcing Ultralytics YOLO26 on stage at YOLO Vision 2025.

YOLO26 is built to push performance forward while staying practical for real-world use. The smallest version already runs up to 43% faster on CPUs while still improving accuracy, making it ideal for applications from mobile devices to large enterprise systems. YOLO26 will be publicly available by the end of October.

Here’s a glimpse of YOLO26’s key features:

  • Streamlined architecture: The Distribution Focal Loss (DFL) module, which previously slowed models down, has been removed, so YOLO26 runs more efficiently without sacrificing accuracy.
  • Faster predictions: YOLO26 introduces an option to skip the Non-Maximum Suppression (NMS) step, delivering results more quickly for real-time deployment (see the inference sketch after this list).
  • Better at spotting small objects: New training methods improve stability and significantly boost accuracy, especially when detecting small details in complex scenes.
  • Smarter training: The new MuSGD optimizer combines the strengths of two training techniques, helping the model learn faster and reach higher accuracy.
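
For developers already using the Ultralytics Python package, YOLO26 is expected to follow the same familiar workflow. Here's a minimal sketch of loading the model and running inference; note that the weight name yolo26n.pt is our assumption until the public release, so check the Ultralytics docs for the final naming.

```python
from ultralytics import YOLO

# Load a YOLO26 model. "yolo26n.pt" is an assumed checkpoint name;
# confirm the actual name once YOLO26 is publicly released.
model = YOLO("yolo26n.pt")

# Run inference on an image. With YOLO26's end-to-end option, results
# are produced without a separate NMS post-processing step.
results = model("https://ultralytics.com/images/bus.jpg")

for result in results:
    print(result.boxes.xyxy)  # predicted bounding boxes as (x1, y1, x2, y2)
```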

A first look at the Ultralytics Platform

After introducing YOLO26, Glenn invited Prateek Bhatnagar, our Head of Product Engineering, to demo the next project on the horizon, the Ultralytics Platform. Built to simplify the entire computer vision workflow, the platform aims to bring datasets, annotation, training, deployment, and monitoring together in one place.

Prateek compared it to tuning a car: instead of visiting different shops for tires, engines, and transmissions, everything happens in one garage. In the same way, the platform gives developers an integrated workspace to manage the full lifecycle of a vision AI model.

The demo showcased AI-assisted annotation tools that speed up dataset preparation, customizable training options for both experts and beginners, and real-time monitoring of training runs. 

Insights from a panel discussion on edge deployment

Another highlight of YV25 was a panel on edge deployment, moderated by Oisin Lunny. The session featured Yuki Tsuji from Sony Semiconductor Solutions, David Plowman from Raspberry Pi, and Glenn Jocher. 

The discussion explored how moving AI to the edge reduces latency, lowers costs, and improves privacy. Yuki showcased Sony’s IMX500 sensor, which can run inference directly on the chip. Meanwhile, David spoke about how Raspberry Pi is expanding from its maker roots into large-scale commercial applications.

Fig 2. A panel on edge deployment featuring Oisin Lunny, Yuki Tsuji, David Plowman, and Glenn Jocher.

The panel also touched on one of the biggest hurdles for developers: getting models to run smoothly across different devices. This is where the Ultralytics Python package plays a key role. 

With its wide range of export options, it makes it simple to move a trained model into production on mobile, embedded systems, or enterprise hardware. By taking the pain out of model conversion, Ultralytics helps teams focus on building solutions instead of wrestling with compatibility issues.
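
For example, exporting a trained model is a single documented call in the package. Here's a minimal sketch using a standard YOLO11 checkpoint (any trained Ultralytics model works the same way):

```python
from ultralytics import YOLO

# Load a trained model (here, the released YOLO11 nano checkpoint).
model = YOLO("yolo11n.pt")

# Export to a deployment format. Other documented options include
# "coreml", "tflite", "engine" (TensorRT), and "openvino".
model.export(format="onnx")
```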

As David explained, “I know from my bitter experience that converting models is horrid, and if someone else will do that for me, it makes life a whole lot easier. That’s where Ultralytics is really improving the story and offering something that’s valuable to our users.” 

Accelerating innovation and AI hardware

Advances in AI software are happening in parallel with advances in hardware, and together they are driving a new wave of innovation in computer vision. While models like Ultralytics YOLO continue to push accuracy forward, their real-world impact also depends on the platforms they run on.

For instance, Seeed Studio showcased how modular, low-cost hardware like their reCamera and XIAO boards, preloaded with Ultralytics YOLO models, makes it easy for developers to move from prototyping to real-world AI systems. This kind of hardware–software integration lowers the barrier to entry and shows how innovation at the hardware level directly accelerates adoption.

Here are some key takeaways from other YV25 keynotes that emphasized how hardware–software co-design is unlocking new possibilities:

  • Quantization unlocks big speed gains: Intel showed how converting Ultralytics YOLO models to OpenVINO with quantization boosted inference from 54 FPS to 606 FPS in just 30 minutes, highlighting the power of optimization (a reproduction sketch follows this list).
  • Full-stack tools make edge AI deployment practical: NVIDIA highlighted how Jetson devices, TensorRT, Triton Inference Server, and the DeepStream SDK work together to streamline deploying high-performance vision AI at the edge.
  • Open ecosystems accelerate prototyping: AMD emphasized its end-to-end platform built on GPUs and the ROCm software stack, helping developers move quickly from prototype to deployment while controlling costs.
  • Low-power chips expand AI to constrained devices: DEEPX introduced their DX-M1 and DX-M2 processors, delivering tens of TOPS under 5 watts to enable advanced inference in compact, power-limited systems.
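
Intel's exact figures came from their demo hardware, but the underlying workflow follows the Ultralytics package's documented OpenVINO export. Here's a minimal sketch using a released YOLO11 checkpoint; actual FPS gains will vary with your CPU:

```python
from ultralytics import YOLO

# Load a trained model (a released YOLO11 nano checkpoint).
model = YOLO("yolo11n.pt")

# Export to OpenVINO with INT8 quantization; a small calibration
# dataset is used to compute the quantization ranges.
model.export(format="openvino", int8=True, data="coco8.yaml")

# The exported model loads and runs like any other YOLO model.
ov_model = YOLO("yolo11n_openvino_model/")
results = ov_model("https://ultralytics.com/images/bus.jpg")
```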

Recent trends in computer vision

With advances in both software and hardware working hand in hand, computer vision is evolving faster than ever. These parallel developments are not just improving accuracy and speed but also shaping how vision AI can be deployed in the real world. At YV25, participants had the chance to hear from experts across robotics, edge deployment, and multimodal AI, each offering a different perspective on where the field is headed.

For example, in his keynote, Michael Hart from D-Robotics demonstrated how pairing Ultralytics YOLO models with their compact RDK X5 board (a small embedded AI vision module) enables robots to run advanced vision models in real time. His live demo showed just how far robotics has come, evolving from lab experiments into practical, AI-powered systems.

Fig 3. Michael Hart highlighted how today’s AI-enabled robots depend on computer vision.

Similarly, Alexis Crowell and Steven Hunsche from Axelera AI emphasized the challenges and opportunities of deploying vision AI at the edge. Through live demos, they explained how Axelera AI’s Metis AI Processing Units (AIPUs) combine RISC-V and digital in-memory compute to deliver high performance at very low power. Packaged in familiar form factors like M.2 and PCIe, the platform’s hardware-software co-design makes scaling edge AI both practical and efficient.

And in another session, Merve Noyan from Hugging Face explored the rise of multimodal AI, where models combine vision with text, audio, and other inputs. She talked about use cases ranging from document analysis to embodied agents, stressing how open-source innovation is accelerating AI adoption.

Balancing technical progress with human values

While YV25 featured inspiring big-picture talks, it also included deeply practical sessions. Jiri Borovec from Lightning AI gave a hands-on walkthrough showing how to train and fine-tune Ultralytics YOLO models with PyTorch Lightning and multi-GPU support. 

He walked through code examples and highlighted how open-source tools, clear documentation, and flexible frameworks make it easier for developers to scale training, validate every stage, and adapt workflows to their own projects. It was a reminder of how important the community and accessible tooling are for real progress in computer vision.
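
Jiri's walkthrough used PyTorch Lightning directly; as a rough analogue, the Ultralytics API also exposes multi-GPU training out of the box. Here's a minimal sketch, with the dataset and device IDs as placeholders:

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint to fine-tune.
model = YOLO("yolo11n.pt")

# Train across two GPUs; the Ultralytics trainer handles the
# distributed (DDP) setup internally. "coco8.yaml" is a small
# sample dataset bundled with the package.
model.train(data="coco8.yaml", epochs=10, imgsz=640, device=[0, 1])

# Validate the fine-tuned model and report mAP50-95.
metrics = model.val()
print(metrics.box.map)
```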

On the other side of the spectrum, speakers urged the audience to think about AI’s broader role in society. In his keynote, Gerd Leonhard, futurist, humanist, and CEO of The Futures Agency, argued that “technology is morally neutral until we use it,” stressing that the real question isn’t just what AI can do, but what it should do. He warned against falling into traps like reductionism and truthlessness, and called for AI that truly serves humanity’s long-term interests.

Fig 4. Gerd Leonhard sharing his thoughts on building AI solutions while keeping them human-centric.

This focus on responsibility continued in a fireside chat with Carissa Véliz from the University of Oxford, who emphasized privacy and security. She pointed out that open-source communities are vital for checking and improving code, and that ethics and design are inseparable. Her message was clear: developers need to anticipate misuse and build systems that put human dignity and social well-being first.

Networking in London at YV25

Going a step beyond the talks and demos, YV25 also created space for people to connect. During the coffee breaks and lunch, attendees mingled, shared experiences, compared approaches, and sparked new collaborations.

For the Ultralytics team, it was also a great opportunity to meet in person. With members spread across the globe, moments like this help strengthen connections and celebrate progress together.

Fig 5. The Ultralytics team wrapping up an inspiring day at YOLO Vision 2025.

The day wrapped with an after-party, where participants had the chance to relax and continue networking. It was a moment to reflect, recharge, and look ahead to the next chapter of innovation in Vision AI.

Pushing the boundaries of Vision AI together

YOLO Vision 2025 was a celebration of ideas, innovation, and community. The launch of Ultralytics YOLO26 set the stage, followed by engaging talks on edge deployment and human-centric AI that highlighted the rapid progress of Vision AI and its growing impact on the world.

In addition to keynote sessions, the event brought people together. Researchers, developers, and enthusiasts shared experiences, sparked meaningful conversations, and explored new possibilities for the future. The event ended on a high note, with participants excited about the future of Ultralytics YOLO models and computer vision.

Ready to explore AI? Join our community and GitHub repository to learn more about AI and computer vision. Visit our solutions pages to explore more applications of computer vision in agriculture and AI in robotics. Check our licensing options and get started with computer vision today!
