
From Dubai with insights: Key takeaways from the GDG MENA-T Summit 2025

Onuralp Sezer

4 min read

October 10, 2025

Get key takeaways from the GDG MENA-T Summit 2025 in Dubai. This deep dive covers Google's AI agents, Firebase Studio, Gemini, and real-world computer vision insights for the Ultralytics YOLO community.

A GDG Summit is a major annual conference organized by Google Developer Groups (GDGs) for developers, tech enthusiasts, and students. It brings together the local and regional developer community, Google Developer Experts (GDEs), and GDG organizers to learn about Google technologies, share knowledge, and network with peers and experts. This year, the energy at the GDG MENA-T Summit 2025 in Dubai was electric.

From the moment I arrived at the beautiful Uptown Dubai Hotel, with its stunning views of the city, I knew this event would be special. As a GDG organizer from Türkiye and an Ultralytics representative, I had the unique opportunity to wear two hats: one for my local developer community in Türkiye, and one for the global computer vision community that our company serves. I was eager to connect, share, and dive into the future of technology. What I found were conversations that went deeper than just surface-level trends, exploring the very fabric of how we will build and deploy software tomorrow. From keynotes to demos and networking, let’s take a look at some key highlights from this event!

Fig 1. Ultralytics' Senior Machine Learning Engineer, Onuralp Sezer, attending the GDG MENA-T Summit 2025 in Dubai with various GDG Turkey organizers. Image by author.

Three major themes left an impression on me: the rapid evolution of interconnected AI agents, the dawn of a new, AI-accelerated development workflow, and the critical importance of optimizing AI for real-world, real-time performance.

Unpacking Agent Protocols: From theory to cloud deployment

One of the most compelling sessions was Mete Atamel's deep dive into Agent Protocols. For years, we've talked about AI agents in the abstract, but this session grounded the concept in concrete, actionable engineering. Mete broke down the framework that will allow agents to become truly collaborative and useful:

Fig 2. Mete Atamel explaining A2A usage in the Agent Development Kit.

MCP (Model Context Protocol): Think of this as the "universal translator" for an AI agent. It's the foundational layer that allows an agent to reliably connect with external tools, APIs, and data sources. Without a standard like MCP, every integration would be a custom, brittle job. With it, agents can plug into the digital world with confidence and consistency.

A2A (Agent-to-Agent Protocol): If MCP is how an agent talks to tools, A2A is how agents talk to *each other*. This protocol enables agents, even those running on entirely different platforms, to discover one another, collaborate, delegate tasks, and coordinate complex workflows. This is the framework for a future where a specialized agent could hire another agent to handle a specific sub-task, creating a dynamic, autonomous workforce.

ADK (Agent Development Kit): This is the toolkit that brings it all together. The ADK provides the structure, libraries, and patterns to assemble robust agents using MCP and A2A. It’s the bridge from a cool concept to a production-ready system.
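To make the A2A idea above concrete, here is a toy sketch in plain Python. This is not the real A2A protocol or the ADK APIs; the `Agent` class, the skill names, and the registry are invented for illustration. Real A2A agents exchange structured messages over the network, but the core pattern is the same: discover a peer that advertises the capability you lack, then delegate.

```python
# Toy illustration of agent-to-agent delegation. The registry stands in
# for A2A's discovery mechanism; real A2A agents exchange structured
# messages over HTTP, not direct Python calls.

class Agent:
    def __init__(self, name, skills, registry):
        self.name = name
        self.skills = skills          # task types this agent can handle
        self.registry = registry
        registry[name] = self

    def handle(self, task_type, payload):
        if task_type in self.skills:
            return f"{self.name} completed '{task_type}' on {payload!r}"
        # Discover a peer that advertises the needed skill and delegate.
        for peer in self.registry.values():
            if peer is not self and task_type in peer.skills:
                return peer.handle(task_type, payload)
        raise LookupError(f"no agent can handle '{task_type}'")


registry = {}
planner = Agent("planner", {"plan"}, registry)
translator = Agent("translator", {"translate"}, registry)

# The planner cannot translate, so it delegates to the translator.
result = planner.handle("translate", "hello")
print(result)  # translator completed 'translate' on 'hello'
```

The point of the sketch is the shape of the collaboration, not the mechanics: each agent only needs to know its own skills and how to query the shared discovery layer, which is exactly what makes a dynamic "workforce" of specialized agents possible.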

The most exciting part was the final step: deployment. Mete demonstrated how an agent built with the ADK can be containerized and deployed effortlessly on Google Cloud Run. It showed a clear, scalable path from building an intelligent agent on your local machine to running it in a managed, serverless environment, ready to handle real-world demand.

A new era of development: AI as your co-pilot

The summit also made it clear that AI is no longer just a feature we add to our apps; it's becoming a core part of the development process itself. The showcase of Google's new suite of tools felt like a glimpse into a radically more efficient future.

A key highlight was the introduction of Firebase Studio, an ambitious, agentic, cloud-based environment. The demo was stunning: starting with a simple natural language prompt like "Build me a photo-sharing app with user logins," Firebase Studio got to work. It scaffolded the entire project, set up the necessary Cloud Firestore schemas, configured Firebase Authentication rules, and generated boilerplate frontend code. It’s a tool designed to eliminate the tedious setup that consumes so much of a developer's time, allowing us to focus immediately on the unique logic and user experience of our application.

Fig 3. Vikas Anand explaining Firebase Studio usage and integrations. Image by author.

Alongside this was Jules, Google's asynchronous AI coding agent. Jules is different from inline tools like Copilot. You can delegate a complete task to it: "Refactor this module to be more efficient," "Add unit tests for this service," or "Update all the dependencies in this repo and fix any breaking changes." Jules then works on it in the background and, when finished, submits a pull request for your review. This paradigm shifts the developer's role from a line-by-line writer of code to a high-level architect and reviewer.

Underpinning these revolutionary tools is the powerful next generation of Google's models, accessible through the Google One AI Plans. With enhanced reasoning, multimodal capabilities, and massive context windows, these models provide the "brains" that make agentic tools like Jules possible. Firebase Studio, on the other hand, is free, but if you want higher usage quotas you need to subscribe to the Google Developer Program.

From inference to action: Optimizing real-time AI with NVIDIA

Our passion lies in computer vision, so I was thrilled to attend the "Building Real-Time AI Systems" talk by Katja Sirazitdinova, Senior Developer at NVIDIA. This session was a fantastic opportunity to connect my role as a Senior Machine Learning Engineer at Ultralytics directly with the cutting edge of hardware acceleration, and I got to ask specific questions about enhancing the export pipelines for our widely used YOLO models.

Katja shared invaluable, practical insights on wringing every last drop of performance out of a model. We dove deep into strategies like model quantization (reducing model size while minimizing accuracy loss), ensuring export compatibility across different hardware, and leveraging NVIDIA's powerful toolchains like TensorRT to dramatically improve throughput and reduce latency. I walked away with a notebook full of concrete ideas to bring back to the Ultralytics team: ideas that will help our entire community streamline deployment, reduce friction, and make even better use of GPU acceleration for demanding, real-time applications like robotics and video analytics.
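To show what quantization actually does under the hood, here is a toy example of symmetric int8 post-training quantization in plain Python. This is not NVIDIA's or Ultralytics' actual pipeline (real flows like TensorRT int8 use per-channel scales and calibration data); it just illustrates the core idea: map float weights to 8-bit integers with a per-tensor scale, cutting storage roughly 4x versus float32 at the cost of a small rounding error.

```python
# Toy symmetric int8 quantization of a weight tensor (plain Python).
# Real pipelines (e.g. TensorRT int8 calibration) are far more involved.

def quantize(weights):
    """Map floats to int8 range [-127, 127] with one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [qi * scale for qi in q]

weights = [0.51, -1.27, 0.03, 0.89]
q, scale = quantize(weights)
recovered = dequantize(q, scale)

# The round trip introduces only a small quantization error,
# bounded by half the scale step.
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q)        # the int8 representation
print(max_err)  # small, at most ~scale / 2
```

The accuracy-versus-size trade-off discussed in the talk falls directly out of that `scale` term: a wider weight range means a coarser step, and hence a larger worst-case rounding error per weight.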

Fig 4. Ultralytics' Senior Machine Learning Engineer, Onuralp Sezer and NVIDIA Senior Developer Katja Sirazitdinova. Image by author.

The Intersection of Community and Innovation

Beyond the various keynotes and demos, the summit was a powerful reminder of why open-source is such a force in the tech world: community. The "hallway track" was just as valuable as the talks. I had countless conversations with developers, researchers, and entrepreneurs who use our tools every day. They asked thoughtful, practical questions about the `Ultralytics` Python package, from optimizing YOLO performance on edge devices to creative, real-world use cases I had never even considered.

Being able to provide on-the-spot support, brainstorm solutions, and gather direct, unfiltered feedback from our users was incredibly rewarding. It reinforced how vital the Ultralytics community is to our mission. Every feature request, every bug report, and every success story shared strengthens our ecosystem. These interactions are what drive true innovation.

Building the future, together

The GDG MENA-T Summit was more than just a conference; it was a glimpse into the future. A future where intelligent agents collaborate on the cloud, AI-powered tools amplify our own abilities as developers, and our models run faster and more efficiently than ever before. Most importantly, it's a future where open-source communities and enterprise innovation don't just coexist; they actively drive each other forward.

Fig 5. The whole GDG and Googlers group photo at the closing of the event. Image by GDG MENAT photographers.

A huge thank you to the organizers and the Google Developer Program teams, especially Ramesh Chander, Nour Bouayadi, Alaa Shahin, and Beyza Sunay Güler for putting together such an inspiring, enriching, and technically profound event. The momentum from Dubai is strong, and I can't wait to see what we all build next.
