Ultralytics Platform
Take your trained models from browser testing to production endpoints in just a few clicks, with auto-scaling, real-time monitoring, and 17+ export formats. The end-to-end solution for deploying real-world use cases.

43+
Deployment regions
17+
Export formats
500+
Active deployments





Dedicated endpoints scale up automatically to handle traffic spikes and scale down to zero when idle, so you're never paying for compute you're not using.
Scale to zero by default. No cost when your endpoint isn't receiving requests.
No rate limits. Unlike shared inference, dedicated endpoints have no throughput caps; they are limited only by your endpoint's resources.
Configurable resources. Choose CPU cores (1–8) and memory (1–32 GB) to match your model's requirements and traffic patterns.
Ultralytics Platform supports both cloud and edge deployment. All Ultralytics YOLO models are natively optimized to run efficiently across environments, delivering reliable performance even on hardware with limited compute resources.


Get full real-time visibility into how your models perform. Once your models are live, the deployments dashboard gives you a centralized overview of every running endpoint, with the metrics you need to keep your deployments running reliably.
Request volume. Total requests across all endpoints over the last 24 hours.
P95 latency. 95th percentile response time to track real-world performance.
Error rates. Highlighted alerts when error rates exceed 5%, with severity-filtered logs to help you diagnose issues fast.
Health checks. Live status indicators with automatic retry when endpoints are unhealthy. Response latency is displayed alongside each check.
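To make the P95 metric above concrete, here is a minimal nearest-rank percentile calculation. The latency samples are invented for the sketch, and the platform may compute percentiles differently; this only illustrates what "95th percentile response time" means.

```python
# Illustrative response times in milliseconds (made-up sample data).
latencies_ms = [42, 38, 51, 47, 120, 44, 39, 55, 48, 300,
                41, 46, 50, 43, 52, 49, 45, 40, 53, 47]

def percentile(values, pct):
    """Nearest-rank percentile: the smallest sample such that at least
    pct percent of all samples are less than or equal to it."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[rank - 1]

p95 = percentile(latencies_ms, 95)  # 19 of 20 samples fall at or below this value
```

Note how a single 300 ms outlier barely moves the P95, while it would dominate a plain average; that is why P95 is the standard metric for tracking tail latency.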
Every deployed endpoint comes with auto-generated code examples in Python, JavaScript, and cURL, pre-populated with your actual endpoint URL and API key. Copy, paste, and start sending inference requests from any application.
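A Python request along the lines of the generated examples might look like the sketch below. The endpoint URL, API key, and request shape (multipart image upload with Bearer auth) are placeholders for illustration, not the platform's documented API; the auto-generated snippets in your dashboard are pre-filled with the real values.

```python
import requests

ENDPOINT_URL = "https://example-endpoint.invalid/predict"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder key

def predict(image_path: str) -> dict:
    """Send an image to the deployed endpoint and return its JSON predictions."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
            timeout=30,
        )
    response.raise_for_status()  # surface 4xx/5xx errors instead of silent failure
    return response.json()
```

The same call translates directly to JavaScript (`fetch` with `FormData`) or cURL (`-H "Authorization: Bearer ..." -F "file=@image.jpg"`).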

Yes. Each model can be deployed to multiple regions simultaneously. Your plan determines the total number of endpoints available: 3 on Free, 10 on Pro, and unlimited on Enterprise. This allows you to serve users globally with low-latency endpoints in each region.
Dedicated endpoints are billed based on CPU, memory, and request volume. With scale-to-zero enabled by default, you only pay for active inference time; there's no cost when your endpoint isn't receiving requests. Shared inference is included with your platform plan.
Shared inference runs on a multi-tenant service across 3 regions and is rate-limited to 20 requests per minute. It's best for development and quick testing. Dedicated endpoints are single-tenant services deployed to any of 43 regions with no rate limits, consistent latency, and configurable resources. They're built for scalable production workloads.
Dedicated endpoint deployment typically takes one to two minutes. This includes container provisioning, startup, and an initial health check to validate the service is ready. Once the endpoint is ready, it begins accepting inference requests immediately.
Model deployment is the process of making a trained computer vision model available to receive and process real-world data. Once deployed, computer vision applications can send images and video frames to the model via API and receive predictions, enabling everything from automated quality inspection to real-time object detection in production systems. On Ultralytics Platform, deployment is integrated directly into the end-to-end training workflow. Once your model is trained, you can test it in the browser, deploy it to a dedicated endpoint in any of 43 global regions, and monitor its performance, all from the same workspace.
Take your trained models to production across 43 global regions with auto-scaling and real-time monitoring.