Empower Ultralytics YOLOv5 model training & deployment with Neural Magic's DeepSparse for GPU-class performance on CPUs. Achieve faster, scalable YOLOv5 deployments.
Want to accelerate the training and deployment of your YOLOv5 models? We’ve got you covered! Introducing our newest partner, Neural Magic. As Neural Magic provides software tools that emphasize peak model performance and workflow simplicity, it’s only natural that we’ve come together to offer a solution to make the YOLOv5 deployment process even better.
DeepSparse is Neural Magic’s CPU inference runtime, which takes advantage of sparsity and low-precision arithmetic within neural networks to offer exceptional performance on commodity hardware. For instance, compared to the ONNX Runtime baseline, DeepSparse offers a 5.8x speed-up for YOLOv5s running on the same machine!
For the first time, your deep learning workloads can meet the performance demands of production without the complexity and costs of hardware accelerators. Put simply, DeepSparse gives you the performance of GPUs and the simplicity of software:
Flexible Deployments: Run consistently across cloud, data center, and edge with any hardware provider
Infinite Scalability: Scale out with standard Kubernetes, vertically to 100s of cores, or fully abstracted with serverless
Easy Integration: Use clean APIs for integrating your model into an application and monitoring it in production
Achieve GPU-Class Performance on Commodity CPUs
DeepSparse takes advantage of model sparsity to gain its performance speedup.
Sparsification through pruning and quantization allows order-of-magnitude reductions in the size and compute needed to execute a network while maintaining high accuracy. DeepSparse is sparsity-aware, skipping the multiply-adds by zero and shrinking the amount of compute in a forward pass. Since sparse computation is memory-bound, DeepSparse executes the network depth-wise, breaking the problem into Tensor Columns, which are vertical stripes of computation that fit in cache.
Sparse networks with compressed computation, executed depth-wise in cache, allow DeepSparse to deliver GPU-class performance on CPUs!
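As a toy illustration of why skipping multiply-adds by zero pays off (plain Python, not DeepSparse internals; the matrix size and 90% sparsity level are arbitrary):

```python
import random

random.seed(0)
n = 64
# toy weight matrix with ~90% of entries pruned to zero
weights = [[random.gauss(0, 1) if random.random() > 0.9 else 0.0
            for _ in range(n)] for _ in range(n)]

# a naive dense kernel performs one multiply-add per entry
dense_macs = n * n
# a sparsity-aware kernel skips the zero entries entirely
sparse_macs = sum(1 for row in weights for w in row if w != 0.0)

print(f"dense MACs: {dense_macs}, sparse MACs: {sparse_macs}")
```

At ~90% sparsity, roughly an order of magnitude of the multiply-adds disappear, which is the compute reduction DeepSparse exploits.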
Create A Sparse Version of YOLOv5 Trained on Custom Data
Neural Magic's open-source model repository, SparseZoo, contains pre-sparsified checkpoints of each YOLOv5 model. Using SparseML, which is integrated with Ultralytics, you can fine-tune a sparse checkpoint onto your data with a single CLI command.
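The fine-tuning step described above is one command. A sketch, assuming SparseML is installed; the SparseZoo stub matches the pruned-quantized YOLOv5s checkpoint, while the dataset and hyperparameter files are illustrative placeholders to swap for your own:

```shell
# fine-tune a pre-sparsified YOLOv5s checkpoint from SparseZoo onto your data
# (VOC.yaml and hyp.finetune.yaml are example configs -- substitute your own)
sparseml.yolov5.train \
  --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94 \
  --data VOC.yaml \
  --hyp hyps/hyp.finetune.yaml \
  --cfg yolov5s.yaml
```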
Pipelines wrap pre-processing and output post-processing around the runtime, providing a clean interface for adding DeepSparse to an application. The DeepSparse-Ultralytics integration includes an out-of-the-box Pipeline that accepts raw images and outputs the bounding boxes.
Create a Pipeline and run inference:
from deepsparse import Pipeline

# list of images in local filesystem
images = ["basilica.jpg"]

# create a YOLO Pipeline from a SparseZoo checkpoint (pruned-quantized YOLOv5s)
model_stub = "zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94"
yolo_pipeline = Pipeline.create(task="yolo", model_path=model_stub)

# run inference on images, receive bounding boxes + classes
pipeline_outputs = yolo_pipeline(images=images, iou_thres=0.6, conf_thres=0.001)
print(pipeline_outputs)
If you are running in the cloud, you may get an error that OpenCV cannot find libGL.so.1. Running the following on Ubuntu installs it:
apt-get install libgl1-mesa-glx
HTTP Server
DeepSparse Server runs on top of the popular FastAPI web framework and Uvicorn web server. With just a single CLI command, you can easily set up a model service endpoint with DeepSparse. The Server supports any Pipeline from DeepSparse, including object detection with YOLOv5, enabling you to send raw images to the endpoint and receive the bounding boxes.
Spin up the Server with the pruned-quantized YOLOv5s:
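A sketch of that command, assuming DeepSparse is installed; the SparseZoo stub points at the pruned-quantized YOLOv5s checkpoint:

```shell
# start a model service endpoint for YOLOv5 object detection
deepsparse.server \
  --task yolo \
  --model_path zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94
```

By default the Server listens on port 5543, which the client request below assumes.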
An example request, using Python's requests package:
import requests, json

# list of images for inference (local files on client side)
path = ['basilica.jpg']
files = [('request', open(img, 'rb')) for img in path]

# send request over HTTP to /predict/from_files endpoint
url = 'http://0.0.0.0:5543/predict/from_files'
resp = requests.post(url=url, files=files)

# response is returned in JSON
annotations = json.loads(resp.text)  # dictionary of annotation results
bounding_boxes = annotations["boxes"]
labels = annotations["labels"]
Annotate CLI
You can also use the annotate command to have the engine save an annotated photo on disk. Try --source 0 to annotate your live webcam feed!
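A sketch of the annotate command, assuming DeepSparse is installed; the SparseZoo stub and image filename mirror the earlier examples:

```shell
# run inference and save an annotated copy of the image to disk
# (use --source 0 to annotate a live webcam feed instead)
deepsparse.object_detection.annotate \
  --model_filepath zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned_quant-aggressive_94 \
  --source basilica.jpg
```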
At Ultralytics, we commercially partner with other startups to help us fund the research and development of our awesome open-source tools, like YOLOv5, to keep them free for everybody. This article may contain affiliate links to those partners.