We strive to make machine learning deployment simpler, better, faster!
The wait is over! With TensorRT, Edge TPU and OpenVINO support, models retrained at --batch-size 128, and a new default one-cycle linear LR scheduler, we present YOLOv5 v6.1 - simpler, faster, better, stronger!
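The new default linear LR schedule decays the learning-rate multiplier from 1.0 at the start of training down to a final fraction of the initial rate. A minimal sketch of that idea (the names `epochs` and `lrf` are illustrative hyperparameters, not necessarily the exact ones used in the training code):

```python
epochs = 300   # total training epochs (illustrative value)
lrf = 0.01     # final LR expressed as a fraction of the initial LR

def linear_lr(epoch):
    """Linearly decay the LR multiplier from 1.0 down to lrf over training."""
    return (1 - epoch / epochs) * (1.0 - lrf) + lrf
```

In practice a multiplier like this would be handed to `torch.optim.lr_scheduler.LambdaLR`, which scales the optimizer's base learning rate by `linear_lr(epoch)` each epoch.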
Ever since our last release in October 2021, we've been working on improving your favorite YOLO Vision AI architecture. From bug fixes to new features, these are the most important enhancements in the latest YOLOv5 v6.1 release:
YOLOv5 now officially supports 11 different formats, not just for export but also for inference (via both detect.py and PyTorch Hub) and for validation, so you can profile mAP and speed after export. See the full list of supported formats in the latest YOLOv5 for yourself:
✅ PyTorch
✅ TorchScript
✅ ONNX
✅ OpenVINO
✅ TensorRT
✅ CoreML
✅ TensorFlow SavedModel
✅ TensorFlow GraphDef
✅ TensorFlow Lite
✅ TensorFlow Edge TPU
✅ TensorFlow.js
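The export-then-run workflow can be sketched with the repository's own scripts (weights and dataset paths below are illustrative examples):

```shell
# Export a pretrained checkpoint to ONNX, one of the supported formats
python export.py --weights yolov5s.pt --include onnx

# Run inference directly on the exported model
python detect.py --weights yolov5s.onnx --source data/images

# Validate the exported model to profile its mAP and speed
python val.py --weights yolov5s.onnx --data coco128.yaml
```

The same pattern applies to the other formats: detect.py and val.py pick the backend from the weights file's suffix.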
Inspired by the Olympic spirit, we at Ultralytics believe that the most important thing is not to win but to take part. We acknowledge that our YOLOv5 family has played an enormous part in both our triumphs and struggles. This release has been brought to you by 271 PRs from 48 new contributors. Our shared vision of making AI accessible to everyone has brought us together to create the most accurate ML models.
This month our Ultralytics/YOLOv5 repository surpassed Joseph Redmon's pjreddie/darknet YOLOv3 in total GitHub stars, now counting over 22.4k. We are humbled by the opportunities ahead in the Vision AI space for YOLOv5 and beyond. It is our pleasure to carry the You Only Look Once legacy forward.
Visit our YOLOv5 open-source GitHub repository to find more details about this release and dive into YOLO object detection.
Find out how you can do all the YOLO magic with no code! Ultralytics HUB will get you into Computer Vision from scratch with just a few clicks!
Begin your journey with the future of machine learning