Discover how Ultralytics partners with Comet for YOLOv5 model optimization: real-time tracking, streamlined collaboration, and enhanced reproducibility.
At Ultralytics we commercially partner with other startups to help us fund the research and development of our awesome open-source tools, like YOLOv5, to keep them free for everybody. This article may contain affiliate links to those partners.
Our newest partner, Comet, builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and deep learning models.
Comet is a powerful tool for tracking your models, datasets, and metrics. It even logs your system and environment variables to ensure reproducibility and smooth debugging for each and every run. It’s like having a virtual assistant that magically knows what notes to keep. Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with Comet Custom Panels!
Further, Comet ensures you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!
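To make this concrete, here is a minimal, illustrative sketch of the Comet Python SDK that the YOLOv5 integration wraps for you. The parameter names and metric values below are placeholders, not exactly what YOLOv5 logs:

# Minimal sketch of logging to Comet with the Python SDK (illustrative values only)
from comet_ml import Experiment

# Picks up COMET_API_KEY / COMET_PROJECT_NAME from your environment if not passed explicitly
experiment = Experiment(project_name="yolov5")

experiment.log_parameters({"lr0": 0.01, "batch_size": 16, "img_size": 640})
for epoch in range(3):
    # Placeholder metric values standing in for real training results
    experiment.log_metric("train/box_loss", 0.1 / (epoch + 1), epoch=epoch)
    experiment.log_metric("metrics/mAP_0.5", 0.4 + 0.1 * epoch, epoch=epoch)

experiment.end()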
YOLOv5 is a great starting point for your computer vision journey. To improve your model’s performance and get it production-ready, you’ll need to log the results in an experiment tracking tool like Comet.
The Comet and YOLOv5 integration offers three main features:
- Save YOLOv5 model checkpoints as you train
- Resume interrupted training runs
- Interactively visualize and debug model predictions
This guide will cover how to use YOLOv5 with Comet.
So, ready to track your experiments in real time? Let’s get started!
pip install comet_ml
There are two ways to configure Comet with YOLOv5.
You can either set your credentials through environment variables or create a .comet.config file in your working directory and set your credentials there.
export COMET_API_KEY=<Your Comet API Key>
export COMET_PROJECT_NAME=<Your Comet Project Name> # This will default to 'yolov5'
[comet]
api_key=<Your Comet API Key>
project_name=<Your Comet Project Name> # This will default to 'yolov5'
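If you prefer, the SDK also ships an interactive helper that prompts for your API key and saves your credentials for you; this is optional and equivalent to the two approaches above:

# Optional: let comet_ml prompt for your API key and save your credentials
import comet_ml

comet_ml.init(project_name="yolov5")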
# Train YOLOv5s on COCO128 for 5 epochs
python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt
That's it!
Comet will automatically log your hyperparameters, command line arguments, training, and validation metrics. You can visualize and analyze your runs in the Comet UI.
Check out an example of a completed run here.
Or better yet, try it out yourself in this Colab Notebook.
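Beyond the UI, Comet also exposes a Python API that can pull metrics back out of a finished run, which is handy for custom reports. A hedged sketch, where the workspace and experiment names are placeholders and the metric key depends on what your run logged:

# Sketch: read logged metrics from a completed run via Comet's Python API
from comet_ml.api import API

api = API()  # uses COMET_API_KEY from your environment or config file
exp = api.get("your-workspace", "yolov5", "your-experiment-name")  # placeholder names
for point in exp.get_metrics("metrics/mAP_0.5"):  # returns a list of dicts per logged step
    print(point.get("step"), point.get("metricValue"))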
By default, Comet will log the following items:
- Training and validation metrics (box loss, object loss, classification loss, mAP, precision, and recall)
- Model hyperparameters and command line options
- An interactive confusion matrix of predictions on the validation data
- Validation images with their ground truth labels and predicted bounding boxes
Comet can be configured to log additional data through command line flags passed to the training script or environment variables.
export COMET_MODE=online # Set whether to run Comet in 'online' or 'offline' mode. Defaults to online
export COMET_MODEL_NAME=<your model name> # Set the name for the saved model. Defaults to yolov5
export COMET_LOG_CONFUSION_MATRIX=false # Set to disable logging a Comet Confusion Matrix. Defaults to true
export COMET_MAX_IMAGE_UPLOADS=<number of image predictions> # Controls how many total image predictions to log to Comet. Defaults to 100.
export COMET_LOG_PER_CLASS_METRICS=true # Set to log evaluation metrics for each detected class at the end of training. Defaults to false
export COMET_DEFAULT_CHECKPOINT_FILENAME=<your checkpoint filename> # Set this if you would like to resume training from a different checkpoint. Defaults to 'last.pt'
export COMET_LOG_BATCH_LEVEL_METRICS=true # Set this if you would like to log training metrics at the batch level. Defaults to false.
export COMET_LOG_PREDICTIONS=true # Set this to false to disable logging model predictions
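If you launch training from Python rather than a shell, the same options can be set programmatically before invoking train.py. A small sketch, with example values only:

# Sketch: set Comet options from a Python launcher, then run train.py as a subprocess
import os
import subprocess

os.environ["COMET_MODE"] = "online"
os.environ["COMET_MAX_IMAGE_UPLOADS"] = "50"
os.environ["COMET_LOG_PER_CLASS_METRICS"] = "true"

subprocess.run(
    ["python", "train.py", "--img", "640", "--batch", "16",
     "--epochs", "5", "--data", "coco128.yaml", "--weights", "yolov5s.pt"],
    check=True,
)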
Logging Models to Comet is disabled by default. To enable it, pass the save-period argument to the training script. This will save the logged checkpoints to Comet based on the interval value provided by save-period.
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--save-period 1
By default, model predictions (images, ground truth labels, and bounding boxes) will be logged to Comet. You can control how often predictions and their associated images are logged by passing the bbox_interval command line argument, which corresponds to logging every Nth batch of data per epoch. Predictions can be visualized using Comet's Object Detection Custom Panel. In the example below, we are logging every 2nd batch of data for each epoch.
Note: The YOLOv5 validation data loader will default to a batch size of 32, so you will have to set the logging frequency accordingly.
Here is an example project using the Panel.
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--bbox_interval 2
When logging predictions from YOLOv5, Comet will log the images associated with each set of predictions. By default, a maximum of 100 validation images are logged. You can increase or decrease this number using the COMET_MAX_IMAGE_UPLOADS environment variable.
env COMET_MAX_IMAGE_UPLOADS=200 python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--bbox_interval 1
Use the COMET_LOG_PER_CLASS_METRICS environment variable to log mAP, precision, recall, and f1 for each class.
env COMET_LOG_PER_CLASS_METRICS=true python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt
If you would like to store your data using Comet Artifacts, you can do so using the upload_dataset flag.
The dataset must be organized in the way described in the YOLOv5 documentation, and the dataset config yaml file must follow the same format as the coco128.yaml file.
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data coco128.yaml \
--weights yolov5s.pt \
--upload_dataset
You can find the uploaded dataset in the Artifacts tab in your Comet Workspace.
You can preview the data directly in the Comet UI.
Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset yaml file.
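This is also something you can do outside of a YOLOv5 run: the standalone SDK lets you log a dataset as a versioned Artifact yourself. A hedged sketch, where the artifact name, metadata, and local path are placeholders:

# Sketch: log a local dataset directory as a versioned Comet Artifact with metadata
from comet_ml import Artifact, Experiment

experiment = Experiment(project_name="yolov5")
artifact = Artifact(name="my-dataset", artifact_type="dataset",
                    metadata={"classes": 80, "source": "coco128"})  # placeholder metadata
artifact.add("datasets/coco128")  # add a local file or directory
experiment.log_artifact(artifact)
experiment.end()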
If you want to use a dataset from Comet Artifacts, set the path variable in your dataset yaml file to point to the following Artifact resource URL.
# contents of artifact.yaml file
path: "comet://<workspace name>/<artifact name>:<artifact version or alias>"
Then pass this file to your training script in the following way:
python train.py \
--img 640 \
--batch 16 \
--epochs 5 \
--data artifact.yaml \
--weights yolov5s.pt
Artifacts also allow you to track the lineage of data as it flows through your Experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset.
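If you want to pull an uploaded dataset back down outside of training, for example to inspect it locally, the SDK can also fetch an Artifact directly. A hedged sketch with placeholder names, version, and paths:

# Sketch: download a logged dataset Artifact for local inspection
from comet_ml import Experiment

experiment = Experiment(project_name="yolov5")
logged_artifact = experiment.get_artifact(
    "my-dataset", workspace="your-workspace", version_or_alias="latest"  # placeholders
)
logged_artifact.download("datasets/coco128-from-comet")
experiment.end()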
If your training run is interrupted for any reason, e.g. a disrupted internet connection, you can resume the run using the resume flag and the Comet Run Path.
The Run Path has the following format: comet://<your workspace name>/<your project name>/<experiment id>.
This will restore the run to its state before the interruption, which includes restoring the model from a checkpoint, restoring all hyperparameters and training arguments, and downloading Comet dataset Artifacts if they were used in the original run. The resumed run will continue logging to the existing Experiment in the Comet UI.
python train.py \
--resume "comet://<your run path>"
YOLOv5 is also integrated with Comet's Optimizer, making it simple to visualize hyperparameter sweeps in the Comet UI.
To configure the Comet Optimizer, you will have to create a JSON file with the information about the sweep.
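The exact schema is defined by Comet's Optimizer, so treat the following as an illustrative sketch only: it writes a sweep config with example algorithm, metric, and parameter ranges, and you should compare it against the bundled optimizer_config.json mentioned below for the fields YOLOv5 expects.

# Sketch: write an illustrative Comet Optimizer sweep config (example values only)
import json

sweep_config = {
    "algorithm": "random",                    # e.g. "random", "grid", or "bayes"
    "spec": {
        "objective": "maximize",
        "metric": "metrics/mAP_0.5",          # assumed metric name, verify against your runs
        "maxCombo": 10,
    },
    "parameters": {
        "lr0": {"type": "float", "min": 1e-4, "max": 1e-2, "scalingType": "loguniform"},
        "momentum": {"type": "float", "min": 0.85, "max": 0.98, "scalingType": "uniform"},
    },
    "name": "yolov5-sweep",
    "trials": 1,
}

with open("sweep_config.json", "w") as f:  # hypothetical filename
    json.dump(sweep_config, f, indent=2)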
An example configuration has been provided in utils/loggers/comet/optimizer_config.json. Once your configuration file is ready, run the sweep with the hpo.py script:
python utils/loggers/comet/hpo.py \
--comet_optimizer_config "utils/loggers/comet/optimizer_config.json"
The hpo.py script accepts the same arguments as train.py. If you wish to pass additional arguments to your sweep, simply add them after the script.
python utils/loggers/comet/hpo.py \
--comet_optimizer_config "utils/loggers/comet/optimizer_config.json" \
--save-period 1 \
--bbox_interval 1
To run the sweep with multiple workers in parallel, use the comet optimizer command:
comet optimizer -j <number of workers> utils/loggers/comet/hpo.py \
utils/loggers/comet/optimizer_config.json
Comet provides many ways to visualize the results of your sweep. Take a look at a project with a completed sweep here.
Begin using our integration with Comet to manage, visualize, and optimize your YOLOv5 models—from training runs to production monitoring.
And, of course, join the Ultralytics Community – a place to ask questions and share tips about YOLOv5 training, validation, and deployment.