Discover the importance of mean Average Precision (mAP) in evaluating object detection models for AI applications such as autonomous driving and healthcare.
Mean Average Precision (mAP) is a comprehensive metric widely used to evaluate the performance of computer vision models, specifically in tasks like object detection and instance segmentation. Unlike simple accuracy, which merely checks if an image is classified correctly, mAP assesses how well a model finds objects and how accurately it positions the bounding box around them. This makes it the primary benchmark for comparing state-of-the-art architectures like YOLO26 against previous generations. By summarizing the trade-off between precision and recall across all classes, mAP provides a single score that reflects a model's robustness in real-world scenarios.
To calculate mAP, it is first necessary to understand three underlying concepts that define detection quality: Intersection over Union (IoU), precision, and recall.
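Of these, IoU is the one specific to detection: it measures how much a predicted bounding box overlaps a ground-truth box, and a prediction only counts as a true positive when its IoU exceeds a chosen threshold. A minimal sketch (boxes represented as hypothetical `(x1, y1, x2, y2)` corner tuples):

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction shifted halfway across a 10x10 ground-truth box
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333333333333333
```

At an IoU threshold of 0.5 (the "50" in mAP@50), this prediction would be rejected as a false positive.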
The calculation begins by computing the Average Precision (AP) for each specific class (e.g., "person," "car," "dog"). This is done by finding the area under the Precision-Recall Curve, which plots precision against recall at various confidence thresholds. The "Mean" in Mean Average Precision simply refers to averaging these AP scores across all object categories in the dataset.
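The area-under-the-curve step can be sketched in a few lines. The example below uses the common all-point interpolation scheme (precision is made monotonically decreasing before the area is summed); the recall/precision values are hypothetical, as if read off a detector's ranked predictions:

```python
import numpy as np

def average_precision(recall, precision):
    """AP as the area under an interpolated precision-recall curve."""
    # Pad the curve so it spans recall 0 to 1
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([1.0], precision, [0.0]))
    # Interpolation: replace each precision with the max to its right
    p = np.maximum.accumulate(p[::-1])[::-1]
    # Sum rectangle areas wherever recall increases
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Hypothetical operating points for one class
recall = np.array([0.2, 0.4, 0.6, 0.8])
precision = np.array([1.0, 0.9, 0.7, 0.5])
print(average_precision(recall, precision))
```

Averaging such per-class AP values over every category yields the "mean" in mAP.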
Standard research benchmarks, such as the COCO dataset, frequently report two main variations: mAP@50, computed at a single IoU threshold of 0.5, and mAP@50-95, averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
It is important to distinguish mAP from Accuracy. Accuracy is suitable for image classification where the output is a single label for the whole image, but it fails in object detection because it does not account for the spatial position of the object or the background class. Similarly, while the F1-Score provides a harmonic mean of precision and recall at a single confidence threshold, mAP integrates performance across all confidence levels, offering a more holistic view of model stability.
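To make the contrast with F1 concrete, here is the harmonic-mean formula it refers to, evaluated at a single (hypothetical) operating point; mAP, by contrast, integrates over the whole curve of such points:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall at one confidence threshold."""
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# One operating point of a detector (illustrative values)
print(f1_score(0.8, 0.6))
```

Shifting the confidence threshold changes both inputs and hence the F1 value, which is why a single F1 number can hide behavior that the full precision-recall curve reveals.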
High mAP scores are critical in environments where safety and efficiency are paramount.
Modern frameworks simplify the calculation of these metrics during the validation phase. The following example demonstrates how to load a model and compute mAP using the ultralytics Python package.
from ultralytics import YOLO
# Load the YOLO26 model (recommended for new projects)
model = YOLO("yolo26n.pt")
# Validate the model on a dataset to compute mAP
# This runs inference and compares predictions to ground truth
metrics = model.val(data="coco8.yaml")
# Print mAP@50-95 (map) and mAP@50 (map50)
print(f"mAP@50-95: {metrics.box.map:.3f}")
print(f"mAP@50: {metrics.box.map50:.3f}")
Understanding and optimizing for mAP is crucial before model deployment. To streamline this process, the Ultralytics Platform offers automated tracking of mAP, loss curves, and other KPIs during training, allowing developers to visualize progress and select the best model checkpoint for production.