
Mean Average Precision (mAP)

Learn why Mean Average Precision (mAP) matters when evaluating object detection models for AI applications such as autonomous driving and healthcare.

Mean Average Precision (mAP) is a comprehensive metric widely used to evaluate the performance of computer vision models, specifically in tasks like object detection and instance segmentation. Unlike simple accuracy, which merely checks if an image is classified correctly, mAP assesses how well a model finds objects and how accurately it positions the bounding box around them. This makes it the primary benchmark for comparing state-of-the-art architectures like YOLO26 against previous generations. By summarizing the trade-off between precision and recall across all classes, mAP provides a single score that reflects a model's robustness in real-world scenarios.

Components of mAP

To calculate mAP, it is necessary to first understand three underlying concepts that define detection quality:

  • Intersection over Union (IoU): This measures the spatial overlap between the predicted box and the ground truth annotation. It is a ratio ranging from 0 to 1. A prediction is often considered a "True Positive" only if the IoU exceeds a specific threshold, such as 0.5 or 0.75.
  • Precision: This metric answers, "Of all the objects the model claimed to detect, what fraction were actually correct?" High precision means the model produces very few false positives.
  • Recall: This metric asks, "Of all the objects that actually exist in the image, what fraction did the model find?" High recall indicates the model avoids false negatives and rarely misses an object.
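IoU, the first of these components, is simple enough to compute by hand. The following minimal sketch (using the common `(x1, y1, x2, y2)` corner format; the function name and box values are illustrative, not part of any library API) shows the ratio of intersection area to union area:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so non-overlapping boxes yield no intersection
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction offset from the ground truth overlaps only partially,
# so it would fail a 0.5 IoU threshold
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143
```

A perfectly aligned prediction scores 1.0, while disjoint boxes score 0.0; the threshold (0.5, 0.75, etc.) decides where along that scale a detection counts as a True Positive.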

Calculation Methodology

The calculation begins by computing the Average Precision (AP) for each specific class (e.g., "person," "car," "dog"). This is done by finding the area under the Precision-Recall Curve, which plots precision against recall at various confidence thresholds. The "Mean" in Mean Average Precision simply refers to averaging these AP scores across all categories in the dataset.

Standard research benchmarks, such as the COCO dataset, frequently report two main variations:

  1. mAP@50: This considers a detection correct if the IoU is at least 0.50. It is a lenient metric.
  2. mAP@50-95: This is the average of mAP calculated at IoU thresholds from 0.50 to 0.95 in steps of 0.05. This rigorous metric rewards models that achieve high localization accuracy.
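The methodology above can be sketched in a few lines. The `average_precision` helper below integrates an interpolated precision-recall curve (a common convention, shown here as an illustration rather than the exact COCO evaluator), and the per-class AP values used afterward are hypothetical numbers chosen only to demonstrate the "mean" step:

```python
import numpy as np

def average_precision(recalls, precisions):
    # Add sentinel endpoints, then make precision monotonically
    # non-increasing (standard interpolation) before integrating.
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([1.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]
    # Sum precision * recall-step over points where recall changes
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Hypothetical per-class AP scores at a single IoU threshold;
# the "mean" in mAP simply averages them across classes
ap_scores = {"person": 0.82, "car": 0.74, "dog": 0.66}
map_score = sum(ap_scores.values()) / len(ap_scores)
print(f"mAP: {map_score:.3f}")  # 0.740
```

For mAP@50-95, this whole procedure is repeated at each IoU threshold from 0.50 to 0.95 and the resulting mAP values are averaged.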

mAP vs. Related Metrics

It is important to distinguish mAP from Accuracy. Accuracy is suitable for image classification where the output is a single label for the whole image, but it fails in object detection because it does not account for the spatial position of the object or the background class. Similarly, while the F1-Score provides a harmonic mean of precision and recall at a single confidence threshold, mAP integrates performance across all confidence levels, offering a more holistic view of model stability.
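The contrast with the F1-Score can be made concrete. F1 collapses precision and recall into one number at a single operating point, so it can look strong at one confidence threshold while hiding poor behavior at others; mAP integrates over the whole curve. A minimal sketch of the single-threshold F1 computation (the values are illustrative):

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall at one confidence threshold
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One operating point: 80% precision, 60% recall
print(f"F1: {f1_score(0.8, 0.6):.4f}")  # 0.6857
```

Because this number changes as soon as the confidence threshold moves, two models with identical F1 at one threshold can have very different mAP.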

Real-World Applications

In environments where safety and efficiency are paramount, a high mAP is critical.

  • Autonomous Vehicles: In self-driving technology, safety depends on detecting pedestrians and traffic signs with high recall (not missing anything) and high precision (avoiding phantom braking). mAP ensures the perception system balances these needs effectively.
  • Medical Image Analysis: When identifying tumors or fractures in X-rays, radiologists rely on AI in healthcare to flag potential issues. A high mAP score indicates the model reliably highlights anomalies without overwhelming the doctor with false alarms, facilitating accurate diagnosis.

Measuring mAP with Ultralytics

Modern frameworks simplify the calculation of these metrics during the validation phase. The following example demonstrates how to load a model and compute mAP using the ultralytics Python package.

from ultralytics import YOLO

# Load the YOLO26 model (recommended for new projects)
model = YOLO("yolo26n.pt")

# Validate the model on a dataset to compute mAP
# This runs inference and compares predictions to ground truth
metrics = model.val(data="coco8.yaml")

# Print mAP@50-95 (map) and mAP@50 (map50)
print(f"mAP@50-95: {metrics.box.map:.3f}")
print(f"mAP@50: {metrics.box.map50:.3f}")

Understanding and optimizing for mAP is crucial before model deployment. To streamline this process, the Ultralytics Platform offers automated tracking of mAP, loss curves, and other KPIs during training, allowing developers to visualize progress and select the best model checkpoint for production.
