Learn what Intersection over Union (IoU) is, how it is calculated, and why it plays an important role in object detection and AI model evaluation.
Intersection over Union (IoU) is a fundamental metric used in computer vision to quantify the accuracy of an object detector by measuring the overlap between two boundaries. Often technically referred to as the Jaccard Index, IoU evaluates how well a predicted bounding box aligns with the ground truth box—the actual location of the object as labeled by a human annotator. The score ranges from 0 to 1, where 0 indicates no overlap and 1 represents a perfect pixel-for-pixel match. This metric is essential for assessing the spatial precision of models like YOLO26, moving beyond simple classification to ensure the system knows exactly where an object is located.
The concept behind IoU is intuitive: it calculates the ratio of the area where two boxes intersect to the total area covered by both boxes combined (the union). Because this calculation normalizes the overlap by the total size of the objects, IoU serves as a scale-invariant metric. This means it provides a fair assessment of performance regardless of whether the computer vision model is detecting a massive cargo ship or a tiny insect.
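To make the ratio concrete, here is a minimal sketch in plain Python (no libraries assumed, boxes given as [x1, y1, x2, y2]); note how scaling both boxes by the same factor leaves the score unchanged, which is exactly the scale invariance described above.

def compute_iou(box_a, box_b):
    # Width and height of the overlapping rectangle (zero if the boxes are disjoint)
    inter_w = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    inter_h = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    intersection = inter_w * inter_h
    # Union = area of A + area of B - intersection, so the overlap is not counted twice
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

# The same relative overlap at two different scales gives the same IoU (~0.333)
print(compute_iou([0, 0, 10, 10], [5, 0, 15, 10]))
print(compute_iou([0, 0, 100, 100], [50, 0, 150, 100]))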
In standard object detection workflows, IoU is the primary filter for determining whether a prediction is a "True Positive" or a "False Positive." During evaluation, engineers set a specific threshold—commonly 0.50 or 0.75. If the overlap score exceeds this number, the detection is counted as correct. This thresholding process is a prerequisite for calculating aggregate performance metrics like Mean Average Precision (mAP), which summarizes model accuracy across different classes and difficulty levels.
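As an illustrative sketch of that filtering step (the helper below is hypothetical, not part of any library), a single threshold turns raw overlap scores into the true/false positive labels that mAP is built from; a full evaluator would additionally ensure each ground truth box is matched to at most one detection.

def classify_detections(iou_scores, threshold=0.50):
    # A detection counts as correct only if its best overlap clears the threshold
    return ["True Positive" if iou >= threshold else "False Positive" for iou in iou_scores]

scores = [0.82, 0.55, 0.31]  # best IoU of three detections against the ground truth
print(classify_detections(scores, threshold=0.50))
# ['True Positive', 'True Positive', 'False Positive']
print(classify_detections(scores, threshold=0.75))
# ['True Positive', 'False Positive', 'False Positive']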
High spatial precision is critical in industries where vague approximations can lead to failure or safety hazards. IoU ensures that AI systems are perceiving the physical world accurately.
While the concept is geometric, the implementation is mathematical. The ultralytics package provides
optimized utilities to calculate IoU efficiently, which is useful for verifying model behavior or filtering
predictions.
import torch
from ultralytics.utils.metrics import box_iou
# Define ground truth and prediction boxes: [x1, y1, x2, y2]
ground_truth = torch.tensor([[100, 100, 200, 200]])
predicted = torch.tensor([[110, 110, 210, 210]])
# Calculate the Intersection over Union score
iou_score = box_iou(ground_truth, predicted)
print(f"IoU Score: {iou_score.item():.4f}")
# Output: IoU Score: 0.6807
Beyond serving as a scorecard, IoU is an active component in the training of deep learning networks: IoU-based loss functions (and refinements such as GIoU, DIoU, and CIoU) penalize poor overlap directly, encouraging the model to produce tighter boxes than coordinate-regression losses alone.
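As a minimal sketch of the idea (not the exact loss used by any particular model, which typically relies on refinements such as CIoU), the overlap score can be turned into a differentiable penalty by minimizing 1 - IoU, so gradient descent pushes predicted boxes toward tighter overlap with the ground truth.

import torch

def iou_loss(pred, target):
    # Matched pairs of [x1, y1, x2, y2] boxes; compute the overlapping rectangle
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(min=0)
    intersection = inter_w * inter_h
    area_pred = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_target = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_pred + area_target - intersection
    iou = intersection / (union + 1e-7)
    return (1.0 - iou).mean()  # perfect overlap gives zero loss

pred = torch.tensor([[110.0, 110.0, 210.0, 210.0]], requires_grad=True)
target = torch.tensor([[100.0, 100.0, 200.0, 200.0]])
loss = iou_loss(pred, target)
loss.backward()  # gradients nudge the predicted corners toward the ground truth
print(f"IoU loss: {loss.item():.4f}")  # roughly 0.3193, i.e. 1 - 0.6807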
To effectively evaluate machine learning models, it is important to distinguish IoU from other similarity metrics, such as the Dice coefficient, which measures the same overlap but weights it differently and yields higher scores for partial matches.
To achieve high IoU scores, models require precise training data. Tools like the Ultralytics Platform facilitate the creation of high-quality data annotations, allowing teams to visualize ground truth boxes and ensure they tightly fit objects before training begins.