
Confidence

Define AI confidence scores. Learn how models assess the certainty of predictions, set reliability thresholds, and distinguish confidence from accuracy.

In the realm of artificial intelligence and machine learning, a confidence score is a metric that quantifies the level of certainty a model has regarding a specific prediction. This value typically ranges from 0 to 1 (or 0% to 100%) and represents the estimated probability that the algorithm's output aligns with the ground truth. For instance, in an object detection task, if a system identifies a region of an image as a "bicycle" with a confidence of 0.92, it suggests a 92% estimated likelihood that the classification is correct. These scores are derived from the final layer of a neural network, often processed through an activation function such as Softmax for multi-class categorization or the Sigmoid function for binary decisions.
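
As a minimal illustration of that last step, the snippet below converts a hypothetical vector of raw logits into confidence scores with Softmax, and a single logit into a binary confidence with Sigmoid; the logit values are invented purely for demonstration:

import numpy as np

# Hypothetical raw logits from a network's final layer for three classes
logits = np.array([2.0, 0.5, -1.0])

# Softmax turns the logits into a probability distribution over the classes
probabilities = np.exp(logits) / np.sum(np.exp(logits))
predicted_class = int(np.argmax(probabilities))
confidence = float(probabilities[predicted_class])
print(f"Class {predicted_class} predicted with confidence {confidence:.2f}")

# Sigmoid is used instead for a single binary decision
binary_confidence = 1 / (1 + np.exp(-logits[0]))
print(f"Binary confidence: {binary_confidence:.2f}")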

The Role of Confidence in Inference

Confidence scores are a fundamental component of the inference engine workflow, acting as a filter to distinguish high-quality predictions from background noise. This filtering process, known as thresholding, enables developers to adjust the sensitivity of an application. By establishing a minimum confidence threshold, you can manage the critical precision-recall trade-off. A lower threshold may detect more objects but increases the risk of false positives, whereas a higher threshold improves precision but might result in missing subtle instances.

In advanced architectures like Ultralytics YOLO26, confidence scores are essential for post-processing techniques like Non-Maximum Suppression (NMS). NMS utilizes these scores to remove redundant bounding boxes that overlap significantly, preserving only the detection with the highest probability. This step ensures that the final output is clean and ready for downstream tasks such as object counting or tracking.
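
To make the role of confidence in this step concrete, here is a simplified, framework-agnostic NMS sketch; the boxes and scores are invented, and production implementations (including those used by Ultralytics models) are considerably more optimized:

import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def simple_nms(boxes, scores, iou_threshold=0.5):
    """Keep only the highest-confidence box among heavily overlapping detections."""
    order = [int(i) for i in np.argsort(scores)[::-1]]  # most confident first
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop remaining boxes that overlap the kept box too strongly
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of the same object plus one separate detection
boxes = [[10, 10, 60, 60], [12, 12, 58, 62], [100, 100, 150, 150]]
scores = [0.92, 0.75, 0.60]
print(simple_nms(boxes, scores))  # -> [0, 2]: the redundant 0.75 box is suppressed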

The following Python example demonstrates how to filter predictions by confidence using the ultralytics package:

from ultralytics import YOLO

# Load the latest YOLO26n model
model = YOLO("yolo26n.pt")

# Run inference with a confidence threshold of 0.5 (50%)
# Only detections with a score above this value are returned
results = model.predict("https://ultralytics.com/images/bus.jpg", conf=0.5)

# Inspect the confidence scores of the detected objects
for box in results[0].boxes:
    print(f"Class: {int(box.cls)}, Confidence: {box.conf.item():.2f}")
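
Continuing with the model loaded above, a quick way to see the precision-recall trade-off in practice is to compare how many detections survive at a permissive versus a strict threshold; the 0.25 and 0.7 values here are arbitrary choices for illustration:

# Compare detection counts at a permissive vs. a strict confidence threshold
for threshold in (0.25, 0.7):
    results = model.predict("https://ultralytics.com/images/bus.jpg", conf=threshold)
    print(f"conf={threshold}: {len(results[0].boxes)} detections")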

Real-World Applications

Confidence scores provide a layer of interpretability that is indispensable across industries where computer vision (CV) is applied. They help automated systems determine when to proceed autonomously and when to trigger alerts for human review.

  • Autonomous Driving: In the sector of AI in automotive, self-driving vehicles rely on confidence metrics to ensure passenger safety. If a perception system detects an obstacle with low confidence, it might cross-reference this data with LiDAR sensors or radar to verify the object's presence before executing an emergency maneuver. This redundancy helps prevent "phantom braking" caused by shadows or glare.
  • Medical Diagnostics: When leveraging AI in healthcare, models assist medical professionals by flagging potential anomalies in imaging data. A system built for tumor detection might highlight regions with high confidence for immediate diagnosis, while lower confidence predictions are logged for secondary analysis. This human-in-the-loop workflow ensures AI augments clinical decision-making without replacing expert judgment.
  • Industrial Automation: In smart manufacturing, robotic arms use confidence scores to interact with objects on assembly lines. A robot equipped with vision AI might only attempt to grasp a component if the detection confidence exceeds 90%, thereby reducing the risk of damaging delicate parts due to misalignment. This kind of confidence-gated decision rule is sketched just after this list.
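
The gating logic behind these examples can be expressed in a few lines; the threshold and routing function below are hypothetical and would need to be tuned for a real system:

GRASP_THRESHOLD = 0.90  # assumed cutoff, mirroring the manufacturing example above

def route_detection(label: str, confidence: float) -> str:
    """Decide whether to act autonomously or defer to a human/secondary check."""
    if confidence >= GRASP_THRESHOLD:
        return f"ACT: handle '{label}' autonomously ({confidence:.2f})"
    return f"REVIEW: flag '{label}' ({confidence:.2f}) for human or sensor verification"

print(route_detection("component", 0.95))
print(route_detection("component", 0.62))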

Distinguishing Confidence from Related Terms

It is crucial to differentiate confidence from other statistical metrics used in model evaluation.

  • Confidence vs. Accuracy: Accuracy is a global metric that describes how often a model is correct across an entire dataset (e.g., "The model is 92% accurate"). In contrast, confidence is a local, prediction-specific value (e.g., "The model is 92% sure this specific image contains a cat"). A model can have high overall accuracy but still yield low confidence on edge cases.
  • Confidence vs. Probability Calibration: A raw confidence score does not always align with the true probability of correctness. A model is "well-calibrated" if predictions made with 0.8 confidence are correct approximately 80% of the time. Techniques such as Platt scaling or Isotonic Regression are often employed to align scores with empirical probabilities; a simple calibration check is sketched after this list.
  • Confidence vs. Precision: Precision measures the proportion of positive identifications that were actually correct. While increasing the confidence threshold generally boosts precision, it often does so at the expense of recall. Developers must tune this threshold based on whether their application prioritizes missing fewer objects or minimizing false alarms.
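
A simple way to check calibration, as referenced above, is to bin predictions by confidence and compare each bin's average confidence with its empirical accuracy; for a well-calibrated model the two numbers should roughly match. The data below is invented purely to show the bookkeeping:

import numpy as np

# Hypothetical (confidence, was-the-prediction-correct) pairs from a validation set
predictions = [(0.95, True), (0.91, True), (0.85, True), (0.82, True),
               (0.78, False), (0.65, True), (0.55, False), (0.40, False)]

bins = np.linspace(0.0, 1.0, 5)  # four equal-width confidence bins
for low, high in zip(bins[:-1], bins[1:]):
    in_bin = [(conf, ok) for conf, ok in predictions if low <= conf < high]
    if in_bin:
        mean_conf = np.mean([conf for conf, _ in in_bin])
        accuracy = np.mean([ok for _, ok in in_bin])
        print(f"[{low:.2f}, {high:.2f}): mean confidence {mean_conf:.2f}, accuracy {accuracy:.2f}")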

Improving Model Confidence

If a model consistently outputs low confidence for valid objects, it often signals a discrepancy between the training data and the deployment environment. Strategies to mitigate this include data augmentation, which artificially expands the dataset by varying lighting, rotation, and noise. Furthermore, using the Ultralytics Platform to implement active learning pipelines allows developers to easily identify low-confidence samples, annotate them, and retrain the model. This iterative cycle is vital for creating robust AI agents capable of operating reliably in dynamic, real-world settings.
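
As a rough sketch of the flagging step in such a loop, the snippet below screens a pool of unlabeled images and collects those whose best detection scores below a chosen floor for annotation and retraining; the image paths and the 0.4 cutoff are assumptions made for illustration:

from ultralytics import YOLO

model = YOLO("yolo26n.pt")

# Hypothetical pool of unlabeled images to screen (paths are placeholders)
image_paths = ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]
flagged = []

for path in image_paths:
    results = model.predict(path, conf=0.05, verbose=False)  # keep even weak detections
    confidences = [box.conf.item() for box in results[0].boxes]
    # Flag images with no detections or only low-confidence ones for human annotation
    if not confidences or max(confidences) < 0.4:
        flagged.append(path)

print(f"{len(flagged)} images flagged for review: {flagged}")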
