Detection Head

Discover the crucial role of detection heads in object detection: they refine feature maps to pinpoint object locations and classes with precision.

A detection head acts as the final decision-making layer in an object detection neural network architecture. While the earlier layers of the model are responsible for understanding the shapes, textures, and features within an image, the detection head is the specific component that interprets this information to predict exactly what objects are present and where they are located. It transforms the abstract, high-level data produced by the feature extractor into actionable results, typically outputting a set of bounding boxes enclosing identified objects along with their corresponding class labels and confidence scores.

Distinguishing the Head from the Backbone and Neck

To fully grasp the function of a detection head, it is helpful to visualize modern detectors as being composed of three primary stages, each serving a distinct purpose in the computer vision (CV) pipeline:

  • Backbone: This is the initial part of the network, often a Convolutional Neural Network (CNN) like ResNet or CSPNet. It processes the raw input image to create feature maps that represent visual patterns.
  • Neck: Sitting between the backbone and the head, the neck refines and combines features from different scales. Architectures like the Feature Pyramid Network (FPN) ensure the model can detect objects of varying sizes by aggregating context.
  • Head: The final component that consumes the refined features from the neck. It performs the actual task of classification (what is it?) and regression (where is it?).
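
To make this division of labor concrete, the toy sketch below wires a backbone, a neck, and a two-branch head together in PyTorch. It is purely illustrative: the layer sizes, module names, and single-scale head are simplified assumptions, not the actual Ultralytics implementation.

import torch
import torch.nn as nn


class ToyDetector(nn.Module):
    """Illustrative three-stage detector: backbone -> neck -> head."""

    def __init__(self, num_classes: int = 80):
        super().__init__()
        # Backbone: turns the raw image into downsampled feature maps
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Neck: refines the features (a single conv here for brevity)
        self.neck = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Head: parallel branches for classification ("what") and box regression ("where")
        self.cls_branch = nn.Conv2d(64, num_classes, 1)
        self.reg_branch = nn.Conv2d(64, 4, 1)  # 4 box values per feature-map cell

    def forward(self, x):
        features = self.neck(self.backbone(x))
        return self.cls_branch(features), self.reg_branch(features)


# Every spatial cell of the feature map receives class scores and box values
cls_scores, box_preds = ToyDetector()(torch.randn(1, 3, 64, 64))
print(cls_scores.shape, box_preds.shape)  # [1, 80, 16, 16] and [1, 4, 16, 16]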

Evolution: Anchor-Based vs. Anchor-Free

The design of detection heads has evolved significantly to improve speed and accuracy, particularly with the transition from traditional methods to modern real-time inference models.

  • Anchor-Based Heads: Traditional single-stage object detectors relied on predefined anchor boxes, fixed reference shapes of various sizes. The head predicted how much each anchor had to be stretched or shifted to fit an object. This approach is described in detail in the foundational Faster R-CNN research.
  • Anchor-Free Heads: State-of-the-art models, including the latest YOLO26, utilize anchor-free detectors. These heads predict object centers and dimensions directly from the pixels in the feature maps, eliminating the need for manual anchor tuning. This simplifies the architecture and enhances the model's ability to generalize to novel object shapes, a technique often associated with Fully Convolutional One-Stage Object Detection (FCOS).
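
The snippet below is a rough, FCOS-style illustration of how an anchor-free head can turn raw per-cell distance predictions into boxes. The tensor shapes, the stride value, and the decode_anchor_free helper are illustrative assumptions rather than Ultralytics internals.

import torch


def decode_anchor_free(ltrb: torch.Tensor, stride: int = 8) -> torch.Tensor:
    """Convert per-cell (left, top, right, bottom) distances into absolute xyxy boxes."""
    _, _, h, w = ltrb.shape
    # Pixel coordinates of every feature-map cell center -- no predefined anchor shapes
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    cx, cy = (xs + 0.5) * stride, (ys + 0.5) * stride
    left, top, right, bottom = ltrb[0].unbind(0)  # assume batch size 1 for clarity
    return torch.stack([cx - left, cy - top, cx + right, cy + bottom], dim=-1)


# Fake head output for a 4x4 feature map: 4 distance channels per cell
boxes = decode_anchor_free(torch.rand(1, 4, 4, 4) * 20)
print(boxes.shape)  # torch.Size([4, 4, 4]) -> one xyxy box per feature-map cell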

Real-World Use Cases

The precision of the detection head is critical for deploying artificial intelligence (AI) in safety-critical and industrial environments. Users can easily annotate data and train these specialized heads using the Ultralytics Platform.

  • Autonomous Driving: In automotive AI, the detection head is responsible for distinguishing between pedestrians, traffic lights, and other vehicles in real time. A highly optimized head keeps inference latency low enough for the vehicle to react immediately.
  • Medical Diagnostics: In medical image analysis, detection heads are fine-tuned to localize anomalies such as tumors in MRI scans. The regression branch must be extremely accurate to outline the precise boundaries of a lesion and support clinicians in healthcare solutions.

Code Example

The following example demonstrates how to load a YOLO26 model and inspect the output of its detection head. When inference runs, the head processes the image and returns the final boxes containing coordinates and class IDs.

from ultralytics import YOLO

# Load the YOLO26n model (nano version)
model = YOLO("yolo26n.pt")

# Run inference on an image to utilize the detection head
results = model("https://ultralytics.com/images/bus.jpg")

# The detection head outputs are stored in results[0].boxes
for box in results[0].boxes:
    # Print the bounding box coordinates and the predicted class
    print(f"Class: {int(box.cls)}, Coordinates: {box.xywh.numpy()}")

This interaction highlights how the detection head translates complex neural network activations into readable data that developers can use for downstream tasks like object tracking or counting.
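
As a sketch of one such downstream task, the built-in tracker can consume the head's per-frame detections directly. The video path below is a placeholder, and the snippet assumes the same yolo26n.pt weights as above.

from ultralytics import YOLO

model = YOLO("yolo26n.pt")

# track() runs the detection head on every frame and links its boxes across time
# "path/to/video.mp4" is a placeholder -- point it at any local video file
for result in model.track(source="path/to/video.mp4", stream=True):
    if result.boxes.id is not None:
        print(result.boxes.id.int().tolist())  # persistent track IDs, useful for counting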
