Self-Attention

Explore the self-attention mechanism in deep learning. Learn how it weighs data importance to power Transformers, LLMs, and [YOLO26](https://docs.ultralytics.com/models/yolo26/).

Self-attention is a foundational mechanism in deep learning that enables models to weigh the importance of different elements within an input sequence relative to one another. Unlike traditional architectures that process data sequentially or focus only on local neighborhoods, self-attention allows a neural network to examine the entire context simultaneously. This capability helps systems identify complex relationships between distant parts of data, such as words in a sentence or distinct regions in an image. It serves as the core building block for the Transformer architecture, which has driven massive advancements in generative AI and modern perception systems.

How Self-Attention Works

The mechanism mimics cognitive focus by assigning a weight, often called an "attention score," to each input feature. To compute these scores, the model transforms input data—typically represented as embeddings—into three distinct vectors: the Query, the Key, and the Value.

  • Query (Q): Represents the current item seeking relevant context from the rest of the sequence.
  • Key (K): Acts as a label or identifier for every item in the sequence against which the query is matched.
  • Value (V): Contains the actual informational content of the item that will be aggregated.

The model compares the Query of one element against the Keys of all other elements to determine compatibility. These compatibility scores, typically scaled by the square root of the key dimension, are normalized with a softmax function into probability-like weights. The weights are then applied to the Values, producing a context-rich representation. This process enables Large Language Models (LLMs) and vision systems to prioritize significant information while filtering out noise.
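
The sketch below shows this computation in plain NumPy for a toy sequence; the random projection matrices and the tiny dimensions are illustrative assumptions, not values from any particular model.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    # Project the same input sequence into Queries, Keys, and Values
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    # Compatibility of every Query with every Key, scaled by sqrt(key dimension)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into attention weights, which aggregate the Values
    return softmax(scores) @ V

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)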

Real-World Applications

The versatility of self-attention has led to its widespread adoption across various domains of Artificial Intelligence (AI).

  • Natural Language Processing (NLP): In tasks such as machine translation, self-attention resolves ambiguity by linking pronouns to their referents. For instance, in the sentence "The animal didn't cross the street because it was too tired," the model uses self-attention to strongly associate "it" with "animal" rather than "street." This contextual awareness powers tools like Google Translate.
  • Global Image Context: In Computer Vision (CV), architectures like the Vision Transformer (ViT) divide images into patches and apply self-attention to understand the scene globally, as sketched below. This is vital for object detection in complex environments where identifying an object relies on understanding its surroundings.
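
As a rough illustration of the ViT idea, the following PyTorch sketch embeds image patches with a strided convolution and runs multi-head self-attention over the resulting token sequence. The image size, patch size, and embedding dimension are arbitrary assumptions, not the configuration of any released model.

import torch
import torch.nn as nn

# Hypothetical settings: a 224x224 RGB image split into 16x16 patches
image = torch.randn(1, 3, 224, 224)
patch_size, embed_dim = 16, 128

# A strided convolution turns the image into a sequence of patch tokens
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
tokens = patch_embed(image).flatten(2).transpose(1, 2)  # shape (1, 196, 128)

# Multi-head self-attention: every patch attends to every other patch
attention = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)
context, weights = attention(tokens, tokens, tokens)  # Q, K, V from the same sequence
print(context.shape, weights.shape)  # (1, 196, 128) and (1, 196, 196)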

Distinguishing Related Terms

While often discussed alongside similar concepts, these terms have distinct technical definitions:

  • Attention Mechanism: The broad category of techniques allowing models to focus on specific data parts. It encompasses Cross-Attention, where a model uses one sequence (like a decoder output) to query a different sequence (like an encoder input).
  • Self-Attention: A special case of the attention mechanism in which the Queries, Keys, and Values are all derived from the same input sequence. It is designed to learn dependencies within a single input.
  • Flash Attention: An optimization algorithm developed by researchers at Stanford University that makes the computation of self-attention significantly faster and more memory-efficient on GPUs without altering the mathematical output; see the sketch after this list.
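
Flash Attention itself lives in low-level GPU kernels, but PyTorch's fused scaled_dot_product_attention function can dispatch to FlashAttention-style kernels on supported hardware while returning the same result as the naive formula. The tensor shapes below are illustrative assumptions.

import torch
import torch.nn.functional as F

# Toy tensors: batch of 1, 8 heads, 1,024 tokens, 64-dimensional head size
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)

# Fused attention; on supported GPUs this may run FlashAttention-style kernels
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 1024, 64])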

Code Example

The following Python example demonstrates how to use RT-DETR, a Transformer-based object detector included in the ultralytics package. Unlike standard convolutional networks, this model relies heavily on self-attention to process visual features.

from ultralytics import RTDETR

# Load the RT-DETR model which utilizes self-attention for detection
model = RTDETR("rtdetr-l.pt")

# Perform inference on an image to detect objects with global context
# Self-attention helps the model understand relationships between distant objects
results = model("https://ultralytics.com/images/bus.jpg")

# Print the number of objects detected
print(f"Detected {len(results[0].boxes)} objects using Transformer attention.")

Evolution and Future Impact

Because self-attention connects every position to every other position directly, it sidesteps the vanishing gradient and long-range dependency problems that hindered earlier Recurrent Neural Networks (RNNs), enabling the training of massive foundation models. While highly effective, the computational cost of standard self-attention grows quadratically with sequence length, so current research focuses on more efficient alternatives such as linear attention mechanisms.
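
A quick back-of-the-envelope check makes the quadratic scaling concrete: the attention score matrix holds n × n entries per head, so doubling the sequence length quadruples its size.

# Attention scores per head for a few sequence lengths (n * n entries)
for n in (1_024, 2_048, 4_096, 8_192):
    print(f"sequence length {n:>5}: {n * n:>12,} scores per head")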

Ultralytics integrates these advancements into state-of-the-art models like YOLO26, which combines the speed of CNNs with the contextual power of attention for superior real-time inference. These optimized models can be easily trained and deployed via the Ultralytics Platform, streamlining the workflow for developers building the next generation of intelligent applications.
