
Self-Attention

Explore the self-attention mechanism in deep learning. Learn how it weighs data importance to power Transformers, LLMs, and [YOLO26](https://docs.ultralytics.com/models/yolo26/).

Self-attention is a foundational mechanism in deep learning that enables models to weigh the importance of different elements within an input sequence relative to one another. Unlike traditional architectures that process data sequentially or focus only on local neighborhoods, self-attention allows a neural network to examine the entire context simultaneously. This capability helps systems identify complex relationships between distant parts of data, such as words in a sentence or distinct regions in an image. It serves as the core building block for the Transformer architecture, which has driven massive advancements in generative AI and modern perception systems.

How Self-Attention Works

The mechanism mimics cognitive focus by assigning a weight, often called an "attention score," to each input feature. To compute these scores, the model transforms input data—typically represented as embeddings—into three distinct vectors: the Query, the Key, and the Value.

  • Query (Q): Represents the current item seeking relevant context from the rest of the sequence.
  • Key (K): Acts as a label or identifier for every item in the sequence against which the query is matched.
  • Value (V): Contains the actual informational content of the item that will be aggregated.

The model compares the Query of one element against the Keys of all other elements to determine compatibility. These compatibility scores are normalized using a softmax function to create probability-like weights. These weights are then applied to the Values, generating a context-rich representation. This process enables Large Language Models (LLMs) and vision systems to prioritize significant information while filtering out noise.
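
To make this concrete, the following minimal sketch implements scaled dot-product self-attention in PyTorch. The tensor sizes and random projection matrices are illustrative assumptions, not part of any particular model.

import torch
import torch.nn.functional as F

# Toy sequence: 4 tokens, each an 8-dimensional embedding (illustrative sizes)
x = torch.randn(4, 8)

# Projection matrices that produce Query, Key, and Value vectors (random here for illustration)
W_q, W_k, W_v = torch.randn(8, 8), torch.randn(8, 8), torch.randn(8, 8)
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Compare each Query against every Key, scaling by the square root of the key dimension
scores = Q @ K.T / (K.shape[-1] ** 0.5)

# Softmax normalizes the scores into probability-like attention weights
weights = F.softmax(scores, dim=-1)

# The weighted sum of Values gives each token a context-rich representation
output = weights @ V
print(weights.shape, output.shape)  # torch.Size([4, 4]) torch.Size([4, 8])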

Real-World Applications

The versatility of self-attention has led to its widespread adoption across various domains of Artificial Intelligence (AI).

  • Natural Language Processing (NLP): In tasks such as machine translation, self-attention resolves ambiguity by linking pronouns to their referents. For instance, in the sentence "The animal didn't cross the street because it was too tired," the model uses self-attention to strongly associate "it" with "animal" rather than "street." This contextual awareness powers tools like Google Translate.
  • Global Image Context: In Computer Vision (CV), architectures like the Vision Transformer (ViT) divide images into patches and apply self-attention to understand the scene globally. This is vital for object detection in complex environments where identifying an object relies on understanding its surroundings.
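
The patch-based idea behind ViT can be sketched as follows. The patch size, embedding dimension, and use of torch.nn.MultiheadAttention are assumptions for illustration; a real ViT also applies a learned linear projection and positional embeddings to the patches before attention.

import torch

# Dummy RGB image: batch of 1, 3 channels, 224x224 pixels
image = torch.randn(1, 3, 224, 224)

# Split the image into non-overlapping 16x16 patches and flatten each one into a token
patches = torch.nn.functional.unfold(image, kernel_size=16, stride=16)  # (1, 768, 196)
tokens = patches.transpose(1, 2)  # (1, 196, 768): 196 patch tokens of dimension 768

# Self-attention over the patch tokens lets every region attend to every other region
attention = torch.nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)
context, attn_weights = attention(tokens, tokens, tokens)
print(context.shape, attn_weights.shape)  # torch.Size([1, 196, 768]) torch.Size([1, 196, 196])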

Distinguishing Related Terms

While often discussed alongside similar concepts, these terms have distinct technical definitions:

  • Attention Mechanism: The broad category of techniques allowing models to focus on specific data parts. It encompasses Cross-Attention, where a model uses one sequence (like a decoder output) to query a different sequence (like an encoder input).
  • Self-Attention: A specific form of attention in which the Query, Key, and Value all originate from the same input sequence. It is designed to learn internal dependencies within a single piece of data; the sketch after this list contrasts it with cross-attention.
  • Flash Attention: An optimization algorithm developed by researchers at Stanford University that makes the computation of self-attention significantly faster and more memory-efficient on GPUs without altering the mathematical output.
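
The distinction between self-attention and cross-attention can be shown with a small sketch built on torch.nn.MultiheadAttention; the sequence lengths and embedding size below are arbitrary illustrative choices.

import torch

attention = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

tokens = torch.randn(1, 10, 64)   # one sequence of 10 tokens
memory = torch.randn(1, 20, 64)   # a different sequence, e.g. an encoder output

# Self-attention: Query, Key, and Value all come from the same sequence
self_out, _ = attention(tokens, tokens, tokens)

# Cross-attention: the Query comes from one sequence, Keys and Values from another
cross_out, _ = attention(tokens, memory, memory)

print(self_out.shape, cross_out.shape)  # both torch.Size([1, 10, 64])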

Code Example

The following Python code demonstrates how to use RTDETR, a Transformer-based object detector included in the ultralytics package. Unlike standard convolutional networks, this model relies heavily on self-attention to process visual features.

from ultralytics import RTDETR

# Load the RT-DETR model which utilizes self-attention for detection
model = RTDETR("rtdetr-l.pt")

# Perform inference on an image to detect objects with global context
# Self-attention helps the model understand relationships between distant objects
results = model("https://ultralytics.com/images/bus.jpg")

# Print the number of objects detected
print(f"Detected {len(results[0].boxes)} objects using Transformer attention.")

Evolution and Future Impact

By connecting every position in a sequence directly to every other position, self-attention sidesteps the vanishing gradient problem that hindered earlier Recurrent Neural Networks (RNNs), enabling the training of massive foundation models. While highly effective, the computational cost of standard self-attention grows quadratically with sequence length, so current research focuses on efficient linear attention mechanisms.
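
The quadratic growth is easy to verify: the attention score matrix holds one entry per pair of tokens, so doubling the sequence length quadruples its size. A quick illustrative check (sequence lengths chosen arbitrarily):

import torch

for n in (128, 512, 2048):
    q = k = torch.randn(n, 64)
    scores = q @ k.T  # one attention score per pair of tokens: an n x n matrix
    print(f"sequence length {n}: {scores.numel():,} score entries")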

Ultralytics integrates these advancements into state-of-the-art models like YOLO26, which combines the speed of CNNs with the contextual power of attention for superior real-time inference. These optimized models can be easily trained and deployed via the Ultralytics Platform, streamlining the workflow for developers building the next generation of intelligent applications.
