Backpropagation

Learn how backpropagation trains neural networks, reduces error rates, and efficiently powers AI applications such as image recognition and NLP.

Backpropagation, short for "backward propagation of errors," is the fundamental algorithm that enables modern artificial intelligence systems to learn from data. It acts as the mathematical messenger during the model training process, calculating exactly how much each parameter in a neural network contributed to an incorrect prediction. By determining the gradient of the loss function with respect to each weight, backpropagation provides the necessary feedback that allows the network to adjust itself and improve accuracy over time. Without this efficient method of calculating derivatives, training deep, complex models would be computationally infeasible.

The Mechanics of Learning

To understand backpropagation, it helps to view it as part of a cycle. When a neural network processes an image or text, it performs a "forward pass" to make a prediction. The system then compares this prediction to the correct answer using a loss function, which quantifies the error.

Backpropagation starts at the output layer and moves backward through the network layers. It utilizes the chain rule of calculus to compute the gradients. These gradients effectively tell the system, "To reduce the error, increase this weight slightly" or "decrease that bias significantly." This information is essential for deep architectures, such as Convolutional Neural Networks (CNNs), where millions of parameters must be fine-tuned simultaneously.
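
As a minimal sketch of this idea (assuming PyTorch is installed), the snippet below applies the chain rule by hand to a tiny two-step computation and checks the result against PyTorch's automatic differentiation:

import torch

# A tiny two-step computation: z = g(x) = x**2, then y = f(z) = 3*z
x = torch.tensor([4.0], requires_grad=True)
z = x ** 2
y = 3 * z

# Autograd applies the chain rule backward from y to x
y.backward()

# By hand: dy/dx = dy/dz * dz/dx = 3 * 2x = 6x, which is 24.0 at x = 4
print(x.grad.item())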

Backpropagation vs. Optimization

It is common for beginners to confuse backpropagation with the optimization step, but they are distinct processes within the training loop.

  • Backpropagation is the diagnosis. By computing the gradients, it effectively draws a map of the slope of the error landscape, answering the question, "In which direction should we move to reduce the error?"
  • Optimization is the action. Algorithms such as Stochastic Gradient Descent (SGD) or the Adam optimizer take the gradients provided by backpropagation and update the weights. If backpropagation draws the map, the optimizer is the hiker taking the steps (see the sketch after this list).
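
To make this division of labor concrete, here is a minimal sketch (again assuming PyTorch; the learning rate of 0.01 is an arbitrary choice for illustration) in which backpropagation fills in the gradients and SGD performs the actual weight update:

import torch

# One trainable weight, starting at 2.0
w = torch.tensor([2.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.01)

# Forward pass and loss for a single sample (input 3.0, target 10.0)
loss = (w * 3.0 - 10.0) ** 2

# Backpropagation: computes the gradient and stores it in w.grad
loss.backward()

# Optimization: uses w.grad to update the weight, then clears the gradient
optimizer.step()
optimizer.zero_grad()

print(w.item())  # w has moved from 2.0 toward a lower-loss value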

Real-World Applications in AI

Backpropagation is the underlying mechanic for virtually all modern AI successes, enabling models to generalize from training data to new, unseen inputs.

  • Computer Vision: In object detection tasks using models like YOLO26, backpropagation enables the network to learn spatial hierarchies. It helps the model understand that certain edges form shapes, and those shapes form objects like cars or pedestrians. The Ultralytics Platform applies these same training techniques to help users create custom models that can accurately identify defects in manufacturing or monitor crop health in agriculture.
  • Natural Language Processing (NLP): For Large Language Models (LLMs) such as those developed by OpenAI, backpropagation allows the system to learn the probability of the next word in a sentence. By propagating errors from incorrect text predictions, the model learns nuanced grammar and context, essential for applications like machine translation.

Challenges in Deep Networks

While powerful, the algorithm faces challenges in very deep networks. The vanishing gradient problem occurs when gradients shrink toward zero as they move backward through the layers, causing the earliest layers to stop learning. Conversely, the exploding gradient problem occurs when gradients grow uncontrollably large, producing unstable weight updates. Techniques like Batch Normalization and specialized architectures like ResNet are often employed to mitigate these issues, as is gradient clipping, sketched below.
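
As an illustrative sketch (assuming PyTorch; the layer sizes and max_norm value are arbitrary), exploding gradients are commonly mitigated by clipping the gradient norm between the backward pass and the optimizer step:

import torch

# A small model and optimizer for demonstration
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch: 4 samples with 10 features each
inputs = torch.randn(4, 10)
targets = torch.randn(4, 1)

loss = torch.nn.functional.mse_loss(model(inputs), targets)
loss.backward()

# Rescale gradients so their combined norm is at most 1.0
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

optimizer.step()
optimizer.zero_grad()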

Python Code Example

While high-level libraries such as ultralytics abstract this process away during training, the underlying PyTorch framework allows you to see the mechanism directly. The .backward() method triggers the backpropagation process, computing derivatives for any tensor where requires_grad=True.

import torch

# Create a tensor that tracks operations for backpropagation
w = torch.tensor([2.0], requires_grad=True)
x = torch.tensor([3.0])

# Forward pass: compute prediction and loss (simple example)
# Let's assume the target value is 10.0
loss = (w * x - 10.0) ** 2

# Backward pass: This command executes backpropagation
loss.backward()

# The gradient is now stored in w.grad, showing how to adjust 'w'
# This tells us the slope of the loss with respect to w
print(f"Gradient (dL/dw): {w.grad.item()}")
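
Running this prints a gradient of -24.0, which matches the derivative computed by hand: dL/dw = 2 × (w·x − 10) × x = 2 × (6 − 10) × 3. Because the gradient is negative, an optimizer would increase w to push the prediction toward the target and reduce the loss.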

Further Reading

To understand how backpropagation fits into the broader scope of AI development, exploring the concept of data augmentation is beneficial, as it provides the varied examples necessary for the algorithm to generalize effectively. Additionally, understanding the specific metrics used to evaluate the success of training, such as mean Average Precision (mAP), helps in interpreting how well the backpropagation process is optimizing the model. For a deeper theoretical dive, the Stanford CS231n course notes offer an excellent technical breakdown of the calculus involved.
