Backpropagation

Learn how backpropagation trains neural networks, reduces error rates, and efficiently powers AI applications such as image recognition and NLP.

Backpropagation, short for "backward propagation of errors," is the fundamental algorithm that enables modern artificial intelligence systems to learn from data. It acts as the mathematical messenger during the model training process, calculating exactly how much each parameter in a neural network contributed to an incorrect prediction. By determining the gradient of the loss function with respect to each weight, backpropagation provides the necessary feedback that allows the network to adjust itself and improve accuracy over time. Without this efficient method of calculating derivatives, training deep, complex models would be computationally infeasible.
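
In standard notation (the symbols here are conventional, not taken from this page), backpropagation supplies the partial derivative ∂L/∂w of the loss L for every weight w; training then applies an update of the form w ← w − η·∂L/∂w, where η is the learning rate.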

The Learning Mechanism

To understand backpropagation, it helps to view it as part of a cycle. When a neural network processes an image or text, it performs a "forward pass" to make a prediction. The system then compares this prediction to the correct answer using a loss function, which quantifies the error.

Backpropagation starts at the output layer and moves backward through the network layers. It utilizes the chain rule of calculus to compute the gradients. These gradients effectively tell the system, "To reduce the error, increase this weight slightly" or "decrease that bias significantly." This information is essential for deep architectures, such as Convolutional Neural Networks (CNNs), where millions of parameters must be fine-tuned simultaneously.
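
This backward sweep can be made concrete in PyTorch. The following is a minimal sketch (the two-weight network, input, and target value are invented for illustration, not taken from the original):

import torch

# A tiny two-layer chain: hidden = relu(w1 * x), output = relu(w2 * hidden)
w1 = torch.tensor([0.5], requires_grad=True)
w2 = torch.tensor([1.5], requires_grad=True)
x = torch.tensor([2.0])

hidden = torch.relu(w1 * x)
output = torch.relu(w2 * hidden)
loss = (output - 1.0) ** 2  # squared error against a target of 1.0

loss.backward()  # the backward sweep: gradients flow from the loss to both weights

# By the chain rule, dL/dw1 = dL/doutput * doutput/dhidden * dhidden/dw1
# = 2*(output - 1.0) * w2 * x = 1.0 * 1.5 * 2.0 = 3.0 (both ReLUs are active here)
print(w1.grad.item(), w2.grad.item())  # 3.0 and 1.0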

Backpropagation vs. Optimization

It is common for beginners to confuse backpropagation with the optimization step, but they are distinct processes within the training loop.

  • Backpropagation is the diagnostic tool. By computing the gradients, it effectively draws a map of the slope of the error landscape. It answers the question, "In which direction should we move to reduce the error?"
  • Optimization is the action. Algorithms such as Stochastic Gradient Descent (SGD) and the Adam optimizer take the gradients provided by backpropagation and use them to update the weights. If backpropagation is the map, the optimizer is the hiker taking the step; a minimal sketch of the two steps in sequence follows this list.
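
The separation is visible in a few lines of PyTorch. The sketch below is illustrative (the one-parameter "model", learning rate, and target value are assumptions, not from the original): .backward() draws the map, and optimizer.step() takes the step.

import torch

# Illustrative one-parameter model: predict w * 3.0, with a target of 10.0
w = torch.tensor([2.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.01)

loss = (w * 3.0 - 10.0) ** 2

loss.backward()        # backpropagation: compute dL/dw (the map)
optimizer.step()       # optimization: move w along the gradient (the hiker's step)
optimizer.zero_grad()  # clear the stored gradient before the next iteration

print(w.item())  # 2.24, a small move toward 10/3, the value that zeroes the loss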

Real-World AI Applications

Backpropagation is the underlying mechanic for virtually all modern AI successes, enabling models to generalize from training data to new, unseen inputs.

  • Computer Vision: In object detection tasks using models like YOLO26, backpropagation enables the network to learn spatial hierarchies. It helps the model understand that certain edges form shapes, and those shapes form objects like cars or pedestrians. The Ultralytics Platform builds on these training techniques to help users create custom models that can accurately identify defects in manufacturing or monitor crop health in agriculture.
  • Natural Language Processing (NLP): For Large Language Models (LLMs) such as those developed by OpenAI, backpropagation allows the system to learn the probability of the next word in a sentence. By propagating errors from incorrect text predictions, the model learns nuanced grammar and context, essential for applications like machine translation (a toy sketch of this error signal follows this list).
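
A toy version of that error signal can be written in PyTorch. This is an illustrative sketch over an invented five-word vocabulary, not the training code of any real LLM:

import torch
import torch.nn.functional as F

# Toy next-word prediction: raw scores (logits) over a 5-word vocabulary
logits = torch.randn(1, 5, requires_grad=True)
target = torch.tensor([2])  # index of the word that actually came next

loss = F.cross_entropy(logits, target)  # penalizes low probability on the target
loss.backward()                         # propagate the prediction error backward

# The gradient equals softmax(logits) - one_hot(target): negative at the target
# word (raise its score) and positive elsewhere (lower the competing scores)
print(logits.grad)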

Challenges in Deep Networks

While powerful, the algorithm faces challenges in very deep networks. The vanishing gradient problem occurs when gradients shrink toward zero as they move backward, causing the early layers to stop learning. Conversely, the exploding gradient problem occurs when gradients accumulate into excessively large, unstable values. Techniques like Batch Normalization and specialized architectures like ResNet are often employed to mitigate these issues.
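
The vanishing effect is easy to reproduce. The snippet below is a contrived illustration (a bare chain of 20 sigmoids, not a practical architecture):

import torch

# Each sigmoid's derivative is at most 0.25, so a long chain of them
# multiplies the backward signal by a small factor at every layer
x = torch.tensor([1.0], requires_grad=True)
out = x
for _ in range(20):
    out = torch.sigmoid(out)

out.backward()
print(f"Gradient reaching the input after 20 layers: {x.grad.item():.2e}")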

Python Code Example

While high-level libraries such as ultralytics abstract this process away during training, the underlying PyTorch framework lets you observe the mechanism directly. The .backward() method triggers the backpropagation process, computing derivatives for any tensor created with requires_grad=True.

import torch

# Create a tensor that tracks operations for backpropagation
w = torch.tensor([2.0], requires_grad=True)
x = torch.tensor([3.0])

# Forward pass: compute prediction and loss (simple example)
# Let's assume the target value is 10.0
loss = (w * x - 10.0) ** 2

# Backward pass: This command executes backpropagation
loss.backward()

# The gradient is now stored in w.grad, showing how to adjust 'w'
# This tells us the slope of the loss with respect to w
print(f"Gradient (dL/dw): {w.grad.item()}")

Further Reading

To understand how backpropagation fits into the broader scope of AI development, exploring the concept of data augmentation is beneficial, as it provides the varied examples necessary for the algorithm to generalize effectively. Additionally, understanding the specific metrics used to evaluate the success of training, such as mean Average Precision (mAP), helps in interpreting how well the backpropagation process is optimizing the model. For a deeper theoretical dive, the Stanford CS231n course notes offer an excellent technical breakdown of the calculus involved.
