
Parameter-Efficient Fine-Tuning (PEFT)

Learn how Parameter-Efficient Fine-Tuning (PEFT) adapts large models like YOLO26 by updating only a small fraction of their parameters, and explore techniques that reduce compute costs.

Parameter-Efficient Fine-Tuning (PEFT) is a sophisticated optimization strategy in machine learning (ML) that enables the customization of large, pre-trained models to specific tasks while minimizing computational costs. As modern foundation models have grown to encompass billions of parameters, traditional training methods that update every weight in the network have become prohibitively expensive in terms of hardware and energy. PEFT addresses this challenge by freezing the vast majority of the pre-trained model weights and only updating a small subset of parameters or adding lightweight adapter layers. This approach lowers the barrier to entry, allowing developers to achieve state-of-the-art results on consumer-grade GPUs without requiring industrial-scale data centers.

The Mechanics of Efficiency

The core principle of PEFT relies on transfer learning, where a model leverages feature representations learned from massive public datasets like ImageNet to solve new problems. In a standard workflow, adapting a model might involve "full fine-tuning," where backpropagation adjusts every parameter in the neural network.

PEFT techniques, such as LoRA (Low-Rank Adaptation), take a different route. They keep the model's heavy "backbone" static—preserving its general knowledge—and inject small, trainable matrices into specific layers. This prevents catastrophic forgetting, a phenomenon where a model loses its original capabilities while learning new information. By reducing the number of trainable parameters by up to 99%, PEFT significantly decreases storage requirements and allows multiple task-specific adapters to be swapped in and out of a single base model during real-time inference.
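To make the mechanism concrete, the sketch below shows the core LoRA idea in PyTorch: the pre-trained weight matrix stays frozen while two small low-rank factors carry all the gradient updates. This is a minimal illustration; the class name, rank, and scaling values are assumptions for the example, not part of a specific library API.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update (W + B @ A)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors A and B: only these small matrices are trained
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pre-trained path plus the scaled low-rank correction
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(512, 512), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"Trainable: {trainable} of {total} parameters")

For a 512x512 layer, this trains roughly 4,096 of about 266,000 parameters, around 1.5%, which is where the headline reductions in trainable parameters come from.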

Real-World Applications

PEFT is particularly valuable in industries where edge computing and data privacy are paramount.

  • AI in Agriculture: Agritech startups often deploy models on drones with limited battery life and processing power. Using PEFT, engineers can take a highly efficient model like YOLO26 and fine-tune it to detect specific regional pests, such as the fall armyworm, using a small custom dataset. By freezing the backbone, the training can be done quickly on a laptop, and the resulting model remains lightweight enough for onboard processing.
  • AI in Healthcare: In medical image analysis, annotated data is often scarce and expensive to obtain. Hospitals use PEFT to adapt general-purpose vision models to identify anomalies in MRI scans. Because the base parameters are frozen, the model is less prone to overfitting on the small dataset, ensuring robust diagnostic performance while preserving patient data privacy.

Implementing Frozen Layers with Ultralytics

In the Ultralytics ecosystem, parameter efficiency is often achieved by "freezing" the initial layers of a network. This ensures the robust feature extractors remain unchanged while only the head or later layers adapt to new classes. This is a practical implementation of PEFT principles for object detection.

The following example demonstrates how to train a YOLO26 model while freezing the first 10 layers of the backbone to save compute resources:

from ultralytics import YOLO

# Load the YOLO26 model (latest stable version)
model = YOLO("yolo26n.pt")

# Train on a custom dataset with the 'freeze' argument
# freeze=10 keeps the first 10 layers static, updating only deeper layers
results = model.train(data="coco8.yaml", epochs=5, freeze=10)
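As a quick sanity check, you can compare trainable and total parameter counts on the underlying PyTorch module; this snippet is illustrative, and note that the freeze argument sets requires_grad=False on the selected layers when training begins.

# Compare trainable vs. total parameters on the underlying nn.Module
total = sum(p.numel() for p in model.model.parameters())
trainable = sum(p.numel() for p in model.model.parameters() if p.requires_grad)
print(f"Trainable: {trainable:,} / {total:,} ({100 * trainable / total:.1f}%)")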

For teams looking to scale this process, the Ultralytics Platform offers a unified interface to manage datasets, automate annotation, and monitor these efficient training runs from the cloud.

Distinguishing PEFT from Related Concepts

To select the right model adaptation strategy, it is helpful to differentiate PEFT from similar terms:

  • Fine-Tuning: Often referred to as "full fine-tuning," this process updates all parameters in the model. While it offers maximum plasticity, it is computationally expensive and requires saving a full copy of the model for every task. PEFT is a sub-category of fine-tuning focused on efficiency.
  • Prompt Engineering: This involves crafting text inputs to guide a model's output without changing any internal weights. PEFT, conversely, mathematically alters a subset of weights or adapters to permanently change how the model processes data.
  • Knowledge Distillation: This technique trains a small student model to mimic a large teacher model. While it results in an efficient model, it is a compression method, whereas PEFT is an adaptation method used to teach an existing model new skills.

By democratizing access to high-performance AI, PEFT allows developers to build specialized tools for autonomous vehicles and smart manufacturing without the need for supercomputer infrastructure.
