
Hyperparameter Tuning

Master hyperparameter tuning to optimize ML models like Ultralytics YOLO. Boost accuracy, speed, and performance with expert techniques.

Hyperparameter tuning is the iterative process of optimizing the external configuration variables that govern the training process of a machine learning (ML) model. Unlike internal parameters—such as weights and biases which are learned from data during training—hyperparameters are set by the data scientist or engineer before the learning process begins. These settings control the model's structure and the algorithm's behavior, acting as the "knobs and dials" that fine-tune performance. Finding the ideal combination of these values is critical for maximizing metrics like accuracy and efficiency, often making the difference between a mediocre model and a state-of-the-art solution.
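To make the distinction concrete, here is a minimal illustration in Python. The setting names (initial learning rate, momentum, batch size) mirror common training options and are shown purely as examples:

# Hyperparameters: chosen by the engineer before training starts
hyperparameters = {
    "lr0": 0.01,        # initial learning rate
    "momentum": 0.937,  # optimizer momentum
    "batch": 16,        # images per training batch
}

# Parameters (weights and biases), by contrast, are initialized
# automatically and then learned from the data during training.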

Core Concepts and Techniques

The collection of all possible hyperparameter combinations creates a high-dimensional search space. Practitioners use various strategies to navigate this space to find the optimal configuration that minimizes the loss function.

  • Grid Search: This exhaustive method evaluates the model for every specified combination of parameters in a grid. While thorough, it is computationally expensive and suffers from the curse of dimensionality when dealing with many variables.
  • Random Search: Instead of testing every combination, this technique selects random combinations of hyperparameters. Research suggests this is often more efficient than grid search, as it explores the search space more effectively for the most impactful parameters (a minimal sketch follows this list).
  • Bayesian Optimization: This probabilistic approach builds a surrogate model to predict which hyperparameters will yield the best results based on past evaluations, focusing the search on the most promising areas.
  • Evolutionary Algorithms: Inspired by biological evolution, this method uses mechanisms like mutation and crossover to evolve a population of configurations over generations. This is the primary method used by the ultralytics library to optimize modern architectures like YOLO26.
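
To illustrate the simplest of these strategies, below is a minimal random search sketch in plain Python. The train_and_evaluate function is a hypothetical stand-in for a full training run, and the sampling ranges are arbitrary; it is not part of any specific library:

import random

def train_and_evaluate(lr, momentum):
    """Hypothetical objective: stands in for training a model and
    returning a validation score (higher is better)."""
    return -((lr - 0.01) ** 2) - ((momentum - 0.9) ** 2)

# Each hyperparameter gets its own sampling distribution
search_space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),     # log-uniform over [1e-4, 1e-1]
    "momentum": lambda: random.uniform(0.6, 0.99),  # uniform
}

best_score, best_config = float("-inf"), None
for _ in range(50):  # 50 random trials instead of an exhaustive grid
    config = {name: sample() for name, sample in search_space.items()}
    score = train_and_evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config

print(f"Best config: {best_config} (score {best_score:.4f})")

Because each trial samples every dimension independently, random search spreads its budget across the full range of each hyperparameter rather than over a fixed grid, which is why it often finds good values for the few parameters that matter most.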

Hyperparameter Tuning vs. Model Training

It is essential to distinguish between tuning and training, as they represent distinct phases in the MLOps lifecycle. Training adjusts a model's internal parameters (weights and biases) from data, while tuning selects the external configuration values that govern how that training proceeds; a tuning run therefore typically wraps many training runs.

Real-World Applications

Effectively tuned models are critical for deploying robust solutions in complex environments.

Precision Agriculture

In AI in Agriculture, autonomous drones use computer vision to identify weeds and crop diseases. These models often run on edge devices with limited battery life. Engineers use hyperparameter tuning to optimize the data augmentation pipeline and input resolution, ensuring the model balances high inference speed with the precision needed to spray only the weeds, reducing chemical usage.

Medical Diagnostics

For AI in Healthcare, specifically in medical image analysis, a false negative can have severe consequences. When training models to detect anomalies in MRI scans, practitioners aggressively tune hyperparameters related to class weighting and focal loss. This tuning maximizes recall, ensuring that even subtle signs of pathology are flagged for human review, significantly aiding early diagnosis.
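
As a rough sketch of the kind of loss these hyperparameters control, below is a minimal binary focal loss in PyTorch (following Lin et al.). Here alpha is the class-weighting term and gamma is the focusing term a practitioner would tune to trade precision against recall; the function is illustrative, not a specific library API:

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss. alpha up-weights the rare positive class;
    gamma down-weights easy, well-classified examples."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)             # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets) # per-example class weight
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Raising alpha (or gamma) pushes the model to focus on rare, hard
# positives, which tends to increase recall on subtle anomalies.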

Automated Tuning with Ultralytics

The ultralytics library simplifies optimization by including a built-in tuner that utilizes genetic algorithms. This allows users to automatically search for the best hyperparameters for their custom datasets without manual trial and error. For large-scale operations, teams can leverage the Ultralytics Platform to manage datasets and visualize these tuning experiments in the cloud.

The following example demonstrates how to initiate hyperparameter tuning for a YOLO26 model. The tuner will mutate hyperparameters over several iterations to maximize Mean Average Precision (mAP).

from ultralytics import YOLO

# Initialize a YOLO26 model (using the 'nano' weight for speed)
model = YOLO("yolo26n.pt")

# Start tuning hyperparameters on the COCO8 dataset
# The tuner runs for 30 epochs per iteration, evolving parameters like lr0 and momentum
model.tune(data="coco8.yaml", epochs=30, iterations=100, optimizer="AdamW", plots=False)

By automating this process, developers can move closer to the concept of Automated Machine Learning (AutoML), where the system self-optimizes to achieve the best possible performance for a specific task.
