Explore the important role of the CPU in artificial intelligence and machine learning. Learn how CPUs are used for data preparation and inference, and how they compare to GPUs and TPUs.
A Central Processing Unit (CPU) is the primary component of a computer that acts as its "brain," responsible for interpreting and executing instructions from hardware and software. In the context of artificial intelligence (AI), the CPU plays a fundamental role in data handling, system orchestration, and executing inference, particularly on edge devices where power efficiency is critical. While specialized hardware like GPUs are often associated with the heavy lifting of training deep learning models, the CPU remains indispensable for the overall machine learning (ML) pipeline.
Although GPUs are celebrated for their massive parallelism during training, the CPU is the workhorse for many essential stages of the computer vision (CV) lifecycle. Its architecture, typically based on x86 (Intel, AMD) or ARM designs, is optimized for sequential processing and complex logic control.
Understanding the hardware landscape is essential for optimizing machine learning operations (MLOps). These processors differ significantly in architecture and in the scenarios where each is the ideal choice.
CPUs are frequently the hardware of choice for applications where cost, availability, and energy consumption outweigh the need for massive raw throughput.
Developers often test models on the CPU to verify compatibility with serverless computing environments or low-power devices. Ultralytics makes it easy to target the CPU, ensuring your application runs anywhere.
The following example demonstrates how to load a lightweight model and run inference on the CPU:
from ultralytics import YOLO
# Load the lightweight YOLO26 nano model
# Smaller models are optimized for faster CPU execution
model = YOLO("yolo26n.pt")
# Run inference on an image, explicitly setting the device to 'cpu'
results = model.predict("https://ultralytics.com/images/bus.jpg", device="cpu")
# Print the detection results (bounding boxes)
print(results[0].boxes.xywh)
To further improve performance on Intel CPUs, developers can export their models to the OpenVINO format, which optimizes the neural network graph specifically for Intel hardware. For managing datasets and orchestrating these deployments, tools like the Ultralytics Platform simplify the workflow from annotation to edge execution.
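As an illustrative sketch, the snippet below shows one way to export the same model to OpenVINO and then load the optimized version for CPU inference. It reuses the model file and image URL from the example above and relies on the standard Ultralytics export API; the output directory name follows the usual Ultralytics naming convention and may differ in your environment.
from ultralytics import YOLO
# Load the trained PyTorch model
model = YOLO("yolo26n.pt")
# Export to OpenVINO format; this writes an optimized model directory
# (typically named like "yolo26n_openvino_model/") next to the weights
model.export(format="openvino")
# Load the exported OpenVINO model and run CPU inference as before
ov_model = YOLO("yolo26n_openvino_model/")
results = ov_model.predict("https://ultralytics.com/images/bus.jpg", device="cpu")
print(results[0].boxes.xywh)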