
Model Serving

Learn how model serving bridges the gap between training and production. Explore how to deploy [YOLO26](https://docs.ultralytics.com/models/yolo26/) for real-time inference using the [Ultralytics Platform](https://platform.ultralytics.com).

Model serving is the process of hosting a trained machine learning model and making its functionality available to software applications via a network interface. It acts as the bridge between a static model file saved on a disk and a live system that processes real-world data. Once a model has completed the machine learning (ML) training phase, it must be integrated into a production environment where it can receive inputs—such as images, text, or tabular data—and return predictions. This is typically achieved by wrapping the model in an Application Programming Interface (API), allowing it to communicate with web servers, mobile apps, or IoT devices.

The Role of Model Serving in AI

The primary goal of model serving is to operationalize predictive modeling capabilities effectively. While training focuses on accuracy and loss minimization, serving focuses on performance metrics like latency (how fast a prediction is returned) and throughput (how many requests can be handled per second). Robust serving infrastructure ensures that computer vision (CV) systems remain reliable under heavy loads. It often involves technologies like containerization using tools such as Docker, which packages the model with its dependencies to ensure consistent behavior across different computing environments.
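Latency and throughput are straightforward to reason about with a timing loop. The snippet below is a minimal sketch using only the standard library: `predict` is a placeholder stand-in for a served model (not a real inference call), and `measure` computes the two metrics from wall-clock time.

```python
import time


def predict(image: bytes) -> dict:
    """Placeholder for a served model; a real system would run inference here."""
    time.sleep(0.002)  # simulate ~2 ms of model compute
    return {"objects": 3}


def measure(num_requests: int = 100) -> dict:
    """Measure average latency (seconds per request) and throughput (requests per second)."""
    start = time.perf_counter()
    for _ in range(num_requests):
        predict(b"fake-image-bytes")
    elapsed = time.perf_counter() - start
    return {
        "avg_latency_s": elapsed / num_requests,
        "throughput_rps": num_requests / elapsed,
    }


stats = measure()
print(f"Latency: {stats['avg_latency_s'] * 1000:.2f} ms, Throughput: {stats['throughput_rps']:.0f} req/s")
```

Production serving stacks report the same two numbers, typically as percentiles (p50/p99 latency) rather than a single average.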

Real-World Applications

Model serving powers the AI features found across nearly every industry by enabling instant, data-driven decisions.

  • Smart Manufacturing: In industrial settings, AI-driven manufacturing systems use served models to inspect assembly lines. High-resolution images of components are sent to a local server, where a YOLO26 model detects defects such as scratches or misalignment, triggering immediate alerts to reject faulty parts.
  • Retail Automation: Retailers use AI to improve the customer experience. Cameras running object detection models identify products in the checkout area and total the purchase automatically, eliminating manual barcode scanning.

Practical Implementation

To serve a model effectively, it is often beneficial to export models to a standardized format like ONNX, which promotes interoperability between different training frameworks and serving engines. The following example demonstrates how to load a model and run inference, simulating the logic that would exist inside a serving endpoint using Python.

from ultralytics import YOLO

# Load the YOLO26 model (this typically happens once when the server starts)
model = YOLO("yolo26n.pt")

# Simulate an incoming API request with an image source URL
image_source = "https://ultralytics.com/images/bus.jpg"

# Run inference to generate predictions for the user
results = model.predict(source=image_source)

# Process results (e.g., simulating a JSON response to a client)
print(f"Detected {len(results[0].boxes)} objects in the image.")
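In an actual service, the raw results would then be serialized into a response body for the client. The helper below is a hypothetical sketch using only the standard library: the detection values are made up for illustration, not taken from a real model run, and `to_json_response` is not part of any serving framework.

```python
import json


def to_json_response(detections: list) -> str:
    """Package detection dictionaries into a JSON API response body."""
    payload = {
        "count": len(detections),
        "detections": detections,
    }
    return json.dumps(payload)


# Hypothetical detections mimicking what a detector might return
detections = [
    {"label": "bus", "confidence": 0.92, "box": [20, 230, 800, 730]},
    {"label": "person", "confidence": 0.87, "box": [48, 398, 245, 900]},
]

body = to_json_response(detections)
print(body)
```

Keeping the payload as plain JSON lets any HTTP client, mobile app, or downstream microservice consume the predictions without depending on the model framework.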

Choosing the Right Strategy

The choice of serving strategy depends heavily on the specific use case. Online Serving provides immediate responses via protocols like REST or gRPC, which is essential for user-facing web applications. Conversely, Batch Serving processes large volumes of data offline, suitable for tasks like nightly report generation. For applications requiring privacy or low latency without internet dependence, Edge AI moves the serving process directly to the device, utilizing optimized formats like TensorRT to maximize performance on constrained hardware. Many organizations leverage the Ultralytics Platform to simplify the deployment of these models to various endpoints, including cloud APIs and edge devices.
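The contrast between online and batch serving can be sketched with plain Python functions. The names below (`online_serve`, `batch_serve`, `fake_model`) are illustrative stand-ins, not part of any real serving framework: the online path answers one request immediately, while the batch path sweeps through a queued collection of inputs.

```python
from typing import Callable, List


def fake_model(item: str) -> str:
    """Stand-in for model inference; a real server would call the loaded model here."""
    return f"prediction-for-{item}"


def online_serve(item: str, model: Callable[[str], str]) -> str:
    """Online serving: one request in, one prediction out, with minimal delay."""
    return model(item)


def batch_serve(items: List[str], model: Callable[[str], str]) -> List[str]:
    """Batch serving: process an accumulated queue of inputs in one offline pass."""
    return [model(item) for item in items]


print(online_serve("frame_001.jpg", fake_model))
print(batch_serve(["a.jpg", "b.jpg"], fake_model))
```

The structural difference is small, but the operational trade-offs are large: online serving must be provisioned for peak traffic and tail latency, while batch serving can be scheduled on cheap off-peak compute.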

Distinctions from Related Terms

While closely related, "Model Serving" is distinct from Model Deployment and Inference.

  • Model Deployment: This refers to the broader lifecycle stage of releasing a model into a production environment. Serving is the specific mechanism or software (like NVIDIA Triton Inference Server or TorchServe) used to execute the deployed model.
  • Inference: This is the mathematical act of calculating a prediction from an input. Model serving provides the infrastructure (networking, scalability, and security) that allows inference to happen reliably for end-users.
  • Microservices: Serving is often architected as a set of microservices, where the model runs as an independent service that other parts of an application can query, often exchanging data in lightweight formats like JSON.
