Zero-shot learning: Discover the cutting-edge AI approach that enables models to classify unseen data, revolutionizing fields such as object detection and NLP.
Zero-Shot Learning (ZSL) is a machine learning paradigm that enables artificial intelligence models to recognize, classify, or detect objects they have never encountered during their training phase. In traditional supervised learning, a model requires thousands of labeled examples for every specific category it needs to identify. ZSL eliminates this strict dependency by leveraging auxiliary information—typically text descriptions, semantic attributes, or embeddings—to bridge the gap between seen and unseen classes. This capability allows artificial intelligence (AI) systems to be significantly more flexible, scalable, and capable of handling dynamic environments where collecting exhaustive data for every possible object is impractical.
The core mechanism of ZSL involves transferring knowledge from familiar concepts to unfamiliar ones using a shared semantic space. Instead of learning to recognize a "zebra" solely by memorizing pixel patterns of black and white stripes, the model learns the relationship between visual features and semantic attributes (e.g., "horse-like shape," "striped pattern," "four legs") derived from natural language processing (NLP).
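The following minimal sketch illustrates this idea in plain NumPy. The attribute vectors and the "predicted" visual attributes are hypothetical stand-ins for what a real visual model and attribute ontology would provide; the point is that an unseen class like "zebra" can be identified purely by how well its semantic description matches the visual attributes extracted from an image.

```python
import numpy as np

# Hypothetical semantic attribute vectors per class:
# [horse-like shape, striped pattern, four legs]
class_attributes = {
    "horse": np.array([1.0, 0.0, 1.0]),
    "tiger": np.array([0.0, 1.0, 1.0]),
    "zebra": np.array([1.0, 1.0, 1.0]),  # unseen class, known only via its description
}


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Attributes a (hypothetical) visual model extracted from an image
predicted_attributes = np.array([0.9, 0.8, 1.0])

# Zero-shot classification: pick the class whose semantic description
# best matches the visual attributes predicted from the image
scores = {name: cosine_similarity(predicted_attributes, attrs) for name, attrs in class_attributes.items()}
print(max(scores, key=scores.get))  # -> "zebra"
```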
This process often relies on multi-modal models that align image and text representations. For instance, foundational research like OpenAI's CLIP demonstrates how models can learn visual concepts from natural language supervision. When a ZSL model encounters an unseen object, it extracts the visual features and compares them against a dictionary of semantic vectors. If the visual features align with the semantic description of the new class, the model can correctly classify it, effectively performing a "zero-shot" prediction. This approach is fundamental to modern foundation models which generalize across vast arrays of tasks.
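As a concrete illustration of this image-text alignment, the sketch below scores an image against free-form class descriptions using the publicly available `openai/clip-vit-base-patch32` checkpoint through the Hugging Face `transformers` library. This particular setup and the prompt wording are illustrative assumptions, not part of the original example.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the pre-trained CLIP model and its matching preprocessor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate class descriptions, including classes CLIP was never explicitly trained to label
texts = ["a photo of a bus", "a photo of a zebra", "a photo of a truck"]
image = Image.open(requests.get("https://ultralytics.com/images/bus.jpg", stream=True).raw)

# Embed the image and the texts into the shared semantic space and compare them
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into probabilities
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```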
Zero-shot learning drives innovation across industries by enabling systems to generalize beyond their initial training data.
Ultralytics YOLO models such as YOLO-World are a practical example of zero-shot learning. They allow custom classes to be defined dynamically at runtime without retraining, which is achieved by connecting a robust detection backbone to a text encoder that understands natural language.
The following Python example demonstrates how to use YOLO-World, provided in the ultralytics package, to detect objects that were not explicitly part of a standard training set.
from ultralytics import YOLOWorld
# Load a pre-trained YOLO-World model capable of Zero-Shot Learning
model = YOLOWorld("yolov8s-world.pt")
# Define custom classes via text prompts (e.g., specific accessories)
# The model adjusts to detect these new classes without retraining
model.set_classes(["blue backpack", "red apple", "sunglasses"])
# Run inference on an image to detect the new zero-shot classes
results = model.predict("https://ultralytics.com/images/bus.jpg")
# Display the results
results[0].show()
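In practice, set_classes() encodes the text prompts once with the model's text encoder and caches the resulting embeddings; subsequent predict() calls then compare detected region features against those embeddings, which is why no weight updates or retraining are needed to handle the new classes.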
To fully understand ZSL, it is useful to distinguish it from similar learning strategies used in computer vision (CV), such as one-shot and few-shot learning, which require one or a small number of labeled examples per new class, respectively.
While ZSL offers immense potential, it faces challenges such as the domain shift problem, where the semantic attributes learned during training do not perfectly map to the visual appearance of unseen classes. Additionally, ZSL models can suffer from bias, where prediction accuracy is significantly higher for seen classes compared to unseen ones.
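One widely studied mitigation for this seen-class bias is calibrated stacking (Chao et al., 2016), which subtracts a fixed calibration constant from the scores of seen classes before taking the argmax. The sketch below uses made-up scores and a hand-picked constant purely to show the mechanics; in practice the constant is tuned on a validation set.

```python
import numpy as np

# Hypothetical classifier scores over seen and unseen classes for one sample
classes = ["horse (seen)", "tiger (seen)", "zebra (unseen)"]
scores = np.array([0.62, 0.55, 0.58])
seen_mask = np.array([True, True, False])

# Calibrated stacking: penalize seen-class scores by a constant gamma
# to counteract the model's bias toward classes it saw during training
gamma = 0.1
calibrated = scores - gamma * seen_mask

print(classes[int(np.argmax(scores))])  # biased prediction -> "horse (seen)"
print(classes[int(np.argmax(calibrated))])  # calibrated prediction -> "zebra (unseen)"
```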
Research from organizations like Stanford University's AI Lab and the IEEE Computer Society continues to address these limitations. As computer vision tools become more robust, ZSL is expected to become a standard feature, reducing the reliance on massive data labeling efforts. For teams looking to manage datasets efficiently before deploying advanced models, the Ultralytics Platform offers comprehensive tools for annotation and dataset management.