Explore how Question Answering (QA) uses AI to provide factual answers. Learn about VQA with [YOLO26](https://docs.ultralytics.com/models/yolo26/) and NLP techniques.
Question Answering (QA) is a specialized field within artificial intelligence (AI) and natural language processing (NLP) focused on building systems that automatically answer questions posed by humans in natural language. Unlike traditional search engines that retrieve a list of relevant documents or web pages, a QA system attempts to understand the intent of the user's query and provide a precise, factual answer. This capability bridges the gap between massive, unstructured data repositories and the specific information needs of users, making it a critical component of modern AI Agents and virtual assistants.
At its core, a Question Answering system involves three main stages: question processing, document retrieval, and answer extraction. First, the system analyzes the input query to determine what is being asked (e.g., a "who," "where," or "how" question) and identifies key entities. Next, it searches through a knowledge base—which could be a closed set of manuals or the open internet—to find passages relevant to the query. Finally, it uses advanced techniques like machine reading comprehension to pinpoint the exact answer within the text or generate a response based on the synthesized information.
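To make these three stages concrete, the sketch below implements each one over a tiny in-memory knowledge base. The corpus, function names, and keyword-overlap retrieval here are purely illustrative assumptions; a production system would replace them with a learned retriever and a machine reading comprehension model.

```python
# Minimal sketch of the three QA stages over a toy knowledge base.
# All names and the corpus below are illustrative, not a real library API.

KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "The Pacific Ocean is the largest ocean on Earth.",
]


def process_question(question: str) -> set[str]:
    """Stage 1: question processing - extract lowercase keywords from the query."""
    return {word.strip("?.,!").lower() for word in question.split()}


def retrieve_passage(keywords: set[str]) -> str:
    """Stage 2: document retrieval - pick the passage with the most keyword overlap."""
    return max(KNOWLEDGE_BASE, key=lambda p: len(keywords & set(p.lower().split())))


def extract_answer(passage: str) -> str:
    """Stage 3: answer extraction - a real system would run a reading-comprehension
    model here; this sketch simply returns the retrieved passage."""
    return passage


question = "Where is the Eiffel Tower?"
print(extract_answer(retrieve_passage(process_question(question))))
# -> "The Eiffel Tower is located in Paris, France."
```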
Modern QA systems often leverage Large Language Models (LLMs) and transformers like BERT (Bidirectional Encoder Representations from Transformers) to achieve high accuracy. These models are pre-trained on vast amounts of text, allowing them to grasp context, nuance, and semantic relationships better than keyword-based methods.
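As a minimal sketch of this approach, the example below uses the Hugging Face `transformers` library (an assumption: it is a third-party package, not part of Ultralytics, and must be installed separately) to run an extractive, BERT-style QA model over a short context:

```python
from transformers import pipeline

# Load an extractive QA pipeline backed by a BERT-style model
# fine-tuned on the SQuAD question-answering dataset
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="What does a QA system return?",
    context="Unlike search engines that return documents, a QA system returns a precise answer.",
)
print(result["answer"])  # e.g., "a precise answer"
```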
QA systems are typically classified by the domain of data they access and the modalities they support.
The adoption of QA technology is transforming how industries interact with vast amounts of unstructured data.
For Visual Question Answering (VQA), the system must first identify the objects in a scene and their relationships. A high-performance object detection model acts as the "eyes" of the QA system. The latest Ultralytics models are well suited to this task, detecting scene elements quickly and accurately so they can be passed to a language model for reasoning.
The following Python example demonstrates how to use the Ultralytics YOLO26 model to extract visual context (objects) from an image, which is the foundational step in a VQA pipeline:
```python
from ultralytics import YOLO

# Load a pre-trained YOLO26 model (latest generation)
model = YOLO("yolo26n.pt")

# Perform inference to identify objects in the image
# This provides the "visual facts" for a QA system
results = model("https://ultralytics.com/images/bus.jpg")

# Display the detected objects and their labels
results[0].show()
```
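To connect these detections to a language model, the detected class names can be serialized into a textual context. The snippet below is an illustrative extension of the example above, not a prescribed VQA API; it relies on the `names` and `boxes.cls` attributes of the Ultralytics `Results` object:

```python
from ultralytics import YOLO

model = YOLO("yolo26n.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# Convert the detections into a textual "visual context" for a language model
names = results[0].names  # maps class indices to human-readable labels
labels = [names[int(c)] for c in results[0].boxes.cls]

context = f"The image contains: {', '.join(labels)}."
print(context)  # e.g., "The image contains: bus, person, person, person."
```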
In machine learning, it is useful to distinguish Question Answering from similar terms:
The evolution of QA is heavily supported by open-source frameworks like PyTorch and TensorFlow, enabling developers to build increasingly sophisticated systems that understand the world through both text and pixels. For those looking to manage datasets for training these systems, the Ultralytics Platform offers comprehensive tools for annotation and model management.