
Constitutional AI

Explore how Constitutional AI aligns models with ethical principles. Learn how to implement safety checks in computer vision using YOLO26 for more reliable AI.

Constitutional AI is a method for training artificial intelligence systems to align with human values by providing them with a set of high-level principles—a "constitution"—rather than relying solely on extensive human feedback on individual outputs. This approach essentially teaches the AI model to critique and revise its own behavior based on a predefined set of rules, such as "be helpful," "be harmless," and "avoid discrimination." By embedding these ethical guidelines directly into the training process, developers can create systems that are safer, more transparent, and easier to scale than those dependent on manual Reinforcement Learning from Human Feedback (RLHF).

The Mechanism of Constitutional AI

The core innovation of Constitutional AI lies in its two-phase training process, which automates the alignment of models. Unlike traditional supervised learning, where humans must label every correct response, Constitutional AI uses the model itself to generate training data.

  1. Supervised Learning Phase: The model generates responses to prompts, then critiques its own output based on the constitutional principles. It revises the response to better align with the rules. This refined dataset is then used to fine-tune the model, teaching it to inherently follow the guidelines.
  2. Reinforcement Learning Phase: This phase, often called Reinforcement Learning from AI Feedback (RLAIF), replaces the human labeler. The AI generates pairs of responses and selects the one that best adheres to the constitution. This preference data trains a reward model, which then reinforces the desired behaviors via standard reinforcement learning techniques.
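The two phases above can be illustrated with a minimal sketch of the supervised step. Everything here is a toy stand-in: `generate`, `critique`, and `revise` would be calls to a language model in a real system, and the banned-word check is only a hypothetical proxy for a constitutional principle.

```python
# Minimal sketch of the Constitutional AI supervised phase:
# generate -> critique against the constitution -> revise.
# All functions are hypothetical stand-ins for LLM calls.

CONSTITUTION = ["be helpful", "be harmless", "avoid discrimination"]
BANNED_WORDS = {"insult", "threat"}  # toy proxy for "harmful" content

def generate(prompt: str) -> str:
    # Stand-in for an LLM generating a (possibly flawed) draft
    return f"Response to '{prompt}' containing an insult."

def critique(response: str) -> list[str]:
    # Check the draft against the constitution (toy harmlessness check)
    return [
        f"Violates 'be harmless': contains '{word}'"
        for word in BANNED_WORDS
        if word in response
    ]

def revise(response: str, violations: list[str]) -> str:
    # Rewrite the draft to address the flagged violations
    for word in BANNED_WORDS:
        response = response.replace(word, "[removed]")
    return response

def supervised_phase(prompt: str) -> tuple[str, str]:
    """Return a (prompt, revised response) pair of the kind that
    would be added to the fine-tuning dataset."""
    draft = generate(prompt)
    violations = critique(draft)
    final = revise(draft, violations) if violations else draft
    return prompt, final

prompt, final = supervised_phase("Describe my coworker.")
print(final)  # revised response with the harmful content removed
```

The key design point is that no human labels the revision: the model's own critique, grounded in the written principles, produces the improved training example.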

Relevance to Computer Vision

While Constitutional AI originated in the context of large language models (LLMs) developed by organizations such as Anthropic, its principles are increasingly relevant to broader machine learning tasks, including computer vision (CV).

  • Ethical Image Generation: Generative AI tools for creating images can be "constitutionally" trained to refuse prompts that would generate violent, hateful, or copyrighted imagery. This ensures that the model weights themselves encode safety constraints, preventing the creation of harmful visual content.
  • Safety-Critical Vision Systems: In autonomous vehicles, a "constitutional" approach can define hierarchical rules for decision-making. For instance, a rule stating "human safety overrides traffic efficiency" can guide the model when analyzing complex road scenes, ensuring that object detection results are interpreted with safety as the priority.
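The hierarchical-rule idea in the second bullet can be sketched as a small decision function. The `Detection` type, class names, and thresholds below are hypothetical illustrations, not part of a real autonomy stack:

```python
# Sketch of a hierarchical "constitutional" rule for a vision system:
# human safety overrides traffic efficiency. All names and thresholds
# are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

SAFETY_CLASSES = {"person", "bicycle"}  # classes whose presence triggers Rule 1

def plan_action(detections: list[Detection]) -> str:
    # Rule 1 (highest priority): human safety overrides everything,
    # so even a low-confidence person detection forces caution.
    if any(d.label in SAFETY_CLASSES and d.confidence > 0.3 for d in detections):
        return "brake"
    # Rule 2 (lower priority): otherwise optimize traffic flow
    return "proceed"

scene = [Detection("car", 0.9), Detection("person", 0.4)]
print(plan_action(scene))  # → "brake"
```

Note the deliberately low 0.3 threshold for safety-critical classes: under a "safety first" constitution, the system errs toward braking on uncertain person detections rather than demanding high confidence.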

Implementing Policy Checks in Vision AI

While full Constitutional AI training involves complex feedback loops, developers can apply the concept of "constitutional checks" at inference time to filter outputs against safety policies. The following example runs object detection with YOLO26 and applies a safety rule that filters out low-confidence detections, simulating a reliability constitution.

from ultralytics import YOLO

# Load the YOLO26 model (latest stable Ultralytics release)
model = YOLO("yolo26n.pt")

# Run inference on an image
results = model("https://ultralytics.com/images/bus.jpg")

# Apply a "constitutional" safety check: Only accept high-confidence detections
for result in results:
    # Filter boxes with confidence > 0.5 to ensure reliability
    safe_boxes = [box for box in result.boxes if box.conf > 0.5]

    print(f"Safety Check Passed: {len(safe_boxes)} reliable objects detected.")
    # Further processing would only use 'safe_boxes'

Constitutional AI vs. Conventional RLHF

It is important to distinguish Constitutional AI from standard Reinforcement Learning from Human Feedback (RLHF).

  • Scalability: RLHF requires vast amounts of human labor to rate model outputs, which is expensive and slow. Constitutional AI automates this with AI agents, making it highly scalable.
  • Transparency: In RLHF, the model learns from an opaque "reward signal" (a score), making it hard to know why a behavior was preferred. In Constitutional AI, the chain of thought prompting used during the critique phase makes the reasoning explicit and traceable to specific written principles.
  • Consistency: Human raters can be inconsistent or biased. A written constitution provides a stable baseline for AI ethics, reducing subjectivity in the alignment process.
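The RLAIF preference-labeling step that distinguishes Constitutional AI from RLHF can be illustrated with a toy "judge". The scoring rule and record format below are hypothetical stand-ins for a real LLM judge and preference dataset:

```python
# Toy illustration of RLAIF preference labeling: an AI judge scores two
# candidate responses against the constitution and emits a preference
# record for reward-model training. All logic is a hypothetical stand-in
# for real LLM calls.

HARMFUL_TERMS = {"insult", "threat"}

def constitutional_score(response: str) -> int:
    # Higher is better: penalize each harmful term present (toy rule)
    return -sum(term in response for term in HARMFUL_TERMS)

def label_preference(prompt: str, resp_a: str, resp_b: str) -> dict:
    """Return a (prompt, chosen, rejected) record of the kind used
    to train a reward model, with no human rater involved."""
    if constitutional_score(resp_a) >= constitutional_score(resp_b):
        chosen, rejected = resp_a, resp_b
    else:
        chosen, rejected = resp_b, resp_a
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

record = label_preference(
    "Reply to a rude email.",
    "Here is a polite, professional reply.",
    "Send back an insult and a threat.",
)
print(record["chosen"])
```

Because the judge's decision traces back to an explicit scoring of written principles, each preference label is auditable in a way an anonymous human rating is not, which is the transparency advantage described above.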

The Future of Alignment

As models evolve toward artificial general intelligence (AGI), robust alignment strategies such as Constitutional AI become increasingly important. These approaches are essential for compliance with emerging standards from bodies such as the NIST AI Safety Institute.

The Ultralytics Platform offers tools to manage data governance and model monitoring, facilitating the creation of responsible AI systems. By integrating these ethical considerations into the lifecycle of AI development—from data collection to model deployment—organizations can mitigate risks and ensure their technologies contribute positively to society.
