
Capsule Networks (CapsNets)

Explore Capsule Networks (CapsNets) and how they preserve spatial hierarchies to solve the "Picasso problem" in AI. Learn about dynamic routing and vector neurons.

Capsule Networks, often abbreviated as CapsNets, represent an advanced architecture in the field of deep learning designed to overcome specific limitations found in traditional neural networks. Introduced by Geoffrey Hinton and his team, CapsNets attempt to mimic the biological neural organization of the human brain more closely than standard models. Unlike a typical convolutional neural network (CNN), which excels at detecting features but often loses spatial relationships due to downsampling, a Capsule Network organizes neurons into groups called "capsules." These capsules encode not just the probability of an object's presence, but also its specific properties, such as orientation, size, and texture, effectively preserving the hierarchical spatial relationships within visual data.

The Limitation of Traditional CNNs

To understand the innovation of CapsNets, it is helpful to look at how standard computer vision models operate. A conventional CNN uses layers of feature extraction followed by pooling layers—specifically max pooling—to reduce computational load and achieve translational invariance. This means a CNN can identify a "cat" regardless of where it sits in the image.

However, this process often discards precise location data, leading to the "Picasso problem": a CNN might classify a face correctly even if the mouth is on the forehead, simply because all the necessary features are present. CapsNets address this by removing pooling layers and replacing them with a process that respects the spatial hierarchies of objects.
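
To make concrete what pooling throws away, the following toy NumPy sketch (the array values and helper name are illustrative assumptions, not part of any library) shows that 2x2 max pooling returns identical outputs for two feature maps whose active pixel sits at different positions within the same pooling window:

import numpy as np

def max_pool_2x2(feature_map):
    # Non-overlapping 2x2 max pooling over a square feature map
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Two 4x4 feature maps: the detected "feature" (value 1.0) sits at different positions in the same window
a = np.zeros((4, 4))
a[0, 0] = 1.0
b = np.zeros((4, 4))
b[1, 1] = 1.0

print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True: the exact location is discarded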

How Capsule Networks Work

The core building block of this architecture is the capsule, a nested set of neurons that outputs a vector rather than a scalar value. In vector mathematics, a vector has both magnitude and direction. In a CapsNet:

  • Magnitude (Length): Represents the probability that a specific entity exists in the current input (see the squash sketch after this list).
  • Direction (Orientation): Encodes the instantiation parameters, such as the object's pose estimation, scale, and rotation.
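
In the original CapsNet formulation, the length is kept between 0 and 1 by a "squash" non-linearity. The following minimal NumPy sketch (names and values are illustrative) rescales a raw capsule output so its length behaves like a probability while its direction is unchanged:

import numpy as np

def squash(vector, eps=1e-8):
    # Rescale the vector so its length lies in (0, 1) while its direction is preserved
    squared_norm = np.sum(vector**2)
    scale = squared_norm / (1.0 + squared_norm)
    return scale * vector / (np.sqrt(squared_norm) + eps)

# Illustrative raw capsule output encoding pose parameters (orientation, scale, ...)
raw_output = np.array([2.0, -1.0, 0.5])
activated = squash(raw_output)
print(np.linalg.norm(activated))  # < 1.0, interpretable as an existence probability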

Capsules in lower layers (detecting simple shapes like edges) predict the output of capsules in higher layers (detecting complex objects like eyes or tires). This communication is managed by an algorithm called "dynamic routing" or "routing by agreement." If a lower-level capsule's prediction aligns with the higher-level capsule's state, the connection between them is strengthened. This allows the network to recognize objects from different 3D viewpoints without requiring the massive data augmentation usually needed to teach CNNs about rotation and scale.
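
A minimal sketch of routing by agreement (NumPy-style, with illustrative shapes and variable names rather than the exact formulation from the original paper) might look like this; in a real network the prediction vectors u_hat would come from multiplying lower-capsule outputs by learned transformation matrices:

import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Shrink each vector's length into (0, 1) while preserving its direction
    sq_norm = np.sum(v**2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / (np.sqrt(sq_norm) + eps)

def routing_by_agreement(u_hat, iterations=3):
    # u_hat: predictions from lower capsules for each higher capsule, shape (num_lower, num_higher, dim)
    num_lower, num_higher, _ = u_hat.shape
    b = np.zeros((num_lower, num_higher))  # routing logits, initially uniform
    for _ in range(iterations):
        # Coupling coefficients: softmax over the higher capsules each lower capsule routes to
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions, squashed to produce the higher-capsule outputs
        s = (c[..., None] * u_hat).sum(axis=0)  # (num_higher, dim)
        v = squash(s)
        # Agreement step: strengthen links whose predictions align with the resulting output
        b = b + np.einsum("ijk,jk->ij", u_hat, v)
    return v

# Illustrative example: 6 lower-level capsules predicting 3 higher-level capsules of dimension 8
predictions = np.random.randn(6, 3, 8)
higher_outputs = routing_by_agreement(predictions)
print(higher_outputs.shape)  # (3, 8)

Each iteration increases the coupling to higher-level capsules whose outputs agree with a lower-level capsule's prediction, which is the "agreement" that replaces pooling.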

Key Differences: CapsNets vs. CNNs

Although both architectures are fundamental to computer vision (CV), they differ in how they process and represent visual data:

  • Scalar vs. Vector: CNN neurons use scalar outputs to signify feature presence. CapsNets use vectors to encode presence (length) and pose parameters (orientation).
  • Routing vs. Pooling: CNNs use pooling to downsample data, often losing location details. CapsNets use dynamic routing to preserve spatial data, making them highly effective for tasks requiring precise object tracking.
  • Data Efficiency: Because capsules implicitly understand 3D viewpoints and affine transformations, they can often generalize from less training data compared to CNNs, which may require extensive examples to learn every possible rotation of an object (a sketch of the underlying prediction step follows this list).
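
The viewpoint handling mentioned above comes from the prediction step: each lower-level capsule's pose vector is multiplied by a learned transformation matrix to predict the pose of each higher-level capsule, and routing by agreement then compares those predictions. A minimal NumPy sketch (shapes and names are illustrative assumptions, not the original paper's exact dimensions):

import numpy as np

num_lower, num_higher, d_in, d_out = 6, 3, 8, 16

# Learned transformation matrices: W[i, j] maps lower capsule i's pose to a prediction for higher capsule j
W = np.random.randn(num_lower, num_higher, d_out, d_in) * 0.1
u = np.random.randn(num_lower, d_in)  # lower-level capsule outputs (pose vectors)

# Prediction vectors u_hat[i, j] = W[i, j] @ u[i], the inputs to routing by agreement
u_hat = np.einsum("ijab,ib->ija", W, u)
print(u_hat.shape)  # (6, 3, 16)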

Real-World Applications

Although CapsNets are usually more computationally expensive than optimized models such as YOLO26, they offer clear advantages in specialized domains:

  1. Medical Image Analysis: In healthcare, the precise orientation and shape of an anomaly are critical. Researchers have applied CapsNets to brain tumor segmentation, where the model must distinguish a tumor from the surrounding tissue based on subtle spatial hierarchies that standard CNNs might smooth away. You can explore related research on capsule networks in medical imaging.
  2. Overlapping Digit Recognition: CapsNets achieved state-of-the-art results on the MNIST dataset specifically in scenarios where digits overlap. Because the network tracks the "pose" of each digit, it can disentangle two overlapping numbers (e.g., a '3' on top of a '5') as distinct objects rather than merging them into a single confused feature map.

Practical Context and Application

Capsule networks are primarily a classification architecture. While they offer theoretical robustness, modern industrial applications often favor high-speed CNNs or Transformers for real-time performance. It is still useful, however, to understand the classification benchmarks used for CapsNets, such as MNIST.

The following example shows how to train a modern YOLO model on the MNIST dataset using the ultralytics package. This parallels the primary benchmark task used to validate capsule networks.

from ultralytics import YOLO

# Load a YOLO26 classification model (optimized for speed and accuracy)
model = YOLO("yolo26n-cls.pt")

# Train the model on the MNIST dataset
# This dataset helps evaluate how well a model learns handwritten digit features
results = model.train(data="mnist", epochs=5, imgsz=32)

# Run inference on a sample image of a handwritten digit
# Replace the placeholder path with your own image; the model predicts the digit class (0-9)
prediction = model("path/to/digit.png")

The Future of Capsules and Computer Vision

The principles behind capsule networks continue to influence research on AI safety and interpretability. By explicitly modeling part-whole relationships, capsules offer a "glass box" alternative to the "black box" nature of deep neural networks, making decisions more explainable. Future developments aim to combine the spatial robustness of capsules with the inference speed of architectures such as YOLO11 or the more recent YOLO26 to improve performance in 3D object detection and robotics. Researchers are also exploring matrix capsules with EM routing to further reduce the computational cost of the agreement algorithm.

For developers looking to manage datasets and train models efficiently, the Ultralytics Platform provides a unified environment to annotate data, train in the cloud, and deploy models that balance the speed of CNNs with the accuracy required for complex vision tasks.
