
Theia Scientific redefines microscopy data analysis with Ultralytics YOLO models

Problem

Theia Scientific set out to find a Vision AI model that would improve the speed, accuracy, and reproducibility of microscopy image analysis.

Solution

By integrating Ultralytics YOLO models into its platform, Theia Scientific transformed how microscopy data is processed, making analysis more efficient and reliable.

Scientific research across fields like materials science and nanotechnology often depends on charged particle, scanning probe, and optical microscopy to explore structures that are invisible to the human eye. For instance, Transmission Electron Microscopy (TEM) is a key tool, capable of capturing fine details at the nano and atomic scale.

Unfortunately, once these images are acquired, analyzing them can be slow and complex, often requiring significant manual effort and domain expertise. To enhance this process, Theia Scientific developed the Theiascope™ platform, a real-time microscopy image analysis system that integrates Ultralytics YOLO models to automate image detection, segmentation, and quantitative measurements, making microscopy faster, more efficient, and reproducible.

Exploring the role of Vision AI in scientific imaging

Founded by brothers Kevin and Christopher Field, Theia Scientific develops advanced software tools to accelerate microscopy research. With expertise spanning materials science, industrial automation, electronics, and software engineering, they focus on reducing the bottlenecks scientists, engineers, and researchers face when analyzing complex image data. 

Their flagship product, the Theiascope™ platform, integrates computer vision to automatically detect, segment, and measure features in electron microscopy images. By relying on Vision AI rather than manual annotation and tracing, the platform provides consistent and reproducible results.

Why are microscopy images difficult to analyze manually?

Microscopy images, especially those captured with TEM, are very detailed but challenging to interpret. Each image contains hundreds to thousands of fine features and structures, such as grains and boundaries, that have to be carefully identified, annotated, traced, and/or measured to extract meaningful data. Traditionally, this has been done by hand, which is slow and can vary from person to person. Two researchers might annotate the same image differently, leading to inconsistent results and large error bars.

This process becomes even more complex when large datasets are involved. To get reliable insights, thousands of images often need to be analyzed, which can take weeks or even months using manual methods. On top of that, variations in contrast, noise, and overlapping structures make the process even harder.
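The person-to-person variability described above can be quantified with a standard overlap metric. Here is a minimal, hypothetical sketch (not part of the Theiascope™ platform) that computes intersection-over-union (IoU) between two manual traces of the same feature, each represented as a set of pixel coordinates:

```python
def iou(trace_a, trace_b):
    """Intersection-over-union of two pixel-coordinate traces.

    1.0 means the two annotators traced the feature identically;
    lower values quantify their disagreement.
    """
    a, b = set(trace_a), set(trace_b)
    union = a | b
    if not union:
        return 1.0  # both traces empty: trivially identical
    return len(a & b) / len(union)

# Two annotators trace the same grain slightly differently:
# the second trace is shifted one pixel to the right.
annotator_1 = [(x, y) for x in range(0, 10) for y in range(0, 10)]
annotator_2 = [(x, y) for x in range(1, 11) for y in range(0, 10)]
print(round(iou(annotator_1, annotator_2), 3))
```

Even a one-pixel shift between two otherwise identical traces drops the overlap noticeably, which is exactly the kind of disagreement that produces large error bars when results are aggregated across annotators.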

For researchers aiming to study microstructural evolution or track changes over time, these issues can significantly slow down progress. Theia Scientific recognized that these challenges required a more automated and reliable solution.

Enhancing microscopy workflows using Ultralytics YOLO models

After exploring different approaches to automating microscopy data analysis, Theia Scientific found that Ultralytics YOLO models offered the speed, accuracy, and flexibility needed for real-time microscopy image analysis, enabling instant quantitative results at the microscope while experiments are still in progress. Models like Ultralytics YOLO11 and Ultralytics YOLOv8 support computer vision tasks such as object detection (identifying and locating individual features in an image) and instance segmentation (outlining each feature at the pixel level), making it possible to detect nanoscale structures, such as grains and boundaries, directly in TEM images as they are captured.

Fig 1. Current microscopy image and data analysis workflow. Scientists, engineers, and researchers are ultimately looking for discovery and answers at the end of the workflow. Meanwhile, the workflow is disjointed and laborious, with the relative time/labor needed for each step shown at the bottom. Feature detection and aggregation are the most time-consuming stages in the workflow. The gray arrows leading back to acquisition represent re-acquiring data when the current data proves unusable. Source: Theia Scientific.

For instance, in a recent study on polycrystalline thin films, the Theiascope™ and Ultralytics YOLO models were used to identify and measure grain structures that influence the properties of materials used in electronics, coatings, and energy devices. Accurate grain size distributions are critical for understanding how these films evolve during experiments. 
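Once each grain is segmented at the pixel level, turning masks into grain size distributions is straightforward. The following is a hedged sketch of one common convention (equivalent circular diameter), with hypothetical pixel counts and calibration values, not the platform's actual measurement code:

```python
import math


def equivalent_diameters(pixel_areas, nm_per_pixel):
    """Convert per-grain segmentation areas (in pixels) into
    equivalent circular diameters in nanometres.

    Each segmented grain's area A gives d = 2 * sqrt(A / pi),
    a common way to summarize grain size distributions.
    """
    scale = nm_per_pixel ** 2  # nm^2 per pixel
    return [2.0 * math.sqrt(a * scale / math.pi) for a in pixel_areas]


# Hypothetical per-grain pixel counts from a segmentation mask,
# with an assumed calibration of 0.5 nm per pixel.
areas_px = [400, 900, 1600]
diams_nm = equivalent_diameters(areas_px, nm_per_pixel=0.5)
print([round(d, 2) for d in diams_nm])
```

With per-grain diameters in hand, the size distribution and its evolution over an experiment follow directly from simple aggregation.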

One of the key reasons Ultralytics YOLO models are so effective in these use cases is their ability to interpolate across large datasets. Instead of requiring every frame in an experiment to be labeled, researchers can annotate just a small fraction of images, train a YOLO model, and then let it reliably analyze thousands of additional frames. This makes it possible to track grain growth and boundary changes across time‑lapse TEM experiments with minimal manual input.
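The annotate-a-fraction workflow can be sketched with a simple stride split: label every Nth frame of a time-lapse, train on that subset, and let the model handle the rest. The numbers below are hypothetical, not Theia Scientific's actual annotation protocol:

```python
def split_frames(frame_ids, annotate_every=50):
    """Pick a sparse subset of time-lapse frames for manual
    annotation; the rest are left for the trained model.

    Annotating every Nth frame keeps labeling effort low while
    still covering the whole experiment in time.
    """
    to_annotate = frame_ids[::annotate_every]
    annotated = set(to_annotate)
    to_infer = [f for f in frame_ids if f not in annotated]
    return to_annotate, to_infer


frames = list(range(1000))  # e.g. a 1,000-frame TEM time-lapse
labeled, unlabeled = split_frames(frames, annotate_every=50)
print(len(labeled), len(unlabeled))
```

Here only 2% of the frames would be annotated by hand, illustrating how a small labeled fraction can unlock analysis of the full experiment.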

Why choose Ultralytics YOLO models?

In the study on polycrystalline thin films discussed earlier, Ultralytics YOLOv8 was found to be up to 43 times faster than U‑Net (a model often used for scientific image analysis). This speed makes YOLO practical for real‑time, on‑microscope analysis. 

While U‑Net is accurate but slow, YOLO combines speed with accuracy, matching grain size measurements to within 3% of ground truth. Its design also makes it more flexible, handling different scales and training setups with ease. For researchers, this means faster results without sacrificing reliability, which is ideal for accelerating microscopy workflows.
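A "within 3% of ground truth" claim reduces to a relative-error check. This is a minimal, hypothetical illustration of that acceptance criterion, with made-up grain diameters rather than values from the study:

```python
def within_tolerance(predicted, ground_truth, tol=0.03):
    """Check whether a predicted grain-size statistic falls
    within a relative tolerance of the ground-truth value."""
    return abs(predicted - ground_truth) / ground_truth <= tol


# Hypothetical mean grain diameters (nm): model vs. manual tracing.
print(within_tolerance(51.2, 50.0))  # 2.4% relative error
print(within_tolerance(55.0, 50.0))  # 10% relative error
```

The same check applies to any summary statistic (mean diameter, grain count, boundary length), making it easy to validate a model run against a manually traced reference set.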

Fig 2. Compared to manual tracing (b) and U‑Net (c), YOLOv8 segmentation (d) provides sharper, more accurate outlines on microscopy images. (Source)

Reducing bias and boosting consistency in microscopy with YOLO

Through the Theiascope™ platform, Theia Scientific showed that Ultralytics YOLO models can accelerate microscopy image analysis and TEM experiments while supporting reproducible, long‑term research. The platform is designed to be microscope‑agnostic, meaning YOLO models can analyze images collected from different instruments without requiring customized pipelines. This flexibility ensures that workflows remain consistent across varied experiments, operators, and environments.

Reproducibility is another key outcome. Scientific research often requires results to be revisited and validated years later. With various YOLO models integrated into the Theiascope™, researchers can rerun older models such as Ultralytics YOLOv5 on archived datasets and obtain consistent outputs, then compare them directly with results from newer models like Ultralytics YOLO11. This makes verifying findings straightforward, even as AI methods evolve.
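One simple way to compare an older and a newer model on the same archived dataset is to diff their per-image outputs. The sketch below uses hypothetical grain counts and is not the platform's actual validation procedure:

```python
def mean_abs_diff(counts_old, counts_new):
    """Mean absolute difference in per-image feature counts
    between two model versions run on the same archived set."""
    assert len(counts_old) == len(counts_new)
    pairs = zip(counts_old, counts_new)
    return sum(abs(a - b) for a, b in pairs) / len(counts_old)


# Hypothetical per-image grain counts from an older and a newer model.
v5_counts = [120, 98, 143, 110]
v11_counts = [122, 97, 145, 111]
print(mean_abs_diff(v5_counts, v11_counts))
```

A small mean difference gives a quick sanity check that findings hold up when rerun with a newer model, before digging into per-feature comparisons.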

Fig 3. The Theiascope™ platform. Electron microscopy images are captured and streamed from the acquisition computer to a GPU-enabled device running a web application, time-series database, and the Ultralytics YOLO models. Updates and new Ultralytics YOLO models can be pushed to the platform via over-the-air (OTA) updates. Source: Theia Scientific.

Also, Ultralytics YOLO models give the platform the scalability needed to handle large datasets. Their real‑time inference capabilities allow thousands of TEM images to be analyzed in the time it would take to manually analyze just a few. This enables researchers to follow dynamic processes like grain growth across entire experiments, generating new insights and unlocking novel experiments at both the scale and speed required for cutting-edge research.

Integrating advanced Vision AI into next‑gen research tools

Theia Scientific sees Ultralytics YOLO models as a foundation for the future of microscopy. By continuing to refine training methods and calibration approaches, they aim to further improve accuracy across scales and experimental conditions. 

Moving forward, Theia Scientific plans to expand Theiascope™ to support more complex, in‑situ experiments and multi‑modal datasets. They believe it’s likely that Vision AI will become a standard part of next‑generation research workflows, enabling faster discovery and deeper insights across scientific domains.

Interested in streamlining your company’s workflows? Check out our GitHub repository to learn more about Vision AI. Explore how YOLO models are driving innovations in areas like AI in healthcare and computer vision in retail. To get hands‑on with YOLO, discover how our licensing options can support your vision.


Frequently asked questions

What are Ultralytics YOLO models?

Ultralytics YOLO models are computer vision architectures developed to analyze visual data from images and video inputs. These models can be trained for tasks including object detection, classification, pose estimation, tracking, and instance segmentation. Ultralytics YOLO models include:

  • Ultralytics YOLOv5
  • Ultralytics YOLOv8
  • Ultralytics YOLO11

What is the difference between Ultralytics YOLO models?

Ultralytics YOLO11 is the latest version of our computer vision models. Like its predecessors, it supports all the computer vision tasks the Vision AI community has come to love in YOLOv8. YOLO11, however, delivers greater performance and accuracy, making it a powerful tool for real-world industry challenges.

Which Ultralytics YOLO model should I choose for my project?

The model you choose depends on your specific project requirements. Take into account factors like performance, accuracy, and deployment needs. Here's a quick overview:

  • Some of Ultralytics YOLOv8's key features:
  1. Maturity and Stability: YOLOv8 is a proven, stable framework with extensive documentation and compatibility with earlier YOLO versions, making it ideal for integrating into existing workflows.
  2. Ease of Use: With its beginner-friendly setup and straightforward installation, YOLOv8 is perfect for teams of all skill levels.
  3. Cost-Effectiveness: It requires fewer computational resources, making it a great option for budget-conscious projects.
  • Some of Ultralytics YOLO11's key features:
  1. Higher Accuracy: YOLO11 outperforms YOLOv8 in benchmarks, achieving better accuracy with fewer parameters.
  2. Advanced Features: It supports cutting-edge tasks like pose estimation, object tracking, and oriented bounding boxes (OBB), offering unmatched versatility.
  3. Real-Time Efficiency: Optimized for real-time applications, YOLO11 delivers faster inference times and excels on edge devices and latency-sensitive tasks.
  4. Adaptability: With broad hardware compatibility, YOLO11 is well-suited for deployment across edge devices, cloud platforms, and NVIDIA GPUs.

What license do I need?

Ultralytics YOLO repositories, such as YOLOv5 and YOLO11, are distributed under the AGPL-3.0 License by default. This OSI-approved license is designed for students, researchers, and enthusiasts, promoting open collaboration and requiring that any software using AGPL-3.0 components also be open-sourced. While this ensures transparency and fosters innovation, it may not align with commercial use cases.
If your project involves embedding Ultralytics software and AI models into commercial products or services and you wish to bypass the open-source requirements of AGPL-3.0, an Enterprise License is ideal.

Benefits of the Enterprise License include:

  • Commercial Flexibility: Modify and embed Ultralytics YOLO source code and models into proprietary products without adhering to the AGPL-3.0 requirement to open-source your project.
  • Proprietary Development: Gain full freedom to develop and distribute commercial applications that include Ultralytics YOLO code and models.

To ensure seamless integration and avoid AGPL-3.0 constraints, request an Ultralytics Enterprise License using the form provided. Our team will assist you in tailoring the license to your specific needs.

Power up with Ultralytics YOLO

Get advanced AI vision for your projects. Find the right license for your goals today.

Explore licensing options