Without YOLOv5, the time required to manually count bacterial colonies, evaluate smears, and detect wildlife is enormous.
Have you ever had to evaluate countless images, datasets, or results? And to complicate the process further, have you ever had to do these evaluations manually? It is, of course, incredibly time-consuming.
For Martin Schätz, YOLOv5 proved to be a useful tool for cutting down the time needed for image analysis in infectious disease research and monitoring. While Martin wears several hats, the essence of his work is bioimage analysis, a field he describes as “the point between computer science and biology.” We wanted to learn more about Martin’s work on colony monitoring and counting, so we sat down and asked him a few questions.
Martin's logic behind implementing YOLOv5 for his projects stems from the need to automate existing processes for object detection, classification, and counting. Martin also aims to use YOLOv5 for cases like the Long-Term Evolution Experiment.
In labs, bacterial colonies grown on agar plates are generally counted manually by technicians. Unfortunately, manual counting is slow and error-prone. To tackle this problem, Martin used YOLOv5 to automate the counting process, which has greatly cut down on the error and time associated with colony detection and classification.
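As a rough illustration of this workflow, the sketch below counts detections per class from a YOLOv5-style output. The model weights file (`colonies.pt`), class labels, and confidence threshold are illustrative assumptions, not details from Martin's project; the inference step uses YOLOv5's standard PyTorch Hub interface.

```python
# Minimal sketch of automating colony counts with YOLOv5.
# Model path, labels, and threshold below are hypothetical examples.
from collections import Counter


def count_detections(detections, conf_threshold=0.25):
    """Count detections per class label above a confidence threshold.

    `detections` is an iterable of (label, confidence) pairs, e.g. one
    pair per predicted bounding box on an agar-plate photo.
    """
    return Counter(
        label for label, conf in detections if conf >= conf_threshold
    )


if __name__ == "__main__":
    # Hypothetical inference step: load a custom-trained YOLOv5 model
    # via PyTorch Hub and run it on a plate image.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "custom", path="colonies.pt")
    results = model("plate.jpg")
    # Each row of results.xyxy[0] is [x1, y1, x2, y2, conf, class].
    pairs = [
        (model.names[int(cls)], float(conf))
        for *_, conf, cls in results.xyxy[0]
    ]
    print(count_detections(pairs))
```

Keeping the counting logic separate from inference makes it easy to tune the confidence threshold against a manually counted reference plate.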
To perform tests in the microscopic world, it is necessary to evaluate smears. This is still a process performed mostly by hand, and as we know, manual processes are more prone to error and variability in results. Additionally, while good tools exist for detecting objects of specific shapes, more specialized tools for the automatic counting and classification of varied objects are still lacking.
“My colleagues record wildlife in forests and other locations and typically run through the videos manually, meaning that they have to sit down and run through hundreds of videos.”
Keeping in mind that manually searching a video for a single instance of a wild pig or deer can take an exorbitant amount of time, Martin knew object detection could optimize this process. Here, YOLOv5 was implemented so that wildlife is detected automatically the moment an animal enters the camera's line of sight.
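A minimal sketch of that idea is shown below: run YOLOv5 frame by frame over a recording and keep only the frames where a target species appears. The video filename and class names are illustrative assumptions; the inference call again uses YOLOv5's standard PyTorch Hub interface.

```python
# Minimal sketch of flagging camera-trap frames that contain wildlife.
# File names and class labels below are hypothetical examples.


def frames_with_animals(frame_detections, target_classes):
    """Return indices of frames whose detections include a target class.

    `frame_detections` is a list where each element is the set of class
    labels detected in that frame.
    """
    targets = set(target_classes)
    return [
        i for i, labels in enumerate(frame_detections)
        if targets & set(labels)
    ]


if __name__ == "__main__":
    # Hypothetical inference loop over a trail-camera video.
    import cv2
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    cap = cv2.VideoCapture("trailcam.mp4")
    per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])  # OpenCV BGR -> RGB
        per_frame.append({model.names[int(cls)] for *_, cls in results.xyxy[0]})
    print(frames_with_animals(per_frame, {"deer", "wild pig"}))
```

Instead of watching hundreds of videos end to end, a reviewer only needs to inspect the flagged frame indices.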
For his master’s degree, Martin studied what he likes to call the “classical approaches to image analysis.” While he was finishing his degree, deep learning, which at the time was simply called “convolutional networks,” was becoming more and more talked about.
During this period, Martin was working on data mining, but the data itself was not very usable. Wanting to get his hands dirty with the data, Martin chose to dive into the world of machine learning and vision AI.
Right now, the process of learning ML and vision AI can be quite complicated. As someone who has been using vision AI for some time, Martin mentioned three points for anyone looking to get started:
Martin Schätz is a researcher and teacher focused on bioimage analysis and data processing in confocal microscopy. The motivation behind his project is to optimize image analysis for infectious disease research and monitoring. You can find documentation and details on Martin’s three projects in his GitHub repository. Additionally, Martin is part of NEUBIAS, an organization that promotes the most widely used tools for scientific image analysis in biology and microscopy, including trained deep learning models in its Model Zoo.
We want to spotlight your YOLOv5 use case as well! Tag us on social media @Ultralytics with #YOLOvME for a chance to be featured.
Begin your journey with the future of machine learning