For human eyes, distinguishing snap peas from their leaves is no easy task.
Takayuki Nukui is a Materials Data Scientist from Tokyo, Japan. You might think that ML and materials science are an unlikely pair, but Takayuki found that many ML solutions can be applied in his line of work.
However, the real reason Takayuki got into ML has nothing to do with his current role. Takayuki grew up on a farm, and he would often have to help his father, a farmer, harvest snap peas – a very demanding process.
For human eyes, it can be challenging to spot all the snap peas on a plant, as they camouflage extremely well among the leaves. During harvest season, Takayuki would have to trek back and forth across his father's fields time after time to be sure he had picked every last ripe snap pea. This arduous process led Takayuki to imagine how the vision AI he was studying at the time could simplify snap pea harvesting.
We came across Takayuki's snap pea detection application on Twitter and spoke with him to learn more about his work with YOLOv5.
In the beginning, Takayuki tried various object detection models, from YOLOv3 to SSD to EfficientDet. A year ago, however, he tried out YOLOv5 and has been working with it ever since, as it delivered the best accuracy.
For Takayuki, the built-in mechanisms for improving model accuracy, such as data augmentation and hyperparameter evolution, are what make YOLOv5 easy to use. While these would normally require cumbersome custom code, in YOLOv5 they can be enabled with a few simple commands. "I was happy to be able to analyze the results and tune the model in the time this freed up. Of course, I also spent time on annotations!"
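To illustrate, YOLOv5's training script exposes these mechanisms as command-line flags, so enabling them is a matter of configuration rather than custom code. A minimal sketch follows; the dataset config `snap_peas.yaml` is a placeholder for your own data, and flag values are illustrative defaults:

```shell
# Clone the YOLOv5 repository and install its dependencies
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

# Standard training run. Augmentation (mosaic, HSV shifts, flips, etc.)
# is applied automatically from the hyperparameter YAML; no extra code.
python train.py --img 640 --batch 16 --epochs 100 \
    --data snap_peas.yaml --weights yolov5s.pt

# Hyperparameter evolution: repeatedly mutates hyperparameters,
# including augmentation strengths, over many generations to
# maximize the model's fitness score.
python train.py --img 640 --batch 16 --epochs 10 \
    --data snap_peas.yaml --weights yolov5s.pt --evolve 300
```

Because evolution retrains the model once per generation, it is typically run with fewer epochs per generation, as in the sketch above.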
Takayuki is keeping his options open: "I want to try it with other crops on the farm. Not only that, but I want to keep trying with whatever comes to mind. I think there are more things I can discover by trying to detect objects."
"First of all, I would recommend YOLOv5 to those who think object detection looks difficult and are apprehensive about starting with vision AI. In my opinion, YOLOv5 is the most accessible object detection model to implement.
Also, I would suggest trying it with a smaller amount of training data. Data augmentation is built in, and it often produces surprisingly interesting models."
Takayuki Nukui balances his life between engineering and growing vegetables on his small farm. On his website, FarML, he publishes articles on ML, including a detailed article on snap pea detection. Takayuki also regularly posts his use cases on Twitter and YouTube.
We want to spotlight your YOLOv5 use case as well! Tag us on social media @Ultralytics with #YOLOvME for a chance to be featured.
Begin your journey with the future of machine learning