Artificial Intelligence Identifies Crop Heads for Smarter Agriculture

Nanjing Agricultural University The Academy of Science

Recent advances in artificial intelligence (AI), alongside drones and digital cameras, have greatly extended the frontiers of smart agriculture. One attractive use case for these technologies is precision agriculture. In this modern approach to farming, the idea is to optimize crop production by gathering precise data about plants and the state of the field, and then acting accordingly. For example, by analyzing aerial images of crops, AI models can determine which parts of a field need more attention, as well as the current stage of development of the plants.

Among the many crop monitoring tasks that AI can perform, crop head counting remains one of the most challenging to implement. Images of crops contain densely packed, repeating patterns that are usually irregular and overlapping, making it difficult for deep learning models to automatically detect specific plant organs. Ideally, one would train such models using thousands of manually annotated images, in which pixels belonging to crop heads are pre-specified. In practice, however, annotating crop images is extremely tedious and time-consuming.

To address this issue, a research team including Assistant Professor Lingling Jin from the University of Saskatchewan, Canada, developed an innovative technique that can simplify the training and development of deep learning models. Their approach, described in a paper made available online on 24 February 2023 in Plant Phenomics, could promote more widespread adoption of AI in agriculture.

To illustrate their idea, the team focused on the identification (or 'segmentation') of wheat heads in crop images as an example use case. Their strategy revolves around generating a synthetic annotated dataset. That is, instead of manually marking pixels belonging to wheat heads in hundreds of images, they devised a convenient way to produce artificial images in which the wheat heads are automatically marked.

First, the researchers recorded short videos of a wheat field and of other locations without wheat plants (also called 'background' videos). From the footage of the wheat field, they extracted a small number of still frames and manually annotated them, identifying all the wheat heads. Then, using frames from the background videos as a canvas, they generated synthetic wheat images by pasting 'cutouts' of the manually segmented wheat heads onto them. This approach enabled the team to produce thousands of training images for a deep learning model with minimal effort.
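
As a rough illustration of this cut-and-paste idea, the sketch below composites transparent cutouts of segmented wheat heads onto background frames and produces the matching segmentation mask automatically. It is only a minimal sketch of the general technique: the directory names, image sizes, and augmentation ranges are illustrative assumptions, not details taken from the paper.

```python
import random
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical data layout; the paper's actual file organization is not described in the article.
BACKGROUND_DIR = Path("backgrounds")   # still frames extracted from the 'background' videos
CUTOUT_DIR = Path("head_cutouts")      # RGBA crops of the manually segmented wheat heads


def make_synthetic_sample(num_heads=40, size=(512, 512)):
    """Paste random wheat-head cutouts onto a background frame and return
    the composite image together with its binary segmentation mask."""
    bg_path = random.choice(list(BACKGROUND_DIR.glob("*.jpg")))
    canvas = Image.open(bg_path).convert("RGB").resize(size)
    mask = Image.new("L", size, 0)  # 0 = background, 255 = wheat head

    cutout_paths = list(CUTOUT_DIR.glob("*.png"))
    for _ in range(num_heads):
        cutout = Image.open(random.choice(cutout_paths)).convert("RGBA")

        # Light augmentation: random scaling and rotation of each cutout.
        scale = random.uniform(0.7, 1.3)
        new_size = (max(1, int(cutout.width * scale)), max(1, int(cutout.height * scale)))
        cutout = cutout.resize(new_size).rotate(random.uniform(0, 360), expand=True)

        x = random.randint(0, max(0, size[0] - cutout.width))
        y = random.randint(0, max(0, size[1] - cutout.height))

        alpha = cutout.split()[-1]               # the cutout's transparency encodes the head shape
        canvas.paste(cutout, (x, y), mask=alpha)
        mask.paste(255, (x, y), mask=alpha)      # the annotation comes "for free" with the paste

    return np.asarray(canvas), np.asarray(mask)
```

Because every pasted pixel is known in advance, each synthetic image arrives with a pixel-perfect mask, which is exactly the labeling step that is so tedious to do by hand.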

To further improve the model, which was based on a customized U-Net architecture, the researchers also employed various domain adaptation techniques. These techniques fine-tuned the algorithm so that it would perform better on images from various real-world wheat fields, even though it was trained mainly on synthetic images. Numerous tests on an open-access dataset revealed impressive gains in accuracy, as Jin highlights: "Our approach established—and by a wide margin in performance—a new state-of-the-art model for wheat head segmentation."
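
The article does not spell out which domain adaptation techniques the team used, but one simple and common form of supervised adaptation is two-stage training: pretrain on the large synthetic dataset, then fine-tune on a handful of real annotated field images at a lower learning rate. The minimal PyTorch sketch below illustrates that idea with a toy stand-in for the customized U-Net; the model, data loaders, and hyperparameters are assumptions for illustration only, not the authors' implementation.

```python
import torch
from torch import nn


class TinyUNet(nn.Module):
    """A deliberately small U-Net-style encoder-decoder used only to
    illustrate the two-stage training idea; the paper's customized U-Net is larger."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # one logit per pixel: wheat head vs. background
        )

    def forward(self, x):
        return self.dec(self.enc(x))


def train(model, loader, epochs, lr):
    """One supervised training pass with per-pixel binary cross-entropy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:          # images: (B,3,H,W); masks: (B,1,H,W) in [0,1]
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()


model = TinyUNet()
# Stage 1: train on the large synthetic dataset built from cut-and-paste images.
# train(model, synthetic_loader, epochs=20, lr=1e-3)   # synthetic_loader: hypothetical DataLoader
# Stage 2: adapt to the target domain with a small set of real annotated images.
# train(model, real_loader, epochs=5, lr=1e-4)         # real_loader: hypothetical, much smaller
```

The second, lower-learning-rate stage nudges the synthetic-trained weights toward the appearance of real fields, which is the practical goal of domain adaptation, whatever specific technique is used to achieve it.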

It is worth noting that the techniques showcased in this work are not limited to identifying wheat heads. In this regard, Jin remarks: "While we showed the utility of the proposed method for wheat head segmentation, it could be applied to other applications that have similar dense repeating patterns of objects, such as segmenting plant organs in other crop species or segmenting molecular components in microscopy images." Hence, this work paints a bright future for deep learning in both agriculture and other fields.

Let us hope further polishing of these AI techniques paves the way to a world where we can all work smarter, not harder!
