Deep Learning Boosts Arabidopsis Trait Estimation

Beijing Zhongke Journal Publishing Co. Ltd.

This study is led by Professor Wanneng Yang (National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research, Hubei Hongshan Laboratory, Huazhong Agricultural University, Wuhan, China). The team created the Arabidopsis Phenotypic Trait Estimation System (APTES), an open-access pipeline integrating computer vision with optimized deep learning models to automate organ phenotyping.

For individual leaf segmentation, an enhanced Cascade Mask R-CNN model achieved precision, recall, and F1 scores of 0.965, 0.958, and 0.961 respectively, a consistent improvement of roughly 1% over the baseline model. Silique segmentation used an upgraded DetectoRS model, attaining a precision of 0.954, recall of 0.930, and F1 score of 0.942, gains of 1–2 percentage points over its baseline. APTES automatically computes 64 leaf traits and 64 silique traits per sample, and the trait measurements showed high reliability (coefficient of determination: 0.776–0.976; mean absolute percentage error: 1.89%–7.90%).
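
For readers unfamiliar with these evaluation metrics, the following minimal sketch (not code from the APTES paper; all function names and example values are illustrative assumptions) shows how precision, recall, F1, the coefficient of determination, and mean absolute percentage error are conventionally computed when comparing automated outputs against manual annotations:

```python
# Illustrative sketch of the standard metrics reported above.
# None of this is from the APTES codebase; values and names are assumptions.
import numpy as np

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def r_squared(y_true, y_pred):
    """Coefficient of determination between manual and automated trait measurements."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, expressed in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Example with made-up counts (not the study's data):
p, r, f1 = precision_recall_f1(tp=958, fp=35, fn=42)
print(f"precision={p:.3f}  recall={r:.3f}  F1={f1:.3f}")
```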

When applied to 166 Arabidopsis accessions, the system's outputs enabled a genome-wide association study (GWAS) that identified 1,042 SNPs significantly associated with 18 leaf and silique traits, including a key chromosome 3 locus linked to silique number. The pipeline was successfully validated across public Arabidopsis datasets and other plant species, demonstrating broad applicability for accelerated phenotypic analysis and genetic discovery in plant science.

See the article:

APTES: A high-throughput deep learning–based Arabidopsis phenotypic trait estimation system for individual leaves and siliques

https://link.springer.com/article/10.1007/s42994-025-00239-y
