RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures

Abstract

Background

In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge: root images contain complicated structures and vary widely in size, background, occlusion, clutter and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips, which drive a search algorithm that seeks optimal paths through the image, extracting accurate architectures without user interaction.
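
To make the tip-to-seed tracing step concrete, the sketch below shows one way such a search can be implemented: a least-cost path from a detected root tip back to the seed, found with Dijkstra's algorithm over a cost map derived from the segmentation output. This is an illustrative stand-in rather than the authors' implementation; the function name, the 4-connected neighbourhood and the cost definition are assumptions made for the example.

```python
# Illustrative sketch (not the published RootNav 2.0 code): trace a root by
# following the cheapest path from a detected tip back to the seed, where the
# per-pixel cost is low wherever the network is confident the pixel is root.
import heapq
import numpy as np

def trace_root(prob_map, tip, seed):
    """Return a list of (row, col) pixels joining `tip` to `seed`.

    prob_map : 2-D array of per-pixel root probabilities in [0, 1].
    tip, seed: (row, col) locations predicted by the network.
    """
    h, w = prob_map.shape
    cost = 1.0 - prob_map + 1e-3          # cheap on root pixels, expensive elsewhere
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[tip] = 0.0
    queue = [(0.0, tip)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == seed:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected moves
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Walk back from the seed to the tip to recover the path (assumes the
    # seed was reached; a robust implementation would handle failure here).
    path, node = [seed], seed
    while node != tip:
        node = prev[node]
        path.append(node)
    return path[::-1]
```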

Results

We developed and trained a novel deep network architecture that explicitly combines local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis using the original RootNav tool, the proposed method demonstrated comparable accuracy with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite far fewer training images.
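
The transfer-learning step can be pictured with the minimal PyTorch sketch below: a network pretrained on one species has its encoder frozen and its remaining layers fine-tuned on a small set of images from the new species. The stand-in model, checkpoint name, class count and hyperparameters are placeholders, not the published training configuration.

```python
# Minimal transfer-learning recipe (placeholders throughout, not the authors'
# training code): freeze the encoder of a pretrained network and fine-tune the
# task-specific head on a handful of images from the new plant species.
import torch
import torch.nn as nn

num_classes = 6  # placeholder for the number of output channels

# Stand-in encoder-decoder; in practice this would be the pretrained model.
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),   # "encoder"
    nn.Sequential(nn.Conv2d(16, num_classes, 1)),               # "decoder" head
)
# model.load_state_dict(torch.load("wheat_weights.pt"))  # hypothetical checkpoint

# Freeze the encoder so only the head adapts to the new imaging domain.
for p in model[0].parameters():
    p.requires_grad = False

optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One gradient update on a small batch from the new species."""
    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()
    return loss.item()
```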

Conclusions

We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of training images, and offers substantial speed improvements over semi-automatic and manual approaches. It outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. RootNav 2.0 will allow researchers to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
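
For readers unfamiliar with RSML, the sketch below writes a single traced root as a minimal RSML document using Python's standard library. The structure is abridged for illustration; real RSML files carry richer metadata and full root hierarchies, so treat the element names shown here as a simplified reading of the specification rather than a complete exporter.

```python
# Minimal, abridged RSML export: one plant with one root stored as a polyline.
import xml.etree.ElementTree as ET

def write_rsml(path, polyline, root_id="1"):
    """Save a single root, given as a list of (x, y) image coordinates."""
    rsml = ET.Element("rsml")
    metadata = ET.SubElement(rsml, "metadata")
    ET.SubElement(metadata, "version").text = "1"
    ET.SubElement(metadata, "software").text = "example-exporter"  # placeholder
    scene = ET.SubElement(rsml, "scene")
    plant = ET.SubElement(scene, "plant")
    root = ET.SubElement(plant, "root", {"id": root_id, "label": "primary"})
    poly = ET.SubElement(ET.SubElement(root, "geometry"), "polyline")
    for x, y in polyline:
        ET.SubElement(poly, "point", {"x": str(x), "y": str(y)})
    ET.ElementTree(rsml).write(path, xml_declaration=True, encoding="utf-8")

# Example usage with a short, made-up polyline.
write_rsml("root.rsml", [(102.0, 55.0), (103.5, 80.0), (104.0, 120.0)])
```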

Article activity feed

  1. Now published in GigaScience doi: 10.1093/gigascience/giz123

    Robail Yasrab 1, Jonathan A Atkinson 2, Darren M Wells 2, Andrew P French 1,2, Tony P Pridmore 1, Michael P Pound 1

    1 School of Computer Science, University of Nottingham, NG8 1BB, UK
    2 School of Biosciences, University of Nottingham, LE12 5RD, UK

    Correspondence: michael.pound@nottingham.ac.uk

    A version of this preprint has been published in the Open Access journal GigaScience (see paper https://doi.org/10.1093/gigascience/giz123 ), where the paper and peer reviews are published openly under a CC-BY 4.0 license.

    These peer reviews were as follows:

    Reviewer 1: http://dx.doi.org/10.5524/REVIEW.101967
    Reviewer 2: http://dx.doi.org/10.5524/REVIEW.101968
    Reviewer 3: http://dx.doi.org/10.5524/REVIEW.101969