Evaluation of One-image 3D Reconstruction for Plant Model Generation

Abstract

Generating accurate and visually realistic 3D models of plants from single-view images is crucial yet challenging because of plants' intricate geometry and frequent occlusions. The capability matters: it supplements current plant datasets and enables non-destructive, high-throughput phenotyping for crop breeding and precision agriculture. 3D reconstruction is especially important here because plant morphology is inherently three-dimensional, and 2D representations miss occluded leaves, branching geometry, and volumetric traits. Plants, however, pose challenges that common rigid objects do not, and most current generative methods have not been systematically tested in this domain, leaving a gap in understanding their reliability for realistic plant reconstruction. This study systematically evaluates six advanced generative techniques on the existing PlantDreamer dataset: Hunyuan3D 2.0, Trellis (Structured 3D Latents), One2345++, InstantMesh, Direct3D, and Unique3D. Specifically, it reconstructs mesh models from images of bean plants and quantitatively assesses each method's performance against ground-truth scans using Chamfer Distance, Normal Consistency, F-Score, PSNR, LPIPS, and CLIP Score; qualitative results are also presented for kale and mint plants. The results indicate that Hunyuan3D 2.0 achieves the best overall performance, suggesting its effectiveness in capturing complex plant structures. This work offers insight into the strengths and limitations of contemporary 3D generative approaches and can guide future improvements in realistic plant digitisation.
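The abstract names the geometry metrics used for the quantitative comparison. As a rough illustration only, the minimal sketch below computes the symmetric Chamfer Distance and the F-Score between point clouds sampled from a generated mesh and a ground-truth scan; the point counts, the random stand-in data, and the F-Score threshold tau are assumptions for illustration, not details taken from the paper.

# Minimal sketch (not the paper's code): Chamfer Distance and F-Score
# between two (N, 3) point clouds, e.g. points sampled from a
# reconstructed mesh and from a ground-truth scan.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two (N, 3) point clouds."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # nearest GT point for each predicted point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # nearest predicted point for each GT point
    return float(d_pred_to_gt.mean() + d_gt_to_pred.mean())

def f_score(pred: np.ndarray, gt: np.ndarray, tau: float = 0.01) -> float:
    """F-Score at distance threshold tau (tau = 0.01 is an assumed value)."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)
    d_gt_to_pred, _ = cKDTree(pred).query(gt)
    precision = (d_pred_to_gt < tau).mean()     # fraction of predicted points close to GT
    recall = (d_gt_to_pred < tau).mean()        # fraction of GT points close to prediction
    if precision + recall == 0:
        return 0.0
    return float(2 * precision * recall / (precision + recall))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((2048, 3))  # stand-in for points sampled from a generated mesh
    gt = rng.random((2048, 3))    # stand-in for points sampled from the ground-truth scan
    print("Chamfer Distance:", chamfer_distance(pred, gt))
    print("F-Score@0.01:", f_score(pred, gt))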
