3D Environment Generation from Sparse Inputs for Automated Driving Function Development

Abstract

Development of AI-driven automated driving functions requires vast amounts of diverse, high-quality data to ensure road safety and reliability. However, manually collecting real-world data and creating 3D environments is costly, time-consuming, and hard to scale. Most automatic environment generation methods still rely heavily on manual effort, and only a few are tailored to training and validating Advanced Driver Assistance Systems (ADAS) and Automated Driving (AD) functions. We propose an automated generative framework that learns real-world features to reconstruct realistic 3D environments from a road definition and two simple parameters: country and area type. Environment generation is structured into three modules: map-based data generation, semantic city generation, and final detailing. We validate the overall framework by training a perception network on a mixed set of real and synthetic data, validating it solely on real data, and comparing performance to assess the practical value of the generated environments. By constructing a Pareto front over combinations of training-set sizes and real-to-synthetic data ratios, we show that our synthetic data can replace up to 90% of real data without significant quality degradation. Our results demonstrate how multi-layered environment generation frameworks enable flexible, scalable data generation for perception tasks while providing ground-truth 3D environment data. This reduces reliance on costly field data and supports rapid, automated scenario exploration for finding safety-critical edge cases.
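
The Pareto-front analysis described above can be made concrete with a short sketch: for each combination of training-set size and real-to-synthetic ratio, train on the mix, score on real data only, and keep the non-dominated configurations (those for which no alternative uses less real data while scoring at least as well). This is a minimal sketch of the selection logic only; the train_and_evaluate stub and its toy scoring formula are placeholders of ours, not the authors' pipeline, and assume the two objectives are minimizing real-data usage and maximizing a real-only validation metric.

```python
import math
from itertools import product

def train_and_evaluate(train_size: int, real_fraction: float) -> float:
    """Stand-in for a full training run: train a perception network on
    `train_size` samples, of which `real_fraction` are real and the rest
    synthetic, then score it on a real-only validation set. The toy
    formula below only mimics diminishing returns; replace it with an
    actual training/evaluation pipeline."""
    return math.log(train_size) * (0.8 + 0.2 * real_fraction)

train_sizes = [1_000, 5_000, 10_000]      # total training samples
real_fractions = [0.1, 0.25, 0.5, 1.0]    # share of real data in the mix

# Score every (size, ratio) combination.
results = [
    (size, frac, train_and_evaluate(size, frac))
    for size, frac in product(train_sizes, real_fractions)
]

def pareto_front(points):
    """Keep configurations for which no other configuration uses less
    real data while achieving an equal-or-better validation score."""
    front = []
    for size, frac, score in points:
        real_used = size * frac
        dominated = any(
            o_size * o_frac <= real_used and o_score >= score
            and (o_size * o_frac < real_used or o_score > score)
            for o_size, o_frac, o_score in points
        )
        if not dominated:
            front.append((size, frac, score))
    return front

for size, frac, score in pareto_front(results):
    print(f"size={size:>6}  real share={frac:.2f}  score={score:.3f}")
```

Reading the resulting front directly answers the substitution question: the configuration with the smallest real-data share whose score stays within an acceptable margin of the all-real baseline indicates how much real data the synthetic data can replace.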
