Generative AI for realistic urban traffic scenario generation: β-GCN-VAE and β-Transformer-VAE models

Abstract

Generating plausible traffic-speed scenarios over multiple interconnected road segments is crucial for applications such as traffic optimization, urban-planning analysis, and traffic management. Traditional statistical approaches, such as copulas, tend to be computationally expensive both during training and scenario generation, which severely limits their applicability to large urban networks. To address these challenges, two novel deep generative architectures for synthesizing realistic spatial traffic-speed scenarios across urban road networks are proposed: a graph-based $\beta$-Variational Autoencoder with dual latent vectors ($\beta$-GCN-VAE) and a transformer-based Variational Autoencoder ($\beta$-T-VAE). The $\beta$-GCN-VAE enhances the variational framework by adopting a higher $\beta$ coefficient to encourage more disentangled latent representations, while the $\beta$-T-VAE incorporates a transformer decoder with multi-head non-causal self-attention, enabling direct modeling of long-range spatial dependencies that cannot be captured by convolutional structures alone. The paper also compares two $\beta$-scheduling strategies, linear and periodic, to assess how different annealing dynamics influence reconstruction quality, latent-space organization, and overall generative performance. Both architectures are trained on large-scale urban traffic-speed datasets collected in the Chinese megacity of Chengdu. Extensive experiments show that they substantially outperform standard VAEs, GANs, and statistical copula baselines in terms of distributional fidelity, spatial coherence, and computational efficiency.
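The $\beta$-scheduling comparison above can be illustrated with a minimal sketch. In a $\beta$-VAE, the training loss is the reconstruction term plus $\beta$ times the KL divergence, and $\beta$ is annealed over training steps. The schedule shapes below (a single linear ramp vs. a repeating ramp-and-hold cycle) and the specific values of `beta_max`, `cycle_len`, and `ramp_frac` are illustrative assumptions, not the paper's exact configuration:

```python
def linear_beta(step, total_steps, beta_max=4.0):
    """Linear annealing: ramp beta from 0 to beta_max over training,
    then hold it constant. beta_max is an assumed hyperparameter."""
    return beta_max * min(step / total_steps, 1.0)


def periodic_beta(step, cycle_len, beta_max=4.0, ramp_frac=0.5):
    """Periodic (cyclical) annealing: within each cycle of cycle_len
    steps, ramp beta up during the first ramp_frac of the cycle, hold
    at beta_max for the rest, then reset. Repeated resets let the
    model re-explore the latent space before the KL penalty bites."""
    pos = (step % cycle_len) / cycle_len  # position within current cycle, in [0, 1)
    return beta_max * min(pos / ramp_frac, 1.0)


def beta_vae_loss(recon_loss, kl_div, beta):
    """Weighted ELBO used by beta-VAE variants: a beta > 1 trades
    reconstruction fidelity for more disentangled latents."""
    return recon_loss + beta * kl_div
```

Under the linear schedule, `beta` rises once and stays at its maximum; under the periodic schedule it repeatedly drops back to zero, which is the "annealing dynamics" distinction the abstract refers to.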