U-Net Inspired Transformer Architecture for Multivariate Time-Series Synthesis

Abstract

This study introduces a Multiscale Dual-Attention U-Net (TS-MSDA U-Net) model for long-term time-series synthesis. By enhancing the U-Net architecture with multiscale temporal feature extraction and dual attention mechanisms, the model effectively captures complex time-series dynamics. Performance was evaluated across two distinct applications. First, on multivariate datasets collected from 70 real-world electric vehicle (EV) trips, TS-MSDA U-Net achieved mean absolute errors within ±1% for key vehicle parameters, including battery state of charge, voltage, mechanical acceleration, and torque. This represents a substantial two-fold improvement over the baseline TS-p2pGAN model, although the dual attention mechanisms contributed only marginal gains over the basic U-Net. Second, the model was applied to high-resolution signal reconstruction using data sampled from low-speed analog-to-digital converters in a prototype resonant CLLC half-bridge converter. TS-MSDA U-Net successfully captured non-linear synthetic mappings and enhanced the signal resolution by a factor of 36, whereas the basic U-Net failed to reconstruct the signals. These findings collectively highlight the potential of U-Net-inspired transformer architectures for high-fidelity multivariate time-series modeling in both real-world EV scenarios and advanced power electronic systems.
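The abstract describes a U-Net encoder-decoder augmented with attention at multiple temporal scales. The exact TS-MSDA U-Net design is not specified here, so the following is only a minimal NumPy sketch of the general idea: the encoder pools a multivariate sequence to coarser temporal scales, a "dual" attention step attends over both the time axis and the channel axis, and the decoder upsamples and fuses skip connections. All function names and the fusion scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (T, C). Scaled dot-product self-attention across rows.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def avg_pool(x, k=2):
    # Halve the temporal resolution by averaging adjacent steps.
    T = (x.shape[0] // k) * k
    return x[:T].reshape(-1, k, x.shape[1]).mean(axis=1)

def upsample(x, k=2):
    # Nearest-neighbor upsampling along the time axis.
    return np.repeat(x, k, axis=0)

def msda_unet_sketch(x, depth=2):
    """Toy multiscale dual-attention U-Net pass. x: (T, C)."""
    skips, h = [], x
    for _ in range(depth):
        skips.append(h)
        h = avg_pool(h)
        h = h + self_attention(h)      # attention over time steps
    h = h + self_attention(h.T).T      # attention over channels (bottleneck)
    for skip in reversed(skips):
        h = upsample(h)[: skip.shape[0]]
        h = 0.5 * (h + skip)           # illustrative skip-connection fusion
    return h
```

In a real model each scale would also apply learned convolutions and the attention would use trained query/key/value projections; the sketch keeps only the multiscale-plus-dual-attention skeleton that distinguishes this architecture from a basic U-Net.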
