Deep Learning Analysis of Figure Copying Tasks for Parkinson’s Disease Detection with GAN-Based Data Augmentation

Abstract

Early and accurate diagnosis of Parkinson’s disease (PD) is essential for enabling timely treatment and effective disease management. In this study, we propose a deep learning approach to automate PD detection using convolutional neural networks (CNNs) trained on images derived from spiral drawing tasks performed by patients and healthy controls. These drawings were collected using a digital pen and tablet, which captured dynamic signals during the task. A total of 82 drawings from 56 PD patients and 26 control subjects were converted into three visual modalities: time, pressure, and pen angle relative to the X and Y axes. Given the small dataset size, we implemented several data augmentation strategies to increase training diversity and balance class distributions. These included traditional geometric transformations, as well as synthetic augmentation using generative adversarial networks (GANs) and deep convolutional GANs (DCGANs). All augmented datasets were used to train a CNN classifier.
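The first stage of the pipeline described above — rasterising pen samples into images and applying traditional geometric augmentation — might be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the 64×64 resolution, and the exact rendering scheme (pixel intensity encoding normalised pen pressure) are assumptions.

```python
import numpy as np

def render_pressure_image(x, y, pressure, size=64):
    """Rasterise pen samples into a size x size grayscale image where
    pixel intensity encodes normalised pen pressure (hypothetical scheme)."""
    img = np.zeros((size, size), dtype=np.float32)
    # Normalise coordinates into integer pixel indices.
    xi = np.clip(((x - x.min()) / (np.ptp(x) + 1e-9) * (size - 1)).astype(int), 0, size - 1)
    yi = np.clip(((y - y.min()) / (np.ptp(y) + 1e-9) * (size - 1)).astype(int), 0, size - 1)
    p = (pressure - pressure.min()) / (np.ptp(pressure) + 1e-9)
    # Where strokes overlap a pixel, keep the highest pressure value.
    np.maximum.at(img, (yi, xi), p.astype(np.float32))
    return img

def geometric_augment(img):
    """Traditional augmentation: the four 90-degree rotations,
    plus a horizontal flip of each, giving 8 variants per drawing."""
    rotations = [np.rot90(img, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]

# Synthetic Archimedean spiral as a stand-in for a real patient drawing.
t = np.linspace(0, 6 * np.pi, 500)
x, y = t * np.cos(t), t * np.sin(t)
pressure = np.random.default_rng(0).uniform(0.2, 1.0, t.size)

img = render_pressure_image(x, y, pressure)
augmented = geometric_augment(img)
```

The same rendering idea extends to the other two modalities by swapping the pressure channel for elapsed time or pen angle; the augmented image set would then feed the CNN classifier.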

Among all experiments, the highest classification performance was achieved using representations derived from pen pressure, combined with traditional augmentation techniques, reaching 80.14% accuracy and a Kappa value of 0.57. This modality consistently outperformed both time- and angle-based representations. While GAN and DCGAN models produced visually varied images, they required extensive training epochs to generate sufficient sample diversity, limiting their current practicality. These findings demonstrate the potential of combining CNNs with drawing-based representations and augmentation methods to create a non-invasive, rapid screening tool for PD. Future work will aim to expand the dataset, investigate more advanced model architectures such as transformers and attention-based networks, and further explore the impact of input resolution and colour mapping on performance.