Dual Generators and Dynamically Fused Discriminators Adversarial Network to Create Synthetic Coronary Optical Coherence Tomography Images for Coronary Artery Disease Classification
Abstract
Deep neural networks have achieved substantial gains on multifaceted classification tasks by exploiting large-scale, diverse annotated datasets. However, assembling diverse optical coherence tomography (OCT) datasets for cardiovascular imaging remains an uphill task. This research focuses on improving the diversity and generalization ability of augmentation architectures, while maintaining baseline classification accuracy for coronary arterial plaques, using a novel dual-generator and dynamically fused discriminator conditional generative adversarial network (DGDFGAN). Our method is demonstrated on an augmented OCT dataset of 6900 images. With dual generators, the network produces diverse outputs for the same input condition, as each generator acts as a regularizer for the other; this mutual regularization helps both generators generalize better across different features. The fused discriminators reuse one discriminator for classification, avoiding the need for a separate deep classification architecture. A loss function that includes an SSIM term, together with FID scores, confirms that high-fidelity synthetic OCT images are created. We optimize the model with the Grey Wolf optimizer during training. Furthermore, in an inter-model comparison, the recorded SSIM of 0.9542±0.008 and FID score of 7 indicate better diversity and generation quality, outperforming leading GAN architectures. We trust that our approach is practically viable and can thus assist professionals in informed decision-making in clinical settings.
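As an illustration of the SSIM measure used above, the sketch below computes a single global SSIM value between two grayscale images with NumPy. This is a minimal, illustrative implementation (no sliding window, default stabilization constants); the paper's exact loss formulation and windowing are not specified in the abstract, so the function name `ssim` and its parameters here are assumptions for demonstration only.

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Global SSIM between two same-shaped grayscale images.

    Illustrative sketch only: computes one SSIM value over the whole
    image rather than a windowed mean, using the standard stabilization
    constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2 for dynamic range L.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Identical images yield SSIM = 1; added noise lowers the score.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(ssim(img, img))    # -> 1.0 (up to floating-point error)
print(ssim(img, noisy))  # < 1.0
```

In a GAN training loop, a term such as `1 - ssim(real, fake)` can be added to the generator loss so that higher structural similarity to real images is rewarded, which is consistent with the SSIM-based loss the abstract describes.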