Flippable Siamese Differential Neural Network for Differential Graph Inference

Abstract

Differential graph inference is a key analytical technique for identifying which variables, and which interactions among them, change between conditions. By comparing two conditions, researchers can gain a deeper understanding of the differences between them. The current mainstream methods for differential graph inference are mathematical optimization algorithms, including sparse optimization based on Gaussian graphical models and sparse Bayesian regression. These methods eliminate many false positives in the inferred graphs, but at the cost of a heavy reliance on prior distributions over the data or parameters, and they suffer from the curse of dimensionality. To address these challenges, we introduce a new architecture, the Flippable Siamese Differential Neural Network (FSDiffNet). We establish the concept of flippability and a theoretical foundation for flippable neural networks, laying the groundwork for constructing such a network. This theoretical framework guided the design of the architecture and its components, including the SoftSparse activation function and high-dilation circular-padding diagonal convolution. FSDiffNet uses large-scale pre-training to learn differential features and perform differential graph inference. In experiments on simulated and real datasets, FSDiffNet outperforms existing state-of-the-art methods on multiple metrics, effectively inferring key differential factors related to conditions such as autism and breast cancer. These results demonstrate the effectiveness of FSDiffNet as a solution for differential graph inference.
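To make the Siamese differential idea concrete, the sketch below shows a minimal, hypothetical version of the pipeline the abstract describes: two conditions are encoded by a shared-weight branch (the "Siamese" property), and the difference of the two embeddings is passed through a sparsifying activation to yield a sparse differential graph. All function names, the linear encoder, and the soft-threshold activation (used here as a stand-in for the paper's SoftSparse function, whose exact form is not given in the abstract) are illustrative assumptions, not FSDiffNet's actual components.

```python
import numpy as np

def soft_threshold(x, lam=0.1):
    # Sparsifying activation: shrinks entries toward zero and sets
    # small ones exactly to zero (hypothetical stand-in for SoftSparse).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def shared_encoder(corr, W):
    # One shared-weight encoder applied to each condition's correlation
    # matrix; reusing W across both branches is what makes it Siamese.
    return np.tanh(corr @ W)

def differential_graph(x_a, x_b, W, lam=0.1):
    # x_a, x_b: (samples, variables) data matrices for the two conditions.
    c_a = np.corrcoef(x_a, rowvar=False)
    c_b = np.corrcoef(x_b, rowvar=False)
    # Difference of shared embeddings, sparsified into a differential graph.
    return soft_threshold(shared_encoder(c_a, W) - shared_encoder(c_b, W), lam)

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 5))
W = rng.normal(size=(5, 5))
# Identical conditions give identical embeddings, so the inferred
# differential graph is exactly zero.
g = differential_graph(x, x, W)
```

Note how sparsity here comes from the activation itself rather than from an L1 penalty in an optimization objective, which reflects the abstract's point that the network architecture, not a prior-dependent solver, enforces sparse differential structure.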