Evaluating Transcriptomic Integration for Cyanobacterial Constraint-based Metabolic Modelling


Abstract


Metabolic modelling has wide-ranging applications, including the production of high-value compounds, the study of complex disease, and the analysis of community interactions. Integrating transcriptomic data with genome-scale metabolic models is crucial for deepening our understanding of complex biological systems, as it enables the development of models tailored to specific conditions, such as particular tissues, environments, or experimental setups. Relatively little attention, however, has been given to assessing how well such integration methods predict intracellular fluxes. The few existing validation studies offer some insights, but their scope remains limited, particularly for organisms such as cyanobacteria, for which little metabolic flux data are available. Cyanobacteria hold significant biotechnological potential owing to their ability to synthesize a wide range of high-value compounds from minimal resource inputs [21]. The impact of specific methodological decisions on integration has scarcely been assessed beyond human models, with no thorough exploration of parameter choices in valve-based integration methods. By implementing a novel analysis pipeline, we evaluated these methodological decisions using the genome-scale model of Synechocystis sp. PCC 6803 (iSynCJ816 [17]) together with existing transcriptomic data in biomass-optimised scenarios. Our analyses indicate that selecting an appropriate integration method is not always straightforward and depends on the initial model configuration, a factor that is often overlooked during integration. By evaluating sets of methods, we identified a trade-off between buffering light into the system and maintaining flux near the system boundaries. We suggest that the lazy-step mapping function with importance-based scaling yields the best predictions, particularly when these can be validated against experimental data.
When one-size-fits-all scaling is used with the lazy-step mapping function, it appears preferable to apply light buffering to avoid inappropriate bound changes near the photosystems, an effect that importance-based scaling may help to compensate for. In cases where no experimental data are available for validation, the novel thresholding approach could be adopted, as it showed some improvement over the standard Lazy method.
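The general idea behind a step-style mapping can be sketched as follows. This is an illustrative simplification, not the exact lazy-step formulation evaluated in the article: the function, threshold, and reduction factor below are hypothetical, and real integrations operate on a constraint-based model (e.g. via COBRApy) rather than a plain dictionary. The "lazy" intuition is that reaction bounds are only tightened when expression evidence falls below a threshold, and are otherwise left at their defaults.

```python
def lazy_step_bound(expression, base_bound, threshold, reduction=0.1):
    """Illustrative step mapping from expression level to flux bound.

    Reactions whose associated expression falls below the threshold
    have their bound scaled down by `reduction`; all others keep the
    default bound unchanged. Hypothetical parameters, for sketch only.
    """
    if expression < threshold:
        return base_bound * reduction
    return base_bound

# Toy example: reaction id -> (expression level, default upper bound).
# Names are placeholders, not entries from iSynCJ816.
reactions = {"PSII": (8.0, 100.0), "RBC": (2.0, 100.0), "ATPS": (5.0, 100.0)}
bounds = {rxn: lazy_step_bound(expr, ub, threshold=4.0)
          for rxn, (expr, ub) in reactions.items()}
# Only the lowly expressed reaction ("RBC") has its bound reduced.
```

Importance-based scaling, as described above, would additionally weight how strongly each reaction's bound responds to its expression value rather than applying one reduction factor to all reactions.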
