CAlstm-Nash: Nash multi-task learning based on a convolutional long short-term memory neural network with an attention mechanism for missing well-logging parameter prediction
Abstract
In reservoir evaluation, well-logging parameters such as porosity, permeability, and resistivity are essential for assessing the hydrocarbon storage capacity and producibility of reservoirs. These parameters provide critical insights into the sedimentary characteristics of different geological periods and the underlying subsurface conditions. Leveraging geophysical data to predict key reservoir properties plays an important role in improving the accuracy of reservoir descriptions and optimizing exploration and development efforts. To address this challenge, we propose a novel method for simultaneously predicting multiple missing well-logging parameters from known well-logging data, based on a convolutional LSTM model with an attention mechanism (CAlstm). Our method first constructs the CAlstm neural network, in which a convolutional layer extracts features and generates task-specific outputs; a multi-head attention mechanism then captures contextual information from each position of the input sequence, and the result is passed to an LSTM layer that models the temporal features of the sequence. To balance the multiple prediction tasks, we introduce Nash Multi-Task Learning (NashMTL), which resolves gradient conflicts among tasks and ensures that the model weighs the contribution of each task appropriately. With the coefficient of determination (R²) as the evaluation metric, the results on the dataset used in this study show that, after applying NashMTL, the proposed model's test-set predictions of permeability, resistivity, and porosity improved by approximately 1.69%, 13.58%, and 2.97%, respectively, compared with the single-task model.
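As a concrete illustration of the pipeline described above, the following is a minimal PyTorch sketch of the CAlstm architecture: a convolutional feature extractor, multi-head self-attention over sequence positions, an LSTM for temporal modelling, and one output head per missing log. All layer sizes, the head count, the window length, and the feature count are illustrative placeholders, not values reported by the authors.

```python
# Minimal sketch of the CAlstm architecture (hyperparameters are assumptions).
import torch
import torch.nn as nn

class CAlstm(nn.Module):
    def __init__(self, n_features=5, conv_channels=64, n_heads=4,
                 lstm_hidden=64, n_tasks=3):
        super().__init__()
        # Convolutional layer extracts local features from the known logs.
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        # Multi-head attention captures contextual information at each
        # position of the input sequence.
        self.attn = nn.MultiheadAttention(conv_channels, n_heads, batch_first=True)
        # LSTM processes the temporal (depth-ordered) features.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # One head per predicted log (e.g. permeability, resistivity, porosity).
        self.heads = nn.ModuleList([nn.Linear(lstm_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):                                 # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, conv_channels)
        h, _ = self.attn(h, h, h)                         # self-attention over positions
        h, _ = self.lstm(h)                               # temporal modelling
        return [head(h[:, -1]) for head in self.heads]    # one prediction per task

# Example: predict 3 missing logs from a depth window of 5 known logs.
model = CAlstm()
preds = model(torch.randn(8, 32, 5))                      # list of 3 tensors of shape (8, 1)
```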
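The NashMTL weighting step can likewise be sketched. NashMTL seeks task weights α satisfying (GᵀG)α = 1/α elementwise, where the rows of G are the per-task gradients of the shared parameters. The original method solves this condition via a sequence of convex programs; the damped fixed-point iteration below is an illustrative stand-in under that assumption, not the authors' solver.

```python
# Hedged sketch of the NashMTL weighting step; the fixed-point loop is a
# simplification of the paper's convex-optimization procedure.
import torch

def nash_mtl_weights(G, n_iters=50, damping=0.7, eps=1e-8):
    """G: (n_tasks, n_params) matrix of flattened per-task gradients.
    Returns alpha approximately satisfying (G @ G.T) @ alpha = 1 / alpha."""
    GtG = G @ G.T                                  # (n_tasks, n_tasks) Gram matrix
    alpha = torch.ones(G.shape[0])
    for _ in range(n_iters):
        # Damped fixed-point update; clamp keeps the iterate positive when
        # conflicting gradients make inner products negative.
        target = 1.0 / (GtG @ alpha).clamp_min(eps)
        alpha = damping * alpha + (1.0 - damping) * target
    return alpha

g = torch.randn(3, 1000)                           # e.g. 3 task gradients, flattened
alpha = nash_mtl_weights(g)
combined = (alpha.unsqueeze(1) * g).sum(dim=0)     # balanced gradient for the update
```

The shared parameters are then updated with the weighted gradient Σᵢ αᵢgᵢ rather than a plain sum, which is what lets the model balance the impact of each prediction task.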