Harnessing Large Language Models for End-to-End Open-Domain Event Extraction and Latent Pattern Identification
Abstract
Open-domain event extraction, which aims to identify and structure event information from text without predefined schemas, remains a challenging task. Traditional methods often struggle with the diversity of real-world events, while recent efforts leveraging large language models (LLMs) show promise but still face challenges in extracting structured information and inducing event patterns effectively. In this paper, we propose a novel two-stage generative approach built entirely on LLMs. Our method first employs instruction tuning to train an LLM to generate natural language descriptions of events, including triggers and argument roles, from input text. Subsequently, we introduce a meta-learning-inspired few-shot learning strategy that enables the LLM to implicitly learn event patterns and identify common argument roles from the generated descriptions. We evaluate our approach on the ACE 2005 and ERE benchmark datasets, demonstrating significant F1-score improvements over strong baselines, including traditional supervised models and other LLM-based approaches. Furthermore, ablation studies validate the contribution of each stage of our method, and human evaluations confirm the superior quality of the extracted event descriptions. Our work highlights the potential of a purely LLM-centric approach for flexible and effective open-domain event extraction and pattern induction.
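The two-stage pipeline summarized above can be sketched as follows. This is a minimal conceptual illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for the instruction-tuned model (stubbed here with canned responses so the example runs end to end), and the prompt wording is assumed for illustration.

```python
# Conceptual sketch of the abstract's two-stage pipeline (hypothetical API).

def call_llm(prompt: str) -> str:
    """Placeholder for an instruction-tuned LLM call; stubbed for this sketch."""
    if "Describe the events" in prompt:
        return ("Event: Attack. Trigger: 'bombed'. "
                "Arguments: Attacker=rebels, Target=the convoy.")
    return "Pattern: Attack(Attacker, Target); common roles: Attacker, Target"

def stage1_describe_events(text: str) -> str:
    # Stage 1: instruction-tuned generation of a natural-language event
    # description (trigger + argument roles) from the input text.
    prompt = ("Describe the events in the following text, including "
              f"triggers and argument roles:\n{text}")
    return call_llm(prompt)

def stage2_induce_patterns(descriptions: list[str],
                           few_shot_examples: list[str]) -> str:
    # Stage 2: few-shot prompting over the generated descriptions so the
    # model implicitly induces event patterns and common argument roles.
    shots = "\n".join(few_shot_examples)
    prompt = (f"{shots}\n\nFrom these event descriptions, induce the event "
              "pattern and common argument roles:\n" + "\n".join(descriptions))
    return call_llm(prompt)

desc = stage1_describe_events("Rebels bombed the convoy on Tuesday.")
pattern = stage2_induce_patterns(
    [desc],
    ["Event: Transport. Pattern: Transport(Agent, Artifact, Destination)"],
)
print(desc)
print(pattern)
```

The key design point carried over from the abstract is that structure (patterns, shared roles) is never hard-coded: stage 1 produces free-form descriptions, and stage 2 lets the LLM generalize over them via few-shot examples.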