Large language models possess some ecological knowledge, but how much?

Abstract

Large Language Models (LLMs) have garnered attention for their potential in question answering across many domains. However, despite this promise, their effectiveness in the context of ecological knowledge has received only limited exploration. We investigate the ecological knowledge and potential reasoning abilities of two LLMs, Gemini 1.5 Pro and GPT-4o, across a suite of ecologically focused tasks. Our tasks quantitatively assess a model's ability to predict species presence at specific locations, generate range maps, list critically endangered species, classify threats, and estimate species traits. Using a new benchmark dataset we introduce, we compare model performance against expert-derived data to quantify accuracy and reliability. We show that while the LLMs tested outperform naive baselines, they still exhibit significant limitations, particularly in generating spatially accurate range maps and classifying threats. Our findings underscore both the potential and the challenges of using LLMs in ecological applications, highlighting the need for further refinement, including domain-specific fine-tuning, to better approximate ecological reasoning. By providing a repeatable way to evaluate future models, our benchmark dataset will enable researchers to make measurable progress on these tasks.