OpenSeeSimE: A Large-Scale Benchmark to Assess Vision-Language Model Question Answering Capabilities in Engineering Simulations

Abstract

Engineering simulation interpretation is a major bottleneck in design cycles, requiring expensive domain expertise to validate complex outputs and ensure safety and performance. While modern large language models (LLMs) may assist in interpretation, they face fundamental scalability limitations, as even modest simulations exceed the context windows of best-in-class LLMs. Vision-language models (VLMs), having demonstrated success across technical visual reasoning domains from medical imaging to materials characterization, represent a promising alternative for processing simulation visualizations as compressed representations. However, their effectiveness for engineering simulation interpretation remains unknown, constrained by the absence of large-scale evaluation frameworks and prohibitive expert annotation costs. We introduce OpenSeeSimE, a large-scale benchmark consisting of 200,000+ question-answer pairs across 10,000 parametrically varied simulations. This 850× increase in scale enables statistically robust evaluation across diverse simulation configurations and question types. Evaluation of ten state-of-the-art VLMs reveals a fundamental finding: models with strong performance on general visual reasoning benchmarks perform at random-chance levels (29-47%) on engineering simulations, with negligible effect sizes, establishing critical baselines for domain-specific model development.