V-MAGE Logo

V-MAGE: A Game Evaluation Framework for Assessing Visual-Centric Capabilities in MLLMs

Xiangxi Zheng1    Linjie Li2    Zhengyuan Yang2    Ping Yu1    Alex Jinpeng Wang3    Rui Yan1    Yuan Yao1    Lijuan Wang2   
1Nanjing University     2Microsoft Research     3Central South University    

We present V-MAGE, a benchmark built on video game environments designed to evaluate the comprehensive performance of MLLMs, with a focus on visual-centric capabilities. V-MAGE consists of five distinct video game environments, each containing manually crafted levels of varying difficulty to holistically assess the visual perception and reasoning abilities of MLLMs. The evaluation employs a dynamic Elo-based framework with statistical stabilization, iteratively refining models' relative rankings through randomized pairwise comparisons across multi-round interactions.
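As a rough illustration of the underlying pairwise Elo update (not V-MAGE's exact procedure), the sketch below repeatedly samples two models, compares their scores on a shared level, and nudges their ratings toward the observed outcome. The K-factor, initial ratings, stopping criterion, and score-comparison rule here are illustrative assumptions.

```python
import random

K = 32  # assumed update step; V-MAGE's exact K-factor is not specified here

def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, outcome_a: float) -> tuple[float, float]:
    """Update both ratings; outcome_a is 1.0 (A wins), 0.5 (draw), or 0.0 (A loses)."""
    e_a = expected_score(r_a, r_b)
    return r_a + K * (outcome_a - e_a), r_b + K * ((1.0 - outcome_a) - (1.0 - e_a))

# Randomized pairwise comparisons over placeholder level scores.
ratings = {"model_a": 1500.0, "model_b": 1500.0, "model_c": 1500.0}
level_scores = {"model_a": 12, "model_b": 7, "model_c": 12}  # illustrative only

for _ in range(1000):
    a, b = random.sample(list(ratings), 2)
    if level_scores[a] == level_scores[b]:
        outcome = 0.5
    else:
        outcome = 1.0 if level_scores[a] > level_scores[b] else 0.0
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)
```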

Abstract

Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in visual-text processing. However, existing static image-text benchmarks are insufficient for evaluating their dynamic perception and interactive reasoning abilities. We introduce Vision-centric Multiple Abilities Game Evaluation (V-MAGE), a novel game-based evaluation framework designed to systematically assess MLLMs’ visual reasoning in interactive, continuous-space environments. V-MAGE features five distinct video games comprising over 30 carefully constructed evaluation scenarios. These scenarios are set in free-form, visually complex environments that require models to interpret dynamic game states and make decisions based solely on visual input, thereby closely reflecting the conditions encountered by human players. To ensure robust and interpretable comparisons across models, V-MAGE employs a dynamic Elo-based ranking system that accounts for varying difficulty levels and task diversity. Benchmarking state-of-the-art MLLMs against human baselines reveals that while leading models approach human-level performance in simple tasks, their performance drops significantly in complex scenarios requiring advanced reasoning and task orchestration. This persistent performance gap highlights fundamental limitations in current MLLMs’ ability to perform real-time, vision-grounded interactions. Through extensive analyses, we demonstrate the utility of V-MAGE in uncovering these limitations and providing actionable insights for improving the visual and reasoning capabilities of MLLMs in dynamic, interactive settings.

Performance

Model performance in different games: gameplay examples of top-performing MLLMs (OpenAI GPT-4o, Google Gemini 2.0 Flash, and Qwen2.5VL-72B), sampled from the V-MAGE evaluation process.

Pipeline

V-MAGE Evaluation Pipeline. We selected five games and designed several levels for each to decompose the evaluation of model performance. The games are FlappyBird, RaceGame, SuperMario, PongGame, and Tempest Run.
During evaluation, the Agent module receives visual game state information from the Game module in the form of game screenshots. Within the Agent module, these screenshots are structured into model inputs. In the baseline agent of V-MAGE, the MLLM input combines the screenshots of the three most recent frames with a prompt containing the game rules. The Agent module parses the model's output into a game action, which is sent back to the Game module.
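A minimal sketch of this agent loop is shown below. The names `game.get_screenshot`, `game.step`, `query_mllm`, `build_prompt`, and `parse_action` are hypothetical placeholders for illustration, not V-MAGE's actual API.

```python
from collections import deque

HISTORY_LEN = 3  # the baseline agent uses the three most recent frames

def build_prompt(rules_prompt: str, frames: list) -> dict:
    """Combine the game-rule instructions with the recent screenshots."""
    return {"text": rules_prompt, "images": frames}

def parse_action(raw_output: str) -> str:
    """Extract an action token (e.g. 'flap', 'left', 'right') from the model response."""
    return raw_output.strip().lower()

def play_episode(game, query_mllm, rules_prompt: str, max_steps: int = 200):
    """Run one episode: screenshot -> prompt -> MLLM -> action -> game step."""
    frames = deque(maxlen=HISTORY_LEN)
    for _ in range(max_steps):
        frames.append(game.get_screenshot())   # visual game state only
        prompt = build_prompt(rules_prompt, list(frames))
        raw_output = query_mllm(prompt)        # call the MLLM
        action = parse_action(raw_output)      # map the text output to a game action
        done = game.step(action)               # send the action back to the Game module
        if done:
            break
```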

Comparison

Comparison of V-MAGE with existing game-based evaluation benchmarks. *In V-MAGE, text consists only of the instructions for game rules and output format.

Results

Evaluation results (Elo rating). Performance comparison across different games based on the Elo ranking system.

Evaluation results (Score). MLLMs trail humans by a large margin across all games. Levels marked with an asterisk (*) denote the 'no history' setting.

Further Analysis

Error Cases

Error analysis in RaceGame and FlappyBird.