V-MAGE Evaluation Pipeline.
We selected five different games and designed several levels for each so that model performance can be evaluated in a decomposed, fine-grained manner.
The games used are FlappyBird, RaceGame, SuperMario, PongGame, and Tempest Run.
During the evaluation process, the Agent module receives visual game state information from the Game module, specifically in the form of game screenshots. Within the Agent module, these screenshots are structured into inputs for the model.
In the baseline agent of V-MAGE, inputs to the MLLM are constructed by combining screenshots from the three most recent frames with a prompt containing the game rules.
The output of the model is processed by the Agent module into response actions, which are subsequently sent back to the Game module.
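The perception-action loop described above can be sketched as follows. This is a minimal illustration, not the actual V-MAGE implementation: the function names (`build_input`, `agent_step`, `parse_action`), the action vocabulary, and the stub model are all assumptions introduced for clarity.

```python
from collections import deque

FRAME_WINDOW = 3  # the baseline agent keeps the three most recent screenshots
RULES_PROMPT = "You are playing FlappyBird. Output exactly one action: FLAP or NOOP."

def build_input(frames, prompt):
    """Combine the most recent screenshots with the game-rules prompt."""
    return {"images": list(frames), "text": prompt}

def parse_action(response):
    """Map the model's free-form text response to a discrete game action.
    (Hypothetical parsing rule for a two-action game.)"""
    return "FLAP" if "FLAP" in response.upper() else "NOOP"

def agent_step(frames, prompt, model_fn):
    """One cycle: screenshots in from the Game module, action out to it."""
    response = model_fn(build_input(frames, prompt))
    return parse_action(response)

# Minimal usage with a stub in place of a real MLLM call:
frames = deque(maxlen=FRAME_WINDOW)
for screenshot in ["frame_0", "frame_1", "frame_2", "frame_3"]:
    frames.append(screenshot)  # window automatically drops frames older than 3
    action = agent_step(frames, RULES_PROMPT, lambda x: "I will FLAP now.")
```

The `deque(maxlen=3)` keeps the sliding three-frame window without manual bookkeeping; the resulting `action` is what the Agent module would send back to the Game module.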
Comparison of V-MAGE with existing game-based evaluation benchmarks. *In V-MAGE, text represents only the instructions for game rules and output format.
[Figure legend residue: the evaluated models are OpenAI GPT-4o, Google Gemini 2.0 Flash, and Qwen2.5VL-72B.]
Model performance in different games: gameplay examples of top-performing MLLMs, sampled from the V-MAGE evaluation process.
Evaluation results (Elo rating). Performance comparison across different games based on the Elo rating system.
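For readers unfamiliar with Elo-based comparison, a minimal sketch of the standard Elo update is shown below. The K-factor, initial ratings, and matchmaking details are assumptions for illustration; V-MAGE's exact configuration may differ.

```python
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """Update both ratings after one match.
    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Two equally rated players; A wins, so A gains k/2 = 16 points.
a, b = elo_update(1200.0, 1200.0, 1.0)  # -> (1216.0, 1184.0)
```

The update is zero-sum (rating gained by the winner equals rating lost by the loser), which is what makes per-game Elo scores comparable across models.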
Evaluation results (score). MLLMs trail humans by a large margin in all five games.
Error analysis in RaceGame and FlappyBird.