🎉 We’re excited to announce that Tsukada Lab (TLab) at the University of Tokyo, in collaboration with the Autonomous & Intelligent Systems Lab (AISL) at Amirkabir University of Technology, has received the Exceptional Merit Award 🥉 in the V2X-Sec MEIS Challenge (Track 1: Temporal Perception) at CVPR 2025. Out of 29 participating teams, our method ranked 🥉 3rd place on the official Track 1 leaderboard.
🧠 This challenge was part of the CVPR 2025 workshop:
“Multi-Agent Embodied Intelligent Systems Meet Generative-AI Era: Opportunities, Challenges and Futures.”
🔗 Workshop Website
📊 Leaderboard
🔍 About the Challenge:
With the rapid advancement of autonomous driving technology, vehicle-to-everything (V2X) communication has become a vital enabler for improving driving safety and efficiency. By allowing ego-vehicles to exchange real-time data with infrastructure and other road users, V2X extends perception beyond the vehicle’s own sensors and mitigates their line-of-sight limitations.
The V2X-Sec MEIS Challenge focuses on end-to-end autonomous driving with V2X cooperation, emphasizing planning-centric optimization. Participants are tasked with fusing multi-view sensor data from both ego-vehicles and infrastructure, under constrained communication bandwidth, to produce robust driving plans.
🚘 Track 1: Cooperative Temporal Perception specifically aims to improve V2X-enabled detection and multi-object tracking using pre-recorded multi-view sensor data (from ego and infrastructure) in the open-source UniV2X framework. The challenge pushes teams to deliver high-quality detection and tracking results under bandwidth-constrained and failure-prone communication scenarios.
- Input: vehicle front-view images, infrastructure images, commands, ego states, and calibration files
- Output: 3D bounding box information with tracking IDs
👥 Team Members:
From TLab: Ehsan Javanmardi, Manabu Tsukada
From AISL: Fardin Ayar, Najmeh Mohammad Bagheri, Mahdi Javanmardi
🚀 We’re proud to contribute to advancing the field of multi-agent autonomous perception and secure V2X systems, and to be part of shaping the future of intelligent cooperative mobility.