ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model

Yiming Sun1,2*, Fan Yu1*, Shaoxiang Chen3, Yu Zhang1, Junwei Huang1, Yang Li1,4, Chenhui Li1, Changbo Wang1
1School of Computer Science and Technology, East China Normal University, Shanghai, China; 2School of Computer Science, Fudan University, Shanghai, China; 3Meituan Inc.; 4Shanghai Frontiers Science Center of Molecule Intelligent Syntheses, Shanghai, China
NeurIPS 2024 Main Track

Abstract

Visual object tracking aims to locate a targeted object in a video sequence based on an initial bounding box. Recently, Vision-Language (VL) trackers have been proposed to utilize additional natural language descriptions to enhance versatility in various applications. However, VL trackers still lag behind state-of-the-art (SoTA) visual trackers in tracking performance. We found that this gap primarily results from their heavy reliance on manual textual annotations, which are frequently ambiguous. In this paper, we propose ChatTracker, which leverages the wealth of world knowledge in a Multimodal Large Language Model (MLLM) to generate high-quality language descriptions and enhance tracking performance. To this end, we propose a novel reflection-based prompt optimization module that iteratively refines ambiguous and inaccurate descriptions of the target using tracking feedback. To further utilize the semantic information produced by the MLLM, we propose a simple yet effective VL tracking framework that can be easily integrated as a plug-and-play module to boost the performance of both VL and visual trackers. Experimental results show that ChatTracker achieves SoTA performance across multiple datasets and that the generated language descriptions surpass manually annotated texts in image-text alignment. The source code and results will be released.
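
For intuition, the reflection-based prompt optimization described above can be pictured as the following minimal Python sketch. It is an illustration only, not the released implementation: the mllm.describe, mllm.reflect, and tracker.locate interfaces, the IoU acceptance criterion, and all parameter values are hypothetical assumptions standing in for the paper's actual components.

    # Minimal sketch (not the authors' code) of a reflection loop that
    # refines an MLLM-generated target description using tracking feedback.
    # `mllm` and `tracker` are hypothetical interfaces assumed for
    # illustration.

    def iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def optimize_description(mllm, tracker, frame, init_box,
                             max_iters=5, iou_thresh=0.5):
        """Iteratively refine the target description until it grounds
        back to the initial box well enough, or iterations run out."""
        description = mllm.describe(frame, init_box)
        for _ in range(max_iters):
            # Ground the current description back to a predicted box.
            pred_box = tracker.locate(frame, description)
            if iou(pred_box, init_box) >= iou_thresh:
                break  # description is unambiguous enough to keep
            # Reflection step: ask the MLLM to revise the description,
            # given where the current one actually pointed.
            description = mllm.reflect(frame, init_box, pred_box,
                                       description)
        return description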

Overview

Video


BibTeX

@inproceedings{chattracker,
  title={ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model},
  author={Sun, Yiming and Yu, Fan and Chen, Shaoxiang and Zhang, Yu and Huang, Junwei and Li, Yang and Li, Chenhui and Wang, Changbo},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year={2024}
}