We are pleased to announce the VATEX Captioning Challenge 2019! The challenge will be hosted at the 3rd Workshop on Closing the Loop Between Vision and Language (CLVL) at ICCV 2019.
Congratulations to the winning teams!
The 1st VATEX Captioning Challenge has ended! We plan to archive the competition results from CodaLab on the official VATEX website and further contribute to the vision-and-language research community. Meanwhile, we have several awards for the winning teams, which will be announced at the 3rd CLVL workshop on Oct 28th, 2019. To be eligible for the result archive and award consideration, please send the following information to vatex.org@gmail.com from your main contact email:
The VATEX dataset is a new large-scale multilingual video description dataset, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, over 206,000 are English-Chinese parallel translation pairs. Compared to the widely used MSR-VTT dataset, VATEX is multilingual, larger, linguistically more complex, and more diverse in terms of both video content and natural language descriptions. Please refer to our ICCV 2019 paper for more details. The VATEX Captioning Challenge aims to benchmark progress towards models that can describe videos in multiple languages, such as English and Chinese.
Please refer to the Download page for details; the English/Chinese captions and video features can be downloaded there.
The challenge is hosted on CodaLab. Please go to the challenge page to submit your models.
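As a quick illustration of working with the downloaded captions, below is a minimal sketch that reads a training annotation file. The file name and the field names (videoID, enCap, chCap) are assumptions about the released JSON layout; please verify them against the files on the Download page.

```python
import json

# Minimal sketch: load the VATEX caption annotations.
# The file name and field names (videoID, enCap, chCap) are assumptions;
# verify them against the files on the Download page.
with open("vatex_training_v1.0.json", encoding="utf-8") as f:
    annotations = json.load(f)  # assumed: a list of per-video entries

print(f"{len(annotations)} annotated video clips")

entry = annotations[0]
print("video id:", entry["videoID"])
print("English captions:", len(entry["enCap"]))
print("Chinese captions:", len(entry["chCap"]))
print("example caption:", entry["enCap"][0])
```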
English video captioning leaderboard:

Rank | Model / Team | BLEU-4 | METEOR | ROUGE-L | CIDEr |
---|---|---|---|---|---|
1 | Forence-CASIA | 40.9 | 26.4 | 54.2 | 82.4 |
2 | RUC_AIM3 + Adelaide | 39.1 | 25.8 | 53.3 | 73.4 |
3 | pp3 | 38.4 | 24.5 | 52.1 | 70.0 |
4 | anyeshine | 31.2 | 22.7 | 48.6 | 49.2 |
5 | Imperial College London (ICL) | 29.0 | 21.1 | 46.7 | 45.2 |
6 | Baseline Shared Encoder | 28.4 | 21.7 | 47.0 | 45.1 |
7 | Naive Baseline | 28.1 | 21.6 | 46.9 | 44.3 |
8 | Baseline Shared Encoder-Decoder | 27.9 | 21.6 | 46.8 | 44.2 |
Chinese video captioning leaderboard:

Rank | Model / Team | BLEU-4 | METEOR | ROUGE-L | CIDEr |
---|---|---|---|---|---|
1 | Forence-CASIA | 32.6 | 32.5 | 56.7 | 64.4 |
2 | pp3 | 32.2 | 32.1 | 56.2 | 56.8 |
3 | RUC_AIM3 + Adelaide | 31.7 | 30.2 | 49.4 | 51.9 |
4 | anyeshine | 26.1 | 30.4 | 52.4 | 37.7 |
5 | Baseline Shared Encoder-Decoder | 24.9 | 29.8 | 51.7 | 35.0 |
6 | Baseline Shared Encoder | 24.9 | 29.7 | 51.6 | 34.9 |
7 | Naive Baseline | 24.9 | 29.7 | 51.5 | 34.7 |
8 | Imperial College London (ICL) | 23.3 | 29.5 | 50.8 | 33.1 |
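For reference, the leaderboard metrics (BLEU-4, METEOR, ROUGE-L, CIDEr) are standard captioning metrics computed against the reference captions. The sketch below shows a minimal BLEU-4 computation with NLTK on hypothetical captions; the official challenge evaluation uses its own scripts and tokenization (especially for Chinese), so scores from a sketch like this are only indicative.

```python
# Minimal BLEU-4 sketch using NLTK (illustrative only; not the official
# challenge evaluation, which also reports METEOR, ROUGE-L, and CIDEr).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical reference captions (multiple per video) and one model output.
references = [[
    "a man is playing an acoustic guitar on stage".split(),
    "someone plays the guitar in front of an audience".split(),
]]
hypotheses = ["a man plays a guitar on stage".split()]

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences
bleu4 = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25),
                    smoothing_function=smooth)
print(f"BLEU-4: {bleu4:.3f}")
```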
Organizers:

Xin (Eric) Wang, UC Santa Cruz
Jiawei Wu, Shannon AI
Junkun Chen, Oregon State University
Lei Li, ByteDance AI Lab
William Yang Wang, UC Santa Barbara