

Xiang Yue*   Yueqi Song*   Akari Asai   Seungone Kim   Jean de Dieu Nyandwi   Simran Khanuja   Anjali Kantharuban   Lintang Sutawika   Sathyanarayanan Ramamoorthy   Graham Neubig

We introduce Pangea-7B, a fully open multilingual multimodal language model (MLLM) designed to bridge multilingual and multicultural gaps in visual understanding tasks. Pangea-7B is trained on PangeaIns, a diverse 6M-example instruction dataset spanning 39 languages, and evaluated on PangeaBench, a holistic evaluation suite encompassing 14 datasets covering 47 languages. As demonstrated in Figure 1, Pangea-7B achieves state-of-the-art results, outperforming existing open models in multilingual and culturally diverse contexts.
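As a concrete, unofficial illustration of how one might query such a model, the sketch below uses the LLaVA-NeXT classes from Hugging Face transformers. The repository id, prompt format, and checkpoint compatibility are assumptions for illustration, not details stated in this post.

```python
# Minimal inference sketch, assuming a LLaVA-NeXT-compatible release of the
# checkpoint; the repo id "neulab/Pangea-7B-hf" and the raw "<image>" prompt
# format are assumptions, not confirmed details from this post.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaNextForConditionalGeneration

model_id = "neulab/Pangea-7B-hf"  # assumed repository id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("street_sign.jpg")  # any local image
# A non-English question ("What does this sign say?" in Japanese) to exercise
# the multilingual side of the model.
prompt = "<image>\nこの標識には何と書いてありますか？"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```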

Pangea is structured around three key aspects, each offering important insights into the design space of MLLMs: §Pangea-7B: a strong multilingual multimodal LLM supporting 39 languages. §Instruction Tuning Data: PangeaIns, a diverse dataset of 6 million multilingual multimodal instruction-tuning examples spanning 39 languages, on which Pangea-7B is trained; Figure 2 shows its data distribution. §Benchmarking: PangeaBench, a multilingual multimodal evaluation benchmark comprising 14 datasets spanning 47 languages, on which Pangea-7B is evaluated.
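To make the benchmarking side more concrete, the sketch below shows the kind of per-language score aggregation a PangeaBench-style harness might perform. The record format and the dataset names in the example are hypothetical and not taken from the actual benchmark code.

```python
# Illustrative per-language accuracy aggregation for a multilingual benchmark
# suite; the record schema (dataset, language, correct) is hypothetical.
from collections import defaultdict

def aggregate_scores(records):
    """Return {language: accuracy} from an iterable of result records."""
    per_lang = defaultdict(lambda: [0, 0])  # language -> [num_correct, total]
    for r in records:
        per_lang[r["language"]][0] += int(r["correct"])
        per_lang[r["language"]][1] += 1
    return {lang: correct / total for lang, (correct, total) in per_lang.items()}

example = [
    {"dataset": "xChatBench", "language": "zh", "correct": True},
    {"dataset": "xChatBench", "language": "zh", "correct": False},
    {"dataset": "xMMMU", "language": "sw", "correct": True},
]
print(aggregate_scores(example))  # {'zh': 0.5, 'sw': 1.0}
```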
