
Introduction

Learning a good language representation is a fundamental component of addressing a vision-language task, such as phrase grounding [22,34] or visual question answering [3,17]. Many recent methods have demonstrated that learning text representations aligned to images can boost performance across many vision-language tasks over traditional text-only trained representations [8,19,29,37,38]. This is often accomplished by using auxiliary vision-language tasks when learning the language representation (such as image-sentence retrieval, as shown in Figure 1(a)). However, these methods often only support a single language. Although some work has addressed a multilingual scenario (e.g., [16,23,41]), these

Project page: http://ai.bu.edu/smalr

[Figure 1 panels: (a) Multilingual image-sentence retrieval; (b) MSCOCO multilingual retrieval]

Fig. 1: (a) presents multilingual bidirectional retrieval: we embed sentences in ten languages with SMALR and use the embeddings to compute the highest-scoring image. (b) shows the effect of the number of training languages on performance for prior work MULE [23] and LIWE [41]. LIWE is the original model, hereafter referred to as S-LIWE. The plot also contains two individual points: L-LIWE, i.e., [41] trained with a larger embedding (120-D vs. 24-D) for a fair comparison (orange), and SMALR (yellow). The points are scaled to the number of parameters, P; specifically, their area is $(\frac{P}{10^6})^{\frac{3}{2}}$. SMALR is able to outperform all prior work with few parameters.

methods do not scale well to support many languages in terms of memory or performance (see Figure 1(b)). As the number of languages grows, methods like LIWE [41] that use character-based recognition systems can save memory but suffer from performance degradation. In contrast, methods that learn to align word embeddings across languages can maintain (or even improve) performance as languages are added (e.g., [16,23]), but require additional parameters for the word embeddings that represent each new language's vocabulary.

This becomes a challenge when scaling to support many languages, as an increasing majority of trainable parameters is required to represent each language (e.g., $\sim$93% of the parameters of [23] with ten languages). While pretrained word embeddings could be used without fine-tuning, e.g. Multilingual BERT [13] or MUSE [11], this comes at a significant cost in downstream task performance [8,23].

To address this trade-off between multilingual capacity and performance, we propose a Scalable Multilingual Aligned Language Representation (SMALR) model, which we demonstrate achieves strong task performance while also being highly compact compared to state-of-the-art word embedding methods [13,24,26]. As seen in Figure 1, LIWE drops over 10% in performance going from supporting one to ten languages. MULE slightly increases performance with more languages, but requires 6x more parameters compared to its single-language model. Our approach, SMALR, outperforms both with only 1/5th the parameters of MULE. We learn to efficiently represent each language by separating our language embedding into language-specific and language-agnostic token representations. As language follows a long-tailed distribution, only a few words occur often, with large portions of tokens occurring very rarely. For example, in the MSCOCO dataset [28] there are 25,126 unique tokens, but 61% of them occur less than four times. This suggests that having unique representations for every token in the vocabulary is unnecessary, as only a subset would affect downstream task performance significantly. Thus, we use a Hybrid Embedding Model (HEM) that contains language-specific embeddings for the common tokens, thereby providing a good representation for each language, and a compact language-agnostic representation for rare and uncommon words. This results in a model that needs far fewer unique embeddings than prior work without sacrificing performance.
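To make the hybrid lookup concrete, the following is a minimal sketch of the idea: frequent tokens index a per-language table, while rare tokens index a small shared table. The class and argument names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HybridEmbedding(nn.Module):
    """Illustrative hybrid embedding (not the paper's code): frequent tokens
    get language-specific vectors; rare tokens share one compact,
    language-agnostic table."""

    def __init__(self, frequent_vocab_sizes, shared_vocab_size, dim):
        super().__init__()
        # One embedding table per language, covering only its frequent tokens.
        self.specific = nn.ModuleDict({
            lang: nn.Embedding(size, dim)
            for lang, size in frequent_vocab_sizes.items()
        })
        # A single small table shared across all languages for rare tokens.
        self.shared = nn.Embedding(shared_vocab_size, dim)

    def forward(self, lang, specific_ids, shared_ids, is_rare):
        # specific_ids / shared_ids: (batch, seq_len) indices into the two
        # tables; is_rare: (batch, seq_len) boolean mask selecting the source.
        specific_vecs = self.specific[lang](specific_ids)
        shared_vecs = self.shared(shared_ids)
        # Hard attention: each token draws from exactly one table.
        return torch.where(is_rare.unsqueeze(-1), shared_vecs, specific_vecs)
```

In a sketch like this, rare tokens would carry a placeholder index in the language-specific table (and vice versa) so both lookups stay shape-compatible; how rare tokens are assigned to shared indices is described next.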

We learn how to assign tokens to the language-agnostic representation in a pretraining step, which uses monolingual FastText embeddings [7] to map similar words to the same token, e.g. mapping "double-decker" in English and "impériale" in French to the same shared token. Once we obtain our language embeddings, our goal is to align them so that semantically similar words, even those from other languages, are embedded nearby. To accomplish this, we use a multilingual masked language model, where we randomly mask words and then predict them based on context. Unlike similar masking approaches used to train models such as BERT [13], we mask words in sentences from any two languages, say German and Chinese, that are semantically similar because they refer to the same image, and use the context from each to predict both sets of masked tokens. To further encourage cross-language alignment, we also use an adversarial language classifier and neighborhood constraints that have been used in prior work [23]. These universal language embeddings are provided as input to a multimodal model that learns to relate them to images. Finally, we use a cross-lingual consistency module that uses machine translations to reason about image-sentence similarity across multiple languages, which we show significantly boosts performance. Figure 2 contains an overview of our model.
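To illustrate the cross-language masking scheme, here is a minimal sketch of how a bilingual training pair could be masked; the masking rate, special token, and function names are assumptions for illustration, not the paper's specification.

```python
import random

MASK_TOKEN = "<mask>"

def mask_cross_language_pair(tokens_a, tokens_b, mask_prob=0.15, seed=None):
    """Illustrative MCLM-style masking (not the paper's code): given two
    tokenized sentences in different languages that describe the same image,
    randomly mask tokens in each and return the combined sequence plus the
    positions and target tokens to predict from the bilingual context."""
    rng = random.Random(seed)
    combined, targets = [], []
    for tokens in (tokens_a, tokens_b):
        for token in tokens:
            if rng.random() < mask_prob:
                targets.append((len(combined), token))  # position in combined
                combined.append(MASK_TOKEN)
            else:
                combined.append(token)
    return combined, targets
```

The model would then receive the combined sequence and be trained to recover each target token from the full bilingual context, which is what encourages the two languages' embeddings to align.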

We use bidirectional image-sentence retrieval as the primary evaluation of our multilingual language representation. In this task, the goal is to retrieve a relevant sentence from a database given an image or to retrieve a relevant image from a database given a sentence. We augment current multilingual datasets Multi30K [6,14,15,43] and MSCOCO [27,28,31] using machine translations so that every image has at least five sentences across ten diverse languages: English (En), German (De), French (Fr), Czech (Cs), Chinese (Cn), Japanese (Ja), Arabic (Ar), Afrikaans (Af), Korean (Ko), and Russian (Ru). See the supplementary for details on our data augmentation procedure. This constitutes the highest number of languages used in multilingual learning for vision-language tasks to date, supporting more than double the number of visually-semantically aligned languages compared to prior work [5,11,16,23,36,41].
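Bidirectional retrieval is typically scored with Recall@K in both directions. The sketch below shows one common way to compute it, assuming five captions per image as in the augmented datasets; the function and variable names are ours, not the paper's evaluation code.

```python
import numpy as np

def recall_at_k(similarity, ks=(1, 5, 10), captions_per_image=5):
    """Illustrative Recall@K for bidirectional image-sentence retrieval.
    similarity[i, j] scores image i against sentence j; sentence j is
    assumed to belong to image j // captions_per_image."""
    n_images, n_sents = similarity.shape
    # Image -> sentence: rank all sentences for each image.
    sent_ranks = np.argsort(-similarity, axis=1)
    # Sentence -> image: rank all images for each sentence.
    img_ranks = np.argsort(-similarity.T, axis=1)

    i2s, s2i = {}, {}
    for k in ks:
        # A hit if any ground-truth caption appears in the top k sentences.
        hits = [np.any(sent_ranks[i, :k] // captions_per_image == i)
                for i in range(n_images)]
        i2s[k] = float(np.mean(hits))
        # A hit if the ground-truth image appears in the top k images.
        hits = [(j // captions_per_image) in img_ranks[j, :k]
                for j in range(n_sents)]
        s2i[k] = float(np.mean(hits))
    return i2s, s2i
```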

We list the contributions of our work below:

  • SMALR, a scalable multilingual model for training visually-semantically aligned word embeddings that outperforms the state-of-the-art on multilingual image-sentence retrieval while requiring far fewer model parameters.
  • A comparison to four types of vocabulary reduction methods that serve as baselines to complement our evaluation against prior work.

  • A Masked Cross-Language Modeling (MCLM) procedure that further aligns the multilingual embedding, stabilizes the variance in performance across languages, and serves as an additional data augmentation technique.
  • A Cross-Lingual Consistency (CLC) module, the first of its kind, which learns to aggregate an ensemble of predictions made across languages with machine translations and, combined with our SMALR architecture, yields a total improvement of 3-4% over the state-of-the-art (see the sketch below).

Fig. 2: The contributions of SMALR are in blue: a Hybrid Embedding Model (HEM), a Masked Cross-Language Model (MCLM), and a Cross-Lingual Consistency stage (CLC). HEM embeds input sentences as a mixture of language-specific and language-agnostic representations using a hard attention mechanism. The MCLM component provides an additional loss to enforce language alignment, while also augmenting the original dataset with masked sentences.
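As a minimal sketch of the CLC idea listed above, one simple way to aggregate per-language predictions is a small learned weighting over the similarity scores an image receives from the query sentence and its machine translations. The module below is an illustrative assumption, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossLingualConsistency(nn.Module):
    """Illustrative CLC-style aggregation (not the paper's code): combine
    image-sentence similarity scores computed from the original query and
    its machine translations into a single score."""

    def __init__(self, num_languages):
        super().__init__()
        # One learned weight per language plus a bias; a linear scorer is an
        # assumption made here for simplicity.
        self.combine = nn.Linear(num_languages, 1)

    def forward(self, scores):
        # scores: (batch, num_languages) similarities of one image to the
        # query sentence and each of its machine translations.
        return self.combine(scores).squeeze(-1)
```

Learning the aggregation, rather than simply averaging the per-language scores, lets the model discount translations into languages whose scores are less reliable.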