Title: Unlocking New Languages with Monolingual Text

URL Source: https://arxiv.org/html/2601.10096

Published Time: Fri, 16 Jan 2026 01:22:56 GMT

Multilingual-To-Multimodal (M2M):

Unlocking New Languages with Monolingual Text
---------------------------------------------------------------------------------

Piyush Singh Pasi

Amazon

piyush.singh.pasi@gmail.com

###### Abstract

Multimodal models excel in English, supported by abundant image–text and audio–text data, but performance drops sharply for other languages due to limited multilingual multimodal resources. Existing solutions rely on machine translation, while advances in multilingual text modeling remain underutilized. We introduce M2M, a lightweight alignment method that learns only a few linear layers—using English text alone—to map multilingual text embeddings into multimodal space. Despite its simplicity, M2M matches baseline performance in English (94.9% Recall@10) and achieves strong zero-shot transfer (89.5% Recall@10 averaged across 11 languages, 10 unseen) on XTD Text-to-Image retrieval. Qualitative t-SNE visualizations show that multilingual embeddings align tightly with multimodal representations, while weight analysis reveals that the transformation reshapes embedding geometry rather than performing trivial rotations. Beyond image–text retrieval, M2M generalizes to Audio–Text retrieval and cross-lingual Text-to-Image generation. 
We release code and checkpoints ([GitHub: m2m-codebase/M2M](https://github.com/m2m-codebase/M2M)) along with multilingual evaluation datasets: MSCOCO Multilingual 30K ([HF: piyushsinghpasi/mscoco-multilingual-30k](https://huggingface.co/datasets/piyushsinghpasi/mscoco-multilingual-30k)), AudioCaps Multilingual ([HF: piyushsinghpasi/audiocaps-multilingual](https://huggingface.co/datasets/piyushsinghpasi/audiocaps-multilingual)), and Clotho Multilingual ([HF: piyushsinghpasi/clotho-multilingual](https://huggingface.co/datasets/piyushsinghpasi/clotho-multilingual)).

_Work done outside of Amazon, not directly related to the author's role._

1 Introduction
--------------

![Image 1: Refer to caption](https://arxiv.org/html/2601.10096v1/labse-clip-intro-diagram.png)

Figure 1: Overview of M2M. Using only English text supervision, we learn a lightweight linear mapping that aligns multilingual text embeddings to a frozen multimodal text space (e.g., CLIP). English acts as a shared anchor during training, aligning multilingual text representations (triangles) to the multimodal text space (diamonds). This alignment implicitly transfers to other languages (stars and circles) without requiring any additional multilingual or multimodal supervision.

Humans naturally align information across modalities, associating visual objects with words and sounds. For example, once a person has learned to associate the visual concept of a _cat_ with the English word “cat”, learning that the Spanish word “gato” refers to the same concept allows the object–word association for “gato” to emerge implicitly, without requiring direct visual supervision in Spanish.

Existing multimodal models, such as CLIP Radford et al. 
([2021](https://arxiv.org/html/2601.10096v1#bib.bib54 "Learning transferable visual models from natural language supervision")) and CLAP Elizalde et al. ([2023](https://arxiv.org/html/2601.10096v1#bib.bib53 "CLAP learning audio concepts from natural language supervision")), are primarily trained on large-scale English multimodal corpora and rely on explicit multimodal supervision. Extending these models to additional languages typically requires substantial multilingual image–text or audio–text data, either by training from scratch or fine-tuning pretrained models Carlsson et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib65 "Cross-lingual and multilingual CLIP")); Yan et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib4 "Bridging language gaps in audio-text retrieval")); Koukounas et al. ([2024b](https://arxiv.org/html/2601.10096v1#bib.bib55 "Jina-clip-v2: multilingual multimodal embeddings for text and images")). Acquiring such multilingual multimodal resources is expensive or infeasible, particularly for low-resource languages. By contrast, multilingual text encoders have demonstrated strong cross-lingual generalization using only large-scale text corpora and self-supervised objectives Devlin et al. ([2019](https://arxiv.org/html/2601.10096v1#bib.bib12 "BERT: pre-training of deep bidirectional transformers for language understanding")); Radford ([2018](https://arxiv.org/html/2601.10096v1#bib.bib8 "Improving language understanding by generative pre-training")), but this capability remains largely disconnected from pretrained multimodal representations. + +In this work, we propose M2M, a simple, data-efficient, and parameter-efficient approach to bridge multilingual text and multimodal latent spaces using English text as a shared anchor. Inspired by how humans learn, our method does not require explicit multimodal signals for each language—English textual data alone is sufficient for alignment. 
We learn a lightweight projection map, implemented as a few linear layers, while keeping all pretrained encoders frozen and training with MSE and structure-preserving losses. Despite its simplicity, this alignment enables multilingual text representations to participate directly in multimodal tasks, including retrieval and generation, without observing any multilingual image–text or audio–text pairs. Rather than replacing large-scale multilingual multimodal pretraining, our goal is to show that improved latent-space alignment can recover much of the same capability at a fraction of the data and computational cost. 

Prior work has shown the effectiveness of linear projection maps for aligning latent spaces using English multimodal data, mostly in classification settings Maiorca et al. ([2024b](https://arxiv.org/html/2601.10096v1#bib.bib48 "Latent space translation via semantic alignment")); Rosenfeld et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib50 "APE: aligning pretrained encoders to quickly learn aligned multimodal representations")). We extend this approach to a broader setting: multilingual and multimodal alignment across retrieval and generative tasks. Our results show that strong multilingual multimodal behavior can emerge from lightweight alignment alone when robust multilingual text encoders are used. To summarize, our contributions are as follows:

1. We propose M2M, a lightweight alignment method that maps multilingual text representations into pretrained multimodal latent spaces using only monolingual (English) text data. Despite its simplicity, this approach enables cross-task, cross-modality transfer, allowing multilingual text representations to participate in retrieval and generation tasks without observing any multilingual multimodal data during training.
2. M2M is highly data-efficient, achieving strong performance with as few as ∼1K sentences, and parameter-light, requiring only a few linear layers (∼1–2M parameters). It generalizes across architectures, modalities (image and audio), tasks (Image–Text and Audio–Text retrieval, and Text-to-Image generation), and languages—including those unseen during multimodal pretraining.
3. We construct synthetic multilingual evaluation benchmarks for multimodal retrieval and generation: (i) Audio–Text retrieval datasets in 33 languages derived from AudioCaps Kim et al. ([2019](https://arxiv.org/html/2601.10096v1#bib.bib63 "AudioCaps: generating captions for audios in the wild")) (160K samples) and Clotho Drossos et al. ([2019](https://arxiv.org/html/2601.10096v1#bib.bib62 "Clotho: an audio captioning dataset")) (172K samples), and (ii) MSCOCO-30K captions translated into 9 additional languages (270K samples) for cross-lingual Text-to-Image generation. These datasets provide a unified and reproducible benchmark for evaluating multilingual multimodal models.

2 Related Work
--------------

Multilingual Multimodal Models. Strong multimodal models like CLIP Radford et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib54 "Learning transferable visual models from natural language supervision")) and CLAP Elizalde et al. ([2023](https://arxiv.org/html/2601.10096v1#bib.bib53 "CLAP learning audio concepts from natural language supervision")); Wu et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib64 "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation")) are typically trained on large amounts of English multimodal data (paired image–text and audio–text data). Extending these models to other languages typically requires explicit training on multilingual multimodal data—either by training from scratch Jain et al. 
([2021](https://arxiv.org/html/2601.10096v1#bib.bib26 "MURAL: multimodal, multitask retrieval across languages")) or by finetuning pretrained models Koukounas et al. ([2024b](https://arxiv.org/html/2601.10096v1#bib.bib55 "Jina-clip-v2: multilingual multimodal embeddings for text and images")); Yan et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib4 "Bridging language gaps in audio-text retrieval")); Chen et al. ([2023](https://arxiv.org/html/2601.10096v1#bib.bib14 "MCLIP: multilingual CLIP via cross-lingual transfer")); Ye et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib42 "Altdiffusion: a multilingual text-to-image diffusion model")); Li et al. ([2023](https://arxiv.org/html/2601.10096v1#bib.bib43 "Translation-enhanced multilingual text-to-image generation")). Some approaches Carlsson et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib65 "Cross-lingual and multilingual CLIP")); Chen et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib52 "AltCLIP: altering the language encoder in clip for extended language capabilities")); Zhai et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib51 "LiT: zero-shot transfer with locked-image text tuning")) fine-tune only the text encoders while keeping the image encoder frozen, while Aggarwal and Kale ([2020](https://arxiv.org/html/2601.10096v1#bib.bib24 "Towards zero-shot cross-lingual image retrieval")) train projection layers on top of frozen encoders using multimodal English data. + +In contrast, our method targets multilingual alignment without relying on multilingual multimodal supervision (e.g., image–text pairs) or encoder fine-tuning. Using simple and intuitive training losses, a small number of linear projection layers, and only English text data, we demonstrate strong multilingual transfer across a broad range of multimodal tasks, including image–text and audio–text retrieval, as well as text-to-image generation. + +Latent Space Translation. 
Latent space translation aims to map representations between distinct latent spaces in order to enable information sharing across modalities or languages. Prior work broadly follows two directions: (i) aligning spaces using relative representations Moschella et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib40 "Relative representations enable zero-shot latent space communication")); Norelli et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib3 "ASIF: coupled data turns unimodal models to multimodal without training")), and (ii) learning direct transformation mappings between source and target spaces Gower ([1975](https://arxiv.org/html/2601.10096v1#bib.bib21 "Generalized procrustes analysis")); Maiorca et al. ([2024b](https://arxiv.org/html/2601.10096v1#bib.bib48 "Latent space translation via semantic alignment")); Lähner and Moeller ([2024](https://arxiv.org/html/2601.10096v1#bib.bib5 "On the direct alignment of latent spaces")). These techniques have been successfully applied to tasks such as cross-modal classification and generative modeling. A more recent extension is the Inverse Relative Projection method Maiorca et al. ([2024a](https://arxiv.org/html/2601.10096v1#bib.bib41 "Latent space translation via inverse relative projection")), which converts source representations into a relative form before mapping them to a target space, enabling the translation of monolingual text representations into multilingual ones. + +Building on this line of work, our approach learns a linear mapping between multilingual and multimodal latent spaces. By leveraging English text as a shared anchor between these spaces, we enable multilingual multimodal transfer without requiring multilingual multimodal training data. + +3 Methodology +------------- + +Our method, M2M, is a simple alignment approach that learns a small projection network to align multilingual latent spaces with multimodal latent spaces using English text representations. 
While we focus on dual-modality multimodal models, the method naturally extends to more than two modalities. 

Consider an English (monolingual) multimodal model $\mathcal{M}_{e}=(T_{e},X_{e})$ for language $e$, where $T_{e}$ is the text encoder and $X_{e}$ represents any other modality encoder (e.g., image, audio). We assume representations from $T_{e}$ and $X_{e}$ are already aligned in a shared latent space using paired multimodal data from language $e$ (e.g., CLIP, CLAP). Let $T_{m}$ be a multilingual text encoder. For a sentence $s$ in language $e$, let $z_{e}=T_{e}(s)$ denote its multimodal representation and $z_{m}=T_{m}(s)$ its multilingual representation, with $z_{e}\in\mathbb{R}^{d_{e}}$ and $z_{m}\in\mathbb{R}^{d_{m}}$. Since both text encoders represent the same sentence $s$, in an all-aligned world, $z_{e}$ and $z_{m}$ would be identical. In practice, however, they differ due to distinct objectives and training data. 

Our goal is therefore to align the multilingual and multimodal latent spaces using English—the common language between these spaces—as an anchor. This alignment enables multimodal tasks on non-English languages, for which no multimodal training data is ever seen; any downstream performance on these languages arises purely from the learned alignment. 

To achieve this, we learn a projection map $\mathcal{F}:\mathbb{R}^{d_{m}}\rightarrow\mathbb{R}^{d_{e}}$ that transforms $z_{m}$ into $z_{e}$, using text-only alignment data in language $e$ (English). The alignment data must be semantically consistent with the downstream task (e.g., image captions for image–text retrieval, audio captions for audio–text retrieval). 

The projection map $\mathcal{F}$—implemented as a few linear layers—is the only learned component, while all encoders ($T_{e}$, $T_{m}$, $X_{e}$) remain frozen. 
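As a concrete illustration, a two-layer instance of $\mathcal{F}$ can be sketched in numpy (dimensions follow M-MPNET and Jina-CLIP-v1, both 768-d; the hidden width and random initialization scale are assumptions, and in practice these weights are learned with the losses described next):

```python
import numpy as np

rng = np.random.default_rng(0)
d_m, d_e, hidden = 768, 768, 768  # d_e = d_m = 768 here; hidden width is an assumption

# The only trainable parameters: two stacked linear layers with no nonlinearity
# between them, so the composition is itself one effective linear map W2 @ W1.
W1 = 0.02 * rng.standard_normal((hidden, d_m)); b1 = np.zeros(hidden)
W2 = 0.02 * rng.standard_normal((d_e, hidden)); b2 = np.zeros(d_e)

def project(z_m: np.ndarray) -> np.ndarray:
    """F: R^{d_m} -> R^{d_e}, applied row-wise to a batch of multilingual embeddings."""
    return (z_m @ W1.T + b1) @ W2.T + b2

z_m = rng.standard_normal((4, d_m))  # a toy batch of 4 multilingual text embeddings
z_m_to_e = project(z_m)
print(z_m_to_e.shape)  # (4, 768)
```

Because no nonlinearity separates the layers, the learned map composes into a single effective matrix $W_{2}W_{1}$, consistent with the weight analysis reported later in the paper.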
During inference, multilingual text is encoded by $T_{m}$, mapped via $\mathcal{F}$, and then directly compared with $X_{e}$ outputs, producing task-compatible representations for retrieval or generation. In this setup, $z_{e}$ serves as an anchor guiding the translation of multilingual embeddings into the multimodal space. We use mean squared error (MSE) as our primary loss function:

$$z_{m\to e}=\mathcal{F}(z_{m}) \quad (1)$$
$$\mathcal{L}_{\text{align}}=\text{MSE}(z_{m\to e},z_{e}) \quad (2)$$

To derive additional supervision, we enforce structure preservation within each batch $B$. Let $\{z_{e}^{i}\}_{i=1}^{|B|}$ and $\{z_{m\to e}^{i}\}_{i=1}^{|B|}$ denote the target multimodal embeddings and their projected multilingual counterparts. We compute pairwise cosine similarities:

$$R_{e}=\big[\text{cos\_sim}(z_{e}^{i},z_{e}^{j})\big]_{i,j=1}^{|B|}, \quad (3)$$
$$R_{m\to e}=\big[\text{cos\_sim}(z_{m\to e}^{i},z_{m\to e}^{j})\big]_{i,j=1}^{|B|}, \quad (4)$$

where $R_{e},R_{m\to e}\in\mathbb{R}^{|B|\times|B|}$. Let $\text{triu}(\cdot)$ denote the upper-triangular part of a matrix, excluding the diagonal. The structure-preserving loss is then

$$\mathcal{L}_{\text{str}}=\text{MSE}\big(\text{triu}(R_{e}),\text{triu}(R_{m\to e})\big). \quad (5)$$

The final objective combines the alignment and structure-preserving terms:

$$\mathcal{L}=\lambda\,\mathcal{L}_{\text{align}}+\beta\,\mathcal{L}_{\text{str}}. \quad (6)$$

We experimented with alternative objectives such as L1 loss and similarity loss ($1-\text{cosine}(z_{e},z_{m\to e})$), but these underperform compared to $\mathcal{L}$. 
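The combined objective can be sketched as follows (an illustrative numpy version with randomly generated embeddings; in training, the projected batch comes from $\mathcal{F}$ and the targets from the frozen $T_{e}$):

```python
import numpy as np

def l2_normalize(x, eps=1e-9):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def m2m_loss(z_proj, z_e, lam=48.0, beta=1.0):
    """Combined objective of eq. (6): alignment MSE (eq. 2) plus the
    structure-preserving term (eq. 5) over off-diagonal pairwise cosines.
    Embeddings are L2-normalized, as in the retrieval setup."""
    z_proj, z_e = l2_normalize(z_proj), l2_normalize(z_e)
    l_align = np.mean((z_proj - z_e) ** 2)

    # Rows are unit-norm, so the Gram matrices hold pairwise cosine similarities.
    R_e, R_proj = z_e @ z_e.T, z_proj @ z_proj.T
    iu = np.triu_indices(len(z_e), k=1)  # triu(.): upper triangle, diagonal excluded
    l_str = np.mean((R_e[iu] - R_proj[iu]) ** 2)
    return lam * l_align + beta * l_str

rng = np.random.default_rng(0)
z_e = rng.standard_normal((8, 768))                 # batch of target embeddings
near = z_e + 0.01 * rng.standard_normal(z_e.shape)  # well-aligned projections
far = rng.standard_normal(z_e.shape)                # unaligned projections
print(m2m_loss(near, z_e) < m2m_loss(far, z_e))     # True
```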
MSE is particularly effective because it encourages $z_{m\to e}$ to fully substitute for $z_{e}$ in the latent space, rather than focusing solely on angular alignment as in contrastive or cosine-based losses. We avoid token- or word-level alignment, which overemphasizes linguistic form over semantics, and do not introduce a reverse mapping $\mathcal{F}_{e\to m}$ since it would disrupt the existing alignment between the other modality encoder $X_{e}$ and $T_{e}$. For retrieval tasks, both $\mathcal{L}_{\text{align}}$ and $\mathcal{L}_{\text{str}}$ are computed on L2-normalized embeddings to match the downstream evaluation, which uses cosine similarity. For generative tasks (e.g., text-to-image generation), we omit normalization and $\mathcal{L}_{\text{str}}$ (see Section [7](https://arxiv.org/html/2601.10096v1#S7 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")). 

4 Exploring Alignment Design Space
----------------------------------

We investigate the impact of varying the number of linear layers (1, 2, 4), adding or removing residual connections He et al. ([2015](https://arxiv.org/html/2601.10096v1#bib.bib39 "Deep residual learning for image recognition")) in $\mathcal{F}$, and testing different training objectives through ablation studies. 

Table 1: Comparison of I2T and T2I Recall@10 (averaged across 11 languages) for different training losses, numbers of linear layers, and residual connections (Skip Conn.) with M2M-aligned Jina-CLIP-v1 × M-MPNET on the XTD test set. $\lambda=48,\beta=1$. 

Experimental setup. We primarily use Jina-CLIP-v1 Koukounas et al. 
([2024a](https://arxiv.org/html/2601.10096v1#bib.bib33 "Jina clip: your clip model is also your text retriever")) as the multimodal model ($\mathcal{M}_{e}$) and Multilingual MPNET (M-MPNET) Reimers and Gurevych ([2020](https://arxiv.org/html/2601.10096v1#bib.bib45 "Making monolingual sentence embeddings multilingual using knowledge distillation")) as the multilingual text encoder ($T_{m}$). Following Carlsson et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib65 "Cross-lingual and multilingual CLIP")), we use a combination of Google Conceptual Captions (GCC) Sharma et al. ([2018](https://arxiv.org/html/2601.10096v1#bib.bib38 "Conceptual captions: a cleaned, hypernymed, image alt-text dataset for automatic image captioning")), MSCOCO Lin et al. ([2014](https://arxiv.org/html/2601.10096v1#bib.bib27 "Microsoft coco: common objects in context")), and VizWiz Bigham et al. ([2010](https://arxiv.org/html/2601.10096v1#bib.bib37 "VizWiz: nearly real-time answers to visual questions")) as our training dataset for learning $\mathcal{F}$. We remove duplicate sentences and create an $N$-sentence training split through random sampling. We experiment with various model architectures and training split sizes (Scaling). Unless specified otherwise, we train for 50 epochs on a 250K-sentence training split with a batch size of 64, using the AdamW optimizer Loshchilov ([2017](https://arxiv.org/html/2601.10096v1#bib.bib2 "Decoupled weight decay regularization")) with a learning rate of 3e-4, weight decay of 1e-2, and a linear learning rate scheduler with 50 warmup steps. All M2M-aligned models are trained on two RTX A5000 24GB Nvidia GPUs. For validation, we use XTD Aggarwal and Kale ([2020](https://arxiv.org/html/2601.10096v1#bib.bib24 "Towards zero-shot cross-lingual image retrieval")) English image–text pairs, saving the best checkpoint based on the mean of Text-to-Image (T2I) and Image-to-Text (I2T) recall. Both T2I and I2T recalls are averaged across Recall@1, 5, and 10. 
We evaluate these experiments on the Image-to-Text retrieval task using the XTD test set. 

Results and discussion. Table [1](https://arxiv.org/html/2601.10096v1#S4.T1 "Table 1 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") shows that our proposed loss (eq. [6](https://arxiv.org/html/2601.10096v1#S3.E6 "In 3 Methodology ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")) consistently outperforms alternatives. Two linear layers with no skip connection (row V6) achieve the best performance, yielding absolute gains of up to 0.7% over MSE (row V2) and similarity loss (row V3), and around 3–5% over L1 loss (row V4). Varying the number of linear layers or using a skip connection has minor effects (rows V6–V8), indicating the model is robust to these architectural choices. Assigning higher weight to $\mathcal{L}_{\text{align}}$ ($\lambda=48$) versus $\mathcal{L}_{\text{str}}$ ($\beta=1$) yields a 0.5% gain compared to equal weighting ($\lambda=1,\beta=1$), as shown in Figure [2](https://arxiv.org/html/2601.10096v1#S4.F2 "Figure 2 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). Overall, the combination of two linear layers (∼1M parameters) without a skip connection and $\lambda=48,\beta=1$ provides the strongest results. See Appendix [D](https://arxiv.org/html/2601.10096v1#A4 "Appendix D Ablations experiments on Image-text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") for detailed numbers. 

![Image 2: Refer to caption](https://arxiv.org/html/2601.10096v1/hyp_lambda_beta.png)

Figure 2: Impact of $\lambda$ and $\beta$ on XTD image–text retrieval. Increasing $\lambda$ while reducing $\beta$ leads to consistent performance gains. 
Data scaling experiments (Figure [3](https://arxiv.org/html/2601.10096v1#S4.F3 "Figure 3 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")) using the optimal configuration (2 linear layers, no residuals, $\lambda=48,\beta=1$) show that M2M achieves 85.8% Avg. Recall@10 with just 1,000 English sentences, without any multilingual or multimodal data. Performance saturates beyond 250K sentences; scaling to 2M sentences provides minimal improvements (0.1–0.2%). 

![Image 3: Refer to caption](https://arxiv.org/html/2601.10096v1/x1.png)

Figure 3: Effect of scaling training data on the XTD evaluation set for the M2M-aligned Jina-CLIP-v1 × M-MPNET model. 

Alignment quality. For the best configuration (row V6 in Table [1](https://arxiv.org/html/2601.10096v1#S4.T1 "Table 1 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")), we visualize alignment using t-SNE (Figure [4](https://arxiv.org/html/2601.10096v1#S4.F4 "Figure 4 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")). To avoid language bias, we first cluster image embeddings of Jina-CLIP-v1 (J-CLIP) from the XTD test set using KMeans (K=100), then select 17 clusters via farthest-cluster sampling from the 50 largest clusters, excluding very small clusters. From each cluster, we sample up to 10 points to prevent overcrowding. 
We compute t-SNE jointly for text embeddings from J-CLIP ($z_{e}$) and M-MPNET ($z_{m}$) before alignment (Figure [4](https://arxiv.org/html/2601.10096v1#S4.F4 "Figure 4 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), top), and for J-CLIP embeddings ($z_{e}$) with aligned embeddings ($z_{m\to e}$) after alignment (Figure [4](https://arxiv.org/html/2601.10096v1#S4.F4 "Figure 4 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), bottom) to visualize the effect of M2M. As shown, J-CLIP and M-MPNET embeddings occupy distinct regions before alignment; after alignment, J-CLIP embeddings align with the multilingual embeddings across clusters, demonstrating effective cross-lingual alignment. 

![Image 4: Refer to caption](https://arxiv.org/html/2601.10096v1/labse_clip_tsne_17.png)

Figure 4: t-SNE visualization (perplexity = 32) of text embeddings before and after alignment. Marker shapes denote visual clusters and colors indicate languages, with English J-CLIP text embeddings ($z_{e}$, or en_clip) in red. Before alignment (top), text embeddings ($z_{m}$ and $z_{e}$) are fragmented; after alignment (bottom), multilingual captions ($z_{m\to e}$) and J-CLIP text embeddings ($z_{e}$) align closely with shared visual clusters. 

Weight analysis. Using our best configuration (row V6 in Table [1](https://arxiv.org/html/2601.10096v1#S4.T1 "Table 1 ‣ 4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")), we analyze the effective linear map $W_{\text{eff}}=W_{2}W_{1}$. The singular value spectrum revealed that the map focuses on a compact, semantically relevant subspace, with an effective rank of ∼204. 
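The farthest-cluster selection used for this visualization can be sketched as follows (an illustrative numpy version; `centroids` and `sizes` stand in for the actual KMeans output on J-CLIP image embeddings, and the greedy farthest-point strategy is an assumption about the exact sampling rule):

```python
import numpy as np

def farthest_cluster_sample(centroids, sizes, n_pool=50, n_select=17):
    """Pick n_select mutually distant clusters from the n_pool largest ones,
    via greedy farthest-point sampling over cluster centroids."""
    pool = np.argsort(sizes)[::-1][:n_pool]  # indices of the largest clusters
    C = centroids[pool]
    chosen = [0]                             # seed with the largest cluster
    dists = np.linalg.norm(C - C[0], axis=1)
    while len(chosen) < n_select:
        nxt = int(np.argmax(dists))          # farthest from everything chosen so far
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(C - C[nxt], axis=1))
    return pool[np.array(chosen)]

rng = np.random.default_rng(0)
centroids = rng.standard_normal((100, 768))  # stand-in for K=100 KMeans centres
sizes = rng.integers(5, 200, size=100)       # stand-in for cluster sizes
picked = farthest_cluster_sample(centroids, sizes)
print(len(picked))  # 17
```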
The orthogonality deviation ($\|W^{\top}W-I\|_{F}\approx 554$) indicates that the transformation involves substantial _mixing and rescaling_ rather than a simple rotation. The effective bias is small ($\|\mathbf{b}_{\text{eff}}\|\approx 1.3$), suggesting that alignment primarily reshapes the geometry of embeddings rather than just shifting them. Because the effective rank is lower than $d_{e}=d_{m}=768$, Jina-CLIP-v1 text embeddings likely capture language-specific features that are absent in the language-agnostic M-MPNET space. Pairwise cosine distances between nearly identical sentences further support this hypothesis: Jina-CLIP-v1 embeddings vary the most (0.03–0.08), multilingual MPNET embeddings are tighter (0.01–0.04), and the mapped embeddings fall in between (0.01–0.05). This confirms that Jina-CLIP-v1 embeddings encode language-specific style variations, whereas the mapped embeddings preserve semantic consistency across sentence variants. See additional details and box plots in Appendix [E](https://arxiv.org/html/2601.10096v1#A5 "Appendix E Weight Analysis ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). 
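A minimal sketch of these diagnostics, assuming numpy and a 99%-energy cutoff for the effective rank (the exact cutoff used is not stated in the text):

```python
import numpy as np

def analyze_effective_map(W1, b1, W2, b2, energy=0.99):
    """Diagnostics for the composed map x -> W2 (W1 x + b1) + b2: effective rank
    from the singular-value spectrum, Frobenius deviation from orthogonality,
    and the norm of the effective bias. The 99%-energy cutoff is an assumption."""
    W_eff = W2 @ W1
    b_eff = W2 @ b1 + b2
    s = np.linalg.svd(W_eff, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    eff_rank = int(np.searchsorted(cum, energy) + 1)  # smallest k capturing `energy`
    ortho_dev = np.linalg.norm(W_eff.T @ W_eff - np.eye(W_eff.shape[1]))
    return eff_rank, float(ortho_dev), float(np.linalg.norm(b_eff))

# Sanity check on an exactly orthogonal, bias-free map: full rank, zero deviation.
rank, dev, bias = analyze_effective_map(np.eye(4), np.zeros(4), np.eye(4), np.zeros(4))
print(rank, dev, bias)  # 4 0.0 0.0
```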
5 Image-Text Retrieval
----------------------

| Model | XTD-T2I Avg | de | en | es | fr | it | jp | ko | pl | ru | tr | zh | XTD-I2T Avg | XTD-I2T en | XM3600 T2I | XM3600 I2T | Multi30K T2I | Multi30K I2T |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **English-only Vision-Language Models** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| E1: CLIP (ViT-L 336px) | 35.7 | 55.4 | 92.5 | 64.1 | 67.0 | 53.7 | 18.7 | 2.7 | 15.6 | 5.0 | 13.2 | 4.9 | 43.2 | 94.1 | 14.0 | 23.7 | 54.9 | 63.7 |
| E2: Jina-CLIP-v1 | 37.4 | 61.5 | 95.0 | 67.8 | 77.4 | 58.3 | 9.8 | 1.9 | 16.8 | 4.4 | 10.9 | 7.5 | 39.5 | 95.8 | 20.3 | 26.5 | 58.9 | 59.6 |
| E3: K-ALIGN | 47.6 | 73.3 | 94.0 | 67.1 | 80.0 | 72.8 | 26.2 | 12.6 | 37.6 | 34.0 | 19.1 | 7.0 | 53.1 | 93.8 | 22.9 | 31.0 | 67.5 | 70.1 |
| **Multilingual Vision-Language Models Trained on Supervised Multimodal and/or Multilingual Data** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| T1: mUSEM3L | 74.9 | 73.5 | 85.3 | 76.7 | 78.9 | 78.9 | 67.8 | 70.7 | 71.7 | 73.6 | 70.9 | 76.1 | – | – | – | – | – | – |
| T2: MCLIP-ST | 76.4 | 78.7 | 88.5 | 78.2 | 79.8 | 79.3 | 68.6 | 63.1 | 75.6 | 74.7 | 74.4 | 79.4 | 78.6 | 90.4 | 48.7 | 60.6 | 80.7 | 83.4 |
| T3: ALIGN-Base | 82.2* | – | – | 88.8 | – | 87.9 | – | 76.6 | 79.8 | 82.3 | 73.5 | 86.5 | – | – | – | – | – | – |
| T4: MURAL-Large | 90.2* | – | – | 92.9 | – | 91.8 | – | 88.1 | 91.0 | 87.2 | 89.5 | 89.7 | – | – | – | – | – | – |
| T5: LaBSE ViT-L/14 | 87.2 | 89.6 | 91.6 | 89.5 | 89.9 | 90.1 | 73.9 | 80.8 | 89.8 | 85.5 | 89.8 | 88.9 | 90.8 | 94.9 | 73.2 | 83.6 | 90.9 | 93.7 |
| T6: XLM-R-L ViT-B/32 | 88.0 | 88.7 | 91.8 | 89.1 | 89.4 | 89.8 | 81.0 | 82.1 | 91.4 | 86.1 | 88.8 | 89.3 | 89.9 | 91.7 | 75.2 | 84.5 | 89.2 | 91.0 |
| T7: XLM-R ViT-L/14 | 89.0 | 90.6 | 92.4 | 91.0 | 90.0 | 91.1 | 81.9 | 85.2 | 91.3 | 85.8 | 90.3 | 89.7 | 92.2 | 94.5 | 76.4 | 85.0 | 92.2 | 94.4 |
| T8: XLM-R-L ViT-B/16+ | 92.0 | 93.0 | 95.0 | 93.6 | 93.1 | 93.1 | 84.2 | 89.0 | 94.4 | 90.0 | 93.0 | 94.0 | 93.2 | 96.1 | 81.8 | 87.1 | 93.9 | 94.2 |
| T9: Jina-CLIP-v2 | 92.6 | 92.5 | 92.8 | 88.9 | 95.5 | 93.2 | 94.1 | 90.6 | 94.9 | 90.7 | 93.5 | 91.4 | 93.2 | 92.7 | 81.1 | 85.7 | 93.8 | 94.0 |
| T10: AltCLIP-M9 | 93.7* | – | 95.4 | 94.1 | 92.9 | 94.2 | 91.7 | 94.4 | – | 91.8 | – | 95.1 | – | – | – | – | – | – |
| T11: SIGLIP | 67.2 | 87.9 | 96.7 | 93.3 | 91.0 | 90.2 | 19.7 | 25.8 | 75.5 | 69.3 | 63.3 | 26.2 | 71.6 | 98.3 | 40.1 | 51.9 | 85.4 | 87.7 |
| T12: SIGLIP2 | 92.6 | 94.6 | 96.7 | 95.8 | 95.0 | 96.1 | 80.2 | 91.3 | 95.8 | 92.1 | 91.7 | 89.1 | 93.7 | 97.9 | 74.6 | 81.5 | 96.2 | 96.2 |
| **M2M-aligned Multilingual Multimodal Models Using English-only Text Data** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| M1: Jina-CLIP-v1 × LaBSE | 82.7 | 82.4 | 86.5 | 83.6 | 84.8 | 85.0 | 76.5 | 80.3 | 85.4 | 80.7 | 81.4 | 82.6 | 80.0 | 86.8 | 62.9 | 65.6 | 79.0 | 75.7 |
| M2: Jina-CLIP-v1 × M-MiniLM | 86.4 | 86.7 | 93.8 | 88.1 | 88.4 | 87.6 | 80.5 | 74.8 | 89.0 | 85.2 | 86.3 | 90.2 | 84.9 | 93.7 | 57.7 | 64.5 | 88.0 | 85.9 |
| M3: Jina-CLIP-v1 × JinaTextV3 | 88.0 | 91.1 | 94.9 | 89.6 | 90.6 | 90.9 | 80.2 | 80.9 | 90.4 | 85.5 | 88.0 | 85.9 | 87.5 | 95.0 | 67.2 | 72.2 | 87.8 | 87.5 |
| M4: Jina-CLIP-v1 × M-MPNET | 89.5 | 90.5 | 94.7 | 91.9 | 90.5 | 91.1 | 82.4 | 85.8 | 91.2 | 86.8 | 89.2 | 89.9 | 89.4 | 95.2 | 66.3 | 73.1 | 89.9 | 89.7 |
| M5: CLIP × M-MPNET | 84.6 | 85.6 | 90.8 | 86.6 | 85.3 | 86.2 | 79.1 | 80.5 | 85.1 | 82.3 | 84.6 | 84.5 | 86.2 | 93.9 | 56.2 | 67.3 | 90.1 | 92.1 |
| M6: K-ALIGN × M-MPNET | 86.8 | 87.5 | 92.6 | 89.6 | 87.9 | 87.8 | 78.9 | 83.0 | 89.0 | 83.5 | 87.0 | 87.9 | 86.0 | 94.4 | 59.0 | 68.2 | 91.0 | 90.2 |
| M7: SIGLIP × M-MPNET | 86.1 | 87.2 | 92.3 | 86.7 | 87.2 | 87.3 | 80.9 | 82.8 | 87.0 | 83.0 | 85.6 | 87.0 | 85.9 | 94.5 | 59.1 | 65.3 | 93.5 | 91.4 |

Table 2: Comparison of M2M-aligned model performance with English and multilingual CLIP-like models using Recall@10 across datasets. Results include reported XTD-T2I numbers for T1, T3–T8, and T10; the rest are computed using available checkpoints. * denotes an average computed over only the supported languages. 

Experimental setup. We evaluate English vision–language models ($\mathcal{M}_{e}$): CLIP Radford et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib54 "Learning transferable visual models from natural language supervision")), Jina-CLIP-v1 Koukounas et al. ([2024a](https://arxiv.org/html/2601.10096v1#bib.bib33 "Jina clip: your clip model is also your text retriever")), and K-ALIGN Yoon et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib13 "COYO-align")), and multilingual text encoders ($T_{m}$): LaBSE Feng et al. ([2020](https://arxiv.org/html/2601.10096v1#bib.bib49 "Language-agnostic bert sentence embedding")), M-MPNET and M-MiniLM Reimers and Gurevych ([2020](https://arxiv.org/html/2601.10096v1#bib.bib45 "Making monolingual sentence embeddings multilingual using knowledge distillation")), and Jina-Text-v3 Sturua et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib7 "Jina-embeddings-v3: multilingual embeddings with task lora")). 
Aligned models are denoted as $\mathcal{M}_{e}\times T_{m}$ (e.g., CLIP × LaBSE). 

We compare M2M-aligned models against: (i) English-only $\mathcal{M}_{e}$ (CLIP, Jina-CLIP-v1, K-ALIGN), and (ii) multilingual multimodal baselines (MMMs): mUSEM3L Aggarwal and Kale ([2020](https://arxiv.org/html/2601.10096v1#bib.bib24 "Towards zero-shot cross-lingual image retrieval")), MCLIP-ST (Multilingual CLIP Reimers and Gurevych ([2020](https://arxiv.org/html/2601.10096v1#bib.bib45 "Making monolingual sentence embeddings multilingual using knowledge distillation")) from SentenceTransformers ([https://www.sbert.net/](https://www.sbert.net/))), MURAL-Large Jain et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib26 "MURAL: multimodal, multitask retrieval across languages")), ALIGN-Base Jia et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib6 "Scaling up visual and vision-language representation learning with noisy text supervision")) (via MURAL), XLM-R ViT variants (ViT-L/14, ViT-B/32, ViT-B/16+) and LaBSE ViT-L/14 Carlsson et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib65 "Cross-lingual and multilingual CLIP")), Jina-CLIP-v2 Koukounas et al. ([2024b](https://arxiv.org/html/2601.10096v1#bib.bib55 "Jina-clip-v2: multilingual multimodal embeddings for text and images")), and AltCLIP-M9 Chen et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib52 "AltCLIP: altering the language encoder in clip for extended language capabilities")). Supported languages are listed in Appendix [C](https://arxiv.org/html/2601.10096v1#A3 "Appendix C List supported languages for multilingual and/or multimodal models ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). 

Evaluation. We evaluate on three multilingual image–text datasets: XTD (11 languages) Aggarwal and Kale ([2020](https://arxiv.org/html/2601.10096v1#bib.bib24 "Towards zero-shot cross-lingual image retrieval")), which includes MIC Rajendran et al. 
([2016](https://arxiv.org/html/2601.10096v1#bib.bib58 "Bridge correlational neural networks for multilingual multimodal representation learning")) (de, fr) and STAIR Captions Yoshikawa et al. ([2017](https://arxiv.org/html/2601.10096v1#bib.bib57 "STAIR captions: constructing a large-scale Japanese image caption dataset")) (jp); XM3600 (36 languages) Thapliyal et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib56 "Crossmodal-3600: a massively multilingual multimodal evaluation dataset")); and Multi30K (4 languages) Elliott et al. ([2016](https://arxiv.org/html/2601.10096v1#bib.bib22 "Multi30K: multilingual english-german image descriptions"), [2017](https://arxiv.org/html/2601.10096v1#bib.bib23 "Findings of the second shared task on multimodal machine translation and multilingual image description")); Barrault et al. ([2018](https://arxiv.org/html/2601.10096v1#bib.bib25 "Findings of the third shared task on multimodal machine translation")). Following prior work Aggarwal and Kale ([2020](https://arxiv.org/html/2601.10096v1#bib.bib24 "Towards zero-shot cross-lingual image retrieval")); Jain et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib26 "MURAL: multimodal, multitask retrieval across languages")); Carlsson et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib65 "Cross-lingual and multilingual CLIP")), we use Recall@10 with cosine similarity as the ranking score. For XTD, we report Text-to-Image retrieval scores for all languages and the average Recall@10. For XM3600, Multi30K, and Image-to-Text retrieval tasks, we report only the mean Recall@10 across all languages; per-language results are provided in Appendix [F](https://arxiv.org/html/2601.10096v1#A6 "Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). + +Results & Analysis.
For the XTD Text-to-Image (T2I) task, the M2M-aligned Jina-CLIP-v1 × M-MPNET model (row M4, Table [2](https://arxiv.org/html/2601.10096v1#S5.T2 "Table 2 ‣ 5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")) outperforms several MMMs trained on multimodal and/or multilingual paired data (rows T1-T3, T5-T7). For English, our models (rows M3 & M4) match English-trained baselines (rows E1-E3). For subsequent comparisons, we use Jina-CLIP-v2 as the SOTA model, as it achieves the best average performance across all languages. + +On XTD, our best M2M-aligned model (row M4) performs 3.1% lower on T2I and 3.8% lower on Image-to-Text (I2T) compared to SOTA. This gap is expected, as models like Jina-CLIP-v2 are explicitly trained on massive multilingual-multimodal data: ~400M non-English image-text pairs from CommonPool Gadre et al. ([2023](https://arxiv.org/html/2601.10096v1#bib.bib35 "DataComp: in search of the next generation of multimodal datasets")) and 1.2M multilingual synthetic captions. For Multi30K, the performance gap is similar: 3.9% for T2I and 4.3% for I2T. For XM3600, the gap widens to 14.8% (T2I) and 12.6% (I2T), likely due to the larger retrieval space (Multi30K and XTD test sets have 1K instances, while XM3600 has 3,600 images and ~7K captions). Detailed per-language results for XM3600 and Multi30K are provided in Appendix [F](https://arxiv.org/html/2601.10096v1#A6 "Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). + +6 Audio-Text Retrieval +---------------------- + +Table 3: Performance comparison of Audio-Text Models on AudioCaps and Clotho datasets using Recall@10 for Text-to-Audio (T2A) retrieval, averaged across M-MPNET-supported languages. * denotes reported numbers from Wu et al.
([2022](https://arxiv.org/html/2601.10096v1#bib.bib64 "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation")) and the rest are computed from checkpoints. + +Experimental Setup. We use LAION-CLAP Wu et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib64 "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation")) as the audio-text multimodal model (ℳ_e) and align it with M-MPNET (T_m). We experiment with two variants of LAION-CLAP: (i) CLAP-HTSAT-fused, trained on AudioCaps Kim et al. ([2019](https://arxiv.org/html/2601.10096v1#bib.bib63 "AudioCaps: generating captions for audios in the wild")), Clotho Drossos et al. ([2019](https://arxiv.org/html/2601.10096v1#bib.bib62 "Clotho: an audio captioning dataset")), and LAION-Audio-630k Wu et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib64 "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation")); and (ii) CLAP-General, trained on additional speech and music data. For alignment, we use English captions from AudioCaps, Clotho, and WavCaps Mei et al. ([2023](https://arxiv.org/html/2601.10096v1#bib.bib61 "WavCaps: a chatgpt-assisted weakly-labelled audio captioning dataset for audio-language multimodal research")). The AudioCaps validation set is used to select the best checkpoint. + +Synthetic Evaluation Datasets. Due to the lack of multilingual audio-text evaluation datasets, we extend the AudioCaps (4,875 captions) and Clotho (5,225 captions) test sets to 33 new languages using machine translation. For 11 Indic languages (bn, gu, hi, kn, ml, mr, ne, pa, ta, te, ur), we use the English-to-Indic translation model from IndicTrans2 Gala et al.
([2023](https://arxiv.org/html/2601.10096v1#bib.bib60 "IndicTrans2: towards high-quality and accessible machine translation models for all 22 scheduled indian languages")), and for 22 other languages (ar, zh-Hans, zh-Hant, cs, nl, fr, de, el, he, id, it, ja, ko, fa, pl, pt, ro, ru, es, tr, uk, vi), we use Aya-23-35B Aryabumi et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib59 "Aya 23: open weight releases to further multilingual progress")). + +Based on Aya-23-35B’s reported results on the FLoRes-200 test set Costa-jussà et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib11 "No language left behind: scaling human-centered machine translation")) and manual spot checks, we assume that the translations for the 22 languages are of reasonably high quality. The FLoRes-200 test set is also used to identify the optimal prompt for translation. To evaluate translation quality for Indic languages, we back-translate to English using IndicTrans2 (Indic-to-English). Across the 11 Indic languages, AudioCaps achieves a mean spBLEU Post ([2018](https://arxiv.org/html/2601.10096v1#bib.bib32 "A call for clarity in reporting BLEU scores")) of 48.7 and chrF++ Popović ([2017](https://arxiv.org/html/2601.10096v1#bib.bib20 "ChrF++: words helping character n-grams")) of 63.6, while Clotho achieves 47.4 and 59.6, respectively. + +Additional details on dataset licenses and translation quality assessment are provided in Appendix [B](https://arxiv.org/html/2601.10096v1#A2 "Appendix B Model & Data License ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and [G](https://arxiv.org/html/2601.10096v1#A7 "Appendix G Curation of Synthetic evaluation dataset ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). Due to the lack of comparable multilingual baselines, we report Recall@10 for our method only on these synthetic multilingual test sets.
Language-wise Recall@10 scores are provided in Appendix [H](https://arxiv.org/html/2601.10096v1#A8 "Appendix H CLAP ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") for both AudioCaps and Clotho. + +Results & Analysis. Table [3](https://arxiv.org/html/2601.10096v1#S6.T3 "Table 3 ‣ 6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") shows that our method generalizes effectively to modalities beyond images. On AudioCaps, our approach performs 6% below the SOTA on Text-to-Audio retrieval (T2A), while on Clotho the gap is 2.3% (T2A). + +To understand this gap, we compute Text-to-Text (T2T) Recall@10 on XM3600 (image-text) and AudioCaps (audio-text), leveraging multiple captions per instance. M-MPNET achieves 62.1% T2T Recall@10 on XM3600, comparable to Jina-CLIP-v1 (63.8%), but only 73.8% on AudioCaps, substantially lower than CLAP-General (80.2%). This suggests that M-MPNET, while effective for image-caption encoding, underperforms for audio-caption encoding. + +Qualitative analysis confirms strong semantic alignment. For the query “A man speaks with some clicks and then loud long scrapes”, the top three retrieved audio captions were: 1) “Sanding and filing then a man speaks”, 2) “A man speaks with some clicking and some sanding”, and 3) “A man speaks with a high-frequency hum with some banging and clanking”. The ground truth audio ranked 10th, but its captions—“A man talking as metal clacks followed by metal scraping against a metal surface” and “A man is speaking followed by saw blade noises”—closely match the top retrieved results, demonstrating robust semantic retrieval. Additional examples and details are in Appendix [H](https://arxiv.org/html/2601.10096v1#A8 "Appendix H CLAP ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").
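The Recall@10 protocol used throughout these retrieval experiments ranks candidates by cosine similarity and checks whether the paired ground-truth item appears in the top 10. A minimal NumPy sketch (the function name and the simplifying assumption that query i is paired with item i are ours for illustration):

```python
import numpy as np

def recall_at_k(query_emb, item_emb, k=10):
    """Recall@k for cross-modal retrieval: rank items for each query
    by cosine similarity and count queries whose paired item (assumed
    here to sit at the same row index) lands in the top k."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    sims = q @ v.T                           # (n_queries, n_items)
    topk = np.argsort(-sims, axis=1)[:, :k]  # top-k item indices per query
    hits = (topk == np.arange(len(q))[:, None]).any(axis=1)
    return hits.mean()
```

Once both sides are L2-normalized, cosine similarity reduces to a dot product, so the full similarity matrix is a single matrix multiplication.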
+ +7 Cross-lingual Text-to-Image Generation +---------------------------------------- + +![Image 5: Refer to caption](https://arxiv.org/html/2601.10096v1/original_images/BASELINE_the_city_bus_is_traveling_down_the_road.png) + +(a) FLUX (en) + +![Image 6: Refer to caption](https://arxiv.org/html/2601.10096v1/original_images/CLIP_ONLY_BASELINE_the_city_bus_is_traveling_down_the_road.png) + +(b) FLUX CLIP (en) + +![Image 7: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/el_the_city_bus_is_traveling_down_the_road.png) + +(c) Ours (el) + +![Image 8: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/fa_the_city_bus_is_traveling_down_the_road.png) + +(d) Ours (fa) + +![Image 9: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/fr_the_city_bus_is_traveling_down_the_road.png) + +(e) Ours (fr) + +![Image 10: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/he_the_city_bus_is_traveling_down_the_road.png) + +(f) Ours (he) + +![Image 11: Refer to caption](https://arxiv.org/html/2601.10096v1/original_images/T5_the_city_bus_is_traveling_down_the_road.png) + +(g) FLUX-T5 (en) + +![Image 12: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/es_the_city_bus_is_traveling_down_the_road.png) + +(h) Ours (es) + +![Image 13: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/hi_the_city_bus_is_traveling_down_the_road.png) + +(i) Ours (hi) + +![Image 14: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/id_the_city_bus_is_traveling_down_the_road.png) + +(j) Ours (id) + +![Image 15: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/ko_the_city_bus_is_traveling_down_the_road.png) + +(k) Ours (ko) + +![Image 16: Refer to caption](https://arxiv.org/html/2601.10096v1/captioned/ru_the_city_bus_is_traveling_down_the_road.png) + +(l) Ours (ru) + +Figure 5: Images generated by FLUX text-to-image model using the prompt “The city bus is traveling down the road” in multiple languages 
(non-English captions shown on images). Our M2M-aligned model produces images of similar quality to the baseline FLUX (both T5 and CLIP encoders), FLUX-T5, and FLUX-CLIP models. + +Our method is task-agnostic and extends naturally to generative tasks such as Text-to-Image generation. Since M2M aligns sentence-level (CLS) representations, we experiment with FLUX.1-dev (FLUX) Labs ([2024](https://arxiv.org/html/2601.10096v1#bib.bib18 "FLUX")), a 12B-parameter Text-to-Image model that conditions on the CLS embedding from a CLIP encoder. FLUX is chosen for its public availability, competitive performance Yang et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib9 "1.58-bit flux")), and dual text encoders: CLIP (CLS conditioning) and T5 Raffel et al. ([2020](https://arxiv.org/html/2601.10096v1#bib.bib17 "Exploring the limits of transfer learning with a unified text-to-text transformer")) (token conditioning). + +To learn the projection map ℱ, we align the M-MPNET encoder (T_m) with the CLIP encoder from FLUX (T_e). (In a qualitative comparison of 100 generations from FLUX × LaBSE vs. FLUX × M-MPNET, the latter consistently produced higher-quality images.) Since FLUX uses both CLIP and T5, we consider four variants: (i) FLUX, which inputs text to both CLIP and T5; (ii) FLUX-CLIP, which inputs text to CLIP and a generic prompt (“A photo of:”) to T5 (among the tested prompts “An image of:”, “A picture of:”, and “A photo of:”, the last yielded the best results); (iii) FLUX-T5, which inputs text to T5 and a generic prompt (“A photo of:”) to CLIP; and (iv) FLUX × M-MPNET, which inputs text to the M2M-aligned M-MPNET encoder and a generic prompt to T5. + +Training Setup & Evaluation.
We follow Section [4](https://arxiv.org/html/2601.10096v1#S4 "4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), training for 10 epochs without validation, using bfloat16 precision and MSE loss (instead of Eq. [6](https://arxiv.org/html/2601.10096v1#S3.E6 "In 3 Methodology ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")) on unnormalized representations. Unlike in retrieval tasks, ℒ_str degrades performance here: for generation, preserving scale information (an exact mapping of representations) matters more than structural similarity. Images are generated at 512×512 resolution with guidance scale 3.5, 10 inference steps, and a fixed seed. Following prior work Ramesh et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib28 "Zero-shot text-to-image generation")); Rombach et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib34 "High-resolution image synthesis with latent diffusion models")); Saharia et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib31 "Photorealistic text-to-image diffusion models with deep language understanding")), we sample 30K captions from MSCOCO2014 Lin et al. ([2014](https://arxiv.org/html/2601.10096v1#bib.bib27 "Microsoft coco: common objects in context")) for the English validation set. For multilingual evaluation, we extend the captions into 9 languages (fr, el, he, id, ko, fa, ru, es, hi) using IndicTrans2 (hi) and Aya-23-35B (others). We report FID Heusel et al. ([2017](https://arxiv.org/html/2601.10096v1#bib.bib30 "GANs trained by a two time-scale update rule converge to a local nash equilibrium")) and Inception Score (IS) Salimans et al. ([2016](https://arxiv.org/html/2601.10096v1#bib.bib29 "Improved techniques for training gans")). + +Results & Analysis.
FLUX × M-MPNET achieves a strong Inception Score of 31.81 averaged across languages, including 35.9 ± 0.57 on English, surpassing trained models such as LDM Rombach et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib34 "High-resolution image synthesis with latent diffusion models")) (30.29 ± 0.42), CogView Ding et al. ([2021](https://arxiv.org/html/2601.10096v1#bib.bib16 "Cogview: mastering text-to-image generation via transformers")) (18.2), and LAFITE Zhou et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib15 "Towards language-free training for text-to-image generation")) (26.02). However, FID is poor: 40.9 (FLUX-CLIP) and 43.4 (FLUX × M-MPNET), compared to 23.4 for both FLUX and FLUX-T5. The identical FID for FLUX and FLUX-T5 suggests FLUX relies heavily on T5 token conditioning and can generate high-quality images without CLIP input. Since our setup replaces CLIP with an aligned encoder and uses a generic T5 prompt, generated images are less faithful to the text (e.g., missing objects).
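The training recipe above (an MSE objective on unnormalized CLS representations) reduces, in its simplest single-layer form, to ordinary least squares. A hedged NumPy sketch, noting that the actual M2M mapping uses a few linear layers trained by gradient descent and the function names here are illustrative:

```python
import numpy as np

def fit_linear_alignment(x_multilingual, y_multimodal):
    """Closed-form least-squares fit of one linear layer (with bias)
    mapping multilingual text embeddings onto the frozen multimodal
    text space: argmin_W ||[X, 1] W - Y||^2. This mirrors the MSE
    objective on unnormalized representations; M2M itself stacks a
    few such layers and optimizes them by gradient descent."""
    X = np.hstack([x_multilingual, np.ones((len(x_multilingual), 1))])
    W, *_ = np.linalg.lstsq(X, y_multimodal, rcond=None)
    return W

def apply_alignment(x_multilingual, W):
    """Project multilingual embeddings into the multimodal space."""
    X = np.hstack([x_multilingual, np.ones((len(x_multilingual), 1))])
    return X @ W
```

Because the map is fit on English caption pairs only, any language the multilingual encoder already places near its English paraphrases is carried into the multimodal space for free, which is the zero-shot transfer the paper exploits.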
+ +![Image 17: Refer to caption](https://arxiv.org/html/2601.10096v1/green/hi_the_city_bus_is_traveling_down_the_road.png) + +(a) Ours (hi) + +![Image 18: Refer to caption](https://arxiv.org/html/2601.10096v1/green/fr_the_city_bus_is_traveling_down_the_road.png) + +(b) Ours (fr) + +![Image 19: Refer to caption](https://arxiv.org/html/2601.10096v1/cartoon/hi_the_city_bus_is_traveling_down_the_road.png) + +(c) Ours (hi) + +![Image 20: Refer to caption](https://arxiv.org/html/2601.10096v1/cartoon/fr_the_city_bus_is_traveling_down_the_road.png) + +(d) Ours (fr) + +![Image 21: Refer to caption](https://arxiv.org/html/2601.10096v1/van_gogh/hi_the_city_bus_is_traveling_down_the_road.png) + +(e) Ours (hi) + +![Image 22: Refer to caption](https://arxiv.org/html/2601.10096v1/van_gogh/fr_the_city_bus_is_traveling_down_the_road.png) + +(f) Ours (fr) + +![Image 23: Refer to caption](https://arxiv.org/html/2601.10096v1/green/ru_the_city_bus_is_traveling_down_the_road.png) + +(g) Ours (ru) + +![Image 24: Refer to caption](https://arxiv.org/html/2601.10096v1/green/fa_the_city_bus_is_traveling_down_the_road.png) + +(h) Ours (fa) + +![Image 25: Refer to caption](https://arxiv.org/html/2601.10096v1/cartoon/ru_the_city_bus_is_traveling_down_the_road.png) + +(i) Ours (ru) + +![Image 26: Refer to caption](https://arxiv.org/html/2601.10096v1/cartoon/fa_the_city_bus_is_traveling_down_the_road.png) + +(j) Ours (fa) + +![Image 27: Refer to caption](https://arxiv.org/html/2601.10096v1/van_gogh/ru_the_city_bus_is_traveling_down_the_road.png) + +(k) Ours (ru) + +![Image 28: Refer to caption](https://arxiv.org/html/2601.10096v1/van_gogh/fa_the_city_bus_is_traveling_down_the_road.png) + +(l) Ours (fa) + +“A green-themed photo of: ” + +“A cartoon photo of: ” + +“A Van Gogh-style photo of: ” + +Figure 6: Images generated from multilingual translations of input prompt: “The city bus is traveling down the road” using FLUX ×\times M-MPNET model, with theme prompts in T5 encoder to enhance image 
quality and style. + +Despite this limitation, qualitative results (Fig. [5](https://arxiv.org/html/2601.10096v1#S7.F5 "Figure 5 ‣ 7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")) show diverse, semantically relevant images with slightly reduced fidelity. For FLUX-CLIP and FLUX × M-MPNET, we also observe _hallucinated_ generations—well-formed but text-misaligned outputs. These are not random noise but coherent, object-rich images, likely caused by weak signal from T5 due to generic prompts. Adding more specific object/style cues to the T5 input alleviates this, as illustrated in Fig. [6](https://arxiv.org/html/2601.10096v1#S7.F6 "Figure 6 ‣ 7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). Language-wise FID and IS score breakdowns and examples are in Appendix [I](https://arxiv.org/html/2601.10096v1#A9 "Appendix I Cross-lingual Text-to-Image Generation. ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). + +8 Conclusion +------------ + +We introduce M2M, an efficient method to align multilingual latent spaces with multimodal spaces using only a few linear layers and English text data. Unlike existing approaches requiring large-scale multilingual or multimodal corpora, M2M reduces resource needs while maintaining strong performance across tasks and modalities. + +On XTD-T2I retrieval, it achieves 94.9% Recall@10 for English and 89.5% averaged across 11 languages, demonstrating robust zero-shot transfer. Qualitative analyses, including t-SNE visualizations, show projected multilingual embeddings align closely with multimodal representations. Beyond image-text retrieval, M2M generalizes to Audio-Text retrieval and cross-lingual Text-to-Image generation.
We release synthetic evaluation datasets: AudioCaps and Clotho extended to 33 languages, and MSCOCO-30K captions extended to 9 languages, providing a unified open benchmark. While promising, further improvements are possible, particularly via token-level alignment. Overall, M2M shows that lightweight, data-efficient strategies can bridge multilingual and multimodal spaces by leveraging implicit alignment between languages and modalities. + +Limitations +----------- + +Need for local alignment. Our method focuses on aligning _global_, sentence-level representations in encoder-based models and demonstrates strong performance in this setting. However, in its current form, it does not provide alignment at the token level and therefore does not directly extend to Multimodal Large Language Models (MLLMs), where effective generation relies on fine-grained representations. Tasks such as text-to-image generation and cross-lingual skill transfer would benefit from token-level alignment signals alongside high-level semantic consistency. Extending the proposed framework to support local alignment is a natural and promising direction for future work. + +Joint Cross-modal Representations. Our work effectively aligns multilingual and multimodal representations from dual encoder models, where each modality is encoded individually. Joint cross-modal encoders generate representations by combining multiple modality representations through shared architectural components. The effectiveness of our method for joint cross-modal representations remains to be explored. + +Lack of Human-verified multilingual-multimodal evaluation set. Finding high-quality standard multilingual evaluation sets for Audio-Text retrieval and Text-to-Image Generation tasks is challenging. To address this, we curated synthetic parallel evaluation data for AudioCaps (160K samples), Clotho (172K samples), and MSCOCO-30K (270K samples). 
Due to the large scale of the data, human verification of the translated captions was not feasible. While we use objective metrics such as spBLEU and chrF++ to assess dataset quality, these measures alone are not sufficient, and without human verification some errors may persist in the evaluation dataset. + +References +---------- + +* P. Aggarwal and A. Kale (2020). Towards zero-shot cross-lingual image retrieval. arXiv preprint arXiv:2012.05107. +* N. Alam, K. R. Kanjula, S. Guthikonda, T. Chung, B. K. S. Vegesna, A. Das, A. Susevski, R. S. Chan, S. Uddin, S. B. Islam, et al. (2024). Maya: an instruction finetuned multilingual multimodal model. arXiv preprint arXiv:2412.07112. +* V. Aryabumi, J. Dang, D. Talupuru, S. Dash, D. Cairuz, H. Lin, B. Venkitesh, M. Smith, K. Marchisio, S. Ruder, A. F. Locatelli, J. Kreutzer, N. Frosst, P. Blunsom, M. Fadaee, A. Ustun, and S. Hooker (2024). Aya 23: open weight releases to further multilingual progress. arXiv abs/2405.15032. [Link](https://api.semanticscholar.org/CorpusID:270045533) +* L. Barrault, F. Bougares, L. Specia, C. Lala, D. Elliott, and S. Frank (2018). Findings of the third shared task on multimodal machine translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pp. 304–323. +* J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. Miller, R. Miller, A. Tatarowicz, B. A. White, S. White, and T. Yeh (2010). VizWiz: nearly real-time answers to visual questions. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. [Link](https://api.semanticscholar.org/CorpusID:52804681) +* F. Carlsson, P. Eisen, F. Rekathati, and M. Sahlgren (2022). Cross-lingual and multilingual CLIP. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, Marseille, France, pp. 6848–6854. [Link](https://aclanthology.org/2022.lrec-1.739) +* G. Chen, L. Hou, Y. Chen, W. Dai, L. Shang, X. Jiang, Q. Liu, J. Pan, and W. Wang (2023). MCLIP: multilingual CLIP via cross-lingual transfer. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, pp. 13028–13043. [Link](https://aclanthology.org/2023.acl-long.728/) +* Z. Chen, G. Liu, B. Zhang, F. Ye, Q. Yang, and L. Y. Wu (2022). AltCLIP: altering the language encoder in CLIP for extended language capabilities. arXiv abs/2211.06679. [Link](https://api.semanticscholar.org/CorpusID:253511222) +* M. R. Costa-jussà, J. Cross, O. Çelebi, M. Elbayad, K. Heafield, K. Heffernan, E. Kalbassi, J. Lam, D. Licht, J. Maillard, et al. (2022). No language left behind: scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. +* J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. [Link](https://aclanthology.org/N19-1423/) +* M. Ding, Z. Yang, W. Hong, W. Zheng, C. Zhou, D. Yin, J. Lin, X. Zou, Z. Shao, H. Yang, et al. (2021). CogView: mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems 34, pp. 19822–19835. +* K. Drossos, S. Lipping, and T. Virtanen (2019). Clotho: an audio captioning dataset. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 736–740. [Link](https://api.semanticscholar.org/CorpusID:204800739) +* B. Elizalde, S. Deshmukh, M. A. Ismail, and H. Wang (2023). CLAP: learning audio concepts from natural language supervision. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. [Link](https://api.semanticscholar.org/CorpusID:249605738) +* D. Elliott, S. Frank, L. Barrault, F. Bougares, and L. Specia (2017). Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, Copenhagen, Denmark, pp. 215–233. [Link](http://www.aclweb.org/anthology/W17-4718) +* D. Elliott, S. Frank, K. Sima’an, and L. Specia (2016). Multi30K: multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pp. 70–74. [Link](http://www.aclweb.org/anthology/W16-3210) +* F. Feng, Y. Yang, D. M. Cer, N. Arivazhagan, and W. Wang (2020). Language-agnostic BERT sentence embedding. In Annual Meeting of the Association for Computational Linguistics. [Link](https://api.semanticscholar.org/CorpusID:220347683) +* S. Y. Gadre, G. Ilharco, A. Fang, J. Hayase, G. Smyrnis, T. Nguyen, R. Marten, M. Wortsman, D. Ghosh, J. Zhang, E. Orgad, R. Entezari, G. Daras, S. Pratt, V. Ramanujan, Y. Bitton, K. Marathe, S. Mussmann, R. Vencu, M. Cherti, R. Krishna, P. W. Koh, O. Saukh, A. J. Ratner, S. Song, H. Hajishirzi, A. Farhadi, R. Beaumont, S. Oh, A. G. Dimakis, J. Jitsev, Y. Carmon, V. Shankar, and L. Schmidt (2023). DataComp: in search of the next generation of multimodal datasets. arXiv abs/2304.14108. [Link](https://api.semanticscholar.org/CorpusID:258352812) +* J. P. Gala, P. A. Chitale, A. Raghavan, V. Gumma, S. Doddapaneni, M. AswanthKumar, J. A. Nawale, A. Sujatha, R. Puduppully, V. Raghavan, P. Kumar, M. M. Khapra, R. Dabre, and A. Kunchukuttan (2023). IndicTrans2: towards high-quality and accessible machine translation models for all 22 scheduled Indian languages. Transactions on Machine Learning Research 2023. [Link](https://api.semanticscholar.org/CorpusID:271601569) +* J. C. Gower (1975). Generalized Procrustes analysis. Psychometrika 40, pp. 33–51. +* K. He, X. Zhang, S. Ren, and J. Sun (2015). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. [Link](https://api.semanticscholar.org/CorpusID:206594692) +* M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Neural Information Processing Systems. [Link](https://api.semanticscholar.org/CorpusID:326772) +* A. Jain, M. Guo, K. Srinivasan, T.
Chen, S. Kudugunta, C. Jia, Y. Yang, and J. Baldridge (2021)MURAL: multimodal, multitask retrieval across languages. ArXiv abs/2109.05125. External Links: [Link](https://api.semanticscholar.org/CorpusID:237490989)Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p2.2 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p3.1 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* C. Jia, Y. Yang, Y. Xia, Y. Chen, Z. Parekh, H. Pham, Q. Le, Y. Sung, Z. Li, and T. Duerig (2021)Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, pp.4904–4916. Cited by: [§5](https://arxiv.org/html/2601.10096v1#S5.p2.2 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* C. D. Kim, B. Kim, H. Lee, and G. Kim (2019)AudioCaps: generating captions for audios in the wild. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), J. Burstein, C. Doran, and T. Solorio (Eds.), Minneapolis, Minnesota, pp.119–132. External Links: [Link](https://aclanthology.org/N19-1011/), [Document](https://dx.doi.org/10.18653/v1/N19-1011)Cited by: [item 3](https://arxiv.org/html/2601.10096v1#S1.I1.i3.p1.1 "In 1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§6](https://arxiv.org/html/2601.10096v1#S6.p1.2 "6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Koukounas, G. Mastrapas, M. Günther, B. Wang, S. Martens, I. Mohr, S. 
Sturua, M. K. Akram, J. F. Mart’inez, S. Ognawala, S. Guzman, M. Werk, N. Wang, and H. Xiao (2024a)Jina clip: your clip model is also your text retriever. ArXiv abs/2405.20204. External Links: [Link](https://api.semanticscholar.org/CorpusID:270123621)Cited by: [§4](https://arxiv.org/html/2601.10096v1#S4.p2.4 "4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p1.4 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Koukounas, G. Mastrapas, B. Wang, M. K. Akram, S. Eslami, M. Gunther, I. Mohr, S. Sturua, S. Martens, N. Wang, and H. Xiao (2024b)Jina-clip-v2: multilingual multimodal embeddings for text and images. ArXiv abs/2412.08802. External Links: [Link](https://api.semanticscholar.org/CorpusID:274656285)Cited by: [Table 4](https://arxiv.org/html/2601.10096v1#A3.T4.1.3.2.1.1.1 "In Appendix C List supported languages for multilingual and/or multimodal models ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§1](https://arxiv.org/html/2601.10096v1#S1.p2.1 "1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p2.2 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* B. F. Labs (2024)FLUX. Note: [https://github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)Accessed: 2025-02-11 Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p1.1 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* Z. Lähner and M. Moeller (2024)On the direct alignment of latent spaces. 
In Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, pp.158–169. Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p2.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* Y. Li, C. Chang, S. Rawls, I. Vulić, and A. Korhonen (2023)Translation-enhanced multilingual text-to-image generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), A. Rogers, J. Boyd-Graber, and N. Okazaki (Eds.), Toronto, Canada, pp.9174–9193. External Links: [Link](https://aclanthology.org/2023.acl-long.510/), [Document](https://dx.doi.org/10.18653/v1/2023.acl-long.510)Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* T. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014)Microsoft coco: common objects in context. In European Conference on Computer Vision, External Links: [Link](https://api.semanticscholar.org/CorpusID:14113767)Cited by: [§4](https://arxiv.org/html/2601.10096v1#S4.p2.4 "4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§7](https://arxiv.org/html/2601.10096v1#S7.p3.2 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* I. Loshchilov (2017)Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Cited by: [§4](https://arxiv.org/html/2601.10096v1#S4.p2.4 "4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* V. Maiorca, L. Moschella, M. Fumero, F. Locatello, and E. Rodolà (2024a)Latent space translation via inverse relative projection. arXiv preprint arXiv:2406.15057. 
Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p2.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* V. Maiorca, L. Moschella, A. Norelli, M. Fumero, F. Locatello, and E. Rodolà (2024b)Latent space translation via semantic alignment. Advances in Neural Information Processing Systems 36. Cited by: [§1](https://arxiv.org/html/2601.10096v1#S1.p4.1 "1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§2](https://arxiv.org/html/2601.10096v1#S2.p2.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* X. Mei, C. Meng, H. Liu, Q. Kong, T. Ko, C. Zhao, M. D. Plumbley, Y. Zou, and W. Wang (2023)WavCaps: a chatgpt-assisted weakly-labelled audio captioning dataset for audio-language multimodal research. IEEE/ACM Transactions on Audio, Speech, and Language Processing 32, pp.3339–3354. External Links: [Link](https://api.semanticscholar.org/CorpusID:257834090)Cited by: [§6](https://arxiv.org/html/2601.10096v1#S6.p1.2 "6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* L. Moschella, V. Maiorca, M. Fumero, A. Norelli, F. Locatello, and E. Rodolà (2022)Relative representations enable zero-shot latent space communication. ArXiv abs/2209.15430. External Links: [Link](https://api.semanticscholar.org/CorpusID:252668844)Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p2.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Norelli, M. Fumero, V. Maiorca, L. Moschella, E. Rodolà, and F. Locatello (2022)ASIF: coupled data turns unimodal models to multimodal without training. ArXiv abs/2210.01738. 
External Links: [Link](https://api.semanticscholar.org/CorpusID:252693369)Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p2.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Obukhov, M. Seitzer, P. Wu, S. Zhydenko, J. Kyl, and E. Y. Lin (2020)High-fidelity performance metrics for generative models in pytorch. Zenodo. Note: Version: 0.3.0, DOI: 10.5281/zenodo.4957738 External Links: [Link](https://github.com/toshas/torch-fidelity), [Document](https://dx.doi.org/10.5281/zenodo.4957738)Cited by: [Appendix I](https://arxiv.org/html/2601.10096v1#A9.p1.1 "Appendix I Cross-lingual Text-to-Image Generation. ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* M. Popović (2017)ChrF++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, O. Bojar, C. Buck, R. Chatterjee, C. Federmann, Y. Graham, B. Haddow, M. Huck, A. J. Yepes, P. Koehn, and J. Kreutzer (Eds.), Copenhagen, Denmark, pp.612–618. External Links: [Link](https://aclanthology.org/W17-4770/), [Document](https://dx.doi.org/10.18653/v1/W17-4770)Cited by: [§6](https://arxiv.org/html/2601.10096v1#S6.p3.1 "6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* M. Post (2018)A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, O. Bojar, R. Chatterjee, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, A. J. Yepes, P. Koehn, C. Monz, M. Negri, A. Névéol, M. Neves, M. Post, L. Specia, M. Turchi, and K. Verspoor (Eds.), Brussels, Belgium, pp.186–191. External Links: [Link](https://aclanthology.org/W18-6319/), [Document](https://dx.doi.org/10.18653/v1/W18-6319)Cited by: [§6](https://arxiv.org/html/2601.10096v1#S6.p3.1 "6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Radford, J. W. 
Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever (2021)Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, External Links: [Link](https://api.semanticscholar.org/CorpusID:231591445)Cited by: [§1](https://arxiv.org/html/2601.10096v1#S1.p2.1 "1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p1.4 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Radford (2018)Improving language understanding by generative pre-training. Cited by: [§1](https://arxiv.org/html/2601.10096v1#S1.p2.1 "1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020)Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research 21 (140), pp.1–67. Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p1.1 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* J. Rajendran, M. M. Khapra, S. Chandar, and B. Ravindran (2016)Bridge correlational neural networks for multilingual multimodal representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, K. Knight, A. Nenkova, and O. Rambow (Eds.), San Diego, California, pp.171–181. 
External Links: [Link](https://aclanthology.org/N16-1021/), [Document](https://dx.doi.org/10.18653/v1/N16-1021)Cited by: [§5](https://arxiv.org/html/2601.10096v1#S5.p3.1 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever (2021)Zero-shot text-to-image generation. ArXiv abs/2102.12092. External Links: [Link](https://api.semanticscholar.org/CorpusID:232035663)Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p3.2 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* N. Reimers and I. Gurevych (2020)Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, External Links: [Link](https://arxiv.org/abs/2004.09813)Cited by: [Table 4](https://arxiv.org/html/2601.10096v1#A3.T4.1.5.4.1.1.1 "In Appendix C List supported languages for multilingual and/or multimodal models ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§4](https://arxiv.org/html/2601.10096v1#S4.p2.4 "4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p1.4 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p2.2 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer (2021)High-resolution image synthesis with latent diffusion models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.10674–10685. 
External Links: [Link](https://api.semanticscholar.org/CorpusID:245335280)Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p3.2 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§7](https://arxiv.org/html/2601.10096v1#S7.p4.4 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* E. Rosenfeld, P. Nakkiran, H. Pouransari, O. Tuzel, and F. Faghri (2022)APE: aligning pretrained encoders to quickly learn aligned multimodal representations. ArXiv abs/2210.03927. External Links: [Link](https://api.semanticscholar.org/CorpusID:263792597)Cited by: [§1](https://arxiv.org/html/2601.10096v1#S1.p4.1 "1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. L. Denton, S. K. S. Ghasemipour, B. K. Ayan, S. S. Mahdavi, R. G. Lopes, T. Salimans, J. Ho, D. J. Fleet, and M. Norouzi (2022)Photorealistic text-to-image diffusion models with deep language understanding. ArXiv abs/2205.11487. External Links: [Link](https://api.semanticscholar.org/CorpusID:248986576)Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p3.2 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016)Improved techniques for training gans. ArXiv abs/1606.03498. External Links: [Link](https://api.semanticscholar.org/CorpusID:1687220)Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p3.2 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* P. Sharma, N. Ding, S. Goodman, and R. Soricut (2018)Conceptual captions: a cleaned, hypernymed, image alt-text dataset for automatic image captioning. 
In Annual Meeting of the Association for Computational Linguistics, External Links: [Link](https://api.semanticscholar.org/CorpusID:51876975)Cited by: [§4](https://arxiv.org/html/2601.10096v1#S4.p2.4 "4 Exploring Alignment Design Space ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* S. Sturua, I. Mohr, M. K. Akram, M. Günther, B. Wang, M. Krimmel, F. Wang, G. Mastrapas, A. Koukounas, N. Wang, et al. (2024)Jina-embeddings-v3: multilingual embeddings with task lora. arXiv preprint arXiv:2409.10173. Cited by: [Table 4](https://arxiv.org/html/2601.10096v1#A3.T4.1.3.2.1.1.1 "In Appendix C List supported languages for multilingual and/or multimodal models ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§5](https://arxiv.org/html/2601.10096v1#S5.p1.4 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* A. V. Thapliyal, J. Pont Tuset, X. Chen, and R. Soricut (2022)Crossmodal-3600: a massively multilingual multimodal evaluation dataset. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Y. Goldberg, Z. Kozareva, and Y. Zhang (Eds.), Abu Dhabi, United Arab Emirates, pp.715–729. External Links: [Link](https://aclanthology.org/2022.emnlp-main.45/), [Document](https://dx.doi.org/10.18653/v1/2022.emnlp-main.45)Cited by: [§5](https://arxiv.org/html/2601.10096v1#S5.p3.1 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* Y. Wu, K. Chen, T. Zhang, Y. Hui, T. Berg-Kirkpatrick, and S. Dubnov (2022)Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.1–5. 
External Links: [Link](https://api.semanticscholar.org/CorpusID:253510826)Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [Table 3](https://arxiv.org/html/2601.10096v1#S6.T3 "In 6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§6](https://arxiv.org/html/2601.10096v1#S6.p1.2 "6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* Z. Yan, H. Dinkel, Y. Wang, J. Liu, J. Zhang, Y. Wang, and B. Wang (2024)Bridging language gaps in audio-text retrieval. ArXiv abs/2406.07012. External Links: [Link](https://api.semanticscholar.org/CorpusID:270379641)Cited by: [§1](https://arxiv.org/html/2601.10096v1#S1.p2.1 "1 Introduction ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* C. Yang, C. Liu, X. Deng, D. Kim, X. Mei, X. Shen, and L. Chen (2024)1.58-bit flux. arXiv preprint arXiv:2412.18653. Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p1.1 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* F. Ye, G. Liu, X. Wu, and L. Wu (2024)Altdiffusion: a multilingual text-to-image diffusion model. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, pp.6648–6656. Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* B. Yoon, Y. Lee, and W. Baek (2022)COYO-align. 
Note: [https://github.com/kakaobrain/coyo-align](https://github.com/kakaobrain/coyo-align)Cited by: [§5](https://arxiv.org/html/2601.10096v1#S5.p1.4 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* Y. Yoshikawa, Y. Shigeto, and A. Takeuchi (2017)STAIR captions: constructing a large-scale Japanese image caption dataset. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), R. Barzilay and M. Kan (Eds.), Vancouver, Canada, pp.417–421. External Links: [Link](https://aclanthology.org/P17-2066/), [Document](https://dx.doi.org/10.18653/v1/P17-2066)Cited by: [§5](https://arxiv.org/html/2601.10096v1#S5.p3.1 "5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* X. Zhai, X. Wang, B. Mustafa, A. Steiner, D. Keysers, A. Kolesnikov, and L. Beyer (2021)LiT: zero-shot transfer with locked-image text tuning. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.18102–18112. External Links: [Link](https://api.semanticscholar.org/CorpusID:244117175)Cited by: [§2](https://arxiv.org/html/2601.10096v1#S2.p1.1 "2 Related Work ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). +* Y. Zhou, R. Zhang, C. Chen, C. Li, C. Tensmeyer, T. Yu, J. Gu, J. Xu, and T. Sun (2022)Towards language-free training for text-to-image generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.17907–17917. Cited by: [§7](https://arxiv.org/html/2601.10096v1#S7.p4.4 "7 Cross-lingual Text-to-Image Generation ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). 

Appendix A Potential Risks
--------------------------

Various biases (gender, race, etc.) have been investigated for multimodal models, primarily in English. 
Our method extends the capability of multimodal models to many languages, including low-resource languages. However, very few works detect and mitigate biases for these languages. Additionally, since we use English as the anchor, biases present in the English multimodal model can manifest in the resulting multilingual multimodal model.

Appendix B Model & Data License
-------------------------------

All models taken from the sentence-transformers library ([https://www.sbert.net/](https://www.sbert.net/)), namely Multilingual CLIP (MCLIP-ST), Multilingual MPNET (M-MPNET), and Multilingual MiniLM (M-MiniLM), as well as LaBSE, KakaoBrain-ALIGN, Jina-CLIP-v1, and LAION-CLAP (CLAP-General, CLAP-HTSAT-Fused), are under the Apache License 2.0. For FLUX.1-dev, generated outputs can be used for personal, scientific, and commercial purposes as described in the [FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). Multilingual CLIP Carlsson et al. ([2022](https://arxiv.org/html/2601.10096v1#bib.bib65 "Cross-lingual and multilingual CLIP")), OpenAI-CLIP, and IndicTrans2 are under the MIT License. Jina-CLIP-v2, Jina-embeddings-v3, and AYA-23-35B are under CC-BY-NC-4.0. Use of any combination of models aligned using our method must adhere to the licenses of all individual models.

We release our extended datasets in new languages for AudioCaps, Clotho, and MSCOCO2014-30K under the CC-BY-NC-4.0 License, adhering to the licenses of the source datasets and of the models used to generate the data (AudioCaps: MIT License; Clotho: [Tampere University License (non-commercial with attribution)](https://github.com/audio-captioning/clotho-dataset?tab=License-1-ov-file#readme); MSCOCO: CC-BY-4.0). 
+ 

Appendix C List supported languages for multilingual and/or multimodal models
-----------------------------------------------------------------------------

Table 4: List of multilingual text encoders and multilingual multimodal models and their supported languages.

Different multilingual text encoders and multilingual CLIP models support different languages. For a fairer comparison, we also report metrics averaged over model-supported languages (e.g. Table[3](https://arxiv.org/html/2601.10096v1#S6.T3 "Table 3 ‣ 6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and Table[9](https://arxiv.org/html/2601.10096v1#A6.T9 "Table 9 ‣ F.2 Results on model-supported languages ‣ Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text")). Table[4](https://arxiv.org/html/2601.10096v1#A3.T4 "Table 4 ‣ Appendix C List supported languages for multilingual and/or multimodal models ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") lists the models and their supported languages.

Appendix D Ablation experiments on Image-Text Retrieval
--------------------------------------------------------

Tables [5](https://arxiv.org/html/2601.10096v1#A4.T5 "Table 5 ‣ Appendix D Ablations experiments on Image-text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and [6](https://arxiv.org/html/2601.10096v1#A4.T6 "Table 6 ‣ Appendix D Ablations experiments on Image-text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") show that our method outperforms all other training objectives on Text-to-Image retrieval on the XTD dataset. The impact of a high λ is significant for both Text-to-Image and Image-to-Text retrieval (0.5% gain in Avg. 
Recall@10), as shown in Tables [5](https://arxiv.org/html/2601.10096v1#A4.T5 "Table 5 ‣ Appendix D Ablations experiments on Image-text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and [6](https://arxiv.org/html/2601.10096v1#A4.T6 "Table 6 ‣ Appendix D Ablations experiments on Image-text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").

Table 5: Comparison of the Recall@10 metric across different training losses and settings (number of linear layers; presence or absence of residual connections (Skip Conn.) between linear layers) for M2M-aligned Jina-CLIP-v1 × M-MPNET on the XTD dataset for the Text-to-Image retrieval task. λ₁=48, λ₂=1, β₁=1.

Table 6: Comparison of the Recall@10 metric across different training losses and settings (number of linear layers; presence or absence of residual connections (Skip Conn.) between linear layers) for M2M-aligned Jina-CLIP-v1 × M-MPNET on the XTD dataset for the Image-to-Text retrieval task. λ₁=48, λ₂=1, β₁=1.

Appendix E Weight Analysis
--------------------------

To further investigate the behavior of the mapping, we performed a pairwise cosine distance analysis on clusters of semantically identical sentences that differ only in style or phrasing. Distances were computed for (i) Jina-CLIP-v1 embeddings, (ii) multilingual MPNET embeddings, and (iii) our mapped embeddings. All embeddings were ℓ₂-normalized, and only the strict upper triangle of the distance matrix was considered to avoid double-counting symmetric pairs. 
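The per-cluster distance computation described above can be sketched as follows. This is a minimal illustration, not the exact analysis script: the random array stands in for a cluster of real sentence embeddings, and any encoder's (n, d) output can be substituted.

```python
import numpy as np

def pairwise_cosine_distances(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine distances for one cluster of paraphrases.

    embeddings: (n, d) array of sentence embeddings.
    Returns the flattened strict upper triangle of the distance
    matrix, so each symmetric pair is counted exactly once.
    """
    # L2-normalize so the dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T          # (n, n) cosine similarities
    dist = 1.0 - sim                 # cosine distance
    iu = np.triu_indices(len(dist), k=1)
    return dist[iu]

# Example: a cluster of 5 paraphrases yields 10 unique pairwise distances.
rng = np.random.default_rng(0)
cluster = rng.normal(size=(5, 768))  # stand-in for encoder output
d = pairwise_cosine_distances(cluster)
assert d.shape == (10,)
```

The reported ranges (e.g. 0.028–0.083 for Jina-CLIP-v1) then correspond to summary statistics of these per-cluster distance vectors, visualized as one boxplot per cluster.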
Results show that Jina-CLIP-v1 embeddings exhibit the largest variability (0.028–0.083), reflecting sensitivity to small syntactic or stylistic changes. Multilingual MPNET embeddings are more compact (0.011–0.043), consistent with their language-agnostic nature, while our mapped embeddings fall in between (0.011–0.046), indicating successful alignment to the Jina-CLIP-v1 space while preserving semantic consistency. These observations support our hypothesis that Jina-CLIP-v1 text encoders capture language-specific features absent in language-agnostic embeddings like M-MPNET, and that the learned linear map effectively projects onto the relevant subspace for retrieval.

![Image 29: Refer to caption](https://arxiv.org/html/2601.10096v1/sentence_deviation.png)

Figure 7: Pairwise cosine distances for clusters of semantically identical sentences with stylistic or syntactic variations. Jina-CLIP-v1 embeddings (blue) show the largest variability, reflecting sensitivity to language-specific phrasing. Multilingual MPNET embeddings (orange) are more compact, consistent with language-agnostic representations, and our mapped embeddings (green) fall in between, indicating successful alignment while preserving semantic consistency. Each boxplot summarizes the distribution of distances within a cluster.

Figure[7](https://arxiv.org/html/2601.10096v1#A5.F7 "Figure 7 ‣ Appendix E Weight Analysis ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") presents boxplots for all clusters, illustrating the distribution of pairwise distances and confirming the trends described above. Below, we list the clusters of nearly identical sentences used for this analysis.

### E.1 Sentence Variation Clusters

* •

Dog / Animal

 * –A dog running in the park.
 * –The dog is running in the park.
 * –A dog runs in the park.
 * –A dog is running through the park.
 * –In the park, a dog runs. 
+ +* • + +Cat + + * –A cat sleeps on the sofa. + * –The cat is sleeping on the sofa. + * –A sleeping cat is on the sofa. + * –On the sofa, a cat is sleeping. + * –A cat is lying asleep on the sofa. + +* • + +Human Actions: Cycling + + * –A person is riding a bicycle on the street. + * –A person is riding a bike along the street. + * –A cyclist is riding on the street. + * –The person rides a bicycle on the street. + * –On the street, a person is riding a bicycle. + +* • + +Human Actions: Drawing + + * –A child is drawing on a piece of paper. + * –The child is drawing on a sheet of paper. + * –On paper, a child is drawing. + * –A child is drawing on paper. + * –The child draws on a piece of paper. + +* • + +Nature / Scenery: Sunset + + * –A sunset over the mountains. + * –The sun is setting over the mountains. + * –The sun is setting near the mountains. + * –The mountains during sunset. + * –The setting sun is over the mountains. + +* • + +Nature / Scenery: River + + * –A river flows through the forest. + * –The river is flowing through the forest. + * –Through the forest flows a river. + * –The forest has a river flowing through it. + * –In the forest, a river flows. + +* • + +Objects / Still Life: Car + + * –A red car is parked on the street. + * –On the street, a red car is parked. + * –A car is parked on the street, and the car is red. + * –A car colored red is parked on the street. + * –A red-colored car is parked on the street. + +* • + +Objects / Still Life: Coffee + + * –A cup of coffee is on the table. + * –On the table is a cup of coffee. + * –The cup of coffee is on the table. + * –There is a coffee cup on the table. + * –A mug of coffee sits on the table. + +Appendix F Image-Text Retrieval: Additional Results +--------------------------------------------------- + +### F.1 Language-wise Recall on XM3600 & Multi30K + +Table 7: Recall@10 across 36 languages for XM3600 on I2T and T2I retrieval task using M2M-aligned Jina-CLIP-v1 ×\times M-MPNET. 
Table 8: Recall@10 across 4 languages for Multi30K on the I2T and T2I retrieval tasks using M2M-aligned Jina-CLIP-v1 × M-MPNET.

Tables[7](https://arxiv.org/html/2601.10096v1#A6.T7 "Table 7 ‣ F.1 Language-wise Recall on XM3600 & Multi30K ‣ Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and [8](https://arxiv.org/html/2601.10096v1#A6.T8 "Table 8 ‣ F.1 Language-wise Recall on XM3600 & Multi30K ‣ Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") show the language-wise performance of our M2M-aligned models on the XM3600 and Multi30K datasets, respectively. Interestingly, CLIP × M-MPNET outperforms Jina-CLIP-v1 × M-MPNET by 2.4% on I2T and 0.2% on T2I on the Multi30K dataset.

### F.2 Results on model-supported languages

Table 9: Performance of M2M-aligned models in comparison with English and multilingual CLIP-like models on the Recall@10 metric, restricted to supported languages, for the XM3600 and Multi30K datasets.

Similar to our results for Image-Text retrieval in Table[2](https://arxiv.org/html/2601.10096v1#S5.T2 "Table 2 ‣ 5 Image-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), in Table[9](https://arxiv.org/html/2601.10096v1#A6.T9 "Table 9 ‣ F.2 Results on model-supported languages ‣ Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") we report the Recall@10 metric averaged only over languages supported by the respective multilingual text encoder or multilingual CLIP. The supported languages for each model are listed in Table[4](https://arxiv.org/html/2601.10096v1#A3.T4 "Table 4 ‣ Appendix C List supported languages for multilingual and/or multimodal models ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").
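As a reference for how these tables are read, Recall@10 for retrieval can be computed from a text–image similarity matrix. The sketch below uses NumPy with random vectors standing in for real model embeddings; the helper name `recall_at_k` is ours, not from the released code.

```python
import numpy as np

def recall_at_k(text_emb, image_emb, k=10):
    """Fraction of text queries whose paired image (same row index)
    appears among the k most similar images."""
    # L2-normalize so the dot product equals cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sims = t @ v.T                          # (n_texts, n_images)
    topk = np.argsort(-sims, axis=1)[:, :k] # indices of k best images per query
    gold = np.arange(len(t))[:, None]       # ground truth: the matching index
    return float((topk == gold).any(axis=1).mean())

rng = np.random.default_rng(0)
texts = rng.normal(size=(100, 512))   # stand-in multilingual text embeddings
images = rng.normal(size=(100, 512))  # stand-in image embeddings
print(recall_at_k(texts, images, k=10))
```

I2T recall is obtained the same way with the arguments swapped.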
### F.3 Reproducibility experiments

To show that our method’s performance is reproducible, we run our Image-Text retrieval experiments twice and report the mean and standard deviation in Table[10](https://arxiv.org/html/2601.10096v1#A6.T10 "Table 10 ‣ F.3 Reproducibility experiments ‣ Appendix F Image-Text Retrieval: Additional Results ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), showing that performance is stable across varying random seeds.

Table 10: Performance of M2M-aligned Jina-CLIP-v1 × M-MPNET on the Recall@10 metric, averaged (± standard deviation) over 2 runs across 11 languages, for the Image-Text retrieval task on the XTD dataset.

Appendix G Curation of Synthetic evaluation dataset
---------------------------------------------------

For Aya-23-35B, we use translation prompts to generate synthetic data following Alam et al. ([2024](https://arxiv.org/html/2601.10096v1#bib.bib10 "Maya: an instruction finetuned multilingual multimodal model")). We experiment with zero-shot and 3-shot prompts, and use the FLoRes-200 dataset to assess the quality of the translation prompts. The zero-shot prompt is straightforward: we pass the input sentence and prompt the model to generate a translation in the target language. For the 3-shot prompt, for each input English text to be translated, we pick 3 examples from a sampling set created by combining the FLoRes-200 validation and test sets (excluding the current input text). We compute the cosine similarity between the input text and the sampling set using LaBSE, and select the top 3 texts and their corresponding target-language translations as few-shot examples. The zero-shot translation prompt performs better on the FLoRes-200 dataset Costa-jussà et al.
([2022](https://arxiv.org/html/2601.10096v1#bib.bib11 "No language left behind: scaling human-centered machine translation")) across 14 languages (ar, zho-Hant, fr, de, he, hi, it, jp, ko, pl, ru, es, tr, vi), achieving a mean spBLEU of 39.7 and a mean chrF++ of 51.5, compared to the 3-shot prompt with a mean spBLEU of 37.2 and a mean chrF++ of 47.4. Given these results, we apply the zero-shot prompt to generate Aya-23-35B translations for all 22 languages. Language-wise spBLEU and chrF++ scores for Aya-23-35B are shown in Table[13](https://arxiv.org/html/2601.10096v1#A7.T13 "Table 13 ‣ Appendix G Curation of Synthetic evaluation dataset ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), and scores for the backtranslated Indic translations are shown in Table[14](https://arxiv.org/html/2601.10096v1#A7.T14 "Table 14 ‣ Appendix G Curation of Synthetic evaluation dataset ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). The zero-shot and 3-shot prompt templates are listed in Tables[11](https://arxiv.org/html/2601.10096v1#A7.T11 "Table 11 ‣ Appendix G Curation of Synthetic evaluation dataset ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and [12](https://arxiv.org/html/2601.10096v1#A7.T12 "Table 12 ‣ Appendix G Curation of Synthetic evaluation dataset ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").

Table 11: Zero-shot prompt used for generating translations from Aya-23-35B. Text in square brackets is a placeholder for the actual input.

Table 12: 3-shot prompt template used to compare the effect of few-shot examples on translation quality for Aya-23-35B. Text in square brackets is a placeholder for the actual input.

Table 13: spBLEU and chrF++ scores for zero-shot and 3-shot prompts on FLoRes-200 using the Aya-23-35B model. zh in the table denotes Traditional Chinese (zh-Hant).
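The 3-shot example selection described above amounts to a nearest-neighbour lookup in sentence-embedding space. A minimal sketch, with random vectors standing in for LaBSE sentence embeddings and a hypothetical helper name `pick_few_shot`:

```python
import numpy as np

def pick_few_shot(query_emb, pool_embs, pool_pairs, n_shots=3):
    """Return the n_shots (source, translation) pairs from the sampling
    pool whose source sentences are most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ q                       # cosine similarity of each pool item
    top = np.argsort(-sims)[:n_shots]  # indices of the n_shots best matches
    return [pool_pairs[i] for i in top]

# Stand-ins: in the paper the pool is the FLoRes-200 dev+test sets and the
# embeddings come from LaBSE; here we fabricate both for illustration.
rng = np.random.default_rng(0)
pool_pairs = [(f"en sentence {i}", f"target translation {i}") for i in range(50)]
pool_embs = rng.normal(size=(50, 768))
query_emb = pool_embs[7] + 0.01 * rng.normal(size=768)  # near pool item 7
shots = pick_few_shot(query_emb, pool_embs, pool_pairs)
print(shots[0])  # the most similar pool pair
```

The selected pairs are then formatted into the 3-shot prompt template of Table 12.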
Table 14: spBLEU and chrF++ scores on English backtranslations of the AudioCaps and Clotho datasets using IndicTrans2 models.

Appendix H CLAP
---------------

### H.1 Language-wise Recall on Synthetic Evaluation Dataset

We show the language-wise performance of M2M-aligned CLAP-general × M-MPNET on AudioCaps in Table[15](https://arxiv.org/html/2601.10096v1#A8.T15 "Table 15 ‣ H.1 Language-wise Recall on Synthetic Evaluation Dataset. ‣ Appendix H CLAP ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and on Clotho in Table[16](https://arxiv.org/html/2601.10096v1#A8.T16 "Table 16 ‣ H.1 Language-wise Recall on Synthetic Evaluation Dataset. ‣ Appendix H CLAP ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").

Table 15: Recall@10 metric across 34 languages on the AudioCaps dataset for the Audio-to-Text (A2T) and Text-to-Audio (T2A) retrieval tasks using the M2M-aligned CLAP-general × M-MPNET model.

Table 16: Recall@10 metric across 34 languages on the Clotho dataset for the Audio-to-Text (A2T) and Text-to-Audio (T2A) retrieval tasks using the M2M-aligned CLAP-general × M-MPNET model.

### H.2 Quantifying the qualitative analysis and more examples

We see in Table[3](https://arxiv.org/html/2601.10096v1#S6.T3 "Table 3 ‣ 6 Audio-Text Retrieval ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") that M2M-aligned models do not match the performance of the baseline CLAP models. For English, qualitative analysis revealed that the audio retrieved for a query text had high semantic similarity to the query. To verify this qualitative analysis, we perform the following quantitative test. For each query text, we retrieve the top five audios using the M2M-aligned CLAP-general × M-MPNET model. Next, we compute the cosine similarity between the query text and the captions of the retrieved audios using the CLAP-general model.
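This test amounts to retrieving the top-k audios with one model and re-scoring the retrieved captions against the query with another. A minimal sketch, with random vectors standing in for the CLAP and M2M-aligned embeddings and a helper name (`caption_agreement`) of our own choosing:

```python
import numpy as np

def caption_agreement(query_text_emb, audio_embs, caption_embs, k=5):
    """Retrieve the top-k audios for a query, then return the mean cosine
    similarity between the query and the retrieved audios' captions."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    q, a, c = norm(query_text_emb), norm(audio_embs), norm(caption_embs)
    topk = np.argsort(-(a @ q))[:k]  # indices of the k closest audios
    return float((c[topk] @ q).mean())

# Stand-ins for embeddings from the M2M-aligned retriever (audio side)
# and the CLAP-general text encoder (query and caption side).
rng = np.random.default_rng(0)
query = rng.normal(size=256)
audios = rng.normal(size=(200, 256))
captions = audios + 0.1 * rng.normal(size=(200, 256))  # captions track audios
score = caption_agreement(query, audios, captions, k=5)
print(round(score, 3))
```

A higher score indicates that the captions of the retrieved audios sit close to the query in the scoring model's embedding space.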
On average, we see a higher cosine similarity for CLAP-general × M-MPNET (0.7) compared to CLAP-general (0.65), demonstrating semantic agreement between the queries and the retrieved audio. More examples are listed in Table[17](https://arxiv.org/html/2601.10096v1#A8.T17 "Table 17 ‣ H.2 Quantifying the qualitative analysis and more examples ‣ Appendix H CLAP ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").

Table 17: Captions of the audio retrieved for a query text (Text-to-Audio retrieval task) using M2M-aligned CLAP-general × M-MPNET.

Appendix I Cross-lingual Text-to-Image Generation
-------------------------------------------------

Both Inception and FID scores are computed using the torch-fidelity Obukhov et al. ([2020](https://arxiv.org/html/2601.10096v1#bib.bib19 "High-fidelity performance metrics for generative models in pytorch")) package 12 12 12[https://github.com/toshas/torch-fidelity](https://github.com/toshas/torch-fidelity). Language-wise FID scores are shown in Table[19](https://arxiv.org/html/2601.10096v1#A9.T19 "Table 19 ‣ Appendix I Cross-lingual Text-to-Image Generation. ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") and Inception scores in Table[18](https://arxiv.org/html/2601.10096v1#A9.T18 "Table 18 ‣ Appendix I Cross-lingual Text-to-Image Generation. ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"). For English, our aligned model achieves a better FID score than FLUX-CLIP, though both remain high compared to FLUX (the upper-bound/skyline model). More examples of generated images are shown in Figure[8](https://arxiv.org/html/2601.10096v1#A9.F8 "Figure 8 ‣ Appendix I Cross-lingual Text-to-Image Generation. ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text"), Figure[9](https://arxiv.org/html/2601.10096v1#A9.F9 "Figure 9 ‣ Appendix I Cross-lingual Text-to-Image Generation. 
‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text") & Figure[10](https://arxiv.org/html/2601.10096v1#A9.F10 "Figure 10 ‣ Appendix I Cross-lingual Text-to-Image Generation. ‣ Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text").

Table 18: Inception score for MSCOCO-30K on 512 × 512 images (10 inference steps; guidance scale = 3.5).

Table 19: FID scores computed on our MSCOCO 30K synthetic multilingual evaluation dataset.

![Image 30: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/BASELINE_a_snow_caped_mountain_is_behind_a_large_lake.png)

(a) FLUX (en)

![Image 31: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/en_a_snow_caped_mountain_is_behind_a_large_lake.png)

(b) Ours (en)

![Image 32: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/el_a_snow_caped_mountain_is_behind_a_large_lake.png)

(c) Ours (el)

![Image 33: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/fa_a_snow_caped_mountain_is_behind_a_large_lake.png)

(d) Ours (fa)

![Image 34: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/fr_a_snow_caped_mountain_is_behind_a_large_lake.png)

(e) Ours (fr)

![Image 35: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/he_a_snow_caped_mountain_is_behind_a_large_lake.png)

(f) Ours (he)

![Image 36: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/CLIP_ONLY_BASELINE_a_snow_caped_mountain_is_behind_a_large_lake.png)

(g) FLUX CLIP (en)

![Image 37: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/ru_a_snow_caped_mountain_is_behind_a_large_lake.png)

(h) Ours (ru)

![Image 38: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/hi_a_snow_caped_mountain_is_behind_a_large_lake.png)

(i) Ours (hi)

![Image 39: Refer to 
caption](https://arxiv.org/html/2601.10096v1/snow_mountain/id_a_snow_caped_mountain_is_behind_a_large_lake.png) + +(j) Ours (id) + +![Image 40: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/ko_a_snow_caped_mountain_is_behind_a_large_lake.png) + +(k) Ours (ko) + +![Image 41: Refer to caption](https://arxiv.org/html/2601.10096v1/snow_mountain/es_a_snow_caped_mountain_is_behind_a_large_lake.png) + +(l) Ours (es) + +Figure 8: Images generated by FLUX text-to-image model using the prompt “a snow caped mountain is behind a large lake” in multiple languages. Our M2M-aligned model produces similar quality images compared to baseline FLUX (both T5 and CLIP encoders), and FLUX-CLIP models. + +![Image 42: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/BASELINE_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(a) FLUX (en) + +![Image 43: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/en_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(b) Ours (en) + +![Image 44: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/el_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(c) Ours (el) + +![Image 45: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/fa_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(d) Ours (fa) + +![Image 46: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/fr_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(e) Ours (fr) + +![Image 47: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/he_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(f) Ours (he) + +![Image 48: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/CLIP_ONLY_BASELINE_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(g) FLUX CLIP (en) + +![Image 49: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/ru_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(h) Ours (ru) + +![Image 
50: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/hi_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(i) Ours (hi) + +![Image 51: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/id_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(j) Ours (id) + +![Image 52: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/ko_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(k) Ours (ko) + +![Image 53: Refer to caption](https://arxiv.org/html/2601.10096v1/flowers/es_assortment_of_colorful_flowers_in_glass_vase_on_table.png) + +(l) Ours (es) + +Figure 9: Images generated by FLUX text-to-image model using the prompt “Assortment of colorful flowers in glass vase on table.” in multiple languages. Our M2M-aligned model produces similar quality images compared to baseline FLUX (both T5 and CLIP encoders), and FLUX-CLIP models. + +(1) T5 prompt: “A photo of: ” + +![Image 54: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/BASELINE_a_cat_sitting_on_a_bed_behind_a_book.png) + +(a) FLUX (en) + +![Image 55: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/en_a_cat_sitting_on_a_bed_behind_a_book.png) + +(b) Ours (en) + +![Image 56: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/el_a_cat_sitting_on_a_bed_behind_a_book.png) + +(c) Ours (el) + +![Image 57: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/fa_a_cat_sitting_on_a_bed_behind_a_book.png) + +(d) Ours (fa) + +![Image 58: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/fr_a_cat_sitting_on_a_bed_behind_a_book.png) + +(e) Ours (fr) + +![Image 59: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/he_a_cat_sitting_on_a_bed_behind_a_book.png) + +(f) Ours (he) + +![Image 60: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/CLIP_ONLY_BASELINE_a_cat_sitting_on_a_bed_behind_a_book.png) + +(g) FLUX CLIP (en) + +![Image 61: Refer to 
caption](https://arxiv.org/html/2601.10096v1/cat_book/ru_a_cat_sitting_on_a_bed_behind_a_book.png) + +(h) Ours (ru) + +![Image 62: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/hi_a_cat_sitting_on_a_bed_behind_a_book.png) + +(i) Ours (hi) + +![Image 63: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/id_a_cat_sitting_on_a_bed_behind_a_book.png) + +(j) Ours (id) + +![Image 64: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/ko_a_cat_sitting_on_a_bed_behind_a_book.png) + +(k) Ours (ko) + +![Image 65: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book/es_a_cat_sitting_on_a_bed_behind_a_book.png) + +(l) Ours (es) + +(2) T5 prompt: “add a book: ” + +![Image 66: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/T5_a_cat_sitting_on_a_bed_behind_a_book.png) + +(m) FLUX-T5 (en) + +![Image 67: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/en_a_cat_sitting_on_a_bed_behind_a_book.png) + +(n) Ours (en) + +![Image 68: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/el_a_cat_sitting_on_a_bed_behind_a_book.png) + +(o) Ours (el) + +![Image 69: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/fa_a_cat_sitting_on_a_bed_behind_a_book.png) + +(p) Ours (fa) + +![Image 70: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/fr_a_cat_sitting_on_a_bed_behind_a_book.png) + +(q) Ours (fr) + +![Image 71: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/he_a_cat_sitting_on_a_bed_behind_a_book.png) + +(r) Ours (he) + +![Image 72: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/CLIP_ONLY_BASELINE_a_cat_sitting_on_a_bed_behind_a_book.png) + +(s) FLUX CLIP (en) + +![Image 73: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/ru_a_cat_sitting_on_a_bed_behind_a_book.png) + +(t) Ours (ru) + +![Image 74: Refer to 
caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/hi_a_cat_sitting_on_a_bed_behind_a_book.png) + +(u) Ours (hi) + +![Image 75: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/id_a_cat_sitting_on_a_bed_behind_a_book.png) + +(v) Ours (id) + +![Image 76: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/ko_a_cat_sitting_on_a_bed_behind_a_book.png) + +(w) Ours (ko) + +![Image 77: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book/es_a_cat_sitting_on_a_bed_behind_a_book.png) + +(x) Ours (es) + +(3) T5 prompt: “add a book on bed: ” + +![Image 78: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/T5_a_cat_sitting_on_a_bed_behind_a_book.png) + +(y) FLUX-T5 (en) + +![Image 79: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/en_a_cat_sitting_on_a_bed_behind_a_book.png) + +(z) Ours (en) + +![Image 80: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/el_a_cat_sitting_on_a_bed_behind_a_book.png) + +(aa) Ours (el) + +![Image 81: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/fa_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ab) Ours (fa) + +![Image 82: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/fr_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ac) Ours (fr) + +![Image 83: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/he_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ad) Ours (he) + +![Image 84: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/CLIP_ONLY_BASELINE_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ae) FLUX CLIP (en) + +![Image 85: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/ru_a_cat_sitting_on_a_bed_behind_a_book.png) + +(af) Ours (ru) + +![Image 86: Refer to 
caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/hi_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ag) Ours (hi) + +![Image 87: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/id_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ah) Ours (id) + +![Image 88: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/ko_a_cat_sitting_on_a_bed_behind_a_book.png) + +(ai) Ours (ko) + +![Image 89: Refer to caption](https://arxiv.org/html/2601.10096v1/cat_book_add_a_book_on_bed/es_a_cat_sitting_on_a_bed_behind_a_book.png) + +(aj) Ours (es) + +Figure 10: Images generated by FLUX models using the prompt “A cat sitting on a bed behind a book” in multiple languages. Our M2M-aligned model produces similar images but with missing objects (book, bed) compared to FLUX models (T5 and CLIP encoders). T5 prompts help mitigate this issue, as shown in sub-figures (2) & (3).