---
language:
- en
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- sentence-similarity
- feature-extraction
tags:
- sonar
- speech-embeddings
- text-embeddings
- common-voice
- interpretability
arxiv: 2604.18109
pretty_name: FLiP-data
---
# FLiP-data
Preprocessed data for the [FLiP](https://github.com/BUTSpeechFIT/FLiP) project — **Factorized Linear Projection for Interpreting Multimodal Multilingual Sentence Embeddings**.
FLiP trains a factorized log-linear model to recover lexical content (keywords) from pretrained sentence embeddings via a single linear projection, with no fine-tuning of the encoder.
## Contents
SONAR embeddings and transcripts for **Mozilla Common Voice v15 English** (train / dev / test):
| File | Description |
|------|-------------|
| `*_speech_embs.npy` | SONAR speech embeddings (float32, shape `[N, 1024]`) |
| `*_text_embs.npy` | SONAR text embeddings (float32, shape `[N, 1024]`) |
| `*_sim_scores.npy` | Cosine similarity between paired speech and text embeddings |
| `*_transcript.txt` | Reference transcripts (one utterance per line) |
| `*_entities_gemini2.5_flash_lite.jsonl` | Named entities extracted with Gemini 2.5 Flash Lite |
Splits: `train` (~1M utterances), `dev` (~16k), `test` (~16k).
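The entity files are JSON Lines (one JSON object per line). A minimal reading sketch — the field names used below (`utt_id`, `entities`) are illustrative assumptions, not the guaranteed schema of `*_entities_gemini2.5_flash_lite.jsonl`:

```python
import json

# One hypothetical line; the real field names may differ.
line = '{"utt_id": 42, "entities": [{"text": "Mozilla", "type": "ORG"}]}'
record = json.loads(line)
print(record["utt_id"], len(record["entities"]))
```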
## Source data
Embeddings were computed from [Mozilla Common Voice v15](https://commonvoice.mozilla.org/) English using the [SONAR](https://github.com/facebookresearch/SONAR) encoder. Audio and transcripts from Common Voice are licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
## Trained checkpoints
| HF repo | Training data | Embedding | Rank | Size |
|---------|--------------|-----------|-----:|-----:|
| [BUT-FIT/FLiP-en-sonar](https://huggingface.co/BUT-FIT/FLiP-en-sonar) → `mcv15/rank-512/` | MCV v15 EN | SONAR | 512 | 207 MB |
| [BUT-FIT/FLiP-en-sonar](https://huggingface.co/BUT-FIT/FLiP-en-sonar) → `mcv15/rank-1024/` | MCV v15 EN | SONAR | 1024 | 414 MB |
## Usage
See the [FLiP GitHub repo](https://github.com/BUTSpeechFIT/FLiP) for full installation instructions and training/evaluation scripts.
Quick start after downloading:
```python
import numpy as np

# float32 arrays of shape [N, 1024] (see Contents above)
train_speech = np.load("cv_15/en/sonar_embeddings/train_speech_embs.npy")
train_text = np.load("cv_15/en/sonar_embeddings/train_text_embs.npy")
```
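The `*_sim_scores.npy` files hold the cosine similarity between each speech/text embedding pair; they can be recomputed from the embedding matrices. A minimal sketch of the row-wise cosine, shown here on small random stand-ins for the downloaded arrays:

```python
import numpy as np

def rowwise_cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between corresponding rows of a and b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a_norm * b_norm, axis=1)

# Stand-ins for train_speech_embs.npy / train_text_embs.npy
rng = np.random.default_rng(0)
speech = rng.standard_normal((4, 1024)).astype(np.float32)
text = rng.standard_normal((4, 1024)).astype(np.float32)

scores = rowwise_cosine(speech, text)
print(scores.shape)  # (4,)
```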
## Citation
```bibtex
@misc{kesiraju2026flip,
  title         = {{FLiP}: Towards understanding and interpreting multimodal multilingual sentence embeddings},
  author        = {Kesiraju, Santosh and Yusuf, Bolaji and Sedl{\'a}{\v{c}}ek, Simon and Plchot, Old{\v{r}}ich and Schwarz, Petr},
  year          = {2026},
  eprint        = {2604.18109},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2604.18109},
}
```