skesiraju committed · Commit 9db5fc9 (verified) · Parent(s): a452568 · "Upload README.md with huggingface_hub"
Files changed (1): README.md (+66, −0)

---
language:
- en
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- sentence-similarity
- feature-extraction
tags:
- sonar
- speech-embeddings
- text-embeddings
- common-voice
- interpretability
pretty_name: FLiP-data
---

# FLiP-data

Preprocessed data for the [FLiP](https://github.com/BUTSpeechFIT/FLiP) project: **Factorized Linear Projection for Interpreting Multimodal Multilingual Sentence Embeddings**.

FLiP trains a factorized log-linear model that recovers lexical content (keywords) from pretrained sentence embeddings via a single linear projection; the encoder itself is never fine-tuned.

## Contents

SONAR embeddings and transcripts for **Mozilla Common Voice v15 English** (train / dev / test):

| File | Description |
|------|-------------|
| `*_speech_embs.npy` | SONAR speech embeddings (float32, shape `[N, 1024]`) |
| `*_text_embs.npy` | SONAR text embeddings (float32, shape `[N, 1024]`) |
| `*_sim_scores.npy` | Cosine similarity between paired speech and text embeddings |
| `*_transcript.txt` | Reference transcripts (one utterance per line) |
| `*_entities_gemini2.5_flash_lite.jsonl` | Named entities extracted with Gemini 2.5 Flash Lite |

Splits: `train` (~650k utterances), `dev`, `test`.

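The `*_sim_scores.npy` values are row-wise cosine similarities between the two embedding matrices, so they can be recomputed from the `.npy` files themselves. A minimal sketch, using small synthetic arrays as stand-ins for the real files (the real splits use `D = 1024`):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 4, 1024

# Synthetic stand-ins for *_speech_embs.npy and *_text_embs.npy
speech = rng.standard_normal((N, D)).astype(np.float32)
text = rng.standard_normal((N, D)).astype(np.float32)

def cosine_scores(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between row i of `a` and row i of `b`."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a_norm * b_norm, axis=1)

scores = cosine_scores(speech, text)
print(scores.shape)  # (4,)
```

On the real data this should reproduce `*_sim_scores.npy` up to floating-point tolerance.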
## Source data

Embeddings were computed from [Mozilla Common Voice v15](https://commonvoice.mozilla.org/) English using the [SONAR](https://github.com/facebookresearch/SONAR) encoder. Audio and transcripts from Common Voice are licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## Usage

See the [FLiP GitHub repo](https://github.com/BUTSpeechFIT/FLiP) for full installation instructions and training/evaluation scripts.

Quick start after downloading:

```python
import numpy as np

# Each array has shape [N, 1024]; rows are aligned across files.
train_speech = np.load("cv_15/en/sonar_embeddings/train_speech_embs.npy")
train_text = np.load("cv_15/en/sonar_embeddings/train_text_embs.npy")
```

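The entities files are JSONL, i.e. one JSON object per line, kept in corpus order so that line `i` lines up with row `i` of the embedding matrices and line `i` of the transcripts. A generic reader, sketched against a hypothetical two-line excerpt (the actual per-record schema of `*_entities_gemini2.5_flash_lite.jsonl` may differ):

```python
import json
from io import StringIO

# Hypothetical stand-in for an open *_entities_*.jsonl file;
# the real records may carry different keys.
fake_jsonl = StringIO(
    '{"entities": ["London"]}\n'
    '{"entities": []}\n'
)

# Parse one JSON object per non-empty line, preserving corpus order.
records = [json.loads(line) for line in fake_jsonl if line.strip()]
print(len(records))  # 2
```

Reading line by line (rather than `json.load` on the whole file) is what makes the JSONL format cheap to stream for the ~650k-line train split.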
## Citation

```bibtex
@misc{kesiraju2026flip,
  title = {{FLiP}: Towards understanding and interpreting multimodal multilingual sentence embeddings},
  author = {Kesiraju, Santosh and Yusuf, Bolaji and Sedl{\'a}{\v{c}}ek, Simon and Plchot, Old{\v{r}}ich and Schwarz, Petr},
  year = {2026},
  eprint = {2026.XXXXX},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```