Geralt-Targaryen committed on
Commit 5326d7f · verified · 1 Parent(s): 95eb4c8

Update README.md

Files changed (1):
  1. README.md +48 -3
README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ size_categories:
+ - 1M<n<10M
+ ---
+
+ The F2LLM dataset includes 6 million query-document-negative tuples curated solely from open-source, non-synthetic data, serving as a strong, budget-friendly baseline for training embedding models.
+
+ ## Data Format
+
+ Data are compiled into three categories: retrieval, classification, and clustering. Each retrieval and clustering data sample is accompanied by 24 hard negatives; each classification data sample is accompanied by 1 hard negative.
+
+ The data fields are:
+ ```json
+ {
+     "query": "...",
+     "passage": "...",
+     "negative_1": "...",
+     ...
+     "negative_n": "..."
+ }
+ ```
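+
+ For example, here is a minimal sketch of reading one sample with the Hugging Face `datasets` library; the repo id and split name below are illustrative assumptions, not confirmed names:
+
+ ```python
+ # Minimal sketch, assuming the data files load with datasets.load_dataset;
+ # the repo id "codefuse-ai/F2LLM" and the "train" split are assumptions.
+ from datasets import load_dataset
+
+ ds = load_dataset("codefuse-ai/F2LLM", split="train")
+
+ sample = ds[0]
+ query, passage = sample["query"], sample["passage"]
+ # Collect however many hard negatives this sample carries
+ # (24 for retrieval/clustering samples, 1 for classification samples).
+ negatives = [sample[k] for k in sample if k.startswith("negative_")]
+ ```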
+
+ For more details, please refer to our [technical report](https://arxiv.org/abs/2510.02294).
+
+ ## Usage
+
+ Code for training embedding models on the F2LLM data is available in our [GitHub repo](https://github.com/codefuse-ai/CodeFuse-Embeddings/tree/main/F2LLM).
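+
+ As a rough illustration of how query-passage-negative tuples are typically consumed when training embedding models, below is a minimal sketch of an InfoNCE-style contrastive loss with hard negatives. It is not the implementation from the repo above; the function name, temperature value, and random stand-in embeddings are all illustrative assumptions:
+
+ ```python
+ # Minimal sketch of a contrastive (InfoNCE-style) loss over one tuple.
+ # NOT the F2LLM repo's code; embeddings here are random stand-ins.
+ import torch
+ import torch.nn.functional as F
+
+ def info_nce_loss(q, p, negs, temperature=0.05):
+     """q: (d,) query; p: (d,) positive passage; negs: (n, d) hard negatives."""
+     q = F.normalize(q, dim=-1)
+     cands = F.normalize(torch.cat([p.unsqueeze(0), negs], dim=0), dim=-1)
+     logits = (cands @ q) / temperature          # (n+1,) cosine similarities
+     target = torch.zeros(1, dtype=torch.long)   # the positive sits at index 0
+     return F.cross_entropy(logits.unsqueeze(0), target)
+
+ # Toy usage: 8-dim embeddings with 24 hard negatives, as in the retrieval data.
+ loss = info_nce_loss(torch.randn(8), torch.randn(8), torch.randn(24, 8))
+ ```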
+
+ ## Citation
+
+ If you use the F2LLM models, data, or code, please cite the following technical report.
+
+ ```bibtex
+ @article{2025F2LLM,
+   title      = {F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data},
+   author     = {Ziyin Zhang and Zihan Liao and Hang Yu and Peng Di and Rui Wang},
+   journal    = {CoRR},
+   volume     = {abs/2510.02294},
+   year       = {2025},
+   url        = {https://doi.org/10.48550/arXiv.2510.02294},
+   doi        = {10.48550/ARXIV.2510.02294},
+   eprinttype = {arXiv},
+   eprint     = {2510.02294}
+ }
+ ```