Commit d9eed78 by ayushexel (verified; parent b687d11): Create README.md
---
license: cc-by-4.0
task_categories:
- text-retrieval
- question-answering
language:
- en
tags:
- retrieval
- text
- lance
pretty_name: fineweb-edu-lance
size_categories:
- 1M<n<10M
---

# FineWeb-Edu (Lance Format)

The FineWeb-Edu dataset in Lance format, with over 1.5 billion rows. Each passage ships with cleaned text, metadata, and a 384-dimensional text embedding for retrieval-heavy workloads.

## Load via `datasets.load_dataset`

```python
import datasets

hf_ds = datasets.load_dataset(
    "lance-format/fineweb-edu",
    split="train",
    streaming=True,
)

# Take the first three rows and print their titles
for row in hf_ds.take(3):
    print(row["title"])
```

Use Lance's native connector when you need ANN search, FTS, or direct access to embeddings while still pointing at the copy hosted on Hugging Face:

```python
import lance

ds = lance.dataset("hf://datasets/lance-format/fineweb-edu/data/train.lance")
print(f"Total passages: {ds.count_rows():,}")
```

These tables can also be consumed by [LanceDB](https://lancedb.github.io/lancedb/), the serverless vector database built on Lance, for simplified vector search and other queries.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/fineweb-edu/data")
tbl = db.open_table("train")
print(f"LanceDB table opened with {tbl.count_rows():,} passages")
```

> The dataset hosted on the Hugging Face Hub does **not** currently have pre-built ANN (vector) or FTS (full-text search) indices.
> For search or similarity workloads, download the dataset locally and build the indices yourself:
>
> ```bash
> # Download once
> huggingface-cli download lance-format/fineweb-edu --repo-type dataset --local-dir ./fineweb-edu
> ```
>
> ```python
> # Then load locally and build indices
> import lance
>
> ds = lance.dataset("./fineweb-edu/data/train.lance")
> # ds.create_index(...)
> ```


## Why Lance?

Lance is an open-source format designed for multimodal AI data, offering significant advantages over traditional formats for modern AI workloads.

- **Blazing Fast Random Access**: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
- **Native Multimodal Support**: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
- **Efficient Data Evolution**: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
- **Versatile Querying**: Combine vector similarity search, full-text search, and SQL-style filtering in a single query, all accelerated by on-disk indexes.


## Quick Start (Lance Python)

```python
import lance
import pyarrow as pa

lance_ds = lance.dataset("hf://datasets/lance-format/fineweb-edu/data/train.lance")

# Browse titles & language without touching embeddings
rows = lance_ds.scanner(
    columns=["title", "language"],
    limit=5,
).to_table().to_pylist()

# Vector similarity search (fast once an ANN index exists; exhaustive scan otherwise)
ref = lance_ds.take([0], columns=["text_embedding", "title"])
query_vec = pa.array(
    [ref.to_pylist()[0]["text_embedding"]],
    type=ref.schema.field("text_embedding").type,
)

results = lance_ds.scanner(
    nearest={
        "column": "text_embedding",
        "q": query_vec[0],
        "k": 5,
        "nprobes": 8,
        "refine_factor": 20,
    },
    columns=["title", "language", "text"],
).to_table().to_pylist()
```

> **Hugging Face Streaming Note**
> - Streaming uses conservative ANN parameters (`nprobes`, `refine_factor`) to stay within HF rate limits.
> - Prefer a local copy (`huggingface-cli download lance-format/fineweb-edu --repo-type dataset --local-dir ./fineweb`) for heavy workloads, then point Lance at `./fineweb`.


## Dataset Schema

Common columns you'll find in this Lance dataset:
- `text` – cleaned passage content.
- `title` – page/article title when available.
- `url` – canonical source URL.
- `language` + `language_probability` – language-detector outputs for filtering.
- Quality metadata from FineWeb-Edu (e.g., heuristic scores or length stats).
- `text_embedding` – 384-dimensional float32 vector for retrieval.


## Usage Examples

> **Search snippets for reference**
> The vector/FTS examples below show the Lance APIs you'll use once indexes are available. The hosted dataset doesn't yet ship ANN/FTS indexes, so download locally (or build indexes yourself) before running them. Pre-built indexes are coming soon.

### 1. Sample documents without embeddings

```python
scanner = ds.scanner(
    columns=["title", "language", "text"],
    filter="language = 'en'",
    limit=5,
)
for doc in scanner.to_table().to_pylist():
    print(doc["title"], doc["language"])
    print(doc["text"][:200], "...\n")
```

### 2. Vector search for semantically similar passages

```python
ref_doc = ds.take([123], columns=["text_embedding", "title", "text"]).to_pylist()[0]
emb_type = ds.schema.field("text_embedding").type
query = pa.array([ref_doc["text_embedding"]], type=emb_type)

neighbors = ds.scanner(
    nearest={
        "column": "text_embedding",
        "q": query[0],
        "k": 6,
        "nprobes": 8,
        "refine_factor": 20,
    },
    columns=["title", "language", "text"],
).to_table().to_pylist()[1:]  # drop the query document itself
```


### LanceDB Vector Search
```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/fineweb-edu/data")
tbl = db.open_table("train")

# Get a passage to use as the query (row 123 of the underlying Lance dataset)
ref_passage = tbl.to_lance().take([123], columns=["text_embedding", "text"]).to_pylist()[0]
query_embedding = ref_passage["text_embedding"]

results = tbl.search(query_embedding) \
    .limit(5) \
    .to_list()
```

### 3. Full-text search with Lance FTS

```python
hits = ds.scanner(
    full_text_query="quantum computing",
    columns=["title", "language", "text"],
    limit=10,
    fast_search=True,
).to_table().to_pylist()
```

### LanceDB Full-Text Search
```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/fineweb-edu/data")
tbl = db.open_table("train")

results = tbl.search("quantum computing") \
    .select(["title", "language", "text"]) \
    .limit(10) \
    .to_list()
```

See `fineweb_edu/example.py` in the lance-huggingface repo for a complete walkthrough that combines HF streaming batches with Lance-powered retrieval.

## Dataset Evolution

Lance supports flexible schema and data evolution ([docs](https://lance.org/guide/data_evolution/?h=evol)). You can add/drop columns, backfill with SQL or Python, rename fields, or change data types without rewriting the whole dataset. In practice this lets you:
- Introduce fresh metadata (moderation labels, embeddings, quality scores) as new signals become available.
- Add new columns to existing datasets without re-exporting terabytes of data.
- Adjust column names or shrink storage (e.g., cast embeddings to float16) while keeping previous snapshots queryable for reproducibility.

```python
import lance
import pyarrow as pa
import numpy as np

# Toy table standing in for a local Lance dataset
base = pa.table({"id": pa.array([1, 2, 3]), "text": pa.array(["A", "B", "C"])})
dataset = lance.write_dataset(base, "fineweb_evolution", mode="overwrite")

# 1. Add a schema-only column (data to be backfilled later)
dataset.add_columns(pa.field("subject", pa.string()))

# 2. Add a column populated from a SQL expression
dataset.add_columns({"quality_bucket": "'unknown'"})

# 3. Generate rich columns via Python batch UDFs
@lance.batch_udf()
def random_embedding(batch):
    vecs = np.random.rand(batch.num_rows, 384).astype("float32")
    return pa.RecordBatch.from_arrays(
        [pa.FixedSizeListArray.from_arrays(pa.array(vecs.ravel()), 384)],
        names=["text_embedding"],
    )

dataset.add_columns(random_embedding)

# 4. Bring in annotations with merge
labels = pa.table({"id": pa.array([1, 2, 3]), "label": pa.array(["math", "history", "science"])})
dataset.merge(labels, "id")

# 5. Rename or cast columns as needs change
dataset.alter_columns({"path": "subject", "name": "topic"})
dataset.alter_columns({"path": "text_embedding", "data_type": pa.list_(pa.float16(), 384)})
```

You can iterate on embeddings, quality tags, or moderation fields while keeping earlier dataset versions available for reproducible experiments.