---
license: cc-by-4.0
task_categories:
  - text-retrieval
  - question-answering
language:
  - en
tags:
  - retrieval
  - text
  - lance
pretty_name: fineweb-edu-lance
size_categories:
  - 1B<n<10B
---

# FineWeb-Edu (Lance Format)

> FineWeb-Edu: the finest collection of educational content the web has to offer

The FineWeb-Edu dataset consists of over 1.5 billion rows of educational web pages filtered from the FineWeb dataset. Each passage ships with cleaned text, metadata, and 384-dimensional text embeddings for retrieval-heavy workloads.

## Why Lance?

Lance is an open-source format designed for multimodal AI data, offering significant advantages over traditional formats for modern AI workloads.

- **Blazing Fast Random Access**: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
- **Native Multimodal Support**: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
- **Efficient Data Evolution**: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
- **Versatile Querying**: Supports combining vector similarity search, full-text search, and SQL-style filtering in a single query, accelerated by on-disk indexes.
## Quick Start

### Load with `datasets.load_dataset`

```python
import datasets

hf_ds = datasets.load_dataset(
    "lance-format/fineweb-edu",
    split="train",
    streaming=True,
)

# Take the first three rows and print their titles
for row in hf_ds.take(3):
    print(row["title"])
```

### Load with Lance

Use Lance's native connector when you need ANN search, FTS, or direct access to embeddings while still pointing to the copy hosted on Hugging Face:

```python
import lance

ds = lance.dataset("hf://datasets/lance-format/fineweb-edu/data/train.lance")
print(f"Total passages: {ds.count_rows():,}")
```

> **⚠️ Hugging Face Streaming Note**
>
> **You may hit rate limits on Hugging Face's free tier.** For best performance and to avoid rate limits, pass a token for an account with a Pro, Teams, or Enterprise subscription (these come with much higher rate limits), or download the dataset locally:
>
> ```bash
> # Download once
> huggingface-cli download lance-format/fineweb-edu --repo-type dataset --local-dir ./fineweb-edu
> ```
>
> ```python
> # Then load locally
> ds = lance.dataset("./fineweb-edu")
> ```
>
> Streaming is recommended only for quick exploration and testing.

### Load with LanceDB

These tables can also be consumed by [LanceDB](https://docs.lancedb.com/), the multimodal lakehouse for AI built on top of Lance. LanceDB provides several convenience APIs for search, index creation, and data updates on top of the Lance format.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/fineweb-edu/data")
tbl = db.open_table("train")
print(f"LanceDB table opened with {len(tbl)} passages")
```

## Index Creation

> [!WARNING]
> This dataset does not currently ship with a pre-built ANN (vector) index. To run vector search queries, you should download the dataset locally and build the index yourself. The following steps show how to do this.
```bash
# Download once
huggingface-cli download lance-format/fineweb-edu --repo-type dataset --local-dir ./fineweb-edu
```

```python
# Load the local dataset in Lance
import lance

ds = lance.dataset("./fineweb-edu")

# Build a vector index as needed
# ds.create_index(...)
```

See the [Lance documentation](https://lance.org/quickstart/vector-search/#build-the-search-index) for the index-building API.

## Streaming Queries

```python
import lance
import pyarrow as pa

lance_ds = lance.dataset("hf://datasets/lance-format/fineweb-edu/data/train.lance")

# Browse titles & language without touching embeddings
rows = lance_ds.scanner(
    columns=["title", "language"],
    limit=5,
).to_table().to_pylist()

# Vector similarity search (requires a vector index; see Index Creation above)
ref = lance_ds.take([0], columns=["text_embedding", "title"])
query_vec = pa.array(
    [ref.to_pylist()[0]["text_embedding"]],
    type=ref.schema.field("text_embedding").type,
)

results = lance_ds.scanner(
    nearest={
        "column": "text_embedding",
        "q": query_vec[0],
        "k": 5,
        "nprobes": 8,
        "refine_factor": 20,
    },
    columns=["title", "language", "text"],
).to_table().to_pylist()
```

> **Hugging Face Streaming Note**
>
> - Streaming uses conservative ANN parameters (`nprobes`, `refine_factor`) to stay within HF rate limits.
> - Prefer local copies (`huggingface-cli download lance-format/fineweb-edu --local-dir ./fineweb`) for heavy workloads, then point Lance at `./fineweb`.

## Usage Examples

The steps below assume you've downloaded the dataset and created an index on it locally.

### 1. Sample documents

You can project specific columns (excluding the embeddings) and run filter queries on them.

```python
scanner = ds.scanner(
    columns=["title", "language", "text"],
    filter="language = 'en'",
    limit=5,
)

for doc in scanner.to_table().to_pylist():
    print(doc["title"], doc["language"])
    print(doc["text"][:200], "...\n")
```

### 2. Vector search for semantically similar passages

The example below runs a vector search on the `text_embedding` column.
```python
ref_doc = ds.take([123], columns=["text_embedding", "title", "text"]).to_pylist()[0]
emb_type = ds.to_table(columns=["text_embedding"], limit=1).schema.field("text_embedding").type
query = pa.array([ref_doc["text_embedding"]], type=emb_type)

# Ask for k=6, then drop the first hit (the reference document itself)
neighbors = ds.scanner(
    nearest={
        "column": "text_embedding",
        "q": query[0],
        "k": 6,
        "nprobes": 8,
        "refine_factor": 20,
    },
    columns=["title", "language", "text"],
).to_table().to_pylist()[1:]
```

### 3. Full-text search with Lance FTS

```python
hits = ds.scanner(
    full_text_query="quantum computing",
    columns=["title", "language", "text"],
    limit=10,
    fast_search=True,
).to_table().to_pylist()
```

## Dataset Evolution

Lance supports flexible schema and data evolution ([docs](https://lance.org/guide/data_evolution/?h=evol)). You can add/drop columns, backfill with SQL or Python, rename fields, or change data types without rewriting the whole dataset. In practice this lets you:

- Introduce fresh metadata (moderation labels, embeddings, quality scores) as new signals become available.
- Add new columns to existing datasets without re-exporting terabytes of data.
- Adjust column names or shrink storage (e.g., cast embeddings to float16) while keeping previous snapshots queryable for reproducibility.

```python
import lance
import pyarrow as pa
import numpy as np

# Assume ds is a local Lance dataset
# ds = lance.dataset("./fineweb_edu_local")

base = pa.table({"id": pa.array([1, 2, 3]), "text": pa.array(["A", "B", "C"])})
dataset = lance.write_dataset(base, "fineweb_evolution", mode="overwrite")

# 1. Add a schema-only column (data to be backfilled later)
dataset.add_columns(pa.field("subject", pa.string()))

# 2. Add a column populated from a SQL expression
dataset.add_columns({"quality_bucket": "'unknown'"})

# 3. Generate rich columns via Python batch UDFs
@lance.batch_udf()
def random_embedding(batch):
    vecs = np.random.rand(batch.num_rows, 384).astype("float32")
    return pa.RecordBatch.from_arrays(
        [pa.FixedSizeListArray.from_arrays(vecs.ravel(), 384)],
        names=["text_embedding"],
    )

dataset.add_columns(random_embedding)

# 4. Bring in annotations with merge
labels = pa.table({
    "id": pa.array([1, 2, 3]),
    "label": pa.array(["math", "history", "science"]),
})
dataset.merge(labels, "id")

# 5. Rename or cast columns as needs change
dataset.alter_columns({"path": "subject", "name": "topic"})
dataset.alter_columns({"path": "text_embedding", "data_type": pa.list_(pa.float16(), 384)})
```

You can iterate on embeddings, quality tags, or moderation fields while keeping earlier dataset versions available for reproducible experiments.

## LanceDB

LanceDB users can follow the examples below to run search queries on the dataset.

### LanceDB Vector Search

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/fineweb-edu/data")
tbl = db.open_table("train")

# Get a passage to use as a query
ref_passage = (
    tbl.search()
    .offset(123)
    .limit(1)
    .select(["text_embedding", "text"])
    .to_list()[0]
)
query_embedding = ref_passage["text_embedding"]

results = tbl.search(query_embedding).limit(5).to_list()
```

### LanceDB Full-Text Search

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/fineweb-edu/data")
tbl = db.open_table("train")

results = (
    tbl.search("quantum computing")
    .select(["title", "language", "text"])
    .limit(10)
    .to_list()
)
```

## Citation

You can cite the paper from the [original dataset](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) ([arXiv:2406.17557](https://arxiv.org/abs/2406.17557)) or this dataset:

```
@misc{lozhkov2024fineweb-edu,
  author = { Lozhkov, Anton and Ben Allal, Loubna and von Werra, Leandro and Wolf, Thomas },
  title = { FineWeb-Edu: the Finest Collection of Educational Content },
  year = 2024,
  url = {
https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu },
  doi = { 10.57967/hf/2497 },
  publisher = { Hugging Face }
}
```