---
license: cc-by-4.0
task_categories:
- feature-extraction
- image-classification
language:
- en
tags:
- laion
- clip
- vision-language
- lance
pretty_name: laion-1m-lance
size_categories:
- 1M<n<10M
---
# LAION-Subset (Lance Format)

A Lance dataset of the LAION image-text corpus (~1M rows) with inline JPEG bytes, CLIP embeddings (`img_emb`), and full metadata, available directly from the Hub: `hf://datasets/lance-format/laion-1m/data/train.lance`.

## Why Lance?
Lance is an open-source format designed for multimodal AI data, offering significant advantages over traditional formats for modern AI workloads.

- **Blazing Fast Random Access**: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
- **Native Multimodal Support**: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
- **Efficient Data Evolution**: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
- **Versatile Querying**: Supports combining vector similarity search, full-text search, and SQL-style filtering in a single query, accelerated by on-disk indexes.

## Quick Start

### Load with `datasets.load_dataset`

```python
import datasets

hf_ds = datasets.load_dataset(
    "lance-format/laion-1m",
    split="train",
    streaming=True,
)

# Take the first three rows and print their captions
for row in hf_ds.take(3):
    print(row["caption"])
```

### Load with Lance

Use Lance for ANN search, image export, and any operation that needs the vector index or binary blobs:

```python
import lance

ds = lance.dataset("hf://datasets/lance-format/laion-1m/data/train.lance")
print(ds.count_rows())
```

### Load with LanceDB

These tables can also be consumed by [LanceDB](https://docs.lancedb.com/), the multimodal lakehouse for AI, built on top of Lance.
LanceDB provides several convenience APIs for search, index creation, and data updates on top of the Lance format.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/laion-1m/data")
tbl = db.open_table("train")
print(f"LanceDB table opened with {len(tbl)} image-text pairs")
```

> **⚠️ HuggingFace Streaming Note**
>
> **You may hit rate limits on HuggingFace's free tier.** For best performance and to avoid rate limits, pass a token for an account with a
> Pro, Teams, or Enterprise subscription (these come with much higher rate limits), or download the dataset locally:
>
> ```bash
> # Download once
> huggingface-cli download lance-format/laion-1m --repo-type dataset --local-dir ./laion
> ```
>
> ```python
> # Then load locally
> ds = lance.dataset("./laion/data/train.lance")
> ```
>
> Streaming is recommended only for quick exploration and testing.

### Inspecting Existing Indices

This dataset ships with a built-in vector (IVF) index on the image embeddings. You can inspect the prebuilt indices on the dataset:

```python
import lance

dataset = lance.dataset("hf://datasets/lance-format/laion-1m/data/train.lance")

# List all indices
indices = dataset.list_indices()
print(indices)
```

### Create New Index

While this dataset comes with pre-built indices, you can also create your own custom indices if needed. For example, an IVF_PQ vector index on the embeddings:

```python
# ds is a local Lance dataset
ds.create_index(
    "img_emb",
    index_type="IVF_PQ",
    num_partitions=256,
    num_sub_vectors=96,
    replace=True,
)
```

Or a full-text search index on the captions:

```python
# ds is a local Lance dataset
ds.create_fts_index("caption")
```

## Native Support for Multimodal Data

```python
from pathlib import Path

# ds is the Lance dataset opened above
Path("samples").mkdir(exist_ok=True)
rows = ds.take([0, 1], columns=["image", "caption"]).to_pylist()
for idx, row in enumerate(rows):
    with open(f"samples/{idx}.jpg", "wb") as f:
        f.write(row["image"])
```

In Lance, images are stored inline as binary columns (regular Lance binary, not the special blob handle used in OpenVid). They behave like any other column: scan captions without touching `image`, then `take()` when you want the bytes.

## Usage Examples

### 1. Browse metadata

```python
scanner = ds.scanner(columns=["caption", "url", "similarity"], limit=5)
for row in scanner.to_table().to_pylist():
    print(row)
```

### 2. Export images

```python
rows = ds.take(range(3), columns=["image", "caption"]).to_pylist()
for i, row in enumerate(rows):
    with open(f"sample_{i}.jpg", "wb") as f:
        f.write(row["image"])
```

### 3. Vector similarity search

```python
import pyarrow as pa

# Use row 123's embedding as the query vector
emb_field = ds.schema.field("img_emb")
ref = ds.take([123], columns=["img_emb"]).to_pylist()[0]
query = pa.array([ref["img_emb"]], type=emb_field.type)

neighbors = ds.scanner(
    nearest={
        "column": emb_field.name,
        "q": query[0],
        "k": 6,
        "nprobes": 16,
        "refine_factor": 30,
    },
    columns=["caption", "url", "similarity"],
).to_table().to_pylist()
```

## Dataset Evolution

Lance supports flexible schema and data evolution ([docs](https://lance.org/guide/data_evolution/)). You can add/drop columns, backfill with SQL or Python, rename fields, or change data types without rewriting the whole dataset. In practice this lets you:

- Introduce fresh metadata (moderation labels, embeddings, quality scores) as new signals become available.
- Add new columns to existing datasets without re-exporting terabytes of images.
- Adjust column names or shrink storage (e.g., cast embeddings to float16) while keeping previous snapshots queryable for reproducibility.

```python
import lance
import numpy as np
import pyarrow as pa

# Assumes you downloaded the dataset locally (see the streaming note above)
ds = lance.dataset("./laion/data/train.lance")

# 1. Add a schema-only column (data can be backfilled later)
ds.add_columns(pa.field("moderation_label", pa.string()))

# 2. Alternatively, add a column with data in one step via a SQL expression
#    (skip step 1 if you do this -- the column must not already exist)
ds.add_columns(
    {
        "moderation_label": "CASE WHEN \"NSFW\" > 0.5 THEN 'review' ELSE 'ok' END"
    }
)

# 3. Generate rich columns via Python batch UDFs
@lance.batch_udf()
def random_embedding(batch):
    arr = np.random.rand(batch.num_rows, 128).astype("float32")
    return pa.RecordBatch.from_arrays(
        [pa.FixedSizeListArray.from_arrays(pa.array(arr.ravel()), 128)],
        names=["embedding"],
    )

ds.add_columns(random_embedding)

# 4. Bring in offline annotations with merge (joins on a shared key column)
labels = pa.table({
    "id": pa.array([1, 2, 3]),
    "label": pa.array(["horse", "rabbit", "cat"]),
})
ds.merge(labels, "id")

# 5. Rename or cast columns as needs change
ds.alter_columns({"path": "quality_bucket", "name": "quality_tier"})
ds.alter_columns({"path": "embedding", "data_type": pa.list_(pa.float16(), 128)})
```

These operations are automatically versioned, so prior experiments can still point to earlier versions while the dataset keeps evolving.

## LanceDB

LanceDB users can run search queries on the dataset with the examples below.

### LanceDB Vector Similarity Search

```python
import lancedb

# In LanceDB, you open a database, then a table
db = lancedb.connect("hf://datasets/lance-format/laion-1m/data")
tbl = db.open_table("train")

# Placeholder query vector; in practice, use a real CLIP image embedding
query_embedding = list(range(768))

results = tbl.search(query_embedding, vector_column_name="img_emb") \
    .limit(5) \
    .to_list()
```

### LanceDB Full-Text Search

```python
import lancedb

# In LanceDB, you open a database, then a table
db = lancedb.connect("hf://datasets/lance-format/laion-1m/data")
tbl = db.open_table("train")

results = tbl.search("dog running", query_type="fts") \
    .select(["caption", "url", "similarity"]) \
    .limit(10) \
    .to_list()
```

## Citation

```
@article{schuhmann2022laion5b,
  title={LAION-5B: An open large-scale dataset for training next generation image-text models},
  author={Schuhmann, Christoph and others},
  journal={NeurIPS Datasets and Benchmarks Track},
  year={2022}
}
```

## License

Content inherits LAION’s original licensing and safety guidelines. Review [LAION policy](https://laion.ai/blog/laion-5b/) before downstream use.