Dataset Viewer
Auto-converted to Parquet
url
string
repository_url
string
labels_url
string
comments_url
string
events_url
string
html_url
string
id
int64
node_id
string
number
int64
title
string
user
dict
labels
list
state
string
locked
bool
assignee
dict
assignees
list
milestone
dict
comments
list
created_at
timestamp[s]
updated_at
timestamp[s]
closed_at
timestamp[s]
author_association
string
type
null
active_lock_reason
null
draft
bool
pull_request
dict
body
string
closed_by
dict
reactions
dict
timeline_url
string
performed_via_github_app
null
state_reason
string
sub_issues_summary
dict
issue_dependencies_summary
dict
is_pull_request
bool
https://api.github.com/repos/huggingface/datasets/issues/7933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7933/comments
https://api.github.com/repos/huggingface/datasets/issues/7933/events
https://github.com/huggingface/datasets/pull/7933
3,780,607,384
PR_kwDODunzps67fNaP
7,933
feat: Add Apache TsFile format support
{ "login": "sinanshamsudheen", "id": 186699478, "node_id": "U_kgDOCyDO1g", "avatar_url": "https://avatars.githubusercontent.com/u/186699478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sinanshamsudheen", "html_url": "https://github.com/sinanshamsudheen", "followers_url": "https://api.github.com/users/sinanshamsudheen/followers", "following_url": "https://api.github.com/users/sinanshamsudheen/following{/other_user}", "gists_url": "https://api.github.com/users/sinanshamsudheen/gists{/gist_id}", "starred_url": "https://api.github.com/users/sinanshamsudheen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sinanshamsudheen/subscriptions", "organizations_url": "https://api.github.com/users/sinanshamsudheen/orgs", "repos_url": "https://api.github.com/users/sinanshamsudheen/repos", "events_url": "https://api.github.com/users/sinanshamsudheen/events{/privacy}", "received_events_url": "https://api.github.com/users/sinanshamsudheen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2026-01-05T08:28:12
2026-01-05T09:50:23
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7933", "html_url": "https://github.com/huggingface/datasets/pull/7933", "diff_url": "https://github.com/huggingface/datasets/pull/7933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7933.patch", "merged_at": null }
# Add Apache TsFile format support

Adds support for loading `.tsfile` datasets. Closes #7922.

## What's TsFile?

[Apache TsFile](https://tsfile.apache.org/) is a columnar time-series format popular in IoT. The TsFile community requested this integration and offered to help maintain it.

## What I did

Created a new `TsFile` builder in `packaged_modules/tsfile/` following the same pattern as HDF5. Registered the module and added the `.tsfile` extension mapping. Also added `tsfile>=2.0.0` as an optional dependency.

The builder uses `tsfile.to_dataframe()` with iterator mode for memory-efficient reading, then converts to PyArrow tables. The schema is inferred automatically from file metadata.

## Config options

- `batch_size` - rows per batch (default 10000)
- `table_name` - which table to read (for multi-table files)
- `columns` - filter specific columns
- `start_time` / `end_time` - time-range filtering

## Usage

```python
from datasets import load_dataset

ds = load_dataset("tsfile", data_files=["data.tsfile"], split="train")

# with filtering
ds = load_dataset("tsfile", data_files=["data.tsfile"],
                  columns=["temperature"], start_time=1609459200000)
```

## Tests

Added 11 tests covering config validation, basic loading, data integrity, feature inference, and error handling. All passing.
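As a rough illustration of the builder pattern this PR describes, the table-generation loop might look like the sketch below. This is a minimal sketch, not the PR's actual code; in particular, the exact `tsfile.to_dataframe(..., iterator=True, batch_size=...)` signature is an assumption based on the description above.

```python
# Hypothetical sketch of a TsFile builder's table generation, modeled on the
# HDF5-style packaged-module pattern. The `tsfile` API calls are assumptions.
import pyarrow as pa

def generate_tables(files, batch_size=10_000, columns=None):
    """Yield (key, pa.Table) pairs from .tsfile files, batch by batch."""
    import tsfile  # optional dependency, assumed to expose to_dataframe()

    for file_idx, path in enumerate(files):
        # Iterator mode (assumed) yields DataFrames of at most batch_size rows,
        # so a large file never has to be materialized in memory at once.
        for batch_idx, df in enumerate(
            tsfile.to_dataframe(path, iterator=True, batch_size=batch_size)
        ):
            if columns is not None:
                df = df[columns]
            yield f"{file_idx}_{batch_idx}", pa.Table.from_pandas(df)
```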
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7933/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7933/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7932/comments
https://api.github.com/repos/huggingface/datasets/issues/7932/events
https://github.com/huggingface/datasets/pull/7932
3,777,725,050
PR_kwDODunzps67WqhL
7,932
Fix duplicate keyword conflict in load_dataset_builder
{ "login": "Ashish570raj", "id": 110705207, "node_id": "U_kgDOBpk6Nw", "avatar_url": "https://avatars.githubusercontent.com/u/110705207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ashish570raj", "html_url": "https://github.com/Ashish570raj", "followers_url": "https://api.github.com/users/Ashish570raj/followers", "following_url": "https://api.github.com/users/Ashish570raj/following{/other_user}", "gists_url": "https://api.github.com/users/Ashish570raj/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ashish570raj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ashish570raj/subscriptions", "organizations_url": "https://api.github.com/users/Ashish570raj/orgs", "repos_url": "https://api.github.com/users/Ashish570raj/repos", "events_url": "https://api.github.com/users/Ashish570raj/events{/privacy}", "received_events_url": "https://api.github.com/users/Ashish570raj/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi HuggingFace team\r\nThis PR fixes issue #4910 by safely merging builder_kwargs and config_kwargs to avoid duplicate keyword errors. \r\nA regression test is included to ensure this does not happen again. \r\n\r\nPlease let me know if you’d like any changes. Thanks!\r\n" ]
2026-01-03T05:49:06
2026-01-03T05:52:02
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7932", "html_url": "https://github.com/huggingface/datasets/pull/7932", "diff_url": "https://github.com/huggingface/datasets/pull/7932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7932.patch", "merged_at": null }
Fixes #4910. This PR fixes a bug where passing the same keyword in `builder_kwargs` and `config_kwargs` caused a `TypeError` in `load_dataset_builder`. The kwargs are now merged safely so `config_kwargs` override `builder_kwargs` without duplication. A regression test is added to prevent this from happening again.
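For illustration, the safe merge the description implies might look like the following sketch (an assumption about the fix's shape, not the actual diff):

```python
# Hypothetical sketch: merge builder_kwargs and config_kwargs so that
# config-level values take precedence instead of raising a TypeError.
def merge_builder_kwargs(builder_kwargs: dict, config_kwargs: dict) -> dict:
    merged = dict(builder_kwargs)   # start from builder-level defaults
    merged.update(config_kwargs)    # config values override, no duplication
    return merged

# Before such a fix, a call like cls(**builder_kwargs, **config_kwargs) would
# raise "got multiple values for keyword argument" whenever keys overlapped.
```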
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7932/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7931/comments
https://api.github.com/repos/huggingface/datasets/issues/7931/events
https://github.com/huggingface/datasets/issues/7931
3,777,662,799
I_kwDODunzps7hKo9P
7,931
Enable CORS + HTTP Range support for browser partial reads on cas-bridge.xethub.hf.co (Parquet row-group access)
{ "login": "cornhundred", "id": 8352840, "node_id": "MDQ6VXNlcjgzNTI4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8352840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cornhundred", "html_url": "https://github.com/cornhundred", "followers_url": "https://api.github.com/users/cornhundred/followers", "following_url": "https://api.github.com/users/cornhundred/following{/other_user}", "gists_url": "https://api.github.com/users/cornhundred/gists{/gist_id}", "starred_url": "https://api.github.com/users/cornhundred/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cornhundred/subscriptions", "organizations_url": "https://api.github.com/users/cornhundred/orgs", "repos_url": "https://api.github.com/users/cornhundred/repos", "events_url": "https://api.github.com/users/cornhundred/events{/privacy}", "received_events_url": "https://api.github.com/users/cornhundred/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Cc @assafvayner maybe ?" ]
2026-01-03T04:23:54
2026-01-04T20:43:31
null
NONE
null
null
null
null
### Feature request

## Summary

Browser-based data tools need Range requests to read Parquet efficiently (footer + selected row groups). Downloads from the Hub redirect to cas-bridge.xethub.hf.co (Xet bridge). The redirected host fails CORS preflight for Range/HEAD workflows, blocking partial reads. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))

See example [Hugging Face dataset](https://huggingface.co/datasets/cornhundred/Xenium_V1_human_Pancreas_FFPE_outs_row_groups/tree/main/Xenium_V1_human_Pancreas_FFPE_outs_row_groups).

## Current behavior

Plain GET works via redirect. Range workflows fail with:

> "Response to preflight request doesn't pass access control check: It does not have HTTP ok status."

This blocks parquet-wasm and DuckDB-Wasm style readers, which rely on HEAD + Range or non-safelisted Range patterns. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))

## Expected behavior

- OPTIONS to the final redirected host returns 200/204 (no redirect) with appropriate CORS headers. Preflight responses must have an "ok" status. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
- GET with Range returns 206 Partial Content and includes CORS headers, plus exposes Content-Range, Accept-Ranges, and Content-Length so browser JS can consume them. ([MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Range))

## Proposed CORS headers (public, anonymous files)

For responses from cas-bridge.xethub.hf.co (and any sibling Xet bridge hosts):

### Preflight (OPTIONS)

```
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, HEAD, OPTIONS
Access-Control-Allow-Headers: Range, Content-Type   (or echo Access-Control-Request-Headers)
Access-Control-Max-Age: 86400                       (optional, reduces preflight spam)
```

### Actual (GET/HEAD, including 206)

```
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Range, Accept-Ranges, Content-Length
```

Ensure Accept-Ranges: bytes and Content-Range are present for range responses. ([MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges))

### Notes on credentials (optional)

If any endpoint requires credentials, the wildcard * cannot be used and the server must echo Origin and add Vary: Origin. ([MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Allow-Origin))

## Impact

This unblocks efficient browser analytics and visualization on HF-hosted datasets using Parquet row groups, DuckDB-Wasm, parquet-wasm, and similar tooling. The DuckDB-Wasm documentation explicitly notes that remote data access requires correct CORS on the hosting site. ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))

## High-quality references worth linking in the issue thread

- Hugging Face: redirect to cas-bridge.xethub.hf.co shown in the Xet migration blog ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
- Fetch/CORS: preflight must be an "ok" status (200/204) ([GitHub](https://github.com/whatwg/fetch/issues/1588))
- Fetch/CORS: redirect + preflight is a known sharp edge ([GitHub](https://github.com/whatwg/fetch/issues/204))
- MDN CORS guide: Range safelist caveat ([MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS))
- MDN Range header: single-range is safelisted, multi-range may preflight ([MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Range))
- MDN Expose-Headers: non-safelisted headers must be exposed ([MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Expose-Headers))
- DuckDB-Wasm: remote HTTPFS requires correct CORS ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
- DuckDB-Wasm issue: HEAD blocked by CORS breaks the pipeline ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
- pdf.js historical issues about Accept-Ranges/Content-Range exposure ([GitHub](https://github.com/mozilla/pdf.js/issues/3150))

## Summary

The request is standard: browser Parquet needs byte ranges. The redirect to cas-bridge.xethub.hf.co means CORS enforcement happens on the Xet bridge host. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet)) The fix requires that OPTIONS returns 200/204 with CORS headers, and that 206 responses include CORS plus exposed headers. ([GitHub](https://github.com/whatwg/fetch/issues/1588)) Similar failures exist across the pdf.js and DuckDB-Wasm ecosystems. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))

### Motivation

I would like to be able to read subsets of large Parquet files using range requests via the parquet-wasm library on the front end. This is being used as part of a spatial data visualization project: https://github.com/broadinstitute/celldega

### Your contribution

I would be happy to provide code to make front-end range requests as an example.
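To make the failure mode concrete, here is a small hedged probe of the preflight and range behavior described above, written in Python with the `requests` library; the dataset URL is a placeholder, not a real file.

```python
# Hypothetical probe for the CORS + Range behavior described in this issue.
# The URL is a placeholder; substitute any public Parquet file on the Hub.
import requests

url = "https://huggingface.co/datasets/USER/REPO/resolve/main/data.parquet"

# Follow the hf.co -> cas-bridge redirect to find the final host,
# then simulate the browser's preflight against it.
final = requests.get(url, allow_redirects=True).url
preflight = requests.options(
    final,
    headers={
        "Origin": "https://example.com",
        "Access-Control-Request-Method": "GET",
        "Access-Control-Request-Headers": "range",
    },
)
print(preflight.status_code)  # browsers require 200/204 here
print(preflight.headers.get("Access-Control-Allow-Origin"))

# A single-byte-range GET should return 206 with Content-Range exposed.
partial = requests.get(final, headers={"Range": "bytes=0-1023"})
print(partial.status_code)                   # expect 206 Partial Content
print(partial.headers.get("Content-Range"))  # e.g. "bytes 0-1023/12345"
```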
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7931/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7931/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7930/comments
https://api.github.com/repos/huggingface/datasets/issues/7930/events
https://github.com/huggingface/datasets/pull/7930
3,777,628,848
PR_kwDODunzps67WYwc
7,930
Proposal: Protein 3D Structure Visualization for Dataset Viewer
{ "login": "behroozazarkhalili", "id": 80390531, "node_id": "MDQ6VXNlcjgwMzkwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/behroozazarkhalili", "html_url": "https://github.com/behroozazarkhalili", "followers_url": "https://api.github.com/users/behroozazarkhalili/followers", "following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}", "gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}", "starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions", "organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs", "repos_url": "https://api.github.com/users/behroozazarkhalili/repos", "events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}", "received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "cc @georgia-hf - Following up on your question about protein visualization for the Dataset Viewer. This proposal recommends 3Dmol.js (~150KB gzipped) as a lightweight alternative to Mol* (~1.3MB gzipped).\n\nLooking forward to your feedback!", "Exciting ! cc @cfahlgren1 @severo for the Viewer part\r\n\r\nFor the `datasets` part I'll leave my feedbacks in the PRs :)", "I don't know the JS libraries, but indeed, the lighter the better, as we don't require advanced features.", "From a quick look at the PDB and mmCIF PRs I noticed that the dataset has one row = one atom. However I humbly believe that such datasets would be more practical to use if one row = one structure. This way each row is independent, which is practical in ML to perform train/test splits or dataset shuffling.\r\n\r\nThis would also make it easier to add labels and metadata for each structure, similar to what we already for images. E.g. you could group them per folder named after a label, or you can have a metadata.parquet file to add custom metadata per structure.\r\n\r\nAnd this way in the Viewer it could show one 3D render per row.\r\n\r\nWhat do you think ?", "@lhoestq @severo @georgia-hf I will be waiting for all your comments; then, I will start implementing the final plan. " ]
2026-01-03T03:30:01
2026-01-05T16:00:45
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7930", "html_url": "https://github.com/huggingface/datasets/pull/7930", "diff_url": "https://github.com/huggingface/datasets/pull/7930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7930.patch", "merged_at": null }
# Proposal: Protein 3D Structure Visualization for HuggingFace Dataset Viewer

## Executive Summary

This proposal outlines adding 3D protein structure visualization to the HuggingFace Dataset Viewer, enabling users to interactively view PDB and mmCIF molecular structures directly within the dataset preview interface.

---

## Data Type Support

**Supported formats** (from recent PRs):

- **PDB** (PR #7926): Legacy fixed-width format for 3D macromolecular structures
- **mmCIF** (PR #7925): Modern standard format with full crystallographic data

**What gets visualized**:

- 3D atomic coordinates (x, y, z)
- Chain structures
- Residue information
- Atom types and elements
- Secondary structure (helices, sheets)

**Not applicable** (1D sequence only):

- FASTA (PR #7923) - text sequences, no 3D coordinates
- FASTQ (PR #7924) - sequences with quality scores, no 3D coordinates

---

## Visualization Library Comparison

| Library | Bundle Size (minified) | Bundle Size (gzipped) | License | Pros | Cons |
|---------|------------------------|----------------------|---------|------|------|
| **3Dmol.js** | 512 KB | **~150 KB** | BSD-3 | Lightweight, easy integration, good docs | Fewer advanced features |
| **NGL Viewer** | 1.3 MB | ~350 KB | MIT | Excellent MMTF support, beautiful rendering | Moderate complexity |
| **Mol\*** | 4.6 MB | ~1.3 MB | MIT | Industry standard, used by RCSB PDB, feature-rich | Heavy, complex |
| **PDBe Molstar** | 5.8 MB | ~1.6 MB | Apache 2.0 | EMBL-EBI maintained, simpler Mol* wrapper | Still very heavy |

*Bundle sizes verified by downloading actual distribution files from npm/CDN (January 2026)*

---

## Recommendation: 3Dmol.js

**Primary choice**: 3Dmol.js

**Rationale**:

1. **Bundle size**: ~150 KB gzipped - the lightest option by far, ideal for lazy loading
2. **Simple API**: Easy to integrate with React/Next.js
3. **BSD-3 License**: Compatible with HuggingFace licensing
4. **Active maintenance**: Regular updates, good community support
5. **Format support**: Native PDB and mmCIF parsing built-in
6. **Sufficient features**: Rotation, zoom, style switching (cartoon, stick, sphere)

**Why not Mol\*?** As Georgia noted, Mol* is heavy (~1.3 MB gzipped). While it's the industry standard for RCSB PDB, it's overkill for a dataset preview where users just need to verify structure data looks correct.

**Alternative for power users**: If users need advanced features like density maps, ligand interactions, or sequence alignment overlay, consider PDBe Molstar as an optional "full viewer" mode.

---

## Architecture for Dataset Viewer Integration

### Lazy Loading Pattern (React/Next.js)

```javascript
// ProteinViewer.tsx
import dynamic from 'next/dynamic';

const Protein3DViewer = dynamic(
  () => import('./Protein3DViewerCore'),
  {
    ssr: false, // WebGL requires client-side only
    loading: () => <ProteinViewerSkeleton />
  }
);

export function ProteinViewer({ data, format }) {
  // Only render when PDB/mmCIF format detected
  if (!['pdb', 'mmcif', 'cif'].includes(format)) {
    return <SequenceViewer data={data} />;
  }
  return <Protein3DViewer structureData={data} format={format} />;
}
```

### Core Viewer Component (3Dmol.js)

```javascript
// Protein3DViewerCore.tsx
import { useEffect, useRef } from 'react';
import $3Dmol from '3dmol';

export default function Protein3DViewerCore({ structureData, format }) {
  const viewerRef = useRef(null);
  const containerRef = useRef(null);

  useEffect(() => {
    if (!containerRef.current) return;

    // Initialize viewer
    const viewer = $3Dmol.createViewer(containerRef.current, {
      backgroundColor: 'white',
      antialias: true,
    });
    viewerRef.current = viewer;

    // Add structure
    viewer.addModel(structureData, format);
    viewer.setStyle({}, { cartoon: { color: 'spectrum' } });
    viewer.zoomTo();
    viewer.render();

    return () => viewer.clear();
  }, [structureData, format]);

  return (
    <div
      ref={containerRef}
      style={{ width: '100%', height: '400px', position: 'relative' }}
    />
  );
}
```

---

## Integration Points in Dataset Viewer

### File Type Detection

```javascript
// Detect protein structure formats
const PROTEIN_3D_FORMATS = ['pdb', 'ent', 'cif', 'mmcif'];

function getViewerType(filename, datasetFeatures) {
  const ext = filename.split('.').pop().toLowerCase();
  if (PROTEIN_3D_FORMATS.includes(ext)) {
    return 'protein-3d';
  }
  // ... other format checks
}
```

### Data Flow

```
Dataset Row → Format Detection → Lazy Load Viewer → Render 3D Structure
                    ↓
      PDB/mmCIF text → 3Dmol.js parser → WebGL canvas → User interaction
```

---

## UI/UX Considerations

### Viewer Controls

- Rotate: Mouse drag
- Zoom: Scroll wheel
- Style toggle: Cartoon / Stick / Sphere / Surface
- Reset view button
- Full-screen toggle

### Style Dropdown Options

```javascript
const STYLE_OPTIONS = [
  { label: 'Cartoon (ribbon)', value: 'cartoon' },
  { label: 'Sticks', value: 'stick' },
  { label: 'Spheres (CPK)', value: 'sphere' },
  { label: 'Line', value: 'line' },
  { label: 'Surface', value: 'surface' },
];
```

### Loading State

- Skeleton placeholder (400px height)
- "Loading 3D viewer..." text
- Progressive: Show 2D preview while 3D loads

---

## Implementation Phases

### Phase 1: Basic Viewer (MVP)

- Add 3Dmol.js dependency (~150 KB gzipped)
- Create ProteinViewer component with lazy loading
- Support PDB format display
- Basic rotation/zoom controls
- Single style (cartoon)

### Phase 2: Enhanced Features

- mmCIF format support
- Style switching dropdown
- Full-screen mode
- Chain coloring options

### Phase 3: Advanced (Optional)

- Atom selection/highlighting
- Distance measurements
- Export snapshot as PNG
- Consider PDBe Molstar for power users

---

## Bundle Impact Analysis

**Without lazy loading**: +150 KB to initial bundle (acceptable but not ideal)

**With lazy loading**:

- Initial load: 0 KB additional
- On-demand: ~150 KB when viewing PDB/mmCIF
- Cached after first load

**Comparison with other viewers**:

| Viewer Type | Typical Bundle Size |
|-------------|---------------------|
| PDF viewer | ~500 KB |
| Audio player | ~50 KB |
| Image gallery | ~100 KB |
| **Protein 3D (3Dmol.js)** | **~150 KB** |

The protein viewer is comparable to other specialized viewers and well within acceptable limits for lazy-loaded content.

---

## Alternative Approach: CDN Loading

If bundle size is critical:

```javascript
// Load from CDN on-demand
const load3Dmol = async () => {
  if (window.$3Dmol) return window.$3Dmol;
  return new Promise((resolve) => {
    const script = document.createElement('script');
    script.src = 'https://3dmol.csb.pitt.edu/build/3Dmol-min.js';
    script.onload = () => resolve(window.$3Dmol);
    document.head.appendChild(script);
  });
};
```

**Pros**: Zero bundle impact
**Cons**: External dependency, potential availability issues

---

## Files to Modify (in dataset-viewer repo)

Since dataset-viewer is closed-source, this proposal should be shared with the HuggingFace team. They would need to:

1. `package.json` - Add 3dmol dependency
2. Create `components/viewers/ProteinViewer.tsx`
3. Create `components/viewers/Protein3DViewerCore.tsx`
4. Update viewer routing logic to detect PDB/mmCIF
5. Add viewer style controls component

---

## Summary

**Recommended approach**:

- Use **3Dmol.js** (~150 KB gzipped) with **lazy loading**
- Only loads when user views PDB/mmCIF datasets
- Simple integration, BSD-3 license, active community support

**Why 3Dmol.js over Mol\*?**

- 3Dmol.js: ~150 KB gzipped
- Mol*: ~1.3 MB gzipped (nearly 9x heavier)

**Key insight**: The PDB and mmCIF loaders we implemented (PRs #7925, #7926) extract the 3D coordinates needed for visualization. The viewer just needs to consume the raw file content.

---

## Next Steps

1. Get feedback on this proposal
2. Create proof-of-concept in a standalone demo if needed
3. Integrate into dataset-viewer once approach is approved
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7930/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7929/comments
https://api.github.com/repos/huggingface/datasets/issues/7929/events
https://github.com/huggingface/datasets/pull/7929
3,776,098,655
PR_kwDODunzps67Rayd
7,929
Raise early for invalid `revision` in `load_dataset`
{ "login": "Scott-Simmons", "id": 52365471, "node_id": "MDQ6VXNlcjUyMzY1NDcx", "avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Scott-Simmons", "html_url": "https://github.com/Scott-Simmons", "followers_url": "https://api.github.com/users/Scott-Simmons/followers", "following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}", "gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}", "starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions", "organizations_url": "https://api.github.com/users/Scott-Simmons/orgs", "repos_url": "https://api.github.com/users/Scott-Simmons/repos", "events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}", "received_events_url": "https://api.github.com/users/Scott-Simmons/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Passes\r\n```sh\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nFails\r\n```sh\r\ngit checkout cc2399019a3a547ebc31ec68a1ff99abd4ec93ce\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nRan `make test`, but failures look unrelated to the PR diff (same tests fail on `main` too)\r\n\r\n```sh\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[False] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[True] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[2-2] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[3-2] - TypeError: Passing coroutines is forbidden...\r\n= 4 failed, 3077 passed, 18 skipped, 491 warnings in 556.45s (0:09:16) =\r\nmake: *** [Makefile:20: test] Error 1\r\n```" ]
2026-01-02T10:40:49
2026-01-02T11:24:35
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7929", "html_url": "https://github.com/huggingface/datasets/pull/7929", "diff_url": "https://github.com/huggingface/datasets/pull/7929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7929.patch", "merged_at": null }
Solves https://github.com/huggingface/datasets/issues/7928.

Raise early for invalid revisions.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7929/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7928/comments
https://api.github.com/repos/huggingface/datasets/issues/7928/events
https://github.com/huggingface/datasets/issues/7928
3,775,842,185
I_kwDODunzps7hDseJ
7,928
`load_dataset` `revision` param not respected when fetching from cache
{ "login": "Scott-Simmons", "id": 52365471, "node_id": "MDQ6VXNlcjUyMzY1NDcx", "avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Scott-Simmons", "html_url": "https://github.com/Scott-Simmons", "followers_url": "https://api.github.com/users/Scott-Simmons/followers", "following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}", "gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}", "starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions", "organizations_url": "https://api.github.com/users/Scott-Simmons/orgs", "repos_url": "https://api.github.com/users/Scott-Simmons/repos", "events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}", "received_events_url": "https://api.github.com/users/Scott-Simmons/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "This might be better placed as a feature request not a bug, since the logging `Using the latest cached version of the dataset since sentientfutures/ahb couldn't be found on the Hugging Face Hub` is clear.", "https://github.com/huggingface/datasets/pull/7929" ]
2026-01-02T08:20:47
2026-01-02T11:25:08
null
NONE
null
null
null
null
### Describe the bug

`datasets.load_dataset` `revision` semantics are a bit inconsistent when the dataset is not found on the Hugging Face Hub. When fetching the latest cached version of the dataset, the `revision` argument is ignored, so long as any cached versions of the dataset already exist in the HF cache.

### Steps to reproduce the bug

```python
import datasets

datasets.load_dataset(
    "sentientfutures/ahb", "dimensions", split="train", revision="main"
)

# would expect some error to raise here
datasets.load_dataset(
    "sentientfutures/ahb", "dimensions", split="train", revision="invalid_revision"
)
```

### Expected behavior

On the second call to `datasets.load_dataset` in the 'steps to reproduce the bug' example, expect something like:

```sh
raise DatasetNotFoundError(
datasets.exceptions.DatasetNotFoundError: Revision 'invalid_revision' doesn't exist for dataset 'sentientfutures/ahb' on the Hub.
```

### Environment info

- `datasets` version: 4.4.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.37
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.9.0
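A minimal sketch of the early-validation behavior this issue asks for (and that PR #7929 implements) could use `huggingface_hub`'s `HfApi.dataset_info`, which accepts a `revision` parameter; where exactly such a check would live inside `load_dataset` is an assumption here.

```python
# Hypothetical sketch: validate the requested revision against the Hub before
# falling back to a cached copy, so an invalid revision fails loudly.
from huggingface_hub import HfApi
from huggingface_hub.utils import RevisionNotFoundError

def check_revision_exists(repo_id: str, revision: str | None) -> None:
    if revision is None:
        return
    try:
        HfApi().dataset_info(repo_id, revision=revision)
    except RevisionNotFoundError:
        raise ValueError(
            f"Revision '{revision}' doesn't exist for dataset '{repo_id}' on the Hub."
        )
```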
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7928/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7927/comments
https://api.github.com/repos/huggingface/datasets/issues/7927/events
https://github.com/huggingface/datasets/issues/7927
3,775,302,438
I_kwDODunzps7hBosm
7,927
Using Stateful Dataloader with Split Dataset By Node and DCP for DDP
{ "login": "conceptofmind", "id": 25208228, "node_id": "MDQ6VXNlcjI1MjA4MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conceptofmind", "html_url": "https://github.com/conceptofmind", "followers_url": "https://api.github.com/users/conceptofmind/followers", "following_url": "https://api.github.com/users/conceptofmind/following{/other_user}", "gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}", "starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions", "organizations_url": "https://api.github.com/users/conceptofmind/orgs", "repos_url": "https://api.github.com/users/conceptofmind/repos", "events_url": "https://api.github.com/users/conceptofmind/events{/privacy}", "received_events_url": "https://api.github.com/users/conceptofmind/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Does it need to be pickled?\n\n```python\n def load_state_dict(self, state_dict):\n hf_state = pickle.loads(state_dict[\"data\"])\n self.train_dataset.load_state_dict(hf_state)\n\n def state_dict(self):\n return {\"data\": pickle.dumps(self.train_dataset.state_dict())}\n```", "Pickling seems to have resolved the issue but it is not clear at all to me why this is necessary" ]
2026-01-01T22:27:07
2026-01-02T02:48:21
null
NONE
null
null
null
null
### Describe the bug

I am trying to determine how to save and load the Stateful Dataloader state with DCP and Split Dataset by Node for DDP. Currently, I am running into the issue where I am receiving a slow resume:

```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwarding your dataset by 5000 steps. For more efficient resumes, please implement `state_dict` and `load_state_dict` in your IterableDataset and/or iterator.
```

### Steps to reproduce the bug

Say we have a streaming dataset:

```python
class StreamingDataset(IterableDataset):
    def __init__(
        self,
        path: str,
        tokenizer: AutoTokenizer,
        name: Optional[str] = None,
        split: str = "train",
        max_length: int = 2048,
        ddp_rank: int = 0,
        ddp_world_size: int = 1,
    ):
        dataset = load_dataset(path, name, split=split, streaming=True)
        self.train_dataset = split_dataset_by_node(
            dataset=dataset, rank=ddp_rank, world_size=ddp_world_size
        )
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __iter__(self):
        for sample in iter(self.train_dataset):
            tokenized = self.tokenizer(
                sample["text"],
                padding="max_length",
                truncation=True,
                max_length=self.max_length,
                return_special_tokens_mask=True,
            )
            yield tokenized
```

We load that dataset into the Stateful Dataloader:

```python
trainloader = StatefulDataLoader(
    dataset=train_dataset,
    batch_size=args.batch_size,
    collate_fn=data_collator,
)
```

We then have code for checkpointing and resuming the state using DCP:

```python
import os
from typing import Optional

import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict

from blitzbert.utils import print_rank_0


class Checkpoint:
    def __init__(
        self,
        model: torch.nn.Module,
        optimizer: torch.optim.Optimizer,
        trainloader,
        step: Optional[int] = None,
        epoch: Optional[int] = None,
    ):
        self.model = model
        self.optimizer = optimizer
        self.trainloader = trainloader
        self.step = step
        self.epoch = epoch

    def get_state_dict(self) -> dict:
        model_state_dict, optimizer_state_dict = get_state_dict(
            self.model, self.optimizer
        )
        return {
            "model": model_state_dict,
            "optim": optimizer_state_dict,
            "trainloader": self.trainloader.state_dict(),
            "step": self.step,
            "epoch": self.epoch,
        }


def save_checkpoint(
    args,
    model,
    optimizer,
    trainloader,
    step: Optional[int] = None,
    epoch: Optional[int] = None,
    final_checkpoint: bool = False,
):
    checkpointer = Checkpoint(
        model=model,
        optimizer=optimizer,
        trainloader=trainloader,
        step=step,
        epoch=epoch,
    )
    state_dict = checkpointer.get_state_dict()
    if final_checkpoint:
        print_rank_0("Saving final model")
        save_path = os.path.join(args.checkpoint_dir, "final_model")
        dcp.save(state_dict, checkpoint_id=save_path)
        dist.barrier()
        single_file_path = os.path.join(args.checkpoint_dir, "final_checkpoint.pth")
        dcp_to_torch_save(save_path, single_file_path)
    else:
        if step % args.checkpointing_steps == 0 and step != 0:
            print_rank_0(f"Saving model at step: {step}")
            save_path = os.path.join(args.checkpoint_dir, f"epoch_{epoch}_step_{step}")
            dcp.save(state_dict, checkpoint_id=save_path)
            dist.barrier()


def load_checkpoint(args, model, optimizer, trainloader):
    if not args.resume_from_checkpoint:
        return 0, 0

    checkpoint_path = args.resume_from_checkpoint
    print_rank_0(f"Resumed from checkpoint: {checkpoint_path}")
    checkpointer = Checkpoint(
        model=model,
        optimizer=optimizer,
        trainloader=trainloader,
    )
    state_dict = checkpointer.get_state_dict()
    dcp.load(
        state_dict=state_dict,
        checkpoint_id=checkpoint_path,
    )
    set_state_dict(
        model,
        optimizer,
        model_state_dict=state_dict["model"],
        optim_state_dict=state_dict["optim"],
    )
    trainloader.load_state_dict(state_dict["trainloader"])
    step = state_dict["step"]
    epoch = state_dict["epoch"]
    return step, epoch
```

and then loading the checkpoint:

```python
completed_steps, current_epoch = load_checkpoint(
    args=args, model=model, optimizer=optimizer, trainloader=trainloader
)
```

### Expected behavior

If I implement what the warning says:

```python
def state_dict(self):
    return self.train_dataset.state_dict()

def load_state_dict(self, state):
    self.train_dataset.load_state_dict(state)
```

I then get:

```
[rank0]: raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
[rank0]: RuntimeError: Missing key in checkpoint state_dict: trainloader.dataset_state.examples_iterable.examples_iterable.previous_state.
```

How exactly should one be saving and resuming the Stateful Dataloader with Hugging Face datasets?

### Environment info

"datasets>=4.4.1",
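Following the pickle workaround quoted in this issue's comments, the wrapper dataset's state methods might look like the sketch below. Why pickling helps is an educated guess rather than a confirmed explanation: DCP appears to try to match every key of a nested state dict against the checkpoint, while a single pickled blob under one flat key sidesteps that key matching.

```python
# Methods to add to the StreamingDataset wrapper defined in the issue above.
# Serializing the nested HF dataset state into a single bytes blob gives DCP
# one stable flat key ("data") instead of a deeply nested dict of keys that
# can drift between save and resume.
import pickle

def state_dict(self):
    return {"data": pickle.dumps(self.train_dataset.state_dict())}

def load_state_dict(self, state_dict):
    hf_state = pickle.loads(state_dict["data"])
    self.train_dataset.load_state_dict(hf_state)
```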
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7927/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7926/comments
https://api.github.com/repos/huggingface/datasets/issues/7926/events
https://github.com/huggingface/datasets/pull/7926
3,773,696,472
PR_kwDODunzps67Jxxz
7,926
Add lightweight PDB (Protein Data Bank) file support
{ "login": "behroozazarkhalili", "id": 80390531, "node_id": "MDQ6VXNlcjgwMzkwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/behroozazarkhalili", "html_url": "https://github.com/behroozazarkhalili", "followers_url": "https://api.github.com/users/behroozazarkhalili/followers", "following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}", "gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}", "starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions", "organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs", "repos_url": "https://api.github.com/users/behroozazarkhalili/repos", "events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}", "received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-31T21:01:04
2025-12-31T21:01:04
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7926", "html_url": "https://github.com/huggingface/datasets/pull/7926", "diff_url": "https://github.com/huggingface/datasets/pull/7926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7926.patch", "merged_at": null }
## Summary

This PR adds support for loading PDB (Protein Data Bank) files directly with `load_dataset()`. PDB is the legacy fixed-width format for representing 3D macromolecular structures, widely used for historical datasets and still common in computational biology workflows.

### Key Features

- **Zero external dependencies** - Pure Python parser using fixed-width column positions per the official PDB specification
- **Record type filtering** - Load ATOM, HETATM, or both record types
- **Column selection** - Choose specific columns to reduce memory usage
- **Compression support** - Automatic detection of gzip, bzip2, and xz compressed files via magic bytes
- **Batch processing** - Configurable batch size for memory-efficient processing

### Columns (ATOM/HETATM records)

| Column | Type | Description |
|--------|------|-------------|
| `record_type` | string | ATOM or HETATM |
| `atom_serial` | int32 | Atom serial number |
| `atom_name` | string | Atom name (e.g., CA, N, C) |
| `residue_name` | string | Residue name (e.g., ALA, GLY) |
| `chain_id` | string | Chain identifier |
| `residue_seq` | int32 | Residue sequence number |
| `x`, `y`, `z` | float32 | Coordinates (Å) |
| `occupancy` | float32 | Occupancy factor |
| `temp_factor` | float32 | Temperature factor (B-factor) |
| `element` | string | Element symbol |

### Supported Extensions

`.pdb`, `.ent` (and compressed variants)

### Usage

```python
from datasets import load_dataset

# Load PDB file
dataset = load_dataset("pdb", data_files="structure.pdb")

# Load only ATOM records (exclude ligands/water)
dataset = load_dataset("pdb", data_files="structure.pdb", record_types=["ATOM"])

# Load specific columns
dataset = load_dataset("pdb", data_files="structure.pdb",
                       columns=["atom_name", "residue_name", "x", "y", "z"])
```

### Use Cases

- Legacy structure dataset processing
- Molecular dynamics trajectory analysis
- Structure-based ML training data
- Protein visualization data preparation

### References

- PDB format specification: https://www.wwpdb.org/documentation/file-format

### Test Results

All 24 tests pass:

- Basic loading, column filtering, record type filtering
- Gzip compression, multi-chain structures, alternate locations
- Charged atoms, batch sizes, schema types, feature casting
- Empty files, multiple files, insertion codes, negative coordinates

Part of the bioinformatics file format support series (FASTA #7923, FASTQ #7924, mmCIF #7925).

cc @georgia-hf
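To show what "fixed-width column positions per the official PDB specification" means in practice, here is a minimal illustrative parser for a single ATOM/HETATM line (not this PR's actual implementation; the slice indices follow the wwPDB spec's 1-based columns, noted in the comments):

```python
# Illustrative fixed-width parse of one ATOM/HETATM line from a PDB file.
def parse_atom_line(line: str) -> dict:
    return {
        "record_type": line[0:6].strip(),     # cols 1-6:   "ATOM" / "HETATM"
        "atom_serial": int(line[6:11]),       # cols 7-11:  serial number
        "atom_name": line[12:16].strip(),     # cols 13-16: atom name
        "residue_name": line[17:20].strip(),  # cols 18-20: residue name
        "chain_id": line[21].strip(),         # col 22:     chain identifier
        "residue_seq": int(line[22:26]),      # cols 23-26: residue sequence
        "x": float(line[30:38]),              # cols 31-38: X coordinate (Å)
        "y": float(line[38:46]),              # cols 39-46: Y coordinate (Å)
        "z": float(line[46:54]),              # cols 47-54: Z coordinate (Å)
        "occupancy": float(line[54:60]),      # cols 55-60: occupancy
        "temp_factor": float(line[60:66]),    # cols 61-66: B-factor
        "element": line[76:78].strip(),       # cols 77-78: element symbol
    }
```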
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7926/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7925/comments
https://api.github.com/repos/huggingface/datasets/issues/7925/events
https://github.com/huggingface/datasets/pull/7925
3,773,577,850
PR_kwDODunzps67JW3g
7,925
feat: Add mmCIF file support for macromolecular structures
{ "login": "behroozazarkhalili", "id": 80390531, "node_id": "MDQ6VXNlcjgwMzkwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/behroozazarkhalili", "html_url": "https://github.com/behroozazarkhalili", "followers_url": "https://api.github.com/users/behroozazarkhalili/followers", "following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}", "gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}", "starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions", "organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs", "repos_url": "https://api.github.com/users/behroozazarkhalili/repos", "events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}", "received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-31T20:11:32
2025-12-31T20:11:32
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7925", "html_url": "https://github.com/huggingface/datasets/pull/7925", "diff_url": "https://github.com/huggingface/datasets/pull/7925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7925.patch", "merged_at": null }
## Summary

This PR adds support for loading mmCIF (macromolecular Crystallographic Information File) files directly with `load_dataset()`. mmCIF is the modern standard format for representing 3D macromolecular structures, used by the Protein Data Bank (PDB) since 2014. This format is essential for machine learning applications in structural biology, drug discovery, and protein engineering.

### Key Features

- **Zero external dependencies**: Pure Python parser for CIF syntax (no BioPython or other heavy dependencies)
- **Streaming support**: Generator-based parsing for memory-efficient handling of large structure files
- **Compression support**: Automatic detection and handling of gzip, bzip2, and xz compressed files
- **ML-ready output**: Atomic coordinates structured for use in structure-based ML models (AlphaFold, graph neural networks, etc.)

### Default Columns (atom_site category)

| Column | Type | Description |
|--------|------|-------------|
| `id` | int | Atom serial number |
| `type_symbol` | string | Element symbol |
| `label_atom_id` | string | Atom name (e.g., CA, N, C) |
| `label_comp_id` | string | Residue name (e.g., ALA, GLY) |
| `label_asym_id` | string | Chain identifier |
| `label_seq_id` | int | Residue sequence number |
| `Cartn_x` | float | X coordinate (Å) |
| `Cartn_y` | float | Y coordinate (Å) |
| `Cartn_z` | float | Z coordinate (Å) |
| `occupancy` | float | Occupancy factor |
| `B_iso_or_equiv` | float | Temperature factor (B-factor) |

### Configuration Options

- `columns`: Select a subset of atom_site columns
- `include_hetatm`: Option to exclude ligand/water HETATM records (default: True)
- `batch_size`: Control atoms per batch for memory management (default: 100000)

### Supported Extensions

`.cif`, `.mmcif` (and gzip/bzip2/xz compressed variants)

### Usage

```python
from datasets import load_dataset

# Load mmCIF file
dataset = load_dataset("mmcif", data_files="structure.cif")

# Load with column filtering
dataset = load_dataset("mmcif", data_files="structure.cif",
                       columns=["label_atom_id", "Cartn_x", "Cartn_y", "Cartn_z"])

# Exclude HETATM records (ligands, water)
dataset = load_dataset("mmcif", data_files="structure.cif", include_hetatm=False)

# Load compressed file
dataset = load_dataset("mmcif", data_files="structure.cif.gz")
```

### Use Cases

- AlphaFold/structure prediction training data
- Protein-ligand interaction datasets
- Graph neural networks on molecular structures
- Structure-based drug discovery

### References

- mmCIF specification: https://mmcif.wwpdb.org/
- PDB archive: https://www.rcsb.org/
- Part of the bioinformatics file format support initiative (following #7923 FASTA and #7924 FASTQ)

cc @georgia-hf
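For intuition, a deliberately simplified sketch of pulling `atom_site` rows out of a CIF `loop_` follows; a real parser (like the one this PR describes) must also handle quoted tokens and multi-line values, which this sketch ignores.

```python
# Simplified illustration of parsing an mmCIF atom_site loop_. Not this PR's
# code: it skips quoted strings, semicolon text fields, and other CIF syntax.
def iter_atom_site_rows(lines):
    headers, in_loop = [], False
    for raw in lines:
        line = raw.strip()
        if line == "loop_":
            headers, in_loop = [], True          # a new loop starts
        elif in_loop and line.startswith("_atom_site."):
            headers.append(line.split(".", 1)[1])  # collect column names
        elif in_loop and headers and line and not line.startswith(("_", "#")):
            values = line.split()                 # one whitespace-split data row
            if len(values) == len(headers):
                yield dict(zip(headers, values))
        elif in_loop and headers and (not line or line.startswith("#")):
            in_loop = False                       # end of the atom_site loop
```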
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7925/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7924/comments
https://api.github.com/repos/huggingface/datasets/issues/7924/events
https://github.com/huggingface/datasets/pull/7924
3,773,509,771
PR_kwDODunzps67JHNF
7,924
Add lightweight FASTQ file format support
{ "login": "behroozazarkhalili", "id": 80390531, "node_id": "MDQ6VXNlcjgwMzkwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/behroozazarkhalili", "html_url": "https://github.com/behroozazarkhalili", "followers_url": "https://api.github.com/users/behroozazarkhalili/followers", "following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}", "gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}", "starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions", "organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs", "repos_url": "https://api.github.com/users/behroozazarkhalili/repos", "events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}", "received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-31T19:46:42
2025-12-31T19:49:41
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7924", "html_url": "https://github.com/huggingface/datasets/pull/7924", "diff_url": "https://github.com/huggingface/datasets/pull/7924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7924.patch", "merged_at": null }
## Summary

This PR adds support for loading FASTQ files directly with `load_dataset()`. FASTQ is an extension of FASTA that includes quality scores for each base, widely used for storing output from high-throughput sequencing instruments.

### Key Features

- **Zero external dependencies** - Pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Quality score support** - Preserves per-base quality scores as ASCII-encoded strings
- **Streaming support** - Generator-based parsing for memory efficiency with large NGS files
- **Compression support** - Automatic detection of gzip, bzip2, and xz compressed files
- **Large sequence support** - Uses `large_string` for both sequence and quality columns
- **Parquet-safe batching** - Dual-threshold batching (batch_size + max_batch_bytes) prevents page size errors

### Columns

| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `@`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The nucleotide sequence |
| `quality` | large_string | ASCII-encoded quality scores (Phred+33 by default) |

### Supported Extensions

`.fq`, `.fastq` (and compressed variants: `.fq.gz`, `.fastq.gz`, `.fq.bz2`, `.fq.xz`)

### Usage

```python
from datasets import load_dataset

# Load FASTQ file
dataset = load_dataset("fastq", data_files="reads.fastq")

# Load gzipped file
dataset = load_dataset("fastq", data_files="reads.fq.gz")

# Filter columns
dataset = load_dataset("fastq", data_files="reads.fq", columns=["sequence", "quality"])
```

### Quality Score Format

Quality scores use Sanger/Illumina 1.8+ encoding (Phred+33):

- ASCII character `!` (33) = quality 0
- ASCII character `I` (73) = quality 40

### Testing

- 22 comprehensive tests covering basic loading, multi-line sequences, compression, batching, schema types, and edge cases
- All tests passing
- Linting clean

### References

- Follows the pattern established in #7923 (FASTA support)
- Parser based on: https://github.com/lh3/readfq
- Addresses feedback from #7851

cc: @georgia-hf
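The Phred+33 encoding mentioned above maps each ASCII character to a quality score by subtracting 33; a two-line decoder makes this concrete:

```python
# Decoding Phred+33 (Sanger/Illumina 1.8+) quality strings: the ASCII code of
# each character minus 33 is the Phred score for the base at that position.
def phred33_to_scores(quality: str) -> list[int]:
    return [ord(c) - 33 for c in quality]

assert phred33_to_scores("!I") == [0, 40]  # '!' -> Q0, 'I' -> Q40
```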
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7924/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7923/comments
https://api.github.com/repos/huggingface/datasets/issues/7923/events
https://github.com/huggingface/datasets/pull/7923
3,773,472,998
PR_kwDODunzps67I-y3
7,923
feat(fasta): add lightweight FASTA file format support
{ "login": "behroozazarkhalili", "id": 80390531, "node_id": "MDQ6VXNlcjgwMzkwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/behroozazarkhalili", "html_url": "https://github.com/behroozazarkhalili", "followers_url": "https://api.github.com/users/behroozazarkhalili/followers", "following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}", "gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}", "starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions", "organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs", "repos_url": "https://api.github.com/users/behroozazarkhalili/repos", "events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}", "received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-31T19:33:00
2025-12-31T19:50:29
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7923", "html_url": "https://github.com/huggingface/datasets/pull/7923", "diff_url": "https://github.com/huggingface/datasets/pull/7923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7923.patch", "merged_at": null }
## Summary

This PR adds support for loading FASTA files directly with `load_dataset()`, addressing feedback from #7851. FASTA is a text-based format for representing nucleotide sequences (DNA/RNA) or peptide sequences (proteins), widely used in bioinformatics.

## Key Features

- **Zero external dependencies** - Uses a lightweight pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Streaming support** - Generator-based parsing for memory efficiency with large genomic files
- **Compression support** - Automatic detection and handling of gzip, bzip2, and xz compressed files via magic bytes
- **Large sequence support** - Uses the `large_string` Arrow type to handle viral genomes and long sequences (fixes UTF-8 overflow)
- **Adaptive batching** - `max_batch_bytes` parameter (default 256MB) prevents Parquet page size errors with very large sequences

## Technical Decisions (Addressing #7851 Feedback)

| Concern | Solution |
|---------|----------|
| Long sequences → UTF-8 overflow (@apcamargo, @UriNeri) | Uses `pa.large_string()` for the sequence column |
| BioPython is overkill (@apcamargo) | Pure Python parser based on Heng Li's readfq.py |
| Parquet page size limit i32::MAX (@UriNeri) | Adaptive dual-threshold batching with `max_batch_bytes` |

## Columns

| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `>`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The biological sequence (DNA/RNA/protein) |

## Supported Extensions

`.fa`, `.fasta`, `.fna`, `.ffn`, `.faa`, `.frn` (and compressed variants)

## Usage

```python
from datasets import load_dataset

# Load FASTA file
dataset = load_dataset("fasta", data_files="sequences.fasta")

# Load with column filtering
dataset = load_dataset("fasta", data_files="sequences.fa", columns=["id", "sequence"])

# Load gzipped file
dataset = load_dataset("fasta", data_files="sequences.fa.gz")

# Configure batching for very large genomes
dataset = load_dataset("fasta", data_files="genome.fasta",
                       max_batch_bytes=128*1024*1024)
```

## Test Plan

- [x] Basic FASTA loading (3 sequences, multi-line)
- [x] Multiple extension support (.fa, .fasta, .fna, .ffn, .faa, .frn)
- [x] Compression formats (gzip, bz2, xz)
- [x] Long sequences with `large_string` type
- [x] Column filtering
- [x] Batch size configuration
- [x] Byte-based batching (`max_batch_bytes`)
- [x] Large genome handling (simulated 50KB sequences)
- [x] Empty description handling
- [x] Multiple files loading
- [x] Custom feature casting

All 22 tests passing.

cc: @georgia-hf
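As a sketch of the readfq-style parsing approach the PR credits (not the PR's actual code), a minimal generator that yields one record per `>` header looks like this:

```python
# Minimal generator-based FASTA parser in the spirit of Heng Li's readfq.py:
# yields (id, description, sequence) per record, joining multi-line sequences.
def read_fasta(handle):
    name, desc, seq = None, "", []
    for line in handle:
        line = line.rstrip("\n")
        if line.startswith(">"):
            if name is not None:
                yield name, desc, "".join(seq)  # emit the previous record
            header = line[1:].split(maxsplit=1)
            name = header[0] if header else ""
            desc = header[1] if len(header) > 1 else ""
            seq = []
        elif line:
            seq.append(line)  # sequence may span many lines
    if name is not None:
        yield name, desc, "".join(seq)  # emit the final record
```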
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7923/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7922/comments
https://api.github.com/repos/huggingface/datasets/issues/7922/events
https://github.com/huggingface/datasets/issues/7922
3,772,247,021
I_kwDODunzps7g1-vt
7,922
Support Apache TsFile Datasets
{ "login": "qiaojialin", "id": 7240743, "node_id": "MDQ6VXNlcjcyNDA3NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/7240743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qiaojialin", "html_url": "https://github.com/qiaojialin", "followers_url": "https://api.github.com/users/qiaojialin/followers", "following_url": "https://api.github.com/users/qiaojialin/following{/other_user}", "gists_url": "https://api.github.com/users/qiaojialin/gists{/gist_id}", "starred_url": "https://api.github.com/users/qiaojialin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qiaojialin/subscriptions", "organizations_url": "https://api.github.com/users/qiaojialin/orgs", "repos_url": "https://api.github.com/users/qiaojialin/repos", "events_url": "https://api.github.com/users/qiaojialin/events{/privacy}", "received_events_url": "https://api.github.com/users/qiaojialin/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "A large quantity of industrial timeseries data has been stored as TsFile, and I have been constantly hearing about AI fellows complaining about the lack of data or the insufficiency of data quality.\n\nI like the ambition that uses TsFile as the bridge between AI research and industrial analysis requirements. This may help both sides improve their works with high-quality data and realtime data access.", "It will be so convenient to have such a method to directly load tsfile into memory for further analysis.", "Looking forward to see the tsfile become the part of the AI eco-systems.", "Looking forward to the support for TsFile format!", "Hey folks! I’ve added TsFile support by following the existing HDF5/Parquet patterns.\n\nThis includes:\n\nA TsFile builder with schema inference from file metadata\n\nTime-range filtering and column selection\n\nMemory-efficient reading using the tsfile library’s iterator API\n\n11 tests, all passing ✅\n\nI’ll be opening a PR shortly, would love any suggestions or feedback you might have!" ]
2025-12-31T08:07:51
2026-01-05T08:23:21
null
NONE
null
null
null
null
### Feature request I would love to use the Hugging Face datasets library to directly load datasets composed of .tsfile files, for example: `ds = load_dataset("username/dataset-with-tsfile-files")` This feature would allow researchers working on time-series tasks to seamlessly integrate datasets stored in the Apache TsFile format into the Hugging Face ecosystem. ### Motivation [Apache TsFile](https://tsfile.apache.org/) is a mature Apache project and a dedicated file format designed for efficient time-series data storage and retrieval. The repository is [here](https://github.com/apache/tsfile). It has been widely adopted in the IoT community and serves as the underlying storage format for projects like [Apache IoTDB](https://iotdb.apache.org/). Apache TsFile has the following advantages in the time-series area: - Time-series-native schema. Time-series data is organized by device and sensor IDs. - A complete multi-language API (Python, Java, C++, C) for reading and writing TsFile. - Superior write throughput and query efficiency. - High compression ratio through per-series encoding and compression schemes. - Efficient dataset transformation. ETL-free file compaction and efficient random access to time-series chunks, enabling faster data loading and lower query latency. These properties make TsFile highly suitable for time-series model training, especially where time-series random access and efficient I/O are critical. More details can be found in the paper “[Apache TsFile: An IoT-native Time Series File Format (VLDB 2024)](https://www.vldb.org/pvldb/vol17/p4064-song.pdf)”. Integrating TsFile support into datasets will benefit the broader machine learning community working on tasks such as forecasting and anomaly detection. ### Your contribution As a member of the TsFile community, I recently initiated a [proposal](https://lists.apache.org/thread/119vc9nh03dz4583cx9fwt83fp8v68vy) to integrate TsFile with Hugging Face, which has received enthusiastic responses from the community. We are willing to make the following contributions: - Implement and contribute the PR that adds TsFile dataset support to Hugging Face datasets. - Provide long-term maintenance for this integration. - Address any other needs for TsFile to support large-scale time-series datasets. We are excited to contribute and to participate continuously in the future evolution of TsFile and datasets to better support time-series workloads.
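To make the proposal concrete, here is a short sketch of how the integration could look from the user's side once a TsFile builder exists; the column names are illustrative, not from a real dataset:

```python
from datasets import load_dataset

# Hypothetical usage sketch for the proposed integration: a Hub repo
# containing .tsfile files would load like any other packaged format.
ds = load_dataset("username/dataset-with-tsfile-files", split="train")

# Each row would carry a timestamp plus one column per sensor, so standard
# time-series preprocessing applies directly (column names are illustrative).
for row in ds.select(range(3)):
    print(row["timestamp"], row["temperature"])
```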
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7922/reactions", "total_count": 24, "+1": 6, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 4, "rocket": 6, "eyes": 4 }
https://api.github.com/repos/huggingface/datasets/issues/7922/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7921/comments
https://api.github.com/repos/huggingface/datasets/issues/7921/events
https://github.com/huggingface/datasets/pull/7921
3,766,879,197
PR_kwDODunzps66zE_q
7,921
Add beginner-friendly quick installation verification tip in README
{ "login": "ashupaul2005-byte", "id": 237550974, "node_id": "U_kgDODii9fg", "avatar_url": "https://avatars.githubusercontent.com/u/237550974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashupaul2005-byte", "html_url": "https://github.com/ashupaul2005-byte", "followers_url": "https://api.github.com/users/ashupaul2005-byte/followers", "following_url": "https://api.github.com/users/ashupaul2005-byte/following{/other_user}", "gists_url": "https://api.github.com/users/ashupaul2005-byte/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashupaul2005-byte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashupaul2005-byte/subscriptions", "organizations_url": "https://api.github.com/users/ashupaul2005-byte/orgs", "repos_url": "https://api.github.com/users/ashupaul2005-byte/repos", "events_url": "https://api.github.com/users/ashupaul2005-byte/events{/privacy}", "received_events_url": "https://api.github.com/users/ashupaul2005-byte/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-29T09:22:27
2025-12-29T09:22:27
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7921", "html_url": "https://github.com/huggingface/datasets/pull/7921", "diff_url": "https://github.com/huggingface/datasets/pull/7921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7921.patch", "merged_at": null }
This PR adds a small beginner-friendly tip to help users quickly verify whether 🤗 Datasets is installed correctly by loading a simple dataset. This improves the onboarding experience for first-time users and reduces confusion for beginners.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7921/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7920/comments
https://api.github.com/repos/huggingface/datasets/issues/7920/events
https://github.com/huggingface/datasets/pull/7920
3,766,070,566
PR_kwDODunzps66wgLx
7,920
Add progress_format support for machine-readable progress output
{ "login": "podarok", "id": 563412, "node_id": "MDQ6VXNlcjU2MzQxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/563412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/podarok", "html_url": "https://github.com/podarok", "followers_url": "https://api.github.com/users/podarok/followers", "following_url": "https://api.github.com/users/podarok/following{/other_user}", "gists_url": "https://api.github.com/users/podarok/gists{/gist_id}", "starred_url": "https://api.github.com/users/podarok/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/podarok/subscriptions", "organizations_url": "https://api.github.com/users/podarok/orgs", "repos_url": "https://api.github.com/users/podarok/repos", "events_url": "https://api.github.com/users/podarok/events{/privacy}", "received_events_url": "https://api.github.com/users/podarok/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-28T22:35:24
2025-12-28T22:35:24
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7920", "html_url": "https://github.com/huggingface/datasets/pull/7920", "diff_url": "https://github.com/huggingface/datasets/pull/7920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7920.patch", "merged_at": null }
## Summary Adds a `progress_format` option to `datasets`, enabling machine-readable JSON progress output similar to [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921). ## Motivation When using `datasets` in automated pipelines or UI applications, it's useful to emit machine-readable progress instead of ANSI progress bars. This PR adds the same `progress_format` option that was implemented in tokenizers. ## Changes ### New Functions - `set_progress_format(format: str)`: Set global progress format - `get_progress_format() -> str`: Get current progress format ### Supported Formats 1. **"tqdm"** (default): Interactive progress bars 2. **"json"**: Machine-readable JSON lines to stderr 3. **"silent"**: No output ### JSON Format When `progress_format="json"`, a JSON line is emitted on every 5% progress change and on completion: ```json {"stage":"Processing","current":50,"total":100,"percent":50.0} ``` ## Usage Example ```python from datasets import load_dataset from datasets.utils import set_progress_format # Enable JSON output set_progress_format("json") # Progress will now be emitted as JSON lines dataset = load_dataset("Goader/kobza", split="train", streaming=True) for sample in dataset: process(sample) ``` ## Implementation Details - Suppresses visual output using `io.StringIO()` when format is "json" - Keeps progress tracking active (unlike `disable=True`) - Emits JSON to stderr every 5% progress change - Exports new functions from `datasets.utils` ## Cross-Reference This implementation mirrors the approach from: - [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921) ## Testing Tested with: ```python from datasets.utils import set_progress_format, tqdm set_progress_format('json') for i in tqdm(range(100), desc='Test'): process(i) # Outputs: {"stage":"Test","current":10,"total":100,"percent":10.0} ``` ## Checklist - [x] New functions added to `datasets.utils.tqdm` - [x] Functions exported from `datasets.utils.__init__` - [x] JSON format emits to stderr - [x] Visual output suppressed when format="json" - [x] Progress tracking remains active - [x] Cross-referenced with tokenizers#1921
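To illustrate the output contract independently of the PR's internals, here is a self-contained sketch that emits the same JSON-lines format; it demonstrates the described 5% emission behavior and is not the PR's code:

```python
import json
import sys


class JsonProgress:
    """Standalone sketch of the JSON progress contract described above: emit
    one JSON line to stderr on every 5% of progress and on completion. This
    only illustrates the documented output format, not the PR's internals."""

    def __init__(self, stage: str, total: int, step: float = 5.0):
        self.stage, self.total, self.step = stage, total, step
        self.last_percent = -step  # force an emission on the first update

    def update(self, current: int) -> None:
        percent = 100.0 * current / self.total
        if percent - self.last_percent >= self.step or current == self.total:
            self.last_percent = percent
            line = {"stage": self.stage, "current": current,
                    "total": self.total, "percent": round(percent, 1)}
            print(json.dumps(line, separators=(",", ":")), file=sys.stderr)


progress = JsonProgress("Processing", total=100)
for i in range(1, 101):
    progress.update(i)
```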
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7920/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7919/comments
https://api.github.com/repos/huggingface/datasets/issues/7919/events
https://github.com/huggingface/datasets/pull/7919
3,765,768,457
PR_kwDODunzps66vmQC
7,919
Fix load_from_disk progress bar with redirected stdout
{ "login": "omarfarhoud", "id": 118056245, "node_id": "U_kgDOBwllNQ", "avatar_url": "https://avatars.githubusercontent.com/u/118056245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarfarhoud", "html_url": "https://github.com/omarfarhoud", "followers_url": "https://api.github.com/users/omarfarhoud/followers", "following_url": "https://api.github.com/users/omarfarhoud/following{/other_user}", "gists_url": "https://api.github.com/users/omarfarhoud/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarfarhoud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarfarhoud/subscriptions", "organizations_url": "https://api.github.com/users/omarfarhoud/orgs", "repos_url": "https://api.github.com/users/omarfarhoud/repos", "events_url": "https://api.github.com/users/omarfarhoud/events{/privacy}", "received_events_url": "https://api.github.com/users/omarfarhoud/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "this seems to contradict the comment that says \r\n\r\n> set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n\r\nI believe the right approach is to do the same as in https://github.com/huggingface/huggingface_hub/pull/2698", "> this seems to contradict the comment that says\r\n> \r\n> > set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n> \r\n> I believe the right approach is to do the same as in [huggingface/huggingface_hub#2698](https://github.com/huggingface/huggingface_hub/pull/2698)\r\n\r\nUpdated to check TQDM_POSITION=-1 to force-enable progress bars in cloud environments, \r\nfollowing the same pattern as huggingface_hub#2698.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7919). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Moved the TQDM_POSITION check to the tqdm class in utils/tqdm.py so all progress bars \r\nin the codebase have consistent behavior. Thanks for the suggestion!", "@lhoestq thanks again for the suggestion. I’ve applied it and everything should now be consistent across all tqdm usage. Happy to adjust anything else if needed." ]
2025-12-28T15:39:31
2026-01-02T11:00:20
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7919", "html_url": "https://github.com/huggingface/datasets/pull/7919", "diff_url": "https://github.com/huggingface/datasets/pull/7919.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7919.patch", "merged_at": null }
Fixes #7918 ## Problem When using `load_from_disk()` with `contextlib.redirect_stdout()`, the progress bar was not showing even for datasets with >16 files. ## Root Cause The `disable` parameter was set to `None` which triggers TTY auto-detection. This fails when stdout is redirected, causing the progress bar to be hidden. ## Solution Changed `disable=len(state["_data_files"]) <= 16 or None` to `disable=len(state["_data_files"]) <= 16` to force the progress bar to show for datasets with >16 files, regardless of stdout redirection. ## Testing Verified that progress bars now appear correctly both with and without stdout redirection for datasets with >16 shards.
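For reference, a standalone sketch of the pattern discussed in the review thread (following huggingface/huggingface_hub#2698); the exact semantics of `TQDM_POSITION=-1` here are an assumption for illustration, not the merged behavior:

```python
import os


def progress_disable_flag(num_files: int, threshold: int = 16):
    """Sketch of the review-thread pattern (assumed semantics, following
    huggingface_hub#2698): TQDM_POSITION=-1 force-enables progress bars in
    non-TTY environments; otherwise fall back to tqdm's `disable=None` TTY
    auto-detection."""
    if num_files <= threshold:
        return True   # too few files to be worth a progress bar
    if os.environ.get("TQDM_POSITION") == "-1":
        return False  # force-enable, e.g. with redirected stdout or cloud logs
    return None       # let tqdm auto-detect a TTY


# Example: the `disable` argument for a bar over 32 shards.
print(progress_disable_flag(32))
```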
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7919/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7918/comments
https://api.github.com/repos/huggingface/datasets/issues/7918/events
https://github.com/huggingface/datasets/issues/7918
3,765,489,462
I_kwDODunzps7gcM82
7,918
datasets.load_from_disk doesn't show progress bar
{ "login": "Tommigun1980", "id": 60286968, "node_id": "MDQ6VXNlcjYwMjg2OTY4", "avatar_url": "https://avatars.githubusercontent.com/u/60286968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tommigun1980", "html_url": "https://github.com/Tommigun1980", "followers_url": "https://api.github.com/users/Tommigun1980/followers", "following_url": "https://api.github.com/users/Tommigun1980/following{/other_user}", "gists_url": "https://api.github.com/users/Tommigun1980/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tommigun1980/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tommigun1980/subscriptions", "organizations_url": "https://api.github.com/users/Tommigun1980/orgs", "repos_url": "https://api.github.com/users/Tommigun1980/repos", "events_url": "https://api.github.com/users/Tommigun1980/events{/privacy}", "received_events_url": "https://api.github.com/users/Tommigun1980/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "#self-assign" ]
2025-12-28T09:14:41
2025-12-28T15:07:01
null
NONE
null
null
null
null
### Describe the bug This is the inverse of the bug at [https://github.com/huggingface/datasets/issues/7030](https://github.com/huggingface/datasets/issues/7030), i.e. `datasets.load_from_disk(path)` displays no progress bar. My dataset has > 16 files in it. I am redirecting stdout as I capture the log; could this have something to do with it? All other progress bars work fine, though; only the HF dataset progress bars are affected. ### Steps to reproduce the bug ```py with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file): datasets.load_from_disk(path) ``` ### Expected behavior The progress bar should show when loading a dataset. ### Environment info Python 3.13.9 Datasets 4.4.1 macOS Tahoe 26.2
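A possible workaround sketch (not an official fix): tqdm writes its bars to stderr and, with `disable=None`, auto-disables when no TTY is attached, so redirecting only stdout and leaving stderr on the terminal may let the bar appear:

```python
import contextlib

import datasets

# Possible workaround sketch: capture print output in the log file while
# keeping stderr attached to the terminal, so tqdm's TTY detection succeeds
# and the bar is still drawn. Path is illustrative.
with open("run.log", "w") as log_file, contextlib.redirect_stdout(log_file):
    ds = datasets.load_from_disk("path/to/dataset")
```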
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7918/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7917/comments
https://api.github.com/repos/huggingface/datasets/issues/7917/events
https://github.com/huggingface/datasets/issues/7917
3,764,913,807
I_kwDODunzps7gaAaP
7,917
IterableDataset supports automatic sharding
{ "login": "howitry", "id": 61858900, "node_id": "MDQ6VXNlcjYxODU4OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/howitry", "html_url": "https://github.com/howitry", "followers_url": "https://api.github.com/users/howitry/followers", "following_url": "https://api.github.com/users/howitry/following{/other_user}", "gists_url": "https://api.github.com/users/howitry/gists{/gist_id}", "starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/howitry/subscriptions", "organizations_url": "https://api.github.com/users/howitry/orgs", "repos_url": "https://api.github.com/users/howitry/repos", "events_url": "https://api.github.com/users/howitry/events{/privacy}", "received_events_url": "https://api.github.com/users/howitry/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "You can already use `.shard()` instead like this:\n\n```python\ndataset = dataset.shard(index=rank, num_shards=world_size)\n```\n\nnote that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1", "> You can already use `.shard()` instead like this:\n> \n> dataset = dataset.shard(index=rank, num_shards=world_size)\n> note that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1\n\nThis means I have to ensure that the initial num_shards is greater than the number of GPUs I use each time, which seems inflexible. Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time? For example:\n```\ndataset = load_dataset(*, stream=True) # dataset.num_shards()=1\nnum_shards=world_size*dataloader_num_workers\ndataset = dataset.dynamically_shard(num_shards=num_shards, num_samples=num_samples) #We may need to know the total number of samples (num_samples) in advance.\n```\n\n", "> Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time?\n\nNo it's not possible without either\n\n1. doing data skipping, which degrades the data loading performance significantly (every node has to download the same data and skip most samples)\n2. or divide the original files further, which requires additional logic for every file format\n\nI would be interested in exploring 2 though, maybe if we start with Parquet support. Right now it fails because `ArrowExamplesIterable` doesn't know how to shard more than num_shards. We could have instead a `ReshardableArrowExamplesIterable` that would pass the right arguments to `_generate_tables()` in parquet.py to only read the data requested for a specific node", "> ReshardableArrowExamplesIterable\n\nOkay, my datasets are all on my local disk, so I haven't considered the overhead of data download. Are there any tutorials on creating custom iterable datasets? For example, a custom `iterabledataset.__iter__` function can be used to skip data, and it can inherit operations like `iterabledataset.map`." ]
2025-12-27T16:48:29
2025-12-29T16:06:52
null
NONE
null
null
null
null
### Feature request Add sharding support to the streaming IterableDataset, allowing users to adjust the number of shards to match their training resources. For example: ``` dataset = load_dataset(*, streaming=True) dataset = dataset.shard(num_shards=num_shards, num_samples=num_samples) # We may need to know the total number of samples (num_samples) in advance. ``` ### Motivation When performing large-scale pre-training in a distributed environment, large datasets may only be loadable in a streaming manner. To improve training efficiency, my current approach is as follows: ``` file_type = "parquet" dataset_path = "./*.parquet" dataset = load_dataset(file_type, data_files=dataset_path, streaming=True) dataset = split_dataset_by_node(dataset, rank=rank, world_size=world_size) ``` I split a large file into N = world_size * dataloader_num_workers files and placed them under dataset_path. This ensures that each GPU processes different shards. However, this approach has some issues: if the number of GPUs used to train the model changes next time, I need to split the large file again to ensure that IterableDataset.num_shards = world_size * dataloader_num_workers. I'd like to know if there's a better approach, such as directly loading the large dataset in a streaming manner and then sharding the IterableDataset based on the number of GPUs and num_workers, similar to Example 1 of https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset @lhoestq
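For reference, a sketch of the `.shard()` workaround suggested in the discussion; it assumes the streaming dataset already exposes at least `world_size` file shards:

```python
from datasets import load_dataset

# Sketch of the workaround from the discussion: shard by rank, assuming
# num_shards >= world_size (nodes may get the same number of shards +/- 1).
rank, world_size = 0, 8
ds = load_dataset("parquet", data_files="./*.parquet", streaming=True, split="train")
assert ds.num_shards >= world_size, "need at least one file shard per node"
ds = ds.shard(num_shards=world_size, index=rank)
```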
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7917/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7916/comments
https://api.github.com/repos/huggingface/datasets/issues/7916/events
https://github.com/huggingface/datasets/issues/7916
3,764,901,707
I_kwDODunzps7gZ9dL
7,916
No description provided.
{ "login": "howitry", "id": 61858900, "node_id": "MDQ6VXNlcjYxODU4OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/howitry", "html_url": "https://github.com/howitry", "followers_url": "https://api.github.com/users/howitry/followers", "following_url": "https://api.github.com/users/howitry/following{/other_user}", "gists_url": "https://api.github.com/users/howitry/gists{/gist_id}", "starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/howitry/subscriptions", "organizations_url": "https://api.github.com/users/howitry/orgs", "repos_url": "https://api.github.com/users/howitry/repos", "events_url": "https://api.github.com/users/howitry/events{/privacy}", "received_events_url": "https://api.github.com/users/howitry/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[]
2025-12-27T16:33:11
2025-12-27T16:45:22
2025-12-27T16:45:22
NONE
null
null
null
null
null
{ "login": "howitry", "id": 61858900, "node_id": "MDQ6VXNlcjYxODU4OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/howitry", "html_url": "https://github.com/howitry", "followers_url": "https://api.github.com/users/howitry/followers", "following_url": "https://api.github.com/users/howitry/following{/other_user}", "gists_url": "https://api.github.com/users/howitry/gists{/gist_id}", "starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/howitry/subscriptions", "organizations_url": "https://api.github.com/users/howitry/orgs", "repos_url": "https://api.github.com/users/howitry/repos", "events_url": "https://api.github.com/users/howitry/events{/privacy}", "received_events_url": "https://api.github.com/users/howitry/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7916/timeline
null
completed
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7915/comments
https://api.github.com/repos/huggingface/datasets/issues/7915/events
https://github.com/huggingface/datasets/issues/7915
3,762,042,396
I_kwDODunzps7gPDYc
7,915
GDPval dataset Word docs corrupted
{ "login": "alexheat", "id": 12248575, "node_id": "MDQ6VXNlcjEyMjQ4NTc1", "avatar_url": "https://avatars.githubusercontent.com/u/12248575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexheat", "html_url": "https://github.com/alexheat", "followers_url": "https://api.github.com/users/alexheat/followers", "following_url": "https://api.github.com/users/alexheat/following{/other_user}", "gists_url": "https://api.github.com/users/alexheat/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexheat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexheat/subscriptions", "organizations_url": "https://api.github.com/users/alexheat/orgs", "repos_url": "https://api.github.com/users/alexheat/repos", "events_url": "https://api.github.com/users/alexheat/events{/privacy}", "received_events_url": "https://api.github.com/users/alexheat/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "tentatively tagging @simonpfish ^\n\n(if it's an option you could enable PRs/Discussions on your dataset on HF)" ]
2025-12-25T13:56:55
2025-12-26T09:06:13
null
NONE
null
null
null
null
The [openai/gdpval](https://huggingface.co/datasets/openai/gdpval) dataset on Hugging Face contains Word .docx files with two types of corruption that cause Microsoft Word to display an "unreadable content" error. ### Root Causes 1. **Corrupted settings.xml**: The `word/settings.xml` file uses incorrect namespace prefixes (`ns0:`, `ns1:`, etc.) instead of the proper prefixes (`w:`, `mc:`, `m:`, etc.) 2. **Malformed TargetMode attributes**: Some files have `TargetMode="External"` attributes missing their closing `/>` tag in hyperlink relationships Both issues cause Word to reject the files even though the XML structure is technically valid. I have a fix for the issue here https://github.com/alexheat/gdpval-docx-fix
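For anyone triaging affected files, here is a small illustrative check (not the linked fix) for the first symptom; a `.docx` is a zip archive, so the settings part can be inspected directly:

```python
import re
import zipfile


# Illustrative check only (see the linked repo for the actual fix): flag a
# .docx whose word/settings.xml uses the generic ns0:/ns1: prefixes described
# above instead of the expected w:/mc:/m: prefixes.
def has_bad_namespace_prefixes(docx_path: str) -> bool:
    with zipfile.ZipFile(docx_path) as zf:
        settings = zf.read("word/settings.xml").decode("utf-8")
    return re.search(r"<ns\d+:", settings) is not None


print(has_bad_namespace_prefixes("sample.docx"))  # path is illustrative
```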
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7915/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7914/comments
https://api.github.com/repos/huggingface/datasets/issues/7914/events
https://github.com/huggingface/datasets/issues/7914
3,760,894,100
I_kwDODunzps7gKrCU
7,914
[ROCm] please install 'torchcodec'
{ "login": "AndreasKaratzas", "id": 42451412, "node_id": "MDQ6VXNlcjQyNDUxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/42451412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreasKaratzas", "html_url": "https://github.com/AndreasKaratzas", "followers_url": "https://api.github.com/users/AndreasKaratzas/followers", "following_url": "https://api.github.com/users/AndreasKaratzas/following{/other_user}", "gists_url": "https://api.github.com/users/AndreasKaratzas/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreasKaratzas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreasKaratzas/subscriptions", "organizations_url": "https://api.github.com/users/AndreasKaratzas/orgs", "repos_url": "https://api.github.com/users/AndreasKaratzas/repos", "events_url": "https://api.github.com/users/AndreasKaratzas/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreasKaratzas/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "I was able to install torchcodec by building it from source and have put together a PR: https://github.com/vllm-project/vllm/pull/31323\n\nStill I think it would make this framework more robust to add at least one fallback lib (that is more widely used) in place should torchcodec installation fail or library is not found." ]
2025-12-24T19:39:17
2025-12-28T07:25:42
null
NONE
null
null
null
null
### Describe the bug The Datasets library is widely used by many Python packages. Naturally, it is a requirement on many platforms. This includes vLLM for ROCm. During audio dataset tests, an exception is triggered: ```python def decode_example( self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None ) -> "AudioDecoder": """Decode example audio file into audio data. Args: value (`dict`): A dictionary with keys: - `path`: String with relative audio file path. - `bytes`: Bytes of the audio file. token_per_repo_id (`dict`, *optional*): To access and decode audio files from private repositories on the Hub, you can pass a dictionary repo_id (`str`) -> token (`bool` or `str`) Returns: `torchcodec.decoders.AudioDecoder` """ if config.TORCHCODEC_AVAILABLE: from ._torchcodec import AudioDecoder else: > raise ImportError("To support decoding audio data, please install 'torchcodec'.") E ImportError: To support decoding audio data, please install 'torchcodec'. ``` At the same time, `torchcodec` cannot be installed on ROCm, because its GPU acceleration uses NVIDIA's NVDEC (hardware decoder), which is NVIDIA-specific. Therefore, code paths that call this block trigger errors on ROCm. Can you add an alternative package there as a fallback instead of an ImportError? ### Steps to reproduce the bug On a machine with MI300/MI325/MI355: ```bash pytest -s -v tests/entrypoints/openai/correctness/test_transcription_api_correctness.py::test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3] ``` ### Expected behavior ```log _________________________________________________ test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3] ________________________________________ model_name = 'openai/whisper-large-v3', dataset_repo = 'D4nt3/esb-datasets-earnings22-validation-tiny-filtered', expected_wer = 12.74498, n_examples = -1, max_concurrent_request = None @pytest.mark.parametrize("model_name", ["openai/whisper-large-v3"]) # Original dataset is 20GB+ in size, hence we use a pre-filtered slice. # NOTE: Expected WER measured with equivalent hf.transformers args: # whisper-large-v3 + esb-datasets-earnings22-validation-tiny-filtered. 
@pytest.mark.parametrize("expected_wer", [12.744980]) def test_wer_correctness( model_name, dataset_repo, expected_wer, n_examples=-1, max_concurrent_request=None ): # TODO refactor to use `ASRDataset` with RemoteOpenAIServer(model_name, ["--enforce-eager"]) as remote_server: > dataset = load_hf_dataset(dataset_repo) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:160: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:111: in load_hf_dataset if "duration_ms" not in dataset[0]: ^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2876: in __getitem__ return self._getitem(key) ^^^^^^^^^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2858: in _getitem formatted_output = format_table( /usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:658: in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:411: in __call__ return self.format_row(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:460: in format_row row = self.python_features_decoder.decode_row(row) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:224: in decode_row return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/features/features.py:2111: in decode_example column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ /usr/local/lib/python3.12/dist-packages/datasets/features/features.py:1419: in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` ### Environment info - `datasets` version: 4.4.2 - Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35 - Python version: 3.12.12 - `huggingface_hub` version: 0.36.0 - PyArrow version: 22.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.10.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7914/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7913/comments
https://api.github.com/repos/huggingface/datasets/issues/7913/events
https://github.com/huggingface/datasets/pull/7913
3,758,884,376
PR_kwDODunzps66aEsF
7,913
Add lance format support
{ "login": "eddyxu", "id": 17097, "node_id": "MDQ6VXNlcjE3MDk3", "avatar_url": "https://avatars.githubusercontent.com/u/17097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eddyxu", "html_url": "https://github.com/eddyxu", "followers_url": "https://api.github.com/users/eddyxu/followers", "following_url": "https://api.github.com/users/eddyxu/following{/other_user}", "gists_url": "https://api.github.com/users/eddyxu/gists{/gist_id}", "starred_url": "https://api.github.com/users/eddyxu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eddyxu/subscriptions", "organizations_url": "https://api.github.com/users/eddyxu/orgs", "repos_url": "https://api.github.com/users/eddyxu/repos", "events_url": "https://api.github.com/users/eddyxu/events{/privacy}", "received_events_url": "https://api.github.com/users/eddyxu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Mentioned https://github.com/huggingface/datasets/issues/7863 as well", "@pdames for vis", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7913). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Cool ! I notice the current implementation doesn't support streaming because of the symlink hack.\r\n\r\nI believe you can do something like this instead:\r\n\r\n```python\r\ndef _generate_tables(self, paths: list[str]):\r\n for path in paths:\r\n ds = lance.dataset(path)\r\n for frag_idx, fragment in enumerate(ds.get_fragments()):\r\n for batch_idx, batch in enumerate(\r\n fragment.to_batches(columns=self.config.columns, batch_size=self.config.batch_size)\r\n ):\r\n table = pa.Table.from_batches([batch])\r\n table = self._cast_table(table)\r\n yield Key(frag_idx, batch_idx), table\r\n```\r\n\r\nnote that path can be a local one, but also a `hf://` URI", "@lhoestq Take another look? " ]
2025-12-24T00:52:20
2026-01-06T07:01:43
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7913", "html_url": "https://github.com/huggingface/datasets/pull/7913", "diff_url": "https://github.com/huggingface/datasets/pull/7913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7913.patch", "merged_at": null }
Add lance format as one of the `packaged_modules`. ```py import datasets ds = datasets.load_dataset("org/lance_repo", split="train") # Or ds = datasets.load_dataset("./local/data.lance") ```
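For readers unfamiliar with the format, here is a small sketch using the `lance` Python package (pylance) to produce a local `.lance` dataset that the new packaged module could then load; the schema is made up for illustration:

```python
import lance
import pyarrow as pa

# Write a tiny Lance dataset locally (illustrative schema), then read it back
# with the lance API. The new packaged module would expose the same data via
# datasets.load_dataset("./data.lance").
table = pa.table({"id": [1, 2, 3], "text": ["a", "b", "c"]})
lance.write_dataset(table, "data.lance")

ds = lance.dataset("data.lance")
print(ds.to_table().num_rows)
```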
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7913/reactions", "total_count": 5, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7913/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7912/comments
https://api.github.com/repos/huggingface/datasets/issues/7912/events
https://github.com/huggingface/datasets/pull/7912
3,755,023,829
PR_kwDODunzps66NQzG
7,912
Fix IndexError for few but large examples
{ "login": "CloseChoice", "id": 31857876, "node_id": "MDQ6VXNlcjMxODU3ODc2", "avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CloseChoice", "html_url": "https://github.com/CloseChoice", "followers_url": "https://api.github.com/users/CloseChoice/followers", "following_url": "https://api.github.com/users/CloseChoice/following{/other_user}", "gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}", "starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions", "organizations_url": "https://api.github.com/users/CloseChoice/orgs", "repos_url": "https://api.github.com/users/CloseChoice/repos", "events_url": "https://api.github.com/users/CloseChoice/events{/privacy}", "received_events_url": "https://api.github.com/users/CloseChoice/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-22T19:53:59
2025-12-22T19:53:59
null
CONTRIBUTOR
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7912", "html_url": "https://github.com/huggingface/datasets/pull/7912", "diff_url": "https://github.com/huggingface/datasets/pull/7912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7912.patch", "merged_at": null }
Fixes #7911. This PR implements the approach outlined in the corresponding issue: when examples are large, the number of shards should never exceed the number of examples. This is an absolute edge case, but it can happen with image data.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7912/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7911/comments
https://api.github.com/repos/huggingface/datasets/issues/7911/events
https://github.com/huggingface/datasets/issues/7911
3,753,447,559
I_kwDODunzps7fuRCH
7,911
IndexError when saving few large examples to disk
{ "login": "CloseChoice", "id": 31857876, "node_id": "MDQ6VXNlcjMxODU3ODc2", "avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CloseChoice", "html_url": "https://github.com/CloseChoice", "followers_url": "https://api.github.com/users/CloseChoice/followers", "following_url": "https://api.github.com/users/CloseChoice/following{/other_user}", "gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}", "starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions", "organizations_url": "https://api.github.com/users/CloseChoice/orgs", "repos_url": "https://api.github.com/users/CloseChoice/repos", "events_url": "https://api.github.com/users/CloseChoice/events{/privacy}", "received_events_url": "https://api.github.com/users/CloseChoice/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-22T11:33:19
2025-12-22T11:33:19
null
CONTRIBUTOR
null
null
null
null
### Describe the bug I ran into this issue when processing a file (900MB) with just one example; I simplified it to a quicker reproducer below. The problem is that, if `num_shards` is not explicitly set, we calculate it manually using https://github.com/huggingface/datasets/blob/main/src/datasets/utils/py_utils.py#L96 with the default `config.MAX_SHARD_SIZE`, which is 500MB. If a single example is larger than this, we run into an IndexError because more shards are requested than there are examples. An easy workaround is: `dataset.save_to_disk(output_path, max_shard_size="1GB")` or `dataset.save_to_disk(output_path, num_shards=1)`. I believe this should be fixed, since it can happen in edge cases for image data, especially when just testing single partitions. The fix would be rather easy: just use `num_shards = min(num_examples, <previously_calculated_num_shards>)`. ### Steps to reproduce the bug ```python from datasets import Dataset target_size = 2 * 1024 * 1024 # 2 MB in bytes base_text = ( "This is a sample sentence that will be repeated many times to create a large dataset. " * 100 ) large_text = "" while len(large_text.encode("utf-8")) < target_size: large_text += base_text actual_size = len(large_text.encode("utf-8")) size_mb = actual_size / (1024 * 1024) data = {"text": [large_text], "label": [0], "id": [1]} dataset = Dataset.from_dict(data) output_path = "./sample_dataset" # make sure this is split into 2 shards dataset.save_to_disk(output_path, max_shard_size="1MB") ``` this results in: ```bash Saving the dataset (1/3 shards): 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 162.96 examples/s] Traceback (most recent call last): File "/home/tpitters/programming/toy-mmu/create_dataset.py", line 27, in <module> dataset.save_to_disk(output_path, max_shard_size="1MB") ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1640, in save_to_disk for kwargs in kwargs_per_job: ^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1617, in <genexpr> "shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True), ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4987, in shard return self.select( ~~~~~~~~~~~^ indices=indices, ^^^^^^^^^^^^^^^^ ...<2 lines>... 
writer_batch_size=writer_batch_size, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ~~~~^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper out = func(dataset, *args, **kwargs) File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4104, in select return self._select_contiguous(start, length, new_fingerprint=new_fingerprint) ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ~~~~^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper out = func(dataset, *args, **kwargs) File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4164, in _select_contiguous _check_valid_indices_value(start, len(self)) ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^ File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 624, in _check_valid_indices_value raise IndexError(f"Index {index} out of range for dataset of size {size}.") IndexError: Index 1 out of range for dataset of size 1. ``` ### Expected behavior should pass ### Environment info datasets==4.4.2
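A minimal sketch of the proposed clamp, using the numbers from the reproducer; variable names are illustrative, not the library's internals:

```python
# Minimal sketch of the proposed fix: never request more shards than there
# are examples. Names and numbers are illustrative, not library internals.
dataset_nbytes = 2 * 1024 * 1024   # one ~2 MB example, as in the reproducer
max_shard_size = 1 * 1024 * 1024   # "1MB"
num_examples = 1

num_shards = int(dataset_nbytes / max_shard_size) + 1   # -> 3 without the fix
num_shards = min(num_shards, num_examples)              # -> 1 with the clamp
print(num_shards)
```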
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7911/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7910/comments
https://api.github.com/repos/huggingface/datasets/issues/7910/events
https://github.com/huggingface/datasets/pull/7910
3,749,894,414
PR_kwDODunzps658oGv
7,910
Enhance cast_column() with cast_kwargs parameter
{ "login": "Moenupa", "id": 49304833, "node_id": "MDQ6VXNlcjQ5MzA0ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Moenupa", "html_url": "https://github.com/Moenupa", "followers_url": "https://api.github.com/users/Moenupa/followers", "following_url": "https://api.github.com/users/Moenupa/following{/other_user}", "gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}", "starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions", "organizations_url": "https://api.github.com/users/Moenupa/orgs", "repos_url": "https://api.github.com/users/Moenupa/repos", "events_url": "https://api.github.com/users/Moenupa/events{/privacy}", "received_events_url": "https://api.github.com/users/Moenupa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-20T10:09:11
2025-12-20T10:09:11
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7910", "html_url": "https://github.com/huggingface/datasets/pull/7910", "diff_url": "https://github.com/huggingface/datasets/pull/7910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7910.patch", "merged_at": null }
Fixes #7909.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7910/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7909/comments
https://api.github.com/repos/huggingface/datasets/issues/7909/events
https://github.com/huggingface/datasets/issues/7909
3,749,885,131
I_kwDODunzps7fgrTL
7,909
Support cast_kwargs in cast_columns
{ "login": "Moenupa", "id": 49304833, "node_id": "MDQ6VXNlcjQ5MzA0ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Moenupa", "html_url": "https://github.com/Moenupa", "followers_url": "https://api.github.com/users/Moenupa/followers", "following_url": "https://api.github.com/users/Moenupa/following{/other_user}", "gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}", "starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions", "organizations_url": "https://api.github.com/users/Moenupa/orgs", "repos_url": "https://api.github.com/users/Moenupa/repos", "events_url": "https://api.github.com/users/Moenupa/events{/privacy}", "received_events_url": "https://api.github.com/users/Moenupa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2025-12-20T10:02:07
2025-12-20T10:28:01
null
NONE
null
null
null
null
### Feature request expose `cast(**cast_kwargs)` to `cast_column()` https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2205 ### Motivation `cast_column()` wraps the `cast()` function without exposing any of its arguments. For large multi-modal datasets, e.g. ```py # a dataset with list[{"bytes"}: b'', ...], much more than one image load_dataset("MLLM-CL/VTCBench").cast_column("images", List(Image(decode=False))) ``` This would fail due to #6206, #7167, where the default value `1000` for batch size in `cast()` is too large and causes `pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays`. https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2164-L2205 ### Your contribution #7910
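Until such a parameter lands, a workaround sketch is to call `cast()` directly with a smaller `batch_size`; the split name is assumed for illustration:

```python
from datasets import Image, List, load_dataset

# Workaround sketch until cast_kwargs is exposed: build the target features
# and call cast() directly, which already accepts batch_size.
ds = load_dataset("MLLM-CL/VTCBench", split="train")  # split name assumed

features = ds.features.copy()
features["images"] = List(Image(decode=False))
ds = ds.cast(features, batch_size=100)  # small batches avoid offset overflow
```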
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7909/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7908/comments
https://api.github.com/repos/huggingface/datasets/issues/7908/events
https://github.com/huggingface/datasets/pull/7908
3,747,829,610
PR_kwDODunzps651xlf
7,908
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7908). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-12-19T15:06:21
2025-12-19T15:11:05
2025-12-19T15:06:29
MEMBER
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7908", "html_url": "https://github.com/huggingface/datasets/pull/7908", "diff_url": "https://github.com/huggingface/datasets/pull/7908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7908.patch", "merged_at": "2025-12-19T15:06:29" }
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7908/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7907/comments
https://api.github.com/repos/huggingface/datasets/issues/7907/events
https://github.com/huggingface/datasets/pull/7907
3,747,818,613
PR_kwDODunzps651vMp
7,907
release: 4.4.2
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7907). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-12-19T15:02:23
2025-12-19T15:06:46
2025-12-19T15:03:22
MEMBER
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7907", "html_url": "https://github.com/huggingface/datasets/pull/7907", "diff_url": "https://github.com/huggingface/datasets/pull/7907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7907.patch", "merged_at": "2025-12-19T15:03:22" }
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7907/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7906/comments
https://api.github.com/repos/huggingface/datasets/issues/7906/events
https://github.com/huggingface/datasets/pull/7906
3,747,764,992
PR_kwDODunzps651jiI
7,906
Don't save original_shard_lengths by default for backward compat
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7906). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-12-19T14:44:09
2025-12-19T14:57:25
2025-12-19T14:57:23
MEMBER
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7906", "html_url": "https://github.com/huggingface/datasets/pull/7906", "diff_url": "https://github.com/huggingface/datasets/pull/7906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7906.patch", "merged_at": "2025-12-19T14:57:23" }
following #7897, but lets users enable it with `datasets.config.SAVE_ORIGINAL_SHARD_LENGTHS = True`. This is useful for the Dataset Viewer, to know where each row comes from after converting to Parquet
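A sketch of opting in, assuming the flag is read when the dataset is prepared (the data files are illustrative):

```python
import datasets

# Enable the opt-in flag before preparing the dataset so that
# original_shard_lengths is recorded in the dataset info.
datasets.config.SAVE_ORIGINAL_SHARD_LENGTHS = True
ds = datasets.load_dataset("json", data_files="logs/*.jsonl", split="train")
```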
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7906/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7905/comments
https://api.github.com/repos/huggingface/datasets/issues/7905/events
https://github.com/huggingface/datasets/issues/7905
3,734,233,245
I_kwDODunzps7ek-Cd
7,905
Unbounded network usage when opening Data Studio
{ "login": "alizaredornica-sys", "id": 225014457, "node_id": "U_kgDODWlyuQ", "avatar_url": "https://avatars.githubusercontent.com/u/225014457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alizaredornica-sys", "html_url": "https://github.com/alizaredornica-sys", "followers_url": "https://api.github.com/users/alizaredornica-sys/followers", "following_url": "https://api.github.com/users/alizaredornica-sys/following{/other_user}", "gists_url": "https://api.github.com/users/alizaredornica-sys/gists{/gist_id}", "starred_url": "https://api.github.com/users/alizaredornica-sys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alizaredornica-sys/subscriptions", "organizations_url": "https://api.github.com/users/alizaredornica-sys/orgs", "repos_url": "https://api.github.com/users/alizaredornica-sys/repos", "events_url": "https://api.github.com/users/alizaredornica-sys/events{/privacy}", "received_events_url": "https://api.github.com/users/alizaredornica-sys/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "cc @cfahlgren1", "Thanks for reporting! Looking into this!" ]
2025-12-16T10:45:02
2025-12-19T15:29:56
null
NONE
null
null
null
null
### Describe the bug Opening the Data Studio tab on a dataset page triggers continuous and unbounded network traffic. This issue occurs across multiple browsers and continues even without user interaction. ### Steps to reproduce the bug https://huggingface.co/datasets/slone/nllb-200-10M-sample/viewer ### Expected behavior Data Studio should load a limited, finite amount of data and stop further network activity unless explicitly requested by the user. ### Environment info - OS: Windows 10 - Browsers: Chrome, Firefox, Edge - Device: Desktop - Network: Standard broadband connection
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7905/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7904/comments
https://api.github.com/repos/huggingface/datasets/issues/7904/events
https://github.com/huggingface/datasets/issues/7904
3,727,978,498
I_kwDODunzps7eNHAC
7,904
Request: Review pending neuroimaging PRs (#7886 BIDS loader, #7887 lazy loading)
{ "login": "The-Obstacle-Is-The-Way", "id": 175985783, "node_id": "U_kgDOCn1Udw", "avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/The-Obstacle-Is-The-Way", "html_url": "https://github.com/The-Obstacle-Is-The-Way", "followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers", "following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}", "gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}", "starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions", "organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs", "repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos", "events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}", "received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! sure I'll be happy to take a look, sorry for the delay :)" ]
2025-12-14T20:34:31
2025-12-15T11:25:29
null
CONTRIBUTOR
null
null
null
null
## Summary I'm building production neuroimaging pipelines that depend on `datasets` and would benefit greatly from two pending PRs being reviewed/merged. ## Pending PRs | PR | Description | Status | Open Since | |----|-------------|--------|------------| | [#7886](https://github.com/huggingface/datasets/pull/7886) | BIDS dataset loader | Open | Nov 29 | | [#7887](https://github.com/huggingface/datasets/pull/7887) | Lazy loading for NIfTI | Open | Nov 29 | ## Use Case The neuroimaging community uses the BIDS (Brain Imaging Data Structure) standard for organizing MRI/fMRI data. These PRs would enable: 1. **#7886**: `load_dataset('bids', data_dir='/path/to/bids')` - Load local BIDS directories directly 2. **#7887**: Memory-efficient NIfTI handling (single 4D fMRI file can be 1-2GB) ## Current Workaround Without these, users must either: - Upload to Hub first, then consume (works but slow iteration) - Hand-roll BIDS parsing (duplicates effort) ## Request Could a maintainer review these PRs? Happy to address any feedback. The BIDS loader has tests passing and was end-to-end tested with real OpenNeuro data. Thank you for the great work on `Nifti()` support - these PRs build on that foundation. ## Related - Contributes to #7804 (Support scientific data formats) - Built on @TobiasPitters's Nifti feature work
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7904/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7903/comments
https://api.github.com/repos/huggingface/datasets/issues/7903/events
https://github.com/huggingface/datasets/pull/7903
3,723,395,305
PR_kwDODunzps64kO0d
7,903
Docs: add minimal usage example to dataset card guidelines
{ "login": "an-enigma", "id": 44645629, "node_id": "MDQ6VXNlcjQ0NjQ1NjI5", "avatar_url": "https://avatars.githubusercontent.com/u/44645629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/an-enigma", "html_url": "https://github.com/an-enigma", "followers_url": "https://api.github.com/users/an-enigma/followers", "following_url": "https://api.github.com/users/an-enigma/following{/other_user}", "gists_url": "https://api.github.com/users/an-enigma/gists{/gist_id}", "starred_url": "https://api.github.com/users/an-enigma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/an-enigma/subscriptions", "organizations_url": "https://api.github.com/users/an-enigma/orgs", "repos_url": "https://api.github.com/users/an-enigma/repos", "events_url": "https://api.github.com/users/an-enigma/events{/privacy}", "received_events_url": "https://api.github.com/users/an-enigma/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2025-12-12T13:16:46
2025-12-12T13:16:46
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7903", "html_url": "https://github.com/huggingface/datasets/pull/7903", "diff_url": "https://github.com/huggingface/datasets/pull/7903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7903.patch", "merged_at": null }
Adds a short, minimal load_dataset example to the dataset card documentation to help first-time users quickly load and inspect datasets.
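The kind of snippet being proposed — a minimal sketch with an illustrative dataset name:

```python
from datasets import load_dataset

ds = load_dataset("username/my-dataset", split="train")
print(ds)     # features and number of rows
print(ds[0])  # inspect the first example
```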
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7903/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7902/comments
https://api.github.com/repos/huggingface/datasets/issues/7902/events
https://github.com/huggingface/datasets/issues/7902
3,723,281,150
I_kwDODunzps7d7ML-
7,902
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
{ "login": "HQF2017", "id": 32055029, "node_id": "MDQ6VXNlcjMyMDU1MDI5", "avatar_url": "https://avatars.githubusercontent.com/u/32055029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HQF2017", "html_url": "https://github.com/HQF2017", "followers_url": "https://api.github.com/users/HQF2017/followers", "following_url": "https://api.github.com/users/HQF2017/following{/other_user}", "gists_url": "https://api.github.com/users/HQF2017/gists{/gist_id}", "starred_url": "https://api.github.com/users/HQF2017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HQF2017/subscriptions", "organizations_url": "https://api.github.com/users/HQF2017/orgs", "repos_url": "https://api.github.com/users/HQF2017/repos", "events_url": "https://api.github.com/users/HQF2017/events{/privacy}", "received_events_url": "https://api.github.com/users/HQF2017/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Memory mapping is actually the way for processes to share memory efficiently and without copy. It is efficient when on are using a local disk, and it's discouraged to use it on remote disk for the reasons you observed.\n\nWhat you can do instead is save the dataset as Parquet on your remote storage (or on Hugging Face Datasets which offers fast uploads thanks to Xet), and then your can reload it in streaming mode. Streaming mode is ideal to use a dataset that is hosted in a remote storage" ]
2025-12-12T12:37:44
2025-12-15T11:48:16
null
NONE
null
null
null
null
### Feature request The child process should retrieve the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`. ### Motivation Because my local disk space is insufficient, I can only store the dataset on a remote Ceph server and process it with datasets. I used the [data-juicer](https://github.com/datajuicer/data-juicer) framework as an outer layer (it uses datasets internally), but it doesn't support streaming datasets. I then ran into a problem: for every load, map, and filter operation, I had to wait for a large number of child processes to execute `memory_mapped_arrow_table_from_file`. Since the actual file lives on the remote Ceph server, this operation is limited by network I/O. I don't know whether this is a problem with my usage or simply how datasets is currently designed. However, I think that if the instances obtained from `datasets.load_dataset` were passed directly to the child processes instead of re-executing `memory_mapped_arrow_table_from_file`, it might solve my problem. Or does datasets already support this and I just didn't know it? ### Your contribution ...
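A sketch of the approach suggested in the reply above — convert to Parquet on the remote storage once, then consume it in streaming mode so child processes never re-run `memory_mapped_arrow_table_from_file`. The paths are illustrative and assume an fsspec-compatible filesystem for the Ceph/S3 storage:

```python
from datasets import load_dataset

# One-time conversion: write the prepared dataset to remote storage as Parquet.
ds = load_dataset("json", data_files="data/*.jsonl", split="train")
ds.to_parquet("s3://my-ceph-bucket/dataset/train.parquet")

# Afterwards, stream from the remote Parquet files instead of memory-mapping.
streamed = load_dataset(
    "parquet",
    data_files="s3://my-ceph-bucket/dataset/train.parquet",
    split="train",
    streaming=True,
)
for example in streamed:
    break  # examples arrive over the network on demand
```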
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7902/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7901/comments
https://api.github.com/repos/huggingface/datasets/issues/7901/events
https://github.com/huggingface/datasets/issues/7901
3,722,243,543
I_kwDODunzps7d3O3X
7,901
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
{ "login": "howitry", "id": 61858900, "node_id": "MDQ6VXNlcjYxODU4OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/howitry", "html_url": "https://github.com/howitry", "followers_url": "https://api.github.com/users/howitry/followers", "following_url": "https://api.github.com/users/howitry/following{/other_user}", "gists_url": "https://api.github.com/users/howitry/gists{/gist_id}", "starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/howitry/subscriptions", "organizations_url": "https://api.github.com/users/howitry/orgs", "repos_url": "https://api.github.com/users/howitry/repos", "events_url": "https://api.github.com/users/howitry/events{/privacy}", "received_events_url": "https://api.github.com/users/howitry/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! As you can read in the logs, the shuffle buffer content is lost when resuming a shuffled dataset. The default size is 1000 examples, but you can tweak it\n\ne.g. if you run your code with this\n\n```diff\n- ds = Dataset.from_dict({\"a\": range(12)}).to_iterable_dataset(num_shards=1)\n- ds = ds.shuffle(seed=42)\n+ ds = Dataset.from_dict({\"a\": range(100)}).to_iterable_dataset(num_shards=1)\n+ ds = ds.shuffle(seed=42, buffer_size=10)\n```\n\nthen you get\n\n```\n{'a': 0}\n{'a': 7}\n{'a': 6}\ncheckpoint\nrestart from checkpoint\nLoading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.\n{'a': 17}\n{'a': 15}\n{'a': 24}\n{'a': 19}\n{'a': 21}\n{'a': 23}\n...\n```\n\nwhere you only lose 10 rows ([1, 2, 3, 4, 5, 8, 9, 10, 11, 12])", "> Hi ! As you can read in the logs, the shuffle buffer content is lost when resuming a shuffled dataset. The default size is 1000 examples, but you can tweak it\n> \n> e.g. if you run your code with this\n> \n> - ds = Dataset.from_dict({\"a\": range(12)}).to_iterable_dataset(num_shards=1)\n> - ds = ds.shuffle(seed=42)\n> + ds = Dataset.from_dict({\"a\": range(100)}).to_iterable_dataset(num_shards=1)\n> + ds = ds.shuffle(seed=42, buffer_size=10)\n> then you get\n> \n> ```\n> {'a': 0}\n> {'a': 7}\n> {'a': 6}\n> checkpoint\n> restart from checkpoint\n> Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.\n> {'a': 17}\n> {'a': 15}\n> {'a': 24}\n> {'a': 19}\n> {'a': 21}\n> {'a': 23}\n> ...\n> ```\n> \n> where you only lose 10 rows ([1, 2, 3, 4, 5, 8, 9, 10, 11, 12])\n\nThank you for your answer. So, when ShuffledDataSourcesArrowExamplesIterable resumes training, it will definitely discard unused data in buffer_size?", "Yes correct. This is because the state_dict doesn't save the content of the buffer, so when resuming the buffer starts empty and the examples that were in the buffer are lost." ]
2025-12-12T06:57:32
2025-12-16T19:34:46
null
NONE
null
null
null
null
### Describe the bug ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint ### Steps to reproduce the bug 1. The reproducible code is as follows: ``` from datasets import Dataset, concatenate_datasets, interleave_datasets ds = Dataset.from_dict({"a": range(12)}).to_iterable_dataset(num_shards=1) ds = ds.shuffle(seed=42) for idx, example in enumerate(ds): print(example) if idx == 2: #The checkpoint can be loaded correctly only when idx <= 1. state_dict = ds.state_dict() print("checkpoint") break print("state_dict: ",state_dict) ds.load_state_dict(state_dict) print(f"restart from checkpoint") for example in ds: print(example) ``` 2. The error message is as follows: ``` {'a': 0} {'a': 7} {'a': 6} checkpoint state_dict: {'examples_iterable': {'examples_iterable': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'previous_state': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'batch_idx': 3, 'num_chunks_since_previous_state': 2, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'epoch': 0} restart from checkpoint Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. ``` ### Expected behavior I want a correct resume from any checkpoint, but currently the checkpoint can only be loaded correctly when idx <= 1. ### Environment info datasets Version: 4.4.1 @lhoestq
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7901/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7900/comments
https://api.github.com/repos/huggingface/datasets/issues/7900/events
https://github.com/huggingface/datasets/issues/7900
3,711,751,590
I_kwDODunzps7dPNWm
7,900
`Permission denied` when sharing cache between users
{ "login": "qthequartermasterman", "id": 19497738, "node_id": "MDQ6VXNlcjE5NDk3NzM4", "avatar_url": "https://avatars.githubusercontent.com/u/19497738?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qthequartermasterman", "html_url": "https://github.com/qthequartermasterman", "followers_url": "https://api.github.com/users/qthequartermasterman/followers", "following_url": "https://api.github.com/users/qthequartermasterman/following{/other_user}", "gists_url": "https://api.github.com/users/qthequartermasterman/gists{/gist_id}", "starred_url": "https://api.github.com/users/qthequartermasterman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qthequartermasterman/subscriptions", "organizations_url": "https://api.github.com/users/qthequartermasterman/orgs", "repos_url": "https://api.github.com/users/qthequartermasterman/repos", "events_url": "https://api.github.com/users/qthequartermasterman/events{/privacy}", "received_events_url": "https://api.github.com/users/qthequartermasterman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "I remember a fix from last year to usr the current umask for filelock 3.10.0, which filelock version are you using ? can you try another version ?", "I believe we we are using `filelock==3.19.1`. Do you have a recommended version to use?" ]
2025-12-09T16:41:47
2025-12-16T15:39:06
null
NONE
null
null
null
null
### Describe the bug We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors. It looks like this was supported in the past (see #6589). Is there a correct way to share caches across users? ### Steps to reproduce the bug 1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users 2. For each user run the script below ```python import os os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment" os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data" import datasets import transformers DATASET = "tatsu-lab/alpaca" MODEL = "meta-llama/Llama-3.2-1B-Instruct" model = transformers.AutoModelForCausalLM.from_pretrained(MODEL) tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL) dataset = datasets.load_dataset(DATASET) ``` The first user is able to download and use the model and dataset. The second user gets these errors: ``` $ python ./experiment_with_shared.py Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py' Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py' Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml' Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json' Traceback (most recent call last): File "/home/user2/.venv/experiment_with_shared.py", line 17, in <module> dataset = datasets.load_dataset(DATASET) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1397, in load_dataset builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1171, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( ^^^^^^^^^^^^ File "/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py", line 390, in __init__ with FileLock(lock_path): File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 377, in __enter__ self.acquire() File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 333, in acquire self._acquire() File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py", line 45, in _acquire fd = os.open(self.lock_file, open_flags, self._context.mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock' ``` ### Expected behavior The second user should be able to read the shared cache files.
### Environment info $ datasets-cli env - `datasets` version: 4.4.1 - Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.36.0 - PyArrow version: 22.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.10.0
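A sketch of a possible mitigation while this is investigated — an assumption based on the umask behavior mentioned in the replies, not a confirmed fix. It assumes both users share a Unix group with write access to the cache directory:

```python
import os

# Assumption: a shared group owns /models/hf_hub_shared_experiment.
# Setting a group-writable umask before the first import means that new
# cache and lock files are created group-writable instead of owner-only.
os.umask(0o002)
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"

import datasets  # noqa: E402
```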
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7900/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/7900/timeline
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7899/comments
https://api.github.com/repos/huggingface/datasets/issues/7899/events
https://github.com/huggingface/datasets/pull/7899
3,707,063,236
PR_kwDODunzps63t1LS
7,899
Add inspect_ai eval logs support
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7899). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Very cool! \r\n\r\nAny reason not to directly use inspect for loading/converting from the binary format to JSON? https://inspect.aisi.org.uk/reference/inspect_ai.log.html#convert_eval_logs ", "The format is simple enough to not have to rely on an additional dependency :)" ]
2025-12-08T16:14:40
2025-12-09T14:45:15
2025-12-09T14:45:13
MEMBER
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7899", "html_url": "https://github.com/huggingface/datasets/pull/7899", "diff_url": "https://github.com/huggingface/datasets/pull/7899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7899.patch", "merged_at": "2025-12-09T14:45:13" }
Support for .eval log files from inspect_ai They are actually ZIP files according to the source code at https://github.com/UKGovernmentBEIS/inspect_ai/blob/main/src/inspect_ai/log/_log.py Unfortunately their format can't be converted to Parquet, so I had to JSON-encode all the nested values ```python ds = load_dataset("dvilasuero/kimi-bfcl") ``` this will enable the Viewer for datasets like https://huggingface.co/datasets/dvilasuero/kimi-bfcl original tweet for context: https://x.com/dvilasuero/status/1996936988176343220?s=20 cc @dvsrepo @julien-c @davanstrien
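A minimal sketch of what this relies on — since `.eval` logs are ZIP archives per the source linked above, the standard library is enough to inspect one (the file name is illustrative):

```python
import zipfile

# Peek inside an inspect_ai .eval log: it is a plain ZIP archive whose
# members hold the JSON-encoded log records.
with zipfile.ZipFile("logs/run.eval") as zf:
    for name in zf.namelist():
        print(name)  # JSON members can then be read with zf.open(name)
```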
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7899/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7899/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7898/comments
https://api.github.com/repos/huggingface/datasets/issues/7898/events
https://github.com/huggingface/datasets/pull/7898
3,698,376,429
PR_kwDODunzps63Q9BO
7,898
docs: making PyPi to PyPI ensuring no spelling errors
{ "login": "kapoor1309", "id": 152784163, "node_id": "U_kgDOCRtNIw", "avatar_url": "https://avatars.githubusercontent.com/u/152784163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kapoor1309", "html_url": "https://github.com/kapoor1309", "followers_url": "https://api.github.com/users/kapoor1309/followers", "following_url": "https://api.github.com/users/kapoor1309/following{/other_user}", "gists_url": "https://api.github.com/users/kapoor1309/gists{/gist_id}", "starred_url": "https://api.github.com/users/kapoor1309/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kapoor1309/subscriptions", "organizations_url": "https://api.github.com/users/kapoor1309/orgs", "repos_url": "https://api.github.com/users/kapoor1309/repos", "events_url": "https://api.github.com/users/kapoor1309/events{/privacy}", "received_events_url": "https://api.github.com/users/kapoor1309/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hey pls review " ]
2025-12-05T10:20:48
2025-12-10T14:16:31
null
NONE
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7898", "html_url": "https://github.com/huggingface/datasets/pull/7898", "diff_url": "https://github.com/huggingface/datasets/pull/7898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7898.patch", "merged_at": null }
This PR fixes a typo in the README where the Python package index PyPI was mistakenly written as PyPi.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7898/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7897/comments
https://api.github.com/repos/huggingface/datasets/issues/7897/events
https://github.com/huggingface/datasets/pull/7897
3,691,300,022
PR_kwDODunzps624-k2
7,897
Save input shard lengths
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7897). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-12-03T17:56:55
2025-12-05T16:21:06
2025-12-05T16:21:03
MEMBER
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7897", "html_url": "https://github.com/huggingface/datasets/pull/7897", "diff_url": "https://github.com/huggingface/datasets/pull/7897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7897.patch", "merged_at": "2025-12-05T16:21:03" }
will be useful for the Viewer, to know which (original) shard each row belongs to cc @cfahlgren1 next step is to use it in the Dataset Viewer and expose an API that returns the file containing the row at rowId (took the opportunity to remove unused code)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7897/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/7897/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7896/comments
https://api.github.com/repos/huggingface/datasets/issues/7896/events
https://github.com/huggingface/datasets/pull/7896
3,688,480,675
PR_kwDODunzps62vZtn
7,896
fix: force contiguous copy for sliced list arrays in embed_array_storage
{ "login": "The-Obstacle-Is-The-Way", "id": 175985783, "node_id": "U_kgDOCn1Udw", "avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/The-Obstacle-Is-The-Way", "html_url": "https://github.com/The-Obstacle-Is-The-Way", "followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers", "following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}", "gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}", "starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions", "organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs", "repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos", "events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}", "received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7896). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi @The-Obstacle-Is-The-Way, I haven't had a chance to try your repro at https://github.com/huggingface/datasets/issues/7894#issuecomment-3618888430 but I tried the test functions in your PRs and they all pass without your change, is it expected ? (I ran the tests without the checks on the offsets)", "Hi @lhoestq, you're right - the tests pass without the fix. \r\n\r\nI think the SIGKILL crash is scale-dependent. As I documented in #7894, it only manifests with large, real-world data (the 273GB ARC dataset) and other large dataset upload issues, not with synthetic test files:\r\n\r\n| Data | Crash? |\r\n|------|--------|\r\n| Synthetic 64³ NIfTI | ✅ No crash |\r\n| Real ARC (273GB) | ❌ SIGKILL |\r\n\r\nThe tests verify that the fix's mechanism *works* (sliced arrays get made contiguous), but they can't reproduce the crash because that requires the full-scale data.\r\n\r\nI understand this makes it harder to review - the fix is essentially defensive against a crash that's difficult to reproduce in CI. A few options:\r\n\r\n1. **Keep as-is** - the fix is low-risk (`pa.concat_arrays` on sliced list arrays) and I've confirmed it resolves the real-world crash\r\n2. **Simplify tests** - remove the offset assertions if they're confusing, keep just the \"doesn't crash\" aspect\r\n3. **Close** - if the crash can't be reproduced in CI, I understand if this doesn't meet the bar for merging\r\n\r\nHappy to do whatever you think is best. Thank you for your time and dedication!" ]
2025-12-03T04:34:26
2025-12-19T16:45:54
null
CONTRIBUTOR
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7896", "html_url": "https://github.com/huggingface/datasets/pull/7896", "diff_url": "https://github.com/huggingface/datasets/pull/7896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7896.patch", "merged_at": null }
## Summary Fixes SIGKILL crash in `embed_array_storage` when processing sliced/sharded datasets with nested types like `Sequence(Nifti())` or `Sequence(Image())`. **Root cause**: When `ds.shard()` or `ds.select()` creates a sliced view, `array.values` on a sliced `ListArray` returns values with internal offset references. For nested types, PyArrow's C++ layer can crash (SIGKILL, exit code 137) when materializing these sliced nested structs. **Fix**: Force a contiguous copy via `pa.concat_arrays([array])` when the array has a non-zero offset before processing list/large_list arrays. ## Changes - Add offset check in `embed_array_storage` for list/large_list arrays - Force contiguous copy when `array.offset > 0` to break internal references - Add regression tests for sliced arrays with Image, Nifti, and LargeList types ## Test plan - [x] Added `tests/features/test_embed_storage_sliced.py` with 3 tests: - `test_embed_array_storage_sliced_list_image` - `test_embed_array_storage_sliced_list_nifti` - `test_embed_array_storage_sliced_large_list` - [x] All tests verify `embedded.offset == 0` (contiguous result) - [x] All tests pass locally - [x] ruff check passes ## Context This was discovered while uploading a 270GB neuroimaging dataset (ARC) with `Sequence(Nifti())` columns. The process crashed with SIGKILL (no Python traceback) when `embed_table_storage` was called on sharded data. Workaround that confirmed the fix: pandas round-trip (`shard.to_pandas()` → `Dataset.from_pandas()`) which forces a contiguous copy. Fixes #7894 Related: #6686, #7852, #6790
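An illustration of the mechanism the fix relies on, using plain PyArrow (the example array is illustrative; the offset assertions mirror the PR's tests):

```python
import pyarrow as pa

# Slicing a ListArray is zero-copy and leaves a non-zero offset into the
# original buffers; wrapping the slice in pa.concat_arrays forces a
# contiguous copy, which is what the fix does before embedding storage.
arr = pa.array([[1, 2], [3], [4, 5, 6], [7]])
sliced = arr.slice(2)
assert sliced.offset == 2
contiguous = pa.concat_arrays([sliced])
assert contiguous.offset == 0
```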
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7896/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/7896/timeline
null
null
null
null
true