| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | type (null) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | sub_issues_summary (dict) | issue_dependencies_summary (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7933
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7933/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7933/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7933/events
|
https://github.com/huggingface/datasets/pull/7933
| 3,780,607,384
|
PR_kwDODunzps67fNaP
| 7,933
|
feat: Add Apache TsFile format support
|
{
"login": "sinanshamsudheen",
"id": 186699478,
"node_id": "U_kgDOCyDO1g",
"avatar_url": "https://avatars.githubusercontent.com/u/186699478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinanshamsudheen",
"html_url": "https://github.com/sinanshamsudheen",
"followers_url": "https://api.github.com/users/sinanshamsudheen/followers",
"following_url": "https://api.github.com/users/sinanshamsudheen/following{/other_user}",
"gists_url": "https://api.github.com/users/sinanshamsudheen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinanshamsudheen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinanshamsudheen/subscriptions",
"organizations_url": "https://api.github.com/users/sinanshamsudheen/orgs",
"repos_url": "https://api.github.com/users/sinanshamsudheen/repos",
"events_url": "https://api.github.com/users/sinanshamsudheen/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinanshamsudheen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-05T08:28:12
| 2026-01-05T09:50:23
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7933",
"html_url": "https://github.com/huggingface/datasets/pull/7933",
"diff_url": "https://github.com/huggingface/datasets/pull/7933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7933.patch",
"merged_at": null
}
|
# Add Apache TsFile format support
Adds support for loading `.tsfile` datasets. Closes #7922.
## What's TsFile?
[Apache TsFile](https://tsfile.apache.org/) is a columnar time-series format popular in IoT. The TsFile community requested this integration and offered to help maintain it.
## What I did
Created a new `TsFile` builder in `packaged_modules/tsfile/` following the same pattern as HDF5. Registered the module and added `.tsfile` extension mapping. Also added `tsfile>=2.0.0` as an optional dependency.
The builder uses `tsfile.to_dataframe()` with iterator mode for memory-efficient reading, then converts to PyArrow tables. Schema is inferred automatically from file metadata.
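For readers unfamiliar with the packaged-module pattern, here is a rough sketch of what a config for the options listed under "Config options" below could look like (the `TsFileConfig` name and the defaults shown are illustrative, not necessarily the exact code in this PR):
```python
from dataclasses import dataclass
from typing import List, Optional

import datasets


@dataclass
class TsFileConfig(datasets.BuilderConfig):
    """Illustrative BuilderConfig mirroring the options listed below (hypothetical names)."""

    batch_size: int = 10_000                # rows yielded per Arrow batch
    table_name: Optional[str] = None        # which table to read in multi-table files
    columns: Optional[List[str]] = None     # restrict output to specific columns
    start_time: Optional[int] = None        # epoch-millisecond lower bound
    end_time: Optional[int] = None          # epoch-millisecond upper bound
    features: Optional[datasets.Features] = None
```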
## Config options
- `batch_size` - rows per batch (default 10000)
- `table_name` - which table to read (for multi-table files)
- `columns` - filter specific columns
- `start_time` / `end_time` - time-range filtering
## Usage
```python
from datasets import load_dataset
ds = load_dataset("tsfile", data_files=["data.tsfile"], split="train")
# with filtering
ds = load_dataset("tsfile", data_files=["data.tsfile"],
columns=["temperature"], start_time=1609459200000)
```
## Tests
Added 11 tests covering config validation, basic loading, data integrity, feature inference, and error handling. All passing.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7933/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7933/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7932
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7932/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7932/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7932/events
|
https://github.com/huggingface/datasets/pull/7932
| 3,777,725,050
|
PR_kwDODunzps67WqhL
| 7,932
|
Fix duplicate keyword conflict in load_dataset_builder
|
{
"login": "Ashish570raj",
"id": 110705207,
"node_id": "U_kgDOBpk6Nw",
"avatar_url": "https://avatars.githubusercontent.com/u/110705207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashish570raj",
"html_url": "https://github.com/Ashish570raj",
"followers_url": "https://api.github.com/users/Ashish570raj/followers",
"following_url": "https://api.github.com/users/Ashish570raj/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashish570raj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashish570raj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashish570raj/subscriptions",
"organizations_url": "https://api.github.com/users/Ashish570raj/orgs",
"repos_url": "https://api.github.com/users/Ashish570raj/repos",
"events_url": "https://api.github.com/users/Ashish570raj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashish570raj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi HuggingFace team\r\nThis PR fixes issue #4910 by safely merging builder_kwargs and config_kwargs to avoid duplicate keyword errors. \r\nA regression test is included to ensure this does not happen again. \r\n\r\nPlease let me know if you’d like any changes. Thanks!\r\n"
] | 2026-01-03T05:49:06
| 2026-01-03T05:52:02
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7932",
"html_url": "https://github.com/huggingface/datasets/pull/7932",
"diff_url": "https://github.com/huggingface/datasets/pull/7932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7932.patch",
"merged_at": null
}
|
Fixes #4910
This PR fixes a bug where passing the same keyword in both `builder_kwargs` and `config_kwargs` caused a `TypeError` in `load_dataset_builder`.
The kwargs are now merged safely so that `config_kwargs` override `builder_kwargs` without duplication. A regression test is added to prevent this from happening again.
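A minimal sketch of the safe-merge behavior described above (function name and structure are illustrative, not the actual patch):
```python
def merge_builder_kwargs(builder_kwargs: dict, config_kwargs: dict) -> dict:
    """Merge the two kwarg dicts so that config_kwargs win on conflicts,
    instead of passing duplicate keywords and triggering a TypeError."""
    merged = dict(builder_kwargs)
    merged.update(config_kwargs)  # config_kwargs override builder_kwargs
    return merged


# Before: builder(path, **builder_kwargs, **config_kwargs)  -> TypeError on duplicate keys
# After:  builder(path, **merge_builder_kwargs(builder_kwargs, config_kwargs))
```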
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7932/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7931
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7931/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7931/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7931/events
|
https://github.com/huggingface/datasets/issues/7931
| 3,777,662,799
|
I_kwDODunzps7hKo9P
| 7,931
|
Enable CORS + HTTP Range support for browser partial reads on cas-bridge.xethub.hf.co (Parquet row-group access)
|
{
"login": "cornhundred",
"id": 8352840,
"node_id": "MDQ6VXNlcjgzNTI4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8352840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cornhundred",
"html_url": "https://github.com/cornhundred",
"followers_url": "https://api.github.com/users/cornhundred/followers",
"following_url": "https://api.github.com/users/cornhundred/following{/other_user}",
"gists_url": "https://api.github.com/users/cornhundred/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cornhundred/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cornhundred/subscriptions",
"organizations_url": "https://api.github.com/users/cornhundred/orgs",
"repos_url": "https://api.github.com/users/cornhundred/repos",
"events_url": "https://api.github.com/users/cornhundred/events{/privacy}",
"received_events_url": "https://api.github.com/users/cornhundred/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"Cc @assafvayner maybe ?"
] | 2026-01-03T04:23:54
| 2026-01-04T20:43:31
| null |
NONE
| null | null | null | null |
### Feature request
## Summary
Browser-based data tools need Range requests to read Parquet efficiently (footer + selected row groups). Downloads from the Hub redirect to cas-bridge.xethub.hf.co (Xet bridge). The redirected host fails CORS preflight for Range/HEAD workflows, blocking partial reads. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet)). See example [HuggingFace dataset](https://huggingface.co/datasets/cornhundred/Xenium_V1_human_Pancreas_FFPE_outs_row_groups/tree/main/Xenium_V1_human_Pancreas_FFPE_outs_row_groups)
## Current behavior
Plain GET works via redirect.
Range workflows fail with: “Response to preflight request doesn’t pass access control check: It does not have HTTP ok status.”
This blocks parquet-wasm and DuckDB-Wasm style readers which rely on HEAD + Range or non-safelisted Range patterns. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
## Expected behavior
OPTIONS to the final redirected host returns 200/204 (no redirect) with appropriate CORS headers. Preflight responses must be “ok” status. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
GET with Range returns 206 Partial Content and includes CORS headers, plus exposes Content-Range, Accept-Ranges, and Content-Length so browser JS can consume them. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Range))
## Proposed CORS headers (public, anonymous files)
For responses from cas-bridge.xethub.hf.co (and any sibling Xet bridge hosts):
### Preflight (OPTIONS)
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, HEAD, OPTIONS
Access-Control-Allow-Headers: Range, Content-Type (or echo Access-Control-Request-Headers)
Access-Control-Max-Age: 86400 (optional, reduces preflight spam)
### Actual (GET/HEAD, including 206)
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Range, Accept-Ranges, Content-Length
Ensure Accept-Ranges: bytes and Content-Range are present for range responses. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges))
### Notes on credentials (optional)
If any endpoint requires credentials, wildcard * cannot be used and the server must echo Origin and add Vary: Origin. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Allow-Origin))
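For reference, a minimal Python sketch (using `requests`; the URL below is a placeholder for a redirected bridge URL) of how the expected preflight and range behavior can be checked:
```python
import requests

# Placeholder for a URL the Hub redirect resolves to on the Xet bridge host.
URL = "https://cas-bridge.xethub.hf.co/some/resolved/file.parquet"

# Preflight: should answer 200/204 directly (no redirect) with CORS headers.
opt = requests.options(
    URL,
    headers={
        "Origin": "https://example.org",
        "Access-Control-Request-Method": "GET",
        "Access-Control-Request-Headers": "range",
    },
    allow_redirects=False,
)
print(opt.status_code, opt.headers.get("Access-Control-Allow-Origin"))

# Range GET: should answer 206 and expose Content-Range / Accept-Ranges to browser JS.
rng = requests.get(URL, headers={"Origin": "https://example.org", "Range": "bytes=0-1023"})
print(rng.status_code, rng.headers.get("Content-Range"),
      rng.headers.get("Access-Control-Expose-Headers"))
```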
## Impact
This unblocks efficient browser analytics and visualization on HF-hosted datasets using Parquet row groups, DuckDB-Wasm, parquet-wasm, and similar tooling. DuckDB-Wasm documentation explicitly notes that remote data access requires correct CORS on the hosting site. ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
## References worth linking in the issue thread
- Hugging Face: redirect to cas-bridge.xethub.hf.co shown in the Xet migration blog ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
- Fetch/CORS: preflight must be an "ok" status (200/204) ([GitHub](https://github.com/whatwg/fetch/issues/1588))
- Fetch/CORS: redirect + preflight is a known sharp edge ([GitHub](https://github.com/whatwg/fetch/issues/204))
- MDN CORS guide: Range safelist caveat ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS))
- MDN Range header: a single range is safelisted, multiple ranges may trigger preflight ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Range))
- MDN Expose-Headers: non-safelisted headers must be exposed ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Expose-Headers))
- DuckDB-Wasm: remote HTTPFS requires correct CORS ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
- DuckDB-Wasm issue: HEAD blocked by CORS breaks the pipeline ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
- pdf.js historical issues about Accept-Ranges/Content-Range exposure ([GitHub](https://github.com/mozilla/pdf.js/issues/3150))
## Recap
- Byte-range access is a standard requirement for reading Parquet in the browser.
- The redirect to cas-bridge.xethub.hf.co means CORS is enforced on the Xet bridge host. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
- The fix requires OPTIONS to return 200/204 with CORS headers, and 206 responses to include CORS headers plus the exposed range headers. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
- Similar failures have been reported across the pdf.js and DuckDB-Wasm ecosystems. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
### Motivation
I would like to be able to read subsets of large Parquet files via range requests with the parquet_wasm library on the front end. This is being used as part of a spatial data visualization project: https://github.com/broadinstitute/celldega
### Your contribution
I would be happy to provide code to make front-end range requests as an example.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7931/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7931/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7930
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7930/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7930/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7930/events
|
https://github.com/huggingface/datasets/pull/7930
| 3,777,628,848
|
PR_kwDODunzps67WYwc
| 7,930
|
Proposal: Protein 3D Structure Visualization for Dataset Viewer
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"cc @georgia-hf - Following up on your question about protein visualization for the Dataset Viewer. This proposal recommends 3Dmol.js (~150KB gzipped) as a lightweight alternative to Mol* (~1.3MB gzipped).\n\nLooking forward to your feedback!",
"Exciting ! cc @cfahlgren1 @severo for the Viewer part\r\n\r\nFor the `datasets` part I'll leave my feedbacks in the PRs :)",
"I don't know the JS libraries, but indeed, the lighter the better, as we don't require advanced features.",
"From a quick look at the PDB and mmCIF PRs I noticed that the dataset has one row = one atom. However I humbly believe that such datasets would be more practical to use if one row = one structure. This way each row is independent, which is practical in ML to perform train/test splits or dataset shuffling.\r\n\r\nThis would also make it easier to add labels and metadata for each structure, similar to what we already for images. E.g. you could group them per folder named after a label, or you can have a metadata.parquet file to add custom metadata per structure.\r\n\r\nAnd this way in the Viewer it could show one 3D render per row.\r\n\r\nWhat do you think ?",
"@lhoestq @severo @georgia-hf I will be waiting for all your comments; then, I will start implementing the final plan. "
] | 2026-01-03T03:30:01
| 2026-01-05T16:00:45
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7930",
"html_url": "https://github.com/huggingface/datasets/pull/7930",
"diff_url": "https://github.com/huggingface/datasets/pull/7930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7930.patch",
"merged_at": null
}
|
# Proposal: Protein 3D Structure Visualization for HuggingFace Dataset Viewer
## Executive Summary
This proposal outlines adding 3D protein structure visualization to the HuggingFace Dataset Viewer, enabling users to interactively view PDB and mmCIF molecular structures directly within the dataset preview interface.
---
## Data Type Support
**Supported formats** (from recent PRs):
- **PDB** (PR #7926): Legacy fixed-width format for 3D macromolecular structures
- **mmCIF** (PR #7925): Modern standard format with full crystallographic data
**What gets visualized**:
- 3D atomic coordinates (x, y, z)
- Chain structures
- Residue information
- Atom types and elements
- Secondary structure (helices, sheets)
**Not applicable** (1D sequence only):
- FASTA (PR #7923) - text sequences, no 3D coordinates
- FASTQ (PR #7924) - sequences with quality scores, no 3D coordinates
---
## Visualization Library Comparison
| Library | Bundle Size (minified) | Bundle Size (gzipped) | License | Pros | Cons |
|---------|------------------------|----------------------|---------|------|------|
| **3Dmol.js** | 512 KB | **~150 KB** | BSD-3 | Lightweight, easy integration, good docs | Fewer advanced features |
| **NGL Viewer** | 1.3 MB | ~350 KB | MIT | Excellent MMTF support, beautiful rendering | Moderate complexity |
| **Mol*** | 4.6 MB | ~1.3 MB | MIT | Industry standard, used by RCSB PDB, feature-rich | Heavy, complex |
| **PDBe Molstar** | 5.8 MB | ~1.6 MB | Apache 2.0 | EMBL-EBI maintained, simpler Mol* wrapper | Still very heavy |
*Bundle sizes verified by downloading actual distribution files from npm/CDN (January 2026)*
---
## Recommendation: 3Dmol.js
**Primary choice**: 3Dmol.js
**Rationale**:
1. **Bundle size**: ~150 KB gzipped - the lightest option by far, ideal for lazy loading
2. **Simple API**: Easy to integrate with React/Next.js
3. **BSD-3 License**: Compatible with HuggingFace licensing
4. **Active maintenance**: Regular updates, good community support
5. **Format support**: Native PDB and mmCIF parsing built-in
6. **Sufficient features**: Rotation, zoom, style switching (cartoon, stick, sphere)
**Why not Mol*?** As Georgia noted, Mol* is heavy (~1.3 MB gzipped). While it's the industry standard for RCSB PDB, it's overkill for a dataset preview where users just need to verify structure data looks correct.
**Alternative for power users**: If users need advanced features like density maps, ligand interactions, or sequence alignment overlay, consider PDBe Molstar as an optional "full viewer" mode.
---
## Architecture for Dataset Viewer Integration
### Lazy Loading Pattern (React/Next.js)
```javascript
// ProteinViewer.tsx
import dynamic from 'next/dynamic';
const Protein3DViewer = dynamic(
() => import('./Protein3DViewerCore'),
{
ssr: false, // WebGL requires client-side only
loading: () => <ProteinViewerSkeleton />
}
);
export function ProteinViewer({ data, format }) {
// Only render when PDB/mmCIF format detected
if (!['pdb', 'mmcif', 'cif'].includes(format)) {
return <SequenceViewer data={data} />;
}
return <Protein3DViewer structureData={data} format={format} />;
}
```
### Core Viewer Component (3Dmol.js)
```javascript
// Protein3DViewerCore.tsx
import { useEffect, useRef } from 'react';
import $3Dmol from '3dmol';
export default function Protein3DViewerCore({ structureData, format }) {
const viewerRef = useRef(null);
const containerRef = useRef(null);
useEffect(() => {
if (!containerRef.current) return;
// Initialize viewer
const viewer = $3Dmol.createViewer(containerRef.current, {
backgroundColor: 'white',
antialias: true,
});
viewerRef.current = viewer;
// Add structure
viewer.addModel(structureData, format);
viewer.setStyle({}, { cartoon: { color: 'spectrum' } });
viewer.zoomTo();
viewer.render();
return () => viewer.clear();
}, [structureData, format]);
return (
<div
ref={containerRef}
style={{ width: '100%', height: '400px', position: 'relative' }}
/>
);
}
```
---
## Integration Points in Dataset Viewer
### File Type Detection
```javascript
// Detect protein structure formats
const PROTEIN_3D_FORMATS = ['pdb', 'ent', 'cif', 'mmcif'];
function getViewerType(filename, datasetFeatures) {
const ext = filename.split('.').pop().toLowerCase();
if (PROTEIN_3D_FORMATS.includes(ext)) {
return 'protein-3d';
}
// ... other format checks
}
```
### Data Flow
```
Dataset Row → Format Detection → Lazy Load Viewer → Render 3D Structure
↓
PDB/mmCIF text → 3Dmol.js parser → WebGL canvas → User interaction
```
---
## UI/UX Considerations
### Viewer Controls
- Rotate: Mouse drag
- Zoom: Scroll wheel
- Style toggle: Cartoon / Stick / Sphere / Surface
- Reset view button
- Full-screen toggle
### Style Dropdown Options
```javascript
const STYLE_OPTIONS = [
{ label: 'Cartoon (ribbon)', value: 'cartoon' },
{ label: 'Sticks', value: 'stick' },
{ label: 'Spheres (CPK)', value: 'sphere' },
{ label: 'Line', value: 'line' },
{ label: 'Surface', value: 'surface' },
];
```
### Loading State
- Skeleton placeholder (400px height)
- "Loading 3D viewer..." text
- Progressive: Show 2D preview while 3D loads
---
## Implementation Phases
### Phase 1: Basic Viewer (MVP)
- Add 3Dmol.js dependency (~150 KB gzipped)
- Create ProteinViewer component with lazy loading
- Support PDB format display
- Basic rotation/zoom controls
- Single style (cartoon)
### Phase 2: Enhanced Features
- mmCIF format support
- Style switching dropdown
- Full-screen mode
- Chain coloring options
### Phase 3: Advanced (Optional)
- Atom selection/highlighting
- Distance measurements
- Export snapshot as PNG
- Consider PDBe Molstar for power users
---
## Bundle Impact Analysis
**Without lazy loading**: +150 KB to initial bundle (acceptable but not ideal)
**With lazy loading**:
- Initial load: 0 KB additional
- On-demand: ~150 KB when viewing PDB/mmCIF
- Cached after first load
**Comparison with other viewers**:
| Viewer Type | Typical Bundle Size |
|-------------|---------------------|
| PDF viewer | ~500 KB |
| Audio player | ~50 KB |
| Image gallery | ~100 KB |
| **Protein 3D (3Dmol.js)** | **~150 KB** |
The protein viewer is comparable to other specialized viewers and well within acceptable limits for lazy-loaded content.
---
## Alternative Approach: CDN Loading
If bundle size is critical:
```javascript
// Load from CDN on-demand
const load3Dmol = async () => {
if (window.$3Dmol) return window.$3Dmol;
return new Promise((resolve) => {
const script = document.createElement('script');
script.src = 'https://3dmol.csb.pitt.edu/build/3Dmol-min.js';
script.onload = () => resolve(window.$3Dmol);
document.head.appendChild(script);
});
};
```
**Pros**: Zero bundle impact
**Cons**: External dependency, potential availability issues
---
## Files to Modify (in dataset-viewer repo)
Since dataset-viewer is closed-source, this proposal should be shared with the HuggingFace team. They would need to:
1. `package.json` - Add 3dmol dependency
2. Create `components/viewers/ProteinViewer.tsx`
3. Create `components/viewers/Protein3DViewerCore.tsx`
4. Update viewer routing logic to detect PDB/mmCIF
5. Add viewer style controls component
---
## Summary
**Recommended approach**:
- Use **3Dmol.js** (~150 KB gzipped) with **lazy loading**
- Only loads when user views PDB/mmCIF datasets
- Simple integration, BSD-3 license, active community support
**Why 3Dmol.js over Mol*?**:
- 3Dmol.js: ~150 KB gzipped
- Mol*: ~1.3 MB gzipped (nearly 9x heavier)
**Key insight**: The PDB and mmCIF loaders we implemented (PRs #7925, #7926) extract the 3D coordinates needed for visualization. The viewer just needs to consume the raw file content.
---
## Next Steps
1. Get feedback on this proposal
2. Create proof-of-concept in a standalone demo if needed
3. Integrate into dataset-viewer once approach is approved
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7930/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7929/events
|
https://github.com/huggingface/datasets/pull/7929
| 3,776,098,655
|
PR_kwDODunzps67Rayd
| 7,929
|
Raise early for invalid `revision` in `load_dataset`
|
{
"login": "Scott-Simmons",
"id": 52365471,
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Scott-Simmons",
"html_url": "https://github.com/Scott-Simmons",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Passes\r\n```sh\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nFails\r\n```sh\r\ngit checkout cc2399019a3a547ebc31ec68a1ff99abd4ec93ce\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nRan `make test`, but failures look unrelated to the PR diff (same tests fail on `main` too)\r\n\r\n```sh\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[False] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[True] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[2-2] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[3-2] - TypeError: Passing coroutines is forbidden...\r\n= 4 failed, 3077 passed, 18 skipped, 491 warnings in 556.45s (0:09:16) =\r\nmake: *** [Makefile:20: test] Error 1\r\n```"
] | 2026-01-02T10:40:49
| 2026-01-02T11:24:35
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7929",
"html_url": "https://github.com/huggingface/datasets/pull/7929",
"diff_url": "https://github.com/huggingface/datasets/pull/7929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7929.patch",
"merged_at": null
}
|
Solves https://github.com/huggingface/datasets/issues/7928
Raise early for invalid revisions
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7929/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7928/events
|
https://github.com/huggingface/datasets/issues/7928
| 3,775,842,185
|
I_kwDODunzps7hDseJ
| 7,928
|
`load_dataset` `revision` param not respected when fetching from cache
|
{
"login": "Scott-Simmons",
"id": 52365471,
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Scott-Simmons",
"html_url": "https://github.com/Scott-Simmons",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"This might be better placed as a feature request not a bug, since the logging `Using the latest cached version of the dataset since sentientfutures/ahb couldn't be found on the Hugging Face Hub` is clear.",
"https://github.com/huggingface/datasets/pull/7929"
] | 2026-01-02T08:20:47
| 2026-01-02T11:25:08
| null |
NONE
| null | null | null | null |
### Describe the bug
`datasets.load_dataset` `revision` semantics are a bit inconsistent when the dataset is not found on the huggingface hub. When fetching the latest cached version of the dataset, the `revision` argument is ignored, so long as any cached versions of the dataset already exist in the HF cache.
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset(
"sentientfutures/ahb",
"dimensions",
split="train",
revision="main"
)
# would expect some error to raise here
datasets.load_dataset(
"sentientfutures/ahb",
"dimensions",
split="train",
revision="invalid_revision"
)
```
### Expected behavior
On the second call to `datasets.load_dataset` in the 'steps to reproduce the bug' example, expect something like:
```sh
raise DatasetNotFoundError(
datasets.exceptions.DatasetNotFoundError: Revision 'invalid_revision' doesn't exist for dataset 'sentientfutures/ahb' on the Hub.
```
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.37
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.9.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7928/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7927
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7927/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7927/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7927/events
|
https://github.com/huggingface/datasets/issues/7927
| 3,775,302,438
|
I_kwDODunzps7hBosm
| 7,927
|
Using Stateful Dataloader with Split Dataset By Node and DCP for DDP
|
{
"login": "conceptofmind",
"id": 25208228,
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conceptofmind",
"html_url": "https://github.com/conceptofmind",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Does it need to be pickled?\n\n```python\n def load_state_dict(self, state_dict):\n hf_state = pickle.loads(state_dict[\"data\"])\n self.train_dataset.load_state_dict(hf_state)\n\n def state_dict(self):\n return {\"data\": pickle.dumps(self.train_dataset.state_dict())}\n```",
"Pickling seems to have resolved the issue but it is not clear at all to me why this is necessary"
] | 2026-01-01T22:27:07
| 2026-01-02T02:48:21
| null |
NONE
| null | null | null | null |
### Describe the bug
I am trying to determine how to save and load the Stateful Dataloader State with DCP and Split Dataset by Node for DDP.
Currently, I am running into the issue where I am receiving a slow resume.
```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwarding your dataset by 5000 steps. For more efficient resumes, please implement `state_dict` and `load_state_dict` in your IterableDataset and/or iterator.
```
### Steps to reproduce the bug
Say we have a streaming dataset:
```python
class StreamingDataset(IterableDataset):
def __init__(
self,
path: str,
tokenizer: AutoTokenizer,
name: Optional[str] = None,
split: str = "train",
max_length: int = 2048,
ddp_rank: int = 0,
ddp_world_size: int = 1,
):
dataset = load_dataset(path, name, split=split, streaming=True)
self.train_dataset = split_dataset_by_node(
dataset=dataset, rank=ddp_rank, world_size=ddp_world_size
)
self.tokenizer = tokenizer
self.max_length = max_length
def __iter__(self):
for sample in iter(self.train_dataset):
tokenized = self.tokenizer(
sample["text"],
padding="max_length",
truncation=True,
max_length=self.max_length,
return_special_tokens_mask=True,
)
yield tokenized
```
We load that dataset into the Stateful Dataloader:
```python
trainloader = StatefulDataLoader(
dataset=train_dataset,
batch_size=args.batch_size,
collate_fn=data_collator,
)
```
We then have code for checkpointing and resuming the state using DCP:
```python
import os
from typing import Optional
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from blitzbert.utils import print_rank_0
class Checkpoint:
def __init__(
self,
model: torch.nn.Module,
optimizer: torch.optim.Optimizer,
trainloader,
step: Optional[int] = None,
epoch: Optional[int] = None,
):
self.model = model
self.optimizer = optimizer
self.trainloader = trainloader
self.step = step
self.epoch = epoch
def get_state_dict(self) -> dict:
model_state_dict, optimizer_state_dict = get_state_dict(
self.model, self.optimizer
)
return {
"model": model_state_dict,
"optim": optimizer_state_dict,
"trainloader": self.trainloader.state_dict(),
"step": self.step,
"epoch": self.epoch,
}
def save_checkpoint(
args,
model,
optimizer,
trainloader,
step: Optional[int] = None,
epoch: Optional[int] = None,
final_checkpoint: bool = False,
):
checkpointer = Checkpoint(
model=model,
optimizer=optimizer,
trainloader=trainloader,
step=step,
epoch=epoch,
)
state_dict = checkpointer.get_state_dict()
if final_checkpoint:
print_rank_0("Saving final model")
save_path = os.path.join(args.checkpoint_dir, "final_model")
dcp.save(state_dict, checkpoint_id=save_path)
dist.barrier()
single_file_path = os.path.join(args.checkpoint_dir, "final_checkpoint.pth")
dcp_to_torch_save(save_path, single_file_path)
else:
if step % args.checkpointing_steps == 0 and step != 0:
print_rank_0(f"Saving model at step: {step}")
save_path = os.path.join(args.checkpoint_dir, f"epoch_{epoch}_step_{step}")
dcp.save(state_dict, checkpoint_id=save_path)
dist.barrier()
def load_checkpoint(args, model, optimizer, trainloader):
if not args.resume_from_checkpoint:
return 0, 0
checkpoint_path = args.resume_from_checkpoint
print_rank_0(f"Resumed from checkpoint: {checkpoint_path}")
checkpointer = Checkpoint(
model=model,
optimizer=optimizer,
trainloader=trainloader,
)
state_dict = checkpointer.get_state_dict()
dcp.load(
state_dict=state_dict,
checkpoint_id=checkpoint_path,
)
set_state_dict(
model,
optimizer,
model_state_dict=state_dict["model"],
optim_state_dict=state_dict["optim"],
)
trainloader.load_state_dict(state_dict["trainloader"])
step = state_dict["step"]
epoch = state_dict["epoch"]
return step, epoch
```
and then loading the checkpoint:
```python
completed_steps, current_epoch = load_checkpoint(
args=args, model=model, optimizer=optimizer, trainloader=trainloader
)
```
### Expected behavior
If I implement what the warning says:
```python
def state_dict(self):
return self.train_dataset.state_dict()
def load_state_dict(self, state):
self.train_dataset.load_state_dict(state)
```
I then get:
```
[rank0]: raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
[rank0]: RuntimeError: Missing key in checkpoint state_dict: trainloader.dataset_state.examples_iterable.examples_iterable.previous_state.
```
How exactly should one be saving and resuming the Stateful Dataloader with Hugging Face datasets?
### Environment info
"datasets>=4.4.1",
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7927/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7926/events
|
https://github.com/huggingface/datasets/pull/7926
| 3,773,696,472
|
PR_kwDODunzps67Jxxz
| 7,926
|
Add lightweight PDB (Protein Data Bank) file support
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T21:01:04
| 2025-12-31T21:01:04
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7926",
"html_url": "https://github.com/huggingface/datasets/pull/7926",
"diff_url": "https://github.com/huggingface/datasets/pull/7926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7926.patch",
"merged_at": null
}
|
## Summary
This PR adds support for loading PDB (Protein Data Bank) files directly with `load_dataset()`.
PDB is the legacy fixed-width format for representing 3D macromolecular structures, widely used for historical datasets and still common in computational biology workflows.
### Key Features
- **Zero external dependencies** - Pure Python parser using fixed-width column positions per official PDB specification
- **Record type filtering** - Load ATOM, HETATM, or both record types
- **Column selection** - Choose specific columns to reduce memory usage
- **Compression support** - Automatic detection of gzip, bzip2, and xz compressed files via magic bytes
- **Batch processing** - Configurable batch size for memory-efficient processing
### Columns (ATOM/HETATM records)
| Column | Type | Description |
|--------|------|-------------|
| `record_type` | string | ATOM or HETATM |
| `atom_serial` | int32 | Atom serial number |
| `atom_name` | string | Atom name (e.g., CA, N, C) |
| `residue_name` | string | Residue name (e.g., ALA, GLY) |
| `chain_id` | string | Chain identifier |
| `residue_seq` | int32 | Residue sequence number |
| `x`, `y`, `z` | float32 | Coordinates (Å) |
| `occupancy` | float32 | Occupancy factor |
| `temp_factor` | float32 | Temperature factor (B-factor) |
| `element` | string | Element symbol |
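To make the fixed-width parsing concrete, a minimal sketch of extracting the columns above from a single `ATOM`/`HETATM` line (slice positions follow the wwPDB specification; this is an illustration, not the actual parser in this PR):
```python
def parse_atom_line(line: str) -> dict:
    """Parse one ATOM/HETATM record using fixed column positions (0-based slices)."""
    return {
        "record_type": line[0:6].strip(),   # "ATOM" or "HETATM"
        "atom_serial": int(line[6:11]),
        "atom_name": line[12:16].strip(),
        "residue_name": line[17:20].strip(),
        "chain_id": line[21:22].strip(),
        "residue_seq": int(line[22:26]),
        "x": float(line[30:38]),
        "y": float(line[38:46]),
        "z": float(line[46:54]),
        "occupancy": float(line[54:60]),
        "temp_factor": float(line[60:66]),
        "element": line[76:78].strip(),
    }
```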
### Supported Extensions
`.pdb`, `.ent` (and compressed variants)
### Usage
```python
from datasets import load_dataset
# Load PDB file
dataset = load_dataset("pdb", data_files="structure.pdb")
# Load only ATOM records (exclude ligands/water)
dataset = load_dataset("pdb", data_files="structure.pdb", record_types=["ATOM"])
# Load specific columns
dataset = load_dataset("pdb", data_files="structure.pdb",
columns=["atom_name", "residue_name", "x", "y", "z"])
```
### Use Cases
- Legacy structure dataset processing
- Molecular dynamics trajectory analysis
- Structure-based ML training data
- Protein visualization data preparation
### References
- PDB format specification: https://www.wwpdb.org/documentation/file-format
### Test Results
All 24 tests pass:
- Basic loading, column filtering, record type filtering
- Gzip compression, multi-chain structures, alternate locations
- Charged atoms, batch sizes, schema types, feature casting
- Empty files, multiple files, insertion codes, negative coordinates
Part of the bioinformatics file format support series (FASTA #7923, FASTQ #7924, mmCIF #7925).
cc @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7926/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7925/events
|
https://github.com/huggingface/datasets/pull/7925
| 3,773,577,850
|
PR_kwDODunzps67JW3g
| 7,925
|
feat: Add mmCIF file support for macromolecular structures
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T20:11:32
| 2025-12-31T20:11:32
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7925",
"html_url": "https://github.com/huggingface/datasets/pull/7925",
"diff_url": "https://github.com/huggingface/datasets/pull/7925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7925.patch",
"merged_at": null
}
|
## Summary
This PR adds support for loading mmCIF (macromolecular Crystallographic Information File) files directly with `load_dataset()`.
mmCIF is the modern standard format for representing 3D macromolecular structures, used by the Protein Data Bank (PDB) since 2014. This format is essential for machine learning applications in structural biology, drug discovery, and protein engineering.
### Key Features
- **Zero external dependencies**: Pure Python parser for CIF syntax (no BioPython or other heavy dependencies)
- **Streaming support**: Generator-based parsing for memory-efficient handling of large structure files
- **Compression support**: Automatic detection and handling of gzip, bzip2, and xz compressed files
- **ML-ready output**: Atomic coordinates structured for use in structure-based ML models (AlphaFold, graph neural networks, etc.)
### Default Columns (atom_site category)
| Column | Type | Description |
|--------|------|-------------|
| `id` | int | Atom serial number |
| `type_symbol` | string | Element symbol |
| `label_atom_id` | string | Atom name (e.g., CA, N, C) |
| `label_comp_id` | string | Residue name (e.g., ALA, GLY) |
| `label_asym_id` | string | Chain identifier |
| `label_seq_id` | int | Residue sequence number |
| `Cartn_x` | float | X coordinate (Å) |
| `Cartn_y` | float | Y coordinate (Å) |
| `Cartn_z` | float | Z coordinate (Å) |
| `occupancy` | float | Occupancy factor |
| `B_iso_or_equiv` | float | Temperature factor (B-factor) |
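For context on what pure-Python CIF parsing involves, a deliberately simplified sketch of reading `_atom_site` loop rows (it ignores quoted and multi-line CIF values and assumes data rows start with `ATOM`/`HETATM`; it is not the parser in this PR):
```python
def read_atom_site_rows(lines):
    """Yield one dict per row of the _atom_site loop, keyed by the tag names above."""
    headers, in_rows = [], False
    for raw in lines:
        line = raw.strip()
        if line.startswith("_atom_site."):
            # Collect column names, e.g. "_atom_site.Cartn_x" -> "Cartn_x"
            headers.append(line.split(".", 1)[1].split()[0])
        elif headers and not in_rows and line.startswith(("ATOM", "HETATM")):
            in_rows = True  # first data row of the loop
        if in_rows:
            if not line or line.startswith(("#", "_", "loop_")):
                break  # end of the loop block
            yield dict(zip(headers, line.split()))
```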
### Configuration Options
- `columns`: Select subset of atom_site columns
- `include_hetatm`: Option to exclude ligand/water HETATM records (default: True)
- `batch_size`: Control atoms per batch for memory management (default: 100000)
### Supported Extensions
`.cif`, `.mmcif` (and gzip/bzip2/xz compressed variants)
### Usage
```python
from datasets import load_dataset
# Load mmCIF file
dataset = load_dataset("mmcif", data_files="structure.cif")
# Load with column filtering
dataset = load_dataset("mmcif", data_files="structure.cif",
columns=["label_atom_id", "Cartn_x", "Cartn_y", "Cartn_z"])
# Exclude HETATM records (ligands, water)
dataset = load_dataset("mmcif", data_files="structure.cif", include_hetatm=False)
# Load compressed file
dataset = load_dataset("mmcif", data_files="structure.cif.gz")
```
### Use Cases
- AlphaFold/structure prediction training data
- Protein-ligand interaction datasets
- Graph neural networks on molecular structures
- Structure-based drug discovery
### References
- mmCIF specification: https://mmcif.wwpdb.org/
- PDB archive: https://www.rcsb.org/
- Part of bioinformatics file format support initiative (following #7923 FASTA and #7924 FASTQ)
cc @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7925/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7924/events
|
https://github.com/huggingface/datasets/pull/7924
| 3,773,509,771
|
PR_kwDODunzps67JHNF
| 7,924
|
Add lightweight FASTQ file format support
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T19:46:42
| 2025-12-31T19:49:41
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7924",
"html_url": "https://github.com/huggingface/datasets/pull/7924",
"diff_url": "https://github.com/huggingface/datasets/pull/7924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7924.patch",
"merged_at": null
}
|
## Summary
This PR adds support for loading FASTQ files directly with `load_dataset()`.
FASTQ is an extension of FASTA that includes quality scores for each base, widely used for storing output from high-throughput sequencing instruments.
### Key Features
- **Zero external dependencies** - Pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Quality score support** - Preserves per-base quality scores as ASCII-encoded strings
- **Streaming support** - Generator-based parsing for memory efficiency with large NGS files
- **Compression support** - Automatic detection of gzip, bzip2, and xz compressed files
- **Large sequence support** - Uses `large_string` for both sequence and quality columns
- **Parquet-safe batching** - Dual-threshold batching (batch_size + max_batch_bytes) prevents page size errors
### Columns
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `@`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The nucleotide sequence |
| `quality` | large_string | ASCII-encoded quality scores (Phred+33 by default) |
### Supported Extensions
`.fq`, `.fastq` (and compressed variants: `.fq.gz`, `.fastq.gz`, `.fq.bz2`, `.fq.xz`)
### Usage
```python
from datasets import load_dataset
# Load FASTQ file
dataset = load_dataset("fastq", data_files="reads.fastq")
# Load gzipped file
dataset = load_dataset("fastq", data_files="reads.fq.gz")
# Filter columns
dataset = load_dataset("fastq", data_files="reads.fq", columns=["sequence", "quality"])
```
### Quality Score Format
Quality scores use Sanger/Illumina 1.8+ encoding (Phred+33):
- ASCII character `!` (33) = quality 0
- ASCII character `I` (73) = quality 40
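As a quick worked example of the Phred+33 convention above (plain Python, independent of this PR):
```python
def phred33_to_scores(quality: str) -> list[int]:
    """Convert an ASCII-encoded quality string (Phred+33) to integer scores."""
    return [ord(ch) - 33 for ch in quality]

print(phred33_to_scores("!I"))  # [0, 40] -> '!' is quality 0, 'I' is quality 40
```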
### Testing
- 22 comprehensive tests covering basic loading, multi-line sequences, compression, batching, schema types, and edge cases
- All tests passing
- Linting clean
### References
- Follows pattern established in #7923 (FASTA support)
- Parser based on: https://github.com/lh3/readfq
- Addresses feedback from #7851
cc: @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7924/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7923/events
|
https://github.com/huggingface/datasets/pull/7923
| 3,773,472,998
|
PR_kwDODunzps67I-y3
| 7,923
|
feat(fasta): add lightweight FASTA file format support
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T19:33:00
| 2025-12-31T19:50:29
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7923",
"html_url": "https://github.com/huggingface/datasets/pull/7923",
"diff_url": "https://github.com/huggingface/datasets/pull/7923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7923.patch",
"merged_at": null
}
|
## Summary
This PR adds support for loading FASTA files directly with `load_dataset()`, addressing feedback from #7851.
FASTA is a text-based format for representing nucleotide sequences (DNA/RNA) or peptide sequences (proteins), widely used in bioinformatics.
## Key Features
- **Zero external dependencies** - Uses a lightweight pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Streaming support** - Generator-based parsing for memory efficiency with large genomic files
- **Compression support** - Automatic detection and handling of gzip, bzip2, and xz compressed files via magic bytes
- **Large sequence support** - Uses `large_string` Arrow type to handle viral genomes and long sequences (fixes UTF-8 overflow)
- **Adaptive batching** - `max_batch_bytes` parameter (default 256MB) prevents Parquet page size errors with very large sequences
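The dual-threshold idea can be sketched roughly as follows (simplified; names like `max_batch_rows` are illustrative and not necessarily the builder's actual parameters):
```python
def iter_batches(records, max_batch_rows=10_000, max_batch_bytes=256 * 1024 * 1024):
    """Yield lists of records, flushing whenever the row count or byte size crosses its limit."""
    batch, batch_bytes = [], 0
    for record in records:
        batch.append(record)
        batch_bytes += len(record["sequence"].encode("utf-8"))
        if len(batch) >= max_batch_rows or batch_bytes >= max_batch_bytes:
            yield batch
            batch, batch_bytes = [], 0
    if batch:
        yield batch
```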
## Technical Decisions (Addressing #7851 Feedback)
| Concern | Solution |
|---------|----------|
| Long sequences → UTF-8 overflow (@apcamargo, @UriNeri) | Uses `pa.large_string()` for sequence column |
| BioPython is overkill (@apcamargo) | Pure Python parser based on Heng Li's readfq.py |
| Parquet page size limit i32::MAX (@UriNeri) | Adaptive dual-threshold batching with `max_batch_bytes` |
## Columns
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `>`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The biological sequence (DNA/RNA/protein) |
## Supported Extensions
`.fa`, `.fasta`, `.fna`, `.ffn`, `.faa`, `.frn` (and compressed variants)
## Usage
```python
from datasets import load_dataset
# Load FASTA file
dataset = load_dataset("fasta", data_files="sequences.fasta")
# Load with column filtering
dataset = load_dataset("fasta", data_files="sequences.fa", columns=["id", "sequence"])
# Load gzipped file
dataset = load_dataset("fasta", data_files="sequences.fa.gz")
# Configure batching for very large genomes
dataset = load_dataset("fasta", data_files="genome.fasta", max_batch_bytes=128*1024*1024)
```
## Test Plan
- [x] Basic FASTA loading (3 sequences, multi-line)
- [x] Multiple extension support (.fa, .fasta, .fna, .ffn, .faa, .frn)
- [x] Compression formats (gzip, bz2, xz)
- [x] Long sequences with `large_string` type
- [x] Column filtering
- [x] Batch size configuration
- [x] Byte-based batching (`max_batch_bytes`)
- [x] Large genome handling (simulated 50KB sequences)
- [x] Empty description handling
- [x] Multiple files loading
- [x] Custom feature casting
All 22 tests passing.
cc: @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7923/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7922
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7922/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7922/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7922/events
|
https://github.com/huggingface/datasets/issues/7922
| 3,772,247,021
|
I_kwDODunzps7g1-vt
| 7,922
|
Support Apache TsFile Datasets
|
{
"login": "qiaojialin",
"id": 7240743,
"node_id": "MDQ6VXNlcjcyNDA3NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7240743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiaojialin",
"html_url": "https://github.com/qiaojialin",
"followers_url": "https://api.github.com/users/qiaojialin/followers",
"following_url": "https://api.github.com/users/qiaojialin/following{/other_user}",
"gists_url": "https://api.github.com/users/qiaojialin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiaojialin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiaojialin/subscriptions",
"organizations_url": "https://api.github.com/users/qiaojialin/orgs",
"repos_url": "https://api.github.com/users/qiaojialin/repos",
"events_url": "https://api.github.com/users/qiaojialin/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiaojialin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"A large quantity of industrial timeseries data has been stored as TsFile, and I have been constantly hearing about AI fellows complaining about the lack of data or the insufficiency of data quality.\n\nI like the ambition that uses TsFile as the bridge between AI research and industrial analysis requirements. This may help both sides improve their works with high-quality data and realtime data access.",
"It will be so convenient to have such a method to directly load tsfile into memory for further analysis.",
"Looking forward to see the tsfile become the part of the AI eco-systems.",
"Looking forward to the support for TsFile format!",
"Hey folks! I’ve added TsFile support by following the existing HDF5/Parquet patterns.\n\nThis includes:\n\nA TsFile builder with schema inference from file metadata\n\nTime-range filtering and column selection\n\nMemory-efficient reading using the tsfile library’s iterator API\n\n11 tests, all passing ✅\n\nI’ll be opening a PR shortly, would love any suggestions or feedback you might have!"
] | 2025-12-31T08:07:51
| 2026-01-05T08:23:21
| null |
NONE
| null | null | null | null |
### Feature request
I would love to use Hugging Face datasets library to directly load datasets composed of .tsfile files, for example:
`ds = load_dataset("username/dataset-with-tsfile-files")`
This feature would allow researchers working on time-series tasks to seamlessly integrate datasets stored in the Apache TsFile format into the Hugging Face ecosystem.
### Motivation
[Apache TsFile](https://tsfile.apache.org/) is a mature Apache project and a dedicated file format designed for efficient time-series data storage and retrieval. The repository is [here](https://github.com/apache/tsfile).
It has been widely adopted in the IoT community and serves as the underlying storage format for projects like [Apache IoTDB](https://iotdb.apache.org/).
Apache TsFile has the following advantages in the time-series area:
- Time-series native schema. Time-series data is organized by device and sensor IDs.
- A complete multi-language API (Python, Java, C++, C) for reading and writing tsfile.
- Superior write throughput and query efficiency.
- High compression ratio through per-series encoding and compression schemes.
- Efficient dataset transformation. ETL-free file compaction and efficient random access to time-series chunks, enabling faster data loading and lower query latency.
These properties make TsFile highly suitable for time-series model training, especially where time-series random access and efficient I/O are critical.
More details can be referred from this paper “[Apache TsFile: An IoT-native Time Series File Format (VLDB 2024)](https://www.vldb.org/pvldb/vol17/p4064-song.pdf)”.
Integrating TsFile support into datasets will benefit the broader machine learning community working on tasks such as forecasting and anomaly detection.
### Your contribution
As a member of the TsFile community, I recently initiated a [proposal](https://lists.apache.org/thread/119vc9nh03dz4583cx9fwt83fp8v68vy) to integrate TsFile with Huggingface, which has received enthusiastic responses from the community.
We are willing to do the following contributions:
- Implement and contribute the PR that adds TsFile dataset support to Hugging Face datasets.
- Provide long-term maintenance for this integration.
- Any other needs for TsFile to support large-scale time-series datasets.
We are excited to contribute and continuously participate in the future evolution of TsFile and datasets to better support time-series data workload.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7922/reactions",
"total_count": 24,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 4,
"rocket": 6,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/datasets/issues/7922/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7921/events
|
https://github.com/huggingface/datasets/pull/7921
| 3,766,879,197
|
PR_kwDODunzps66zE_q
| 7,921
|
Add beginner-friendly quick installation verification tip in README
|
{
"login": "ashupaul2005-byte",
"id": 237550974,
"node_id": "U_kgDODii9fg",
"avatar_url": "https://avatars.githubusercontent.com/u/237550974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashupaul2005-byte",
"html_url": "https://github.com/ashupaul2005-byte",
"followers_url": "https://api.github.com/users/ashupaul2005-byte/followers",
"following_url": "https://api.github.com/users/ashupaul2005-byte/following{/other_user}",
"gists_url": "https://api.github.com/users/ashupaul2005-byte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashupaul2005-byte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashupaul2005-byte/subscriptions",
"organizations_url": "https://api.github.com/users/ashupaul2005-byte/orgs",
"repos_url": "https://api.github.com/users/ashupaul2005-byte/repos",
"events_url": "https://api.github.com/users/ashupaul2005-byte/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashupaul2005-byte/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-29T09:22:27
| 2025-12-29T09:22:27
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7921",
"html_url": "https://github.com/huggingface/datasets/pull/7921",
"diff_url": "https://github.com/huggingface/datasets/pull/7921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7921.patch",
"merged_at": null
}
|
This PR adds a small beginner-friendly tip to help users quickly verify whether 🤗 Datasets is installed correctly by loading a simple dataset.
This improves the onboarding experience for first-time users and reduces confusion for beginners.
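For context, such a verification snippet might look like this (the dataset name below is only an example, not necessarily the one used in the README tip):
```python
from datasets import load_dataset

# If 🤗 Datasets is installed correctly, this downloads a small dataset and prints its splits.
dataset = load_dataset("rotten_tomatoes")
print(dataset)
```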
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7921/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7920
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7920/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7920/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7920/events
|
https://github.com/huggingface/datasets/pull/7920
| 3,766,070,566
|
PR_kwDODunzps66wgLx
| 7,920
|
Add progress_format support for machine-readable progress output
|
{
"login": "podarok",
"id": 563412,
"node_id": "MDQ6VXNlcjU2MzQxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/563412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/podarok",
"html_url": "https://github.com/podarok",
"followers_url": "https://api.github.com/users/podarok/followers",
"following_url": "https://api.github.com/users/podarok/following{/other_user}",
"gists_url": "https://api.github.com/users/podarok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/podarok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/podarok/subscriptions",
"organizations_url": "https://api.github.com/users/podarok/orgs",
"repos_url": "https://api.github.com/users/podarok/repos",
"events_url": "https://api.github.com/users/podarok/events{/privacy}",
"received_events_url": "https://api.github.com/users/podarok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-28T22:35:24
| 2025-12-28T22:35:24
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7920",
"html_url": "https://github.com/huggingface/datasets/pull/7920",
"diff_url": "https://github.com/huggingface/datasets/pull/7920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7920.patch",
"merged_at": null
}
|
## Summary
Adds `progress_format` support to `datasets`, enabling machine-readable JSON progress output similar to [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921).
## Motivation
When using `datasets` in automated pipelines or UI applications, it's useful to emit machine-readable progress instead of ANSI progress bars. This PR adds the same `progress_format` option that was implemented in tokenizers.
## Changes
### New Functions
- `set_progress_format(format: str)`: Set global progress format
- `get_progress_format() -> str`: Get current progress format
### Supported Formats
1. **"tqdm"** (default): Interactive progress bars
2. **"json"**: Machine-readable JSON lines to stderr
3. **"silent"**: No output
### JSON Format
When `progress_format="json"`, emits JSON every 5% progress change or completion:
```json
{"stage":"Processing","current":50,"total":100,"percent":50.0}
```
## Usage Example
```python
from datasets import load_dataset
from datasets.utils import set_progress_format
# Enable JSON output
set_progress_format("json")
# Progress will now be emitted as JSON lines
dataset = load_dataset("Goader/kobza", split="train", streaming=True)
for sample in dataset:
process(sample)
```
## Implementation Details
- Suppresses visual output using `io.StringIO()` when format is "json"
- Keeps progress tracking active (unlike `disable=True`)
- Emits JSON to stderr every 5% progress change
- Exports new functions from `datasets.utils`
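For illustration, a minimal JSON-emitting tqdm wrapper following this description could look like the sketch below (class name and details are assumptions, not the PR's actual code):
```python
import io
import json
import sys

from tqdm.auto import tqdm

class JsonTqdm(tqdm):
    """Sketch: keep progress tracking active, hide the visual bar, emit JSON lines to stderr."""

    def __init__(self, *args, **kwargs):
        kwargs.setdefault("file", io.StringIO())  # suppress visual output without disabling tracking
        super().__init__(*args, **kwargs)
        self._last_percent = -5.0

    def update(self, n=1):
        out = super().update(n)
        if self.total:
            percent = self.n / self.total * 100
            # emit every 5% progress change or on completion
            if percent - self._last_percent >= 5 or self.n >= self.total:
                self._last_percent = percent
                payload = {"stage": self.desc or "", "current": self.n, "total": self.total, "percent": round(percent, 1)}
                print(json.dumps(payload), file=sys.stderr)
        return out
```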
## Cross-Reference
This implementation mirrors the approach from:
- [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921)
## Testing
Tested with:
```python
from datasets.utils import set_progress_format, tqdm
set_progress_format('json')
for i in tqdm(range(100), desc='Test'):
process(i)
# Outputs: {"stage":"Test","current":10,"total":100,"percent":10.0}
```
## Checklist
- [x] New functions added to `datasets.utils.tqdm`
- [x] Functions exported from `datasets.utils.__init__`
- [x] JSON format emits to stderr
- [x] Visual output suppressed when format="json"
- [x] Progress tracking remains active
- [x] Cross-referenced with tokenizers#1921
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7920/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7919
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7919/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7919/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7919/events
|
https://github.com/huggingface/datasets/pull/7919
| 3,765,768,457
|
PR_kwDODunzps66vmQC
| 7,919
|
Fix load_from_disk progress bar with redirected stdout
|
{
"login": "omarfarhoud",
"id": 118056245,
"node_id": "U_kgDOBwllNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118056245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarfarhoud",
"html_url": "https://github.com/omarfarhoud",
"followers_url": "https://api.github.com/users/omarfarhoud/followers",
"following_url": "https://api.github.com/users/omarfarhoud/following{/other_user}",
"gists_url": "https://api.github.com/users/omarfarhoud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarfarhoud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarfarhoud/subscriptions",
"organizations_url": "https://api.github.com/users/omarfarhoud/orgs",
"repos_url": "https://api.github.com/users/omarfarhoud/repos",
"events_url": "https://api.github.com/users/omarfarhoud/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarfarhoud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"this seems to contradict the comment that says \r\n\r\n> set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n\r\nI believe the right approach is to do the same as in https://github.com/huggingface/huggingface_hub/pull/2698",
"> this seems to contradict the comment that says\r\n> \r\n> > set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n> \r\n> I believe the right approach is to do the same as in [huggingface/huggingface_hub#2698](https://github.com/huggingface/huggingface_hub/pull/2698)\r\n\r\nUpdated to check TQDM_POSITION=-1 to force-enable progress bars in cloud environments, \r\nfollowing the same pattern as huggingface_hub#2698.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7919). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Moved the TQDM_POSITION check to the tqdm class in utils/tqdm.py so all progress bars \r\nin the codebase have consistent behavior. Thanks for the suggestion!",
"@lhoestq thanks again for the suggestion. I’ve applied it and everything should now be consistent across all tqdm usage. Happy to adjust anything else if needed."
] | 2025-12-28T15:39:31
| 2026-01-02T11:00:20
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7919",
"html_url": "https://github.com/huggingface/datasets/pull/7919",
"diff_url": "https://github.com/huggingface/datasets/pull/7919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7919.patch",
"merged_at": null
}
|
Fixes #7918
## Problem
When using `load_from_disk()` with `contextlib.redirect_stdout()`, the progress bar was not showing even for datasets with >16 files.
## Root Cause
The `disable` parameter was set to `None` which triggers TTY auto-detection. This fails when stdout is redirected, causing the progress bar to be hidden.
## Solution
Changed `disable=len(state["_data_files"]) <= 16 or None` to `disable=len(state["_data_files"]) <= 16` to force the progress bar to show for datasets with >16 files, regardless of stdout redirection.
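To make the root cause concrete (plain Python, not PR code):
```python
num_files = 32
print(num_files <= 16 or None)  # None -> tqdm falls back to TTY auto-detection and hides the bar
print(num_files <= 16)          # False -> the bar is always shown for >16 files
```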
## Testing
Verified that progress bars now appear correctly both with and without stdout redirection for datasets with >16 shards.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7919/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7918
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7918/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7918/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7918/events
|
https://github.com/huggingface/datasets/issues/7918
| 3,765,489,462
|
I_kwDODunzps7gcM82
| 7,918
|
datasets.load_from_disk doesn't show progress bar
|
{
"login": "Tommigun1980",
"id": 60286968,
"node_id": "MDQ6VXNlcjYwMjg2OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60286968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tommigun1980",
"html_url": "https://github.com/Tommigun1980",
"followers_url": "https://api.github.com/users/Tommigun1980/followers",
"following_url": "https://api.github.com/users/Tommigun1980/following{/other_user}",
"gists_url": "https://api.github.com/users/Tommigun1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tommigun1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tommigun1980/subscriptions",
"organizations_url": "https://api.github.com/users/Tommigun1980/orgs",
"repos_url": "https://api.github.com/users/Tommigun1980/repos",
"events_url": "https://api.github.com/users/Tommigun1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tommigun1980/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"#self-assign"
] | 2025-12-28T09:14:41
| 2025-12-28T15:07:01
| null |
NONE
| null | null | null | null |
### Describe the bug
This is the inverse of the bug at [https://github.com/huggingface/datasets/issues/7030](https://github.com/huggingface/datasets/issues/7030), i.e. that `datasets.load_from_disk(path)` displays no progress bar. My dataset has > 16 files in it.
I am redirecting stdout to capture the log; could this have something to do with it? All other progress bars work fine, though; only the HF dataset progress bars are affected.
### Steps to reproduce the bug
```py
with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
datasets.load_from_disk(path)
```
### Expected behavior
The progress bar should show when loading a dataset.
### Environment info
Python 3.13.9
Datasets 4.4.1
macOS Tahoe 26.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7918/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7917
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7917/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7917/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7917/events
|
https://github.com/huggingface/datasets/issues/7917
| 3,764,913,807
|
I_kwDODunzps7gaAaP
| 7,917
|
IterableDataset supports automatic sharding
|
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"You can already use `.shard()` instead like this:\n\n```python\ndataset = dataset.shard(index=rank, num_shards=world_size)\n```\n\nnote that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1",
"> You can already use `.shard()` instead like this:\n> \n> dataset = dataset.shard(index=rank, num_shards=world_size)\n> note that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1\n\nThis means I have to ensure that the initial num_shards is greater than the number of GPUs I use each time, which seems inflexible. Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time? For example:\n```\ndataset = load_dataset(*, stream=True) # dataset.num_shards()=1\nnum_shards=world_size*dataloader_num_workers\ndataset = dataset.dynamically_shard(num_shards=num_shards, num_samples=num_samples) #We may need to know the total number of samples (num_samples) in advance.\n```\n\n",
"> Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time?\n\nNo it's not possible without either\n\n1. doing data skipping, which degrades the data loading performance significantly (every node has to download the same data and skip most samples)\n2. or divide the original files further, which requires additional logic for every file format\n\nI would be interested in exploring 2 though, maybe if we start with Parquet support. Right now it fails because `ArrowExamplesIterable` doesn't know how to shard more than num_shards. We could have instead a `ReshardableArrowExamplesIterable` that would pass the right arguments to `_generate_tables()` in parquet.py to only read the data requested for a specific node",
"> ReshardableArrowExamplesIterable\n\nOkay, my datasets are all on my local disk, so I haven't considered the overhead of data download. Are there any tutorials on creating custom iterable datasets? For example, a custom `iterabledataset.__iter__` function can be used to skip data, and it can inherit operations like `iterabledataset.map`."
] | 2025-12-27T16:48:29
| 2025-12-29T16:06:52
| null |
NONE
| null | null | null | null |
### Feature request
Add a sharding method to the streaming IterableDataset, allowing users to adjust the number of shards to their training resources. For example:
```
dataset = load_dataset(*, streaming=True)
dataset = dataset.shard(num_shards=num_shards, num_samples=num_samples) #We may need to know the total number of samples (num_samples) in advance.
```
### Motivation
When performing large-scale pre-training in a distributed environment, large datasets may only be loaded in a streaming manner. To improve training efficiency, my current approach is as follows:
```
file_type="parquet"
dataset_path="./*.parquet"
dataset = load_dataset(file_type, data_files=dataset_path, streaming=True)
dataset = split_dataset_by_node(dataset, rank=rank, world_size=world_size)
```
I split a large file into N = world_size * dataloader_num_workers files and placed them under dataset_path. This ensures that each GPU processes different shards. However, this approach has some issues. If the number of GPUs used to train the model changes next time, I need to split the large file again to ensure that IterableDataset.num_shards = world_size * dataloader_num_workers.
I'd like to know if there's a better approach, such as directly loading the large dataset in a streaming manner and then sharding the IterableDataset based on the number of GPUs and num_workers, similar to the approach in Example 1 of https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset @lhoestq
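For reference, the `.shard()`-based alternative mentioned in the comments above looks like this in practice (rank and world_size would come from the training launcher; it requires `dataset.num_shards >= world_size`):
```python
from datasets import load_dataset

rank, world_size = 0, 8  # normally provided by the distributed launcher

dataset = load_dataset("parquet", data_files="./*.parquet", streaming=True)
# Each rank keeps a disjoint subset of the underlying file shards; shard counts may differ by +-1.
dataset = dataset.shard(num_shards=world_size, index=rank)
```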
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7917/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7916
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7916/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7916/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7916/events
|
https://github.com/huggingface/datasets/issues/7916
| 3,764,901,707
|
I_kwDODunzps7gZ9dL
| 7,916
|
No description provided.
|
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-12-27T16:33:11
| 2025-12-27T16:45:22
| 2025-12-27T16:45:22
|
NONE
| null | null | null | null | null |
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7916/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7915
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7915/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7915/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7915/events
|
https://github.com/huggingface/datasets/issues/7915
| 3,762,042,396
|
I_kwDODunzps7gPDYc
| 7,915
|
GDPval dataset Word docs corrupted
|
{
"login": "alexheat",
"id": 12248575,
"node_id": "MDQ6VXNlcjEyMjQ4NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/12248575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexheat",
"html_url": "https://github.com/alexheat",
"followers_url": "https://api.github.com/users/alexheat/followers",
"following_url": "https://api.github.com/users/alexheat/following{/other_user}",
"gists_url": "https://api.github.com/users/alexheat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexheat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexheat/subscriptions",
"organizations_url": "https://api.github.com/users/alexheat/orgs",
"repos_url": "https://api.github.com/users/alexheat/repos",
"events_url": "https://api.github.com/users/alexheat/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexheat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"tentatively tagging @simonpfish ^\n\n(if it's an option you could enable PRs/Discussions on your dataset on HF)"
] | 2025-12-25T13:56:55
| 2025-12-26T09:06:13
| null |
NONE
| null | null | null | null |
The [openai/gdpval](https://huggingface.co/datasets/openai/gdpval) dataset on Hugging Face contains Word .docx files with two types of corruption that cause Microsoft Word to display an "unreadable content" error.
### Root Causes
1. **Corrupted settings.xml**: The `word/settings.xml` file uses incorrect namespace prefixes (`ns0:`, `ns1:`, etc.) instead of the proper prefixes (`w:`, `mc:`, `m:`, etc.)
2. **Malformed TargetMode attributes**: Some files have `TargetMode="External"` attributes missing their closing `/>` tag in hyperlink relationships
Both issues cause Word to reject the files even though the XML structure is technically valid.
I have a fix for the issue here https://github.com/alexheat/gdpval-docx-fix
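As a rough sketch of the first repair (the prefix mapping here is an assumption for illustration; the linked repo is the actual fix):
```python
import zipfile

def fix_settings_xml(src_docx: str, dst_docx: str) -> None:
    """Repackage a .docx, rewriting generic ns0:/ns1: prefixes in word/settings.xml.

    Assumption: ns0 -> w and ns1 -> mc; the real mapping must match the namespace URIs
    declared in the file.
    """
    with zipfile.ZipFile(src_docx) as zin, zipfile.ZipFile(dst_docx, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "word/settings.xml":
                text = data.decode("utf-8").replace("ns0:", "w:").replace("ns1:", "mc:")
                data = text.encode("utf-8")
            zout.writestr(item, data)
```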
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7915/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7914/events
|
https://github.com/huggingface/datasets/issues/7914
| 3,760,894,100
|
I_kwDODunzps7gKrCU
| 7,914
|
[ROCm] please install 'torchcodec'
|
{
"login": "AndreasKaratzas",
"id": 42451412,
"node_id": "MDQ6VXNlcjQyNDUxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/42451412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasKaratzas",
"html_url": "https://github.com/AndreasKaratzas",
"followers_url": "https://api.github.com/users/AndreasKaratzas/followers",
"following_url": "https://api.github.com/users/AndreasKaratzas/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasKaratzas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreasKaratzas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasKaratzas/subscriptions",
"organizations_url": "https://api.github.com/users/AndreasKaratzas/orgs",
"repos_url": "https://api.github.com/users/AndreasKaratzas/repos",
"events_url": "https://api.github.com/users/AndreasKaratzas/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreasKaratzas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"I was able to install torchcodec by building it from source and have put together a PR: https://github.com/vllm-project/vllm/pull/31323\n\nStill I think it would make this framework more robust to add at least one fallback lib (that is more widely used) in place should torchcodec installation fail or library is not found."
] | 2025-12-24T19:39:17
| 2025-12-28T07:25:42
| null |
NONE
| null | null | null | null |
### Describe the bug
The `datasets` library is widely used by many Python packages, so it is naturally a requirement on many platforms, including vLLM for ROCm. During audio dataset tests, the following exception is triggered:
```python
def decode_example(
self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None
) -> "AudioDecoder":
"""Decode example audio file into audio data.
Args:
value (`dict`):
A dictionary with keys:
- `path`: String with relative audio file path.
- `bytes`: Bytes of the audio file.
token_per_repo_id (`dict`, *optional*):
To access and decode
audio files from private repositories on the Hub, you can pass
a dictionary repo_id (`str`) -> token (`bool` or `str`)
Returns:
`torchcodec.decoders.AudioDecoder`
"""
if config.TORCHCODEC_AVAILABLE:
from ._torchcodec import AudioDecoder
else:
> raise ImportError("To support decoding audio data, please install 'torchcodec'.")
E ImportError: To support decoding audio data, please install 'torchcodec'.
```
At the same time, `torchcodec` cannot be installed on ROCm, because its GPU acceleration uses NVIDIA's NVDEC hardware decoder, which is NVIDIA-specific. Therefore, code paths that reach this block trigger errors on ROCm. Can you add an alternative package as a fallback there instead of raising an ImportError?
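To illustrate the kind of fallback being asked for (library choice and structure are only an example, not a concrete proposal):
```python
try:
    from torchcodec.decoders import AudioDecoder  # preferred backend when available
    TORCHCODEC_AVAILABLE = True
except ImportError:
    TORCHCODEC_AVAILABLE = False

def decode_audio(path: str):
    if TORCHCODEC_AVAILABLE:
        return AudioDecoder(path)
    # Fallback for platforms without torchcodec (e.g. ROCm): decode with soundfile instead.
    import soundfile as sf
    array, sampling_rate = sf.read(path)
    return {"array": array, "sampling_rate": sampling_rate}
```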
### Steps to reproduce the bug
On a machine with MI300/MI325/MI355:
```bash
pytest -s -v tests/entrypoints/openai/correctness/test_transcription_api_correctness.py::test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3]
```
### Expected behavior
```log
_________________________________________________ test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3] ________________________________________[383/535$
model_name = 'openai/whisper-large-v3', dataset_repo = 'D4nt3/esb-datasets-earnings22-validation-tiny-filtered', expected_wer = 12.74498, n_examples = -1, max_concurrent_request = None
@pytest.mark.parametrize("model_name", ["openai/whisper-large-v3"])
# Original dataset is 20GB+ in size, hence we use a pre-filtered slice.
@pytest.mark.parametrize(
"dataset_repo", ["D4nt3/esb-datasets-earnings22-validation-tiny-filtered"]
)
# NOTE: Expected WER measured with equivalent hf.transformers args:
# whisper-large-v3 + esb-datasets-earnings22-validation-tiny-filtered.
@pytest.mark.parametrize("expected_wer", [12.744980])
def test_wer_correctness(
model_name, dataset_repo, expected_wer, n_examples=-1, max_concurrent_request=None
):
# TODO refactor to use `ASRDataset`
with RemoteOpenAIServer(model_name, ["--enforce-eager"]) as remote_server:
> dataset = load_hf_dataset(dataset_repo)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:160:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:111: in load_hf_dataset
if "duration_ms" not in dataset[0]:
^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2876: in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2858: in _getitem
formatted_output = format_table(
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:658: in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:411: in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:460: in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:224: in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/features/features.py:2111: in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/features/features.py:1419: in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
### Environment info
- `datasets` version: 4.4.2
- Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7914/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7913
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7913/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7913/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7913/events
|
https://github.com/huggingface/datasets/pull/7913
| 3,758,884,376
|
PR_kwDODunzps66aEsF
| 7,913
|
Add lance format support
|
{
"login": "eddyxu",
"id": 17097,
"node_id": "MDQ6VXNlcjE3MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/17097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eddyxu",
"html_url": "https://github.com/eddyxu",
"followers_url": "https://api.github.com/users/eddyxu/followers",
"following_url": "https://api.github.com/users/eddyxu/following{/other_user}",
"gists_url": "https://api.github.com/users/eddyxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eddyxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eddyxu/subscriptions",
"organizations_url": "https://api.github.com/users/eddyxu/orgs",
"repos_url": "https://api.github.com/users/eddyxu/repos",
"events_url": "https://api.github.com/users/eddyxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/eddyxu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Mentioned https://github.com/huggingface/datasets/issues/7863 as well",
"@pdames for vis",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7913). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! I notice the current implementation doesn't support streaming because of the symlink hack.\r\n\r\nI believe you can do something like this instead:\r\n\r\n```python\r\ndef _generate_tables(self, paths: list[str]):\r\n for path in paths:\r\n ds = lance.dataset(path)\r\n for frag_idx, fragment in enumerate(ds.get_fragments()):\r\n for batch_idx, batch in enumerate(\r\n fragment.to_batches(columns=self.config.columns, batch_size=self.config.batch_size)\r\n ):\r\n table = pa.Table.from_batches([batch])\r\n table = self._cast_table(table)\r\n yield Key(frag_idx, batch_idx), table\r\n```\r\n\r\nnote that path can be a local one, but also a `hf://` URI",
"@lhoestq Take another look? "
] | 2025-12-24T00:52:20
| 2026-01-06T07:01:43
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7913",
"html_url": "https://github.com/huggingface/datasets/pull/7913",
"diff_url": "https://github.com/huggingface/datasets/pull/7913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7913.patch",
"merged_at": null
}
|
Add lance format as one of the `packaged_modules`.
```py
import datasets
ds = datasets.load_dataset("org/lance_repo", split="train")
# Or
ds = datasets.load_dataset("./local/data.lance")
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7913/reactions",
"total_count": 5,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7913/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7912
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7912/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7912/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7912/events
|
https://github.com/huggingface/datasets/pull/7912
| 3,755,023,829
|
PR_kwDODunzps66NQzG
| 7,912
|
fix low but large example indexerror
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-22T19:53:59
| 2025-12-22T19:53:59
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7912",
"html_url": "https://github.com/huggingface/datasets/pull/7912",
"diff_url": "https://github.com/huggingface/datasets/pull/7912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7912.patch",
"merged_at": null
}
|
Fixes #7911.
This PR simply implements the approach outlined in the corresponding issue: if examples are large, the number of shards should never exceed the number of samples. This is an absolute edge case, but it can happen for image data.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7912/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7911
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7911/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7911/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7911/events
|
https://github.com/huggingface/datasets/issues/7911
| 3,753,447,559
|
I_kwDODunzps7fuRCH
| 7,911
|
IndexError when saving few large examples to disk
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-22T11:33:19
| 2025-12-22T11:33:19
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I ran into this issue when processing a file (900MB) with just one example, but simplified it for a quicker reproducer below. The problem is that, if `num_shards` is not explicitly set, it is computed in https://github.com/huggingface/datasets/blob/main/src/datasets/utils/py_utils.py#L96 from the default `config.MAX_SHARD_SIZE` of 500MB. If a single example is larger than that, the computed number of shards exceeds the number of examples, and selecting the extra shards raises an IndexError.
An easy workaround is:
`dataset.save_to_disk(output_path, max_shard_size="1GB")` or `dataset.save_to_disk(output_path, num_shards=1)`.
I believe this should be fixed; it can happen in edge cases for image data, especially when just testing single partitions. The fix would be rather easy: use `num_shards = min(num_examples, <previously_calculated_num_shards>)`.
### Steps to reproduce the bug
```python
from datasets import Dataset
target_size = 2 * 1024 * 1024 # 2 MB in bytes
base_text = (
"This is a sample sentence that will be repeated many times to create a large dataset. "
* 100
)
large_text = ""
while len(large_text.encode("utf-8")) < target_size:
large_text += base_text
actual_size = len(large_text.encode("utf-8"))
size_mb = actual_size / (1024 * 1024)
data = {"text": [large_text], "label": [0], "id": [1]}
dataset = Dataset.from_dict(data)
output_path = "./sample_dataset"
# make sure this is split into 2 shards
dataset.save_to_disk(output_path, max_shard_size="1MB")
```
this results in
```bash
Saving the dataset (1/3 shards): 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 162.96 examples/s]
Traceback (most recent call last):
File "/home/tpitters/programming/toy-mmu/create_dataset.py", line 27, in <module>
dataset.save_to_disk(output_path, max_shard_size="1MB")
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1640, in save_to_disk
for kwargs in kwargs_per_job:
^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1617, in <genexpr>
"shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True),
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4987, in shard
return self.select(
~~~~~~~~~~~^
indices=indices,
^^^^^^^^^^^^^^^^
...<2 lines>...
writer_batch_size=writer_batch_size,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4104, in select
return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4164, in _select_contiguous
_check_valid_indices_value(start, len(self))
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 624, in _check_valid_indices_value
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 1 out of range for dataset of size 1.
```
### Expected behavior
should pass
### Environment info
datasets==4.4.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7911/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7910/events
|
https://github.com/huggingface/datasets/pull/7910
| 3,749,894,414
|
PR_kwDODunzps658oGv
| 7,910
|
Enhance cast_column() with cast_kwargs parameter
|
{
"login": "Moenupa",
"id": 49304833,
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moenupa",
"html_url": "https://github.com/Moenupa",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-20T10:09:11
| 2025-12-20T10:09:11
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7910",
"html_url": "https://github.com/huggingface/datasets/pull/7910",
"diff_url": "https://github.com/huggingface/datasets/pull/7910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7910.patch",
"merged_at": null
}
|
Fixes #7909.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7910/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7909
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7909/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7909/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7909/events
|
https://github.com/huggingface/datasets/issues/7909
| 3,749,885,131
|
I_kwDODunzps7fgrTL
| 7,909
|
Support cast_kwargs in cast_columns
|
{
"login": "Moenupa",
"id": 49304833,
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moenupa",
"html_url": "https://github.com/Moenupa",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[] | 2025-12-20T10:02:07
| 2025-12-20T10:28:01
| null |
NONE
| null | null | null | null |
### Feature request
expose `cast(**cast_kwargs)` to `cast_column()`
https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2205
### Motivation
`cast_column()` wraps the `cast()` function without exposing any of `cast()`'s args. For large multi-modal datasets, e.g.
```py
# a dataset whose "images" column is a list of {"bytes": b"...", ...} dicts, i.e. much more than one image per example
load_dataset("MLLM-CL/VTCBench").cast_column("images", List(Image(decode=False)))
```
This would fail due to #6206, #7167, where the default value `1000` for batch size in `cast()` is too large and causes `pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays`.
https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2164-L2205
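For reference, a minimal sketch of the current workaround (assuming a single `train` split): call `cast()` directly, which already accepts `batch_size`, and which is exactly what a `cast_kwargs` parameter would expose through `cast_column()`:
```py
from datasets import Features, Image, List, load_dataset

# Sketch of the workaround, not the proposed API: cast() already accepts batch_size.
ds = load_dataset("MLLM-CL/VTCBench", split="train")
features = Features({**ds.features, "images": List(Image(decode=False))})
ds = ds.cast(features, batch_size=100)  # smaller than the default 1000 to avoid the offset overflow
```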
### Your contribution
#7910
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7909/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7908
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7908/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7908/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7908/events
|
https://github.com/huggingface/datasets/pull/7908
| 3,747,829,610
|
PR_kwDODunzps651xlf
| 7,908
|
set dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7908). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-19T15:06:21
| 2025-12-19T15:11:05
| 2025-12-19T15:06:29
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7908",
"html_url": "https://github.com/huggingface/datasets/pull/7908",
"diff_url": "https://github.com/huggingface/datasets/pull/7908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7908.patch",
"merged_at": "2025-12-19T15:06:29"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7908/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7907
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7907/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7907/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7907/events
|
https://github.com/huggingface/datasets/pull/7907
| 3,747,818,613
|
PR_kwDODunzps651vMp
| 7,907
|
release: 4.4.2
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7907). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-19T15:02:23
| 2025-12-19T15:06:46
| 2025-12-19T15:03:22
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7907",
"html_url": "https://github.com/huggingface/datasets/pull/7907",
"diff_url": "https://github.com/huggingface/datasets/pull/7907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7907.patch",
"merged_at": "2025-12-19T15:03:22"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7907/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7906
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7906/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7906/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7906/events
|
https://github.com/huggingface/datasets/pull/7906
| 3,747,764,992
|
PR_kwDODunzps651jiI
| 7,906
|
Don't save original_shard_lengths by default for backward compat
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7906). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-19T14:44:09
| 2025-12-19T14:57:25
| 2025-12-19T14:57:23
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7906",
"html_url": "https://github.com/huggingface/datasets/pull/7906",
"diff_url": "https://github.com/huggingface/datasets/pull/7906.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7906.patch",
"merged_at": "2025-12-19T14:57:23"
}
|
Following #7897, but letting users enable it with `datasets.config.SAVE_ORIGINAL_SHARD_LENGTHS = True`.
This is useful for the Dataset Viewer to know where each row comes from after converting to Parquet.
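For example, opting back in looks roughly like this (a minimal sketch, not taken from the PR diff):
```python
import datasets

# Opt back in to saving original shard lengths (disabled by default for backward compat):
datasets.config.SAVE_ORIGINAL_SHARD_LENGTHS = True
```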
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7906/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7905
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7905/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7905/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7905/events
|
https://github.com/huggingface/datasets/issues/7905
| 3,734,233,245
|
I_kwDODunzps7ek-Cd
| 7,905
|
Unbounded network usage when opening Data Studio
|
{
"login": "alizaredornica-sys",
"id": 225014457,
"node_id": "U_kgDODWlyuQ",
"avatar_url": "https://avatars.githubusercontent.com/u/225014457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alizaredornica-sys",
"html_url": "https://github.com/alizaredornica-sys",
"followers_url": "https://api.github.com/users/alizaredornica-sys/followers",
"following_url": "https://api.github.com/users/alizaredornica-sys/following{/other_user}",
"gists_url": "https://api.github.com/users/alizaredornica-sys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alizaredornica-sys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alizaredornica-sys/subscriptions",
"organizations_url": "https://api.github.com/users/alizaredornica-sys/orgs",
"repos_url": "https://api.github.com/users/alizaredornica-sys/repos",
"events_url": "https://api.github.com/users/alizaredornica-sys/events{/privacy}",
"received_events_url": "https://api.github.com/users/alizaredornica-sys/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"cc @cfahlgren1",
"Thanks for reporting! Looking into this!"
] | 2025-12-16T10:45:02
| 2025-12-19T15:29:56
| null |
NONE
| null | null | null | null |
### Describe the bug
Opening the Data Studio tab on a dataset page triggers continuous and unbounded network traffic. This issue occurs across multiple browsers and continues even without user interaction.
### Steps to reproduce the bug
https://huggingface.co/datasets/slone/nllb-200-10M-sample/viewer
### Expected behavior
Data Studio should load a limited, finite amount of data and stop further network activity unless explicitly requested by the user.
### Environment info
- OS: Windows 10
- Browsers: Chrome, Firefox, Edge
- Device: Desktop
- Network: Standard broadband connection
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7905/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7904
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7904/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7904/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7904/events
|
https://github.com/huggingface/datasets/issues/7904
| 3,727,978,498
|
I_kwDODunzps7eNHAC
| 7,904
|
Request: Review pending neuroimaging PRs (#7886 BIDS loader, #7887 lazy loading)
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! sure I'll be happy to take a look, sorry for the delay :)"
] | 2025-12-14T20:34:31
| 2025-12-15T11:25:29
| null |
CONTRIBUTOR
| null | null | null | null |
## Summary
I'm building production neuroimaging pipelines that depend on `datasets` and would benefit greatly from two pending PRs being reviewed/merged.
## Pending PRs
| PR | Description | Status | Open Since |
|----|-------------|--------|------------|
| [#7886](https://github.com/huggingface/datasets/pull/7886) | BIDS dataset loader | Open | Nov 29 |
| [#7887](https://github.com/huggingface/datasets/pull/7887) | Lazy loading for NIfTI | Open | Nov 29 |
## Use Case
The neuroimaging community uses the BIDS (Brain Imaging Data Structure) standard for organizing MRI/fMRI data. These PRs would enable:
1. **#7886**: `load_dataset('bids', data_dir='/path/to/bids')` - Load local BIDS directories directly
2. **#7887**: Memory-efficient NIfTI handling (single 4D fMRI file can be 1-2GB)
## Current Workaround
Without these, users must either:
- Upload to Hub first, then consume (works but slow iteration)
- Hand-roll BIDS parsing (duplicates effort)
## Request
Could a maintainer review these PRs? Happy to address any feedback. The BIDS loader has tests passing and was end-to-end tested with real OpenNeuro data.
Thank you for the great work on `Nifti()` support - these PRs build on that foundation.
## Related
- Contributes to #7804 (Support scientific data formats)
- Built on @TobiasPitters's Nifti feature work
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7904/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7903
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7903/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7903/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7903/events
|
https://github.com/huggingface/datasets/pull/7903
| 3,723,395,305
|
PR_kwDODunzps64kO0d
| 7,903
|
Docs: add minimal usage example to dataset card guidelines
|
{
"login": "an-enigma",
"id": 44645629,
"node_id": "MDQ6VXNlcjQ0NjQ1NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/44645629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/an-enigma",
"html_url": "https://github.com/an-enigma",
"followers_url": "https://api.github.com/users/an-enigma/followers",
"following_url": "https://api.github.com/users/an-enigma/following{/other_user}",
"gists_url": "https://api.github.com/users/an-enigma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/an-enigma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/an-enigma/subscriptions",
"organizations_url": "https://api.github.com/users/an-enigma/orgs",
"repos_url": "https://api.github.com/users/an-enigma/repos",
"events_url": "https://api.github.com/users/an-enigma/events{/privacy}",
"received_events_url": "https://api.github.com/users/an-enigma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-12T13:16:46
| 2025-12-12T13:16:46
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7903",
"html_url": "https://github.com/huggingface/datasets/pull/7903",
"diff_url": "https://github.com/huggingface/datasets/pull/7903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7903.patch",
"merged_at": null
}
|
Adds a short, minimal load_dataset example to the dataset card documentation to help first-time users quickly load and inspect datasets.
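Roughly the kind of snippet this refers to (dataset name is a placeholder, not necessarily the exact example in the PR):
```python
from datasets import load_dataset

ds = load_dataset("username/dataset_name")
print(ds)              # available splits and row counts
print(ds["train"][0])  # inspect the first example
```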
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7903/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7902
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7902/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7902/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7902/events
|
https://github.com/huggingface/datasets/issues/7902
| 3,723,281,150
|
I_kwDODunzps7d7ML-
| 7,902
|
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
|
{
"login": "HQF2017",
"id": 32055029,
"node_id": "MDQ6VXNlcjMyMDU1MDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/32055029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HQF2017",
"html_url": "https://github.com/HQF2017",
"followers_url": "https://api.github.com/users/HQF2017/followers",
"following_url": "https://api.github.com/users/HQF2017/following{/other_user}",
"gists_url": "https://api.github.com/users/HQF2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HQF2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HQF2017/subscriptions",
"organizations_url": "https://api.github.com/users/HQF2017/orgs",
"repos_url": "https://api.github.com/users/HQF2017/repos",
"events_url": "https://api.github.com/users/HQF2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/HQF2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"Memory mapping is actually the way for processes to share memory efficiently and without copy. It is efficient when on are using a local disk, and it's discouraged to use it on remote disk for the reasons you observed.\n\nWhat you can do instead is save the dataset as Parquet on your remote storage (or on Hugging Face Datasets which offers fast uploads thanks to Xet), and then your can reload it in streaming mode. Streaming mode is ideal to use a dataset that is hosted in a remote storage"
] | 2025-12-12T12:37:44
| 2025-12-15T11:48:16
| null |
NONE
| null | null | null | null |
### Feature request
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
### Motivation
Because my local disk space is insufficient, I can only store a dataset on a remote Ceph server and process it using datasets.
I used the [data-juicer](https://github.com/datajuicer/data-juicer) framework as an outer layer on top of datasets, but it doesn't support streaming datasets. I then ran into a problem: for each load, map, and filter operation, I had to wait for a large number of child processes to execute `memory_mapped_arrow_table_from_file`. Since the actual file was on the remote Ceph server, this operation was limited by network I/O.
I don't know whether this is a problem with my usage or simply how datasets is currently designed. However, I think that if the dataset instances obtained from `datasets.load_dataset` were passed directly to the child processes instead of re-executing `memory_mapped_arrow_table_from_file`, it might solve my problem. Or does datasets already support this capability and I just didn't know it?
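For illustration, a sketch of the Parquet + streaming approach suggested in the comment above (paths are placeholders for the remote Ceph mount, not my actual setup):
```python
from datasets import load_dataset

# Write the dataset once as Parquet on the remote storage (illustrative original load):
ds = load_dataset("json", data_files="/mnt/ceph/raw/*.jsonl", split="train")
ds.to_parquet("/mnt/ceph/my_dataset/train.parquet")

# Reload lazily: rows are streamed on demand instead of memory-mapping Arrow files per process.
streamed = load_dataset("parquet", data_files="/mnt/ceph/my_dataset/train.parquet", streaming=True)
```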
### Your contribution
。。。
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7902/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7901
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7901/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7901/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7901/events
|
https://github.com/huggingface/datasets/issues/7901
| 3,722,243,543
|
I_kwDODunzps7d3O3X
| 7,901
|
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
|
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! As you can read in the logs, the shuffle buffer content is lost when resuming a shuffled dataset. The default size is 1000 examples, but you can tweak it\n\ne.g. if you run your code with this\n\n```diff\n- ds = Dataset.from_dict({\"a\": range(12)}).to_iterable_dataset(num_shards=1)\n- ds = ds.shuffle(seed=42)\n+ ds = Dataset.from_dict({\"a\": range(100)}).to_iterable_dataset(num_shards=1)\n+ ds = ds.shuffle(seed=42, buffer_size=10)\n```\n\nthen you get\n\n```\n{'a': 0}\n{'a': 7}\n{'a': 6}\ncheckpoint\nrestart from checkpoint\nLoading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.\n{'a': 17}\n{'a': 15}\n{'a': 24}\n{'a': 19}\n{'a': 21}\n{'a': 23}\n...\n```\n\nwhere you only lose 10 rows ([1, 2, 3, 4, 5, 8, 9, 10, 11, 12])",
"> Hi ! As you can read in the logs, the shuffle buffer content is lost when resuming a shuffled dataset. The default size is 1000 examples, but you can tweak it\n> \n> e.g. if you run your code with this\n> \n> - ds = Dataset.from_dict({\"a\": range(12)}).to_iterable_dataset(num_shards=1)\n> - ds = ds.shuffle(seed=42)\n> + ds = Dataset.from_dict({\"a\": range(100)}).to_iterable_dataset(num_shards=1)\n> + ds = ds.shuffle(seed=42, buffer_size=10)\n> then you get\n> \n> ```\n> {'a': 0}\n> {'a': 7}\n> {'a': 6}\n> checkpoint\n> restart from checkpoint\n> Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.\n> {'a': 17}\n> {'a': 15}\n> {'a': 24}\n> {'a': 19}\n> {'a': 21}\n> {'a': 23}\n> ...\n> ```\n> \n> where you only lose 10 rows ([1, 2, 3, 4, 5, 8, 9, 10, 11, 12])\n\nThank you for your answer. So, when ShuffledDataSourcesArrowExamplesIterable resumes training, it will definitely discard unused data in buffer_size?",
"Yes correct. This is because the state_dict doesn't save the content of the buffer, so when resuming the buffer starts empty and the examples that were in the buffer are lost."
] | 2025-12-12T06:57:32
| 2025-12-16T19:34:46
| null |
NONE
| null | null | null | null |
### Describe the bug
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
### Steps to reproduce the bug
1. The reproducible code is as follows:
```python
from datasets import Dataset, concatenate_datasets, interleave_datasets

ds = Dataset.from_dict({"a": range(12)}).to_iterable_dataset(num_shards=1)
ds = ds.shuffle(seed=42)
for idx, example in enumerate(ds):
    print(example)
    if idx == 2:  # the checkpoint can be loaded correctly only when idx <= 1
        state_dict = ds.state_dict()
        print("checkpoint")
        break
print("state_dict: ", state_dict)
ds.load_state_dict(state_dict)
print("restart from checkpoint")
for example in ds:
    print(example)
```
2. The error message is as follows:
```
{'a': 0}
{'a': 7}
{'a': 6}
checkpoint
state_dict: {'examples_iterable': {'examples_iterable': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'previous_state': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'batch_idx': 3, 'num_chunks_since_previous_state': 2, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'epoch': 0}
restart from checkpoint
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
```
### Expected behavior
I want to be able to resume correctly from any checkpoint, but currently the checkpoint can only be loaded correctly when idx <= 1.
### Environment info
datasets Version: 4.4.1
@lhoestq
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7901/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7900
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7900/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7900/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7900/events
|
https://github.com/huggingface/datasets/issues/7900
| 3,711,751,590
|
I_kwDODunzps7dPNWm
| 7,900
|
`Permission denied` when sharing cache between users
|
{
"login": "qthequartermasterman",
"id": 19497738,
"node_id": "MDQ6VXNlcjE5NDk3NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/19497738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qthequartermasterman",
"html_url": "https://github.com/qthequartermasterman",
"followers_url": "https://api.github.com/users/qthequartermasterman/followers",
"following_url": "https://api.github.com/users/qthequartermasterman/following{/other_user}",
"gists_url": "https://api.github.com/users/qthequartermasterman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qthequartermasterman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qthequartermasterman/subscriptions",
"organizations_url": "https://api.github.com/users/qthequartermasterman/orgs",
"repos_url": "https://api.github.com/users/qthequartermasterman/repos",
"events_url": "https://api.github.com/users/qthequartermasterman/events{/privacy}",
"received_events_url": "https://api.github.com/users/qthequartermasterman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"I remember a fix from last year to usr the current umask for filelock 3.10.0, which filelock version are you using ? can you try another version ?",
"I believe we we are using `filelock==3.19.1`. Do you have a recommended version to use?"
] | 2025-12-09T16:41:47
| 2025-12-16T15:39:06
| null |
NONE
| null | null | null | null |
### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was supported in the past (see #6589)?
Is there a correct way to share caches across users?
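One mitigation worth trying (an assumption on our side, not a verified fix) is to relax the umask so that lock and cache files are created group-writable; a minimal sketch:
```python
import os

# Assumption / untested mitigation: make newly created cache and lock files
# group-writable so a second user in the same group can acquire them.
os.umask(0o002)

os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"

import datasets  # imported after the umask change so new files inherit it
```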
### Steps to reproduce the bug
1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users
2. For each user run the script below
```python
import os
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"
import datasets
import transformers
DATASET = "tatsu-lab/alpaca"
MODEL = "meta-llama/Llama-3.2-1B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(MODEL)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL)
dataset = datasets.load_dataset(DATASET)
```
The first user is able to download and use the model and dataset. The second user gets these errors:
```
$ python ./experiment_with_shared.py
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json'
Traceback (most recent call last):
File "/home/user2/.venv/experiment_with_shared.py", line 17, in <module>
dataset = datasets.load_dataset(DATASET)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1171, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py", line 390, in __init__
with FileLock(lock_path):
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 377, in __enter__
self.acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 333, in acquire
self._acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py", line 45, in _acquire
fd = os.open(self.lock_file, open_flags, self._context.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock'
```
### Expected behavior
The second user should be able to read the shared cache files.
### Environment info
$ datasets-cli env
- `datasets` version: 4.4.1
- Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7900/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/7900/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7899
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7899/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7899/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7899/events
|
https://github.com/huggingface/datasets/pull/7899
| 3,707,063,236
|
PR_kwDODunzps63t1LS
| 7,899
|
Add inspect_ai eval logs support
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7899). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Very cool! \r\n\r\nAny reason not to directly use inspect for loading/converting from the binary format to JSON? https://inspect.aisi.org.uk/reference/inspect_ai.log.html#convert_eval_logs ",
"The format is simple enough to not have to rely on an additional dependency :)"
] | 2025-12-08T16:14:40
| 2025-12-09T14:45:15
| 2025-12-09T14:45:13
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7899",
"html_url": "https://github.com/huggingface/datasets/pull/7899",
"diff_url": "https://github.com/huggingface/datasets/pull/7899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7899.patch",
"merged_at": "2025-12-09T14:45:13"
}
|
Support for .eval log files from inspect_ai
They are actually ZIP files according to the source code at https://github.com/UKGovernmentBEIS/inspect_ai/blob/main/src/inspect_ai/log/_log.py
Unfortunately their format can't be converted to Parquet, so I had to JSON-encode all the nested values
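For reference, a quick way to peek inside one of these `.eval` archives with the standard library, since they are ZIP files (file path is a placeholder):
```python
import zipfile

with zipfile.ZipFile("logs/example.eval") as zf:
    print(zf.namelist())  # JSON members holding the nested eval log data
```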
```python
ds = load_dataset("dvilasuero/kimi-bfcl")
```
this will enable the Viewer for datasets like https://huggingface.co/datasets/dvilasuero/kimi-bfcl
original tweet for context: https://x.com/dvilasuero/status/1996936988176343220?s=20
cc @dvsrepo @julien-c @davanstrien
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7899/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7899/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7898
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7898/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7898/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7898/events
|
https://github.com/huggingface/datasets/pull/7898
| 3,698,376,429
|
PR_kwDODunzps63Q9BO
| 7,898
|
docs: making PyPi to PyPI ensuring no spelling errors
|
{
"login": "kapoor1309",
"id": 152784163,
"node_id": "U_kgDOCRtNIw",
"avatar_url": "https://avatars.githubusercontent.com/u/152784163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kapoor1309",
"html_url": "https://github.com/kapoor1309",
"followers_url": "https://api.github.com/users/kapoor1309/followers",
"following_url": "https://api.github.com/users/kapoor1309/following{/other_user}",
"gists_url": "https://api.github.com/users/kapoor1309/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kapoor1309/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kapoor1309/subscriptions",
"organizations_url": "https://api.github.com/users/kapoor1309/orgs",
"repos_url": "https://api.github.com/users/kapoor1309/repos",
"events_url": "https://api.github.com/users/kapoor1309/events{/privacy}",
"received_events_url": "https://api.github.com/users/kapoor1309/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hey pls review "
] | 2025-12-05T10:20:48
| 2025-12-10T14:16:31
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7898",
"html_url": "https://github.com/huggingface/datasets/pull/7898",
"diff_url": "https://github.com/huggingface/datasets/pull/7898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7898.patch",
"merged_at": null
}
|
This PR fixes the README section where the Python package index PyPI was mistakenly typed as PyPi.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7898/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7897/events
|
https://github.com/huggingface/datasets/pull/7897
| 3,691,300,022
|
PR_kwDODunzps624-k2
| 7,897
|
Save input shard lengths
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7897). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-03T17:56:55
| 2025-12-05T16:21:06
| 2025-12-05T16:21:03
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7897",
"html_url": "https://github.com/huggingface/datasets/pull/7897",
"diff_url": "https://github.com/huggingface/datasets/pull/7897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7897.patch",
"merged_at": "2025-12-05T16:21:03"
}
|
will be useful for the Viewer, to know what (original) shard each row belongs to
cc @cfahlgren1
next step is to use it in the Dataset Viewer and expose an API that returns the file containing the row at a given rowId
(took the opportunity to remove unused code)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7897/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/7897/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7896
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7896/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7896/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7896/events
|
https://github.com/huggingface/datasets/pull/7896
| 3,688,480,675
|
PR_kwDODunzps62vZtn
| 7,896
|
fix: force contiguous copy for sliced list arrays in embed_array_storage
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7896). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @The-Obstacle-Is-The-Way, I haven't had a chance to try your repro at https://github.com/huggingface/datasets/issues/7894#issuecomment-3618888430 but I tried the test functions in your PRs and they all pass without your change, is it expected ? (I ran the tests without the checks on the offsets)",
"Hi @lhoestq, you're right - the tests pass without the fix. \r\n\r\nI think the SIGKILL crash is scale-dependent. As I documented in #7894, it only manifests with large, real-world data (the 273GB ARC dataset) and other large dataset upload issues, not with synthetic test files:\r\n\r\n| Data | Crash? |\r\n|------|--------|\r\n| Synthetic 64³ NIfTI | ✅ No crash |\r\n| Real ARC (273GB) | ❌ SIGKILL |\r\n\r\nThe tests verify that the fix's mechanism *works* (sliced arrays get made contiguous), but they can't reproduce the crash because that requires the full-scale data.\r\n\r\nI understand this makes it harder to review - the fix is essentially defensive against a crash that's difficult to reproduce in CI. A few options:\r\n\r\n1. **Keep as-is** - the fix is low-risk (`pa.concat_arrays` on sliced list arrays) and I've confirmed it resolves the real-world crash\r\n2. **Simplify tests** - remove the offset assertions if they're confusing, keep just the \"doesn't crash\" aspect\r\n3. **Close** - if the crash can't be reproduced in CI, I understand if this doesn't meet the bar for merging\r\n\r\nHappy to do whatever you think is best. Thank you for your time and dedication!"
] | 2025-12-03T04:34:26
| 2025-12-19T16:45:54
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7896",
"html_url": "https://github.com/huggingface/datasets/pull/7896",
"diff_url": "https://github.com/huggingface/datasets/pull/7896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7896.patch",
"merged_at": null
}
|
## Summary
Fixes SIGKILL crash in `embed_array_storage` when processing sliced/sharded datasets with nested types like `Sequence(Nifti())` or `Sequence(Image())`.
**Root cause**: When `ds.shard()` or `ds.select()` creates a sliced view, `array.values` on a sliced `ListArray` returns values with internal offset references. For nested types, PyArrow's C++ layer can crash (SIGKILL, exit code 137) when materializing these sliced nested structs.
**Fix**: Force a contiguous copy via `pa.concat_arrays([array])` when the array has a non-zero offset before processing list/large_list arrays.
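A minimal sketch of that offset check (illustrative, not the exact patch):
```python
import pyarrow as pa

def make_contiguous(array: pa.Array) -> pa.Array:
    """Force a contiguous copy of a sliced array so downstream code never sees
    a non-zero internal offset (the condition described above)."""
    if array.offset > 0:
        return pa.concat_arrays([array])
    return array

# Slicing a ListArray produces a zero-copy view with a non-zero offset:
arr = pa.array([[1, 2], [3], [4, 5, 6]])
sliced = arr.slice(1)                 # sliced.offset == 1
contiguous = make_contiguous(sliced)  # copied; offset expected to be 0 afterwards
```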
## Changes
- Add offset check in `embed_array_storage` for list/large_list arrays
- Force contiguous copy when `array.offset > 0` to break internal references
- Add regression tests for sliced arrays with Image, Nifti, and LargeList types
## Test plan
- [x] Added `tests/features/test_embed_storage_sliced.py` with 3 tests:
- `test_embed_array_storage_sliced_list_image`
- `test_embed_array_storage_sliced_list_nifti`
- `test_embed_array_storage_sliced_large_list`
- [x] All tests verify `embedded.offset == 0` (contiguous result)
- [x] All tests pass locally
- [x] ruff check passes
## Context
This was discovered while uploading a 270GB neuroimaging dataset (ARC) with `Sequence(Nifti())` columns. The process crashed with SIGKILL (no Python traceback) when `embed_table_storage` was called on sharded data.
Workaround that confirmed the fix: pandas round-trip (`shard.to_pandas()` → `Dataset.from_pandas()`) which forces a contiguous copy.
Fixes #7894
Related: #6686, #7852, #6790
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7896/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/7896/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7895
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7895/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7895/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7895/events
|
https://github.com/huggingface/datasets/pull/7895
| 3,688,479,825
|
PR_kwDODunzps62vZik
| 7,895
|
fix: use temp files in push_to_hub to prevent OOM on large datasets
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Closing this PR.\n\nAfter further investigation (prompted by @lhoestq's feedback on #7893), I discovered that `free_memory=True` does work correctly - memory doesn't actually accumulate in the `additions` list as I originally thought.\n\nI tested standard `push_to_hub()` with 902 shards and it processed 625 shards (69%) without OOM issues. The crash we were experiencing was actually **#7894** (`embed_table_storage` crash on `Sequence()` types), not memory accumulation.\n\nI've closed #7893 as invalid. This PR is no longer needed.\n\nApologies for the noise - lesson learned about testing before filing."
] | 2025-12-03T04:33:55
| 2025-12-06T13:58:44
| 2025-12-05T22:47:50
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7895",
"html_url": "https://github.com/huggingface/datasets/pull/7895",
"diff_url": "https://github.com/huggingface/datasets/pull/7895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7895.patch",
"merged_at": null
}
|
## Summary
Fixes memory accumulation in `_push_parquet_shards_to_hub_single` that causes OOM when uploading large datasets with many shards.
**Root cause**: The current implementation stores ALL parquet shard bytes in memory via `BytesIO`, accumulating in the `additions` list. For N shards of ~300MB each, this requires N × 300MB RAM.
**Fix**: Write parquet to temp file instead of `BytesIO`, pass file path to `CommitOperationAdd`. Delete temp file after `preupload_lfs_files` completes (for LFS uploads only - regular uploads need the file until `create_commit`).
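An illustrative sketch of that approach (table contents and repo path are placeholders, not the actual implementation):
```python
import tempfile

import pyarrow as pa
import pyarrow.parquet as pq
from huggingface_hub import CommitOperationAdd

# Write the shard to a temp file instead of keeping its bytes in a BytesIO buffer.
shard = pa.table({"a": list(range(10))})
tmp = tempfile.NamedTemporaryFile(suffix=".parquet", delete=False)
pq.write_table(shard, tmp.name)

# Reference the file by path so the parquet bytes are not held in memory.
addition = CommitOperationAdd(path_in_repo="data/train-00000.parquet", path_or_fileobj=tmp.name)
# After preupload_lfs_files(...) completes, the temp file can be removed (LFS case).
```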
## Changes
- Replace `BytesIO` with `tempfile.NamedTemporaryFile` in `_push_parquet_shards_to_hub_single`
- Use file path in `CommitOperationAdd.path_or_fileobj` instead of bytes
- Delete temp file after upload (only for LFS mode - regular uploads keep file for `create_commit`)
- Add `try...finally` for safe cleanup even on errors
- Remove unused `BytesIO` import
## Test plan
- [x] Added `tests/test_push_to_hub_memory.py` with 4 tests:
- `test_push_to_hub_uses_file_path_not_bytes_in_commit_operation`
- `test_push_to_hub_cleans_up_temp_files_for_lfs_uploads`
- `test_push_to_hub_keeps_temp_files_for_regular_uploads`
- `test_push_to_hub_uploaded_size_still_calculated`
- [x] All tests pass locally
- [x] ruff check passes
## Context
This was discovered while uploading a 270GB neuroimaging dataset (ARC) with 902 shards. The process was killed by OOM after accumulating ~270GB in the `additions` list.
Fixes #7893
Related: #5990, #7400
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7895/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7894
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7894/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7894/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7894/events
|
https://github.com/huggingface/datasets/issues/7894
| 3,688,455,006
|
I_kwDODunzps7b2Vte
| 7,894
|
embed_table_storage crashes (SIGKILL) on sharded datasets with Sequence() nested types
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"I wasn't able to reproduce the crash on my side (macos arm 54, pyarrow 22 and a nifti file I found [online](https://s3.amazonaws.com/openneuro.org/ds004884/sub-M2001/ses-1076/anat/sub-M2001_ses-1076_acq-tfl3_run-4_T1w.nii.gz?versionId=9aVGb3C.VcoBgxrhNzFnL6O0MvxQsXX7&AWSAccessKeyId=AKIARTA7OOV5WQ3DGSOB&Signature=LQMLzjsuzSV7MtNAdQaFdqWqmbM%3D&Expires=1765473937))\n\ncould the issue be specific to your env ? have you tried on other environments like colab maybe ?",
"Hi @lhoestq,\n\nThank you so much for taking the time to investigate this. Your comment about not being able to reproduce it with a single NIfTI file actually helped me understand the bug better.\n\n**Key finding:** This bug is scale-dependent. It only manifests with real, full-scale data, and not with synthetic test files.\n\nI created a sandbox branch that isolates the exact state before the workaround:\n\n**🔗 Reproduction branch:** https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids/tree/sandbox/reproduce-bug-7894\n\n### What we confirmed\n\n| Test | Result |\n|------|--------|\n| Synthetic 2x2x2 NIfTI files | ✅ No crash |\n| Synthetic 64³ NIfTI files (1MB each) | ✅ No crash |\n| Real ARC dataset (273GB, 902 sessions) | ❌ **SIGKILL at 0%** |\n\n### Environment (same as yours)\n\n- macOS ARM64\n- PyArrow 22.0.0\n- datasets 4.4.2.dev0 (git main)\n\n### Crash output\n\n```\nCasting the dataset: 100%|██████████| 902/902\nUploading Shards: 0%| | 0/902\nUserWarning: resource_tracker: There appear to be 1 leaked semaphore objects\n```\n**Exit code: 137 (SIGKILL)**\n\nThe crash happens on the very first shard, at `embed_table_storage()`, when processing `Sequence(Nifti())` columns after `ds.shard()`.\n\n### The workaround (in main branch)\n\nA pandas round-trip before embedding breaks the problematic Arrow references:\n\n```python\nshard_df = shard.to_pandas()\nfresh_shard = Dataset.from_pandas(shard_df, preserve_index=False)\nfresh_shard = fresh_shard.cast(ds.features)\n# Now embed_table_storage works\n```\n\nWe understand that downloading 273GB to reproduce this isn't practical. The reproduction guide in the branch has full details if you'd like to dig deeper. Happy to help debug further if useful.\n\nThank you again for your time and for maintaining this library. ",
"@lhoestq Brief update - I've added a reproduction that uses standard `ds.push_to_hub()` (no custom code).\n\n**Reproduction branch:** https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids/tree/sandbox/reproduce-bug-7894\n\n**To reproduce with standard library:**\n```bash\ngit clone -b sandbox/reproduce-bug-7894 https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids.git\ncd arc-aphasia-bids && uv sync --all-extras\n# Download dataset (~273GB): aws s3 sync --no-sign-request s3://openneuro.org/ds004884 data/openneuro/ds004884\nHF_REPO=\"your-username/test\" uv run python test_prove_7894_standard.py\n```\n\nCrashes at 0% with the same semaphore warning.\n\nFull details in [REPRODUCE_BUG_7894.md](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids/blob/sandbox/reproduce-bug-7894/REPRODUCE_BUG_7894.md).\n\nAlso - you were right about #7893. I closed it. `free_memory=True` works as you said. That issue was my mistake."
] | 2025-12-03T04:20:06
| 2025-12-06T13:10:34
| null |
CONTRIBUTOR
| null | null | null | null |
## Summary
`embed_table_storage` crashes with SIGKILL (exit code 137) when processing sharded datasets containing `Sequence()` nested types like `Sequence(Nifti())`. Likely affects `Sequence(Image())` and `Sequence(Audio())` as well.
The crash occurs at the C++ level with no Python traceback.
### Related Issues
- #7852 - Problems with NifTI (closed, but related embedding issues)
- #6790 - PyArrow 'Memory mapping file failed' (potentially related)
- #7893 - OOM issue (separate bug, but discovered together)
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset to HuggingFace Hub. Even after fixing the OOM issue (#7893), this crash blocked uploads.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Reproduction
```python
from datasets import Dataset, Features, Sequence, Value
from datasets.features import Nifti
from datasets.table import embed_table_storage
features = Features({
"id": Value("string"),
"images": Sequence(Nifti()),
})
ds = Dataset.from_dict({
"id": ["a", "b"],
"images": [["/path/to/file.nii.gz"], []],
}).cast(features)
# This works fine:
table = ds._data.table.combine_chunks()
embedded = embed_table_storage(table) # OK
# This crashes with SIGKILL:
shard = ds.shard(num_shards=2, index=0)
shard_table = shard._data.table.combine_chunks()
embedded = embed_table_storage(shard_table) # CRASH - no Python traceback
```
## Key Observations
| Scenario | Result |
|----------|--------|
| Single `Nifti()` column | Works |
| `Sequence(Nifti())` on full dataset | Works |
| `Sequence(Nifti())` after `ds.shard()` | **CRASHES** |
| `Sequence(Nifti())` after `ds.select([i])` | **CRASHES** |
| Crash with empty Sequence `[]` | **YES** - not file-size related |
## Workaround
Convert shard to pandas and recreate the Dataset to break internal Arrow references:
```python
shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
# CRITICAL: Pandas round-trip breaks problematic references
shard_df = shard.to_pandas()
fresh_shard = Dataset.from_pandas(shard_df, preserve_index=False)
fresh_shard = fresh_shard.cast(ds.features)
# Now embedding works
table = fresh_shard._data.table.combine_chunks()
embedded = embed_table_storage(table) # OK!
```
## Disproven Hypotheses
| Hypothesis | Test | Result |
|------------|------|--------|
| PyArrow 2GB binary limit | Monkey-patched `Nifti.pa_type` to `pa.large_binary()` | Still crashed |
| Memory fragmentation | Called `table.combine_chunks()` | Still crashed |
| File size issue | Tested with tiny NIfTI files | Still crashed |
## Root Cause Hypothesis
When `ds.shard()` or `ds.select()` creates a subset, the resulting Arrow table retains internal references/views to the parent table. When `embed_table_storage` processes nested struct types like `Sequence(Nifti())`, these references cause a crash in the C++ layer.
The pandas round-trip forces a full data copy, breaking these problematic references.
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64 (may be platform-specific)
- Python: 3.13
- PyArrow: 18.1.0
## Notes
This may ultimately be a PyArrow issue surfacing through datasets. Happy to help debug further if maintainers can point to where to look in the embedding logic.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7894/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7893/events
|
https://github.com/huggingface/datasets/issues/7893
| 3,688,454,085
|
I_kwDODunzps7b2VfF
| 7,893
|
push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"`preupload_lfs_files` removes the parquet bytes in `shard_addition` since the default is `free_memory=True`: it doesn't accumulate in memory. Can you check this is indeed the case, i.e. that `shard_addition.path_or_fileobj` is indeed empty ?",
"@lhoestq Thank you for pushing back on this and helping me understand the code better.\n\nYou're correct, `free_memory=True` does prevent memory accumulation. I went back and tested this properly: I ran standard `push_to_hub()` with 902 shards and it processed **625 shards (69%)** without any OOM issues before I stopped it. Memory stayed reasonable throughout.\n\nI filed this issue based on reading the code and seeing `additions.append(shard_addition)`, without fully understanding that `preupload_lfs_files()` clears the bytes first. That was my mistake.\n\nThe crash we were actually experiencing was **#7894** (`embed_table_storage` crash on `Sequence(Nifti())`), which we've now isolated and reproduced separately.\n\nClosing this issue. Thanks again for the clarification; learned something important about the codebase today."
] | 2025-12-03T04:19:34
| 2025-12-05T22:45:59
| 2025-12-05T22:44:16
|
CONTRIBUTOR
| null | null | null | null |
## Summary
Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.
### Related Issues
This is the root cause of:
- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)
- #7400 - 504 Gateway Timeout when uploading large dataset
- #6686 - Question: Is there any way for uploading a large image dataset?
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset (~270GB, 902 sessions) to HuggingFace Hub using the `Nifti()` feature.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Root Cause
In `_push_parquet_shards_to_hub` (arrow_dataset.py), the `additions` list accumulates every `CommitOperationAdd` with full Parquet bytes in memory:
```python
additions = []
for shard in shards:
parquet_content = shard.to_parquet_bytes() # ~300 MB per shard
shard_addition = CommitOperationAdd(path_or_fileobj=parquet_content)
api.preupload_lfs_files(additions=[shard_addition])
additions.append(shard_addition) # THE BUG: bytes stay in memory forever
```
For a 902-shard dataset: **902 × 300 MB = ~270 GB RAM requested → OOM/hang**.
The bytes are held until the final `create_commit()` call, preventing garbage collection.
## Reproduction
```python
from datasets import load_dataset
# Any large dataset with embedded files (Image, Audio, Nifti, etc.)
ds = load_dataset("imagefolder", data_dir="path/to/large/dataset")
ds.push_to_hub("repo-id", num_shards=500) # Watch memory grow until crash
```
## Workaround
Process one shard at a time, upload via `HfApi.upload_file(path=...)`, delete before next iteration:
```python
from huggingface_hub import HfApi
import pyarrow.parquet as pq
api = HfApi()
for i in range(num_shards):
shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
# Write to disk, not memory
shard.to_parquet(local_path)
# Upload from file path (streams from disk)
api.upload_file(
path_or_fileobj=str(local_path),
path_in_repo=f"data/train-{i:05d}-of-{num_shards:05d}.parquet",
repo_id=repo_id,
repo_type="dataset",
)
# Clean up before next iteration
local_path.unlink()
del shard
```
Memory usage stays constant (~1-2 GB) instead of growing linearly.
## Suggested Fix
After `preupload_lfs_files` succeeds for each shard, release the bytes:
1. Clear `path_or_fileobj` from the `CommitOperationAdd` after preupload
2. Or write to temp file and pass file path instead of bytes
3. Or commit incrementally instead of batching all additions
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64
- Python: 3.13
- PyArrow: 18.1.0
- Dataset: 902 shards, ~270 GB total embedded NIfTI files
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7893/timeline
| null |
not_planned
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7892
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7892/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7892/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7892/events
|
https://github.com/huggingface/datasets/pull/7892
| 3,681,848,709
|
PR_kwDODunzps62ZFzh
| 7,892
|
encode nifti correctly when uploading lazily
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7892). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-01T16:35:07
| 2025-12-19T14:13:43
| 2025-12-19T14:13:43
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7892",
"html_url": "https://github.com/huggingface/datasets/pull/7892",
"diff_url": "https://github.com/huggingface/datasets/pull/7892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7892.patch",
"merged_at": "2025-12-19T14:13:43"
}
|
When trying to upload nifti datasets lazily I got the error:
```python
from pathlib import Path
from datasets import load_dataset
nifti_dir = Path("<local_path>")
dataset = load_dataset(
"niftifolder",
data_dir=str(nifti_dir.absolute()),
streaming=True,
)
dataset.push_to_hub(repo_id="TobiasPitters/test-nifti-papaya-testdata")
```
```python
pyarrow.lib.ArrowInvalid: Could not convert <datasets.features.nifti.Nifti1ImageWrapper object at 0x77633407af90> with type Nifti1ImageWrapper: did not recognize Python value type when inferring an Arrow data type
```
This PR fixes that by encoding the Nifti1ImageWrappers correctly to bytes.
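The actual encoding lives inside the `Nifti` feature; conceptually, the wrapper's underlying nibabel image can be serialized to bytes, e.g. (illustrative only, assuming a local file):
```python
import nibabel as nib

img = nib.load("scan.nii.gz")   # stand-in for the image behind a Nifti1ImageWrapper
raw: bytes = img.to_bytes()     # nibabel serializes the NIfTI image to bytes, which Arrow can store
```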
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7892/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7891
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7891/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7891/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7891/events
|
https://github.com/huggingface/datasets/pull/7891
| 3,681,592,636
|
PR_kwDODunzps62YNeR
| 7,891
|
fix(fingerprint): treat TMPDIR as strict API and fail (Issue #7877)
|
{
"login": "ada-ggf25",
"id": 133336746,
"node_id": "U_kgDOB_KOqg",
"avatar_url": "https://avatars.githubusercontent.com/u/133336746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ada-ggf25",
"html_url": "https://github.com/ada-ggf25",
"followers_url": "https://api.github.com/users/ada-ggf25/followers",
"following_url": "https://api.github.com/users/ada-ggf25/following{/other_user}",
"gists_url": "https://api.github.com/users/ada-ggf25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ada-ggf25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ada-ggf25/subscriptions",
"organizations_url": "https://api.github.com/users/ada-ggf25/orgs",
"repos_url": "https://api.github.com/users/ada-ggf25/repos",
"events_url": "https://api.github.com/users/ada-ggf25/events{/privacy}",
"received_events_url": "https://api.github.com/users/ada-ggf25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7891). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> lgtm !\r\n\r\nthanks for taking a look!"
] | 2025-12-01T15:37:57
| 2025-12-16T14:24:11
| 2025-12-16T14:20:46
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7891",
"html_url": "https://github.com/huggingface/datasets/pull/7891",
"diff_url": "https://github.com/huggingface/datasets/pull/7891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7891.patch",
"merged_at": "2025-12-16T14:20:46"
}
|
Fixes #7877 and follows up on feedback from [`#7890`](https://github.com/huggingface/datasets/pull/7890).
## Context
[`#7890`](https://github.com/huggingface/datasets/pull/7890) introduced automatic creation of the `TMPDIR` directory in `_TempCacheDir` so that `datasets` does not silently fall back to `/tmp` when `TMPDIR` points to a non‑existent path. This addressed the original “`tempfile` silently ignores TMPDIR” issue in [`#7877`](https://github.com/huggingface/datasets/issues/7877).
During review, it was pointed out by @stas00 (see [review comment](https://github.com/huggingface/datasets/pull/7890#discussion_r...)) that if the code treats `TMPDIR` as part of the public API, then failures to use it should **fail loudly**, not just emit warnings, because warnings are easy to miss in complex multi‑GPU setups.
This PR implements that stricter behaviour.
## What this PR changes
### `_TempCacheDir` behaviour
In `src/datasets/fingerprint.py`:
- Continue to:
- Detect `TMPDIR` from the environment
- Normalise the path
- Auto‑create the directory when it does not exist
- Pass the (validated) directory explicitly to `tempfile.mkdtemp(...)` so `TMPDIR` is honoured even if `tempfile.gettempdir()` was already cached
- **New behaviour** (in response to review on [`#7890`](https://github.com/huggingface/datasets/pull/7890)):
- If `TMPDIR` is set, but the directory cannot be created, we now **re‑raise an `OSError`** with a clear, actionable message:
- “TMPDIR is set to '…' but the directory does not exist and could not be created: … Please create it manually or unset TMPDIR to fall back to the default temporary directory.”
- If `TMPDIR` is set but points to something that is **not a directory**, we also **raise `OSError`** with guidance:
- “TMPDIR is set to '…' but it is not a directory. Please point TMPDIR to a writable directory or unset it to fall back to the default temporary directory.”
- When `TMPDIR` is **not** set, behaviour is unchanged: we pass `dir=None` and let `tempfile` use the system default temp directory.
This aligns with @stas00’s suggestion that TMPDIR should be treated as a strict API contract: if the user chooses a TMPDIR, we either use it or clearly fail, rather than silently falling back. A minimal sketch of this behaviour is shown below.
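A minimal sketch of the intended behaviour (not the exact implementation in `fingerprint.py`; the helper name is illustrative):
```python
import os
import tempfile
from typing import Optional


def _resolve_tmpdir_base() -> Optional[str]:
    """Validate TMPDIR if set; return it as the base dir, or None to use the system default."""
    tmpdir = os.environ.get("TMPDIR")
    if not tmpdir:
        return None  # TMPDIR not set: behaviour unchanged, tempfile picks the default
    tmpdir = os.path.realpath(tmpdir)
    if not os.path.exists(tmpdir):
        try:
            os.makedirs(tmpdir, exist_ok=True)
        except OSError as e:
            raise OSError(
                f"TMPDIR is set to '{tmpdir}' but the directory does not exist and could not be "
                f"created: {e}. Please create it manually or unset TMPDIR to fall back to the "
                "default temporary directory."
            ) from e
    elif not os.path.isdir(tmpdir):
        raise OSError(
            f"TMPDIR is set to '{tmpdir}' but it is not a directory. Please point TMPDIR to a "
            "writable directory or unset it to fall back to the default temporary directory."
        )
    return tmpdir


# Passing dir= explicitly means TMPDIR is honoured even if tempfile.gettempdir() was already cached
cache_dir = tempfile.mkdtemp(prefix="hf_datasets-", dir=_resolve_tmpdir_base())
```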
### Tests
In `tests/test_fingerprint.py`:
- Updated tests that previously expected warning‑and‑fallback to now expect **hard failures**:
- `test_temp_cache_dir_tmpdir_creation_failure`
- Uses `unittest.mock.patch` to force `os.makedirs` to raise `OSError("Permission denied")`
- Asserts that constructing `_TempCacheDir()` raises `OSError` and that the message contains both “TMPDIR is set to” and “could not be created”
- `test_temp_cache_dir_tmpdir_not_directory`
- Points `TMPDIR` to a regular file and asserts that `_TempCacheDir()` raises `OSError` with a message mentioning “is not a directory”
- Left the positive‑path tests in place:
- `test_temp_cache_dir_with_tmpdir_nonexistent` – verifies that a non‑existent `TMPDIR` is created and used
- `test_temp_cache_dir_with_tmpdir_existing` – verifies that an existing `TMPDIR` directory is used as the base for the temp cache dir
- `test_temp_cache_dir_without_tmpdir` – verifies behaviour when `TMPDIR` is not set (default temp directory)
- Kept the earlier fix to `test_fingerprint_in_multiprocessing`, which now uses `Pool.map` and asserts that fingerprints are stable across processes.
## Rationale
- Treating `TMPDIR` as part of the API for cache placement means:
- Users can rely on it to move large temporary Arrow files away from small `/tmp` partitions.
- Misconfigured TMPDIR should be **immediately visible** as a hard error, not as a warning lost among many logs.
- The stricter failure mode matches the concern on [`#7890`](https://github.com/huggingface/datasets/pull/7890) that “warnings are very easy to miss in complex applications where there are already dozens of warnings multiplied by multiple GPU processes”.
## Testing
- `pytest tests/test_fingerprint.py`
- `make style`
- No new linter issues introduced.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7891/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7891/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7890
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7890/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7890/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7890/events
|
https://github.com/huggingface/datasets/pull/7890
| 3,677,077,051
|
PR_kwDODunzps62JMyH
| 7,890
|
Fix: Auto-create TMPDIR directory when it doesn't exist (Issue #7877)
|
{
"login": "ada-ggf25",
"id": 133336746,
"node_id": "U_kgDOB_KOqg",
"avatar_url": "https://avatars.githubusercontent.com/u/133336746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ada-ggf25",
"html_url": "https://github.com/ada-ggf25",
"followers_url": "https://api.github.com/users/ada-ggf25/followers",
"following_url": "https://api.github.com/users/ada-ggf25/following{/other_user}",
"gists_url": "https://api.github.com/users/ada-ggf25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ada-ggf25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ada-ggf25/subscriptions",
"organizations_url": "https://api.github.com/users/ada-ggf25/orgs",
"repos_url": "https://api.github.com/users/ada-ggf25/repos",
"events_url": "https://api.github.com/users/ada-ggf25/events{/privacy}",
"received_events_url": "https://api.github.com/users/ada-ggf25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"> Thank you for implementing this, @ada-ggf25\r\n> \r\n> I see there are a few more uses of `tempfile` in `datasets` besides the one treated here - should they be treated in the same way as well?\r\n\r\nI had a look through the other tempfile call sites:\r\n- In `arrow_dataset.py` the code always pass an explicit `dir=cache_dir` when using `NamedTemporaryFile`, so TMPDIR is not involved there.\r\n- In `search.py` the `NamedTemporaryFile` is only used to generate a random Elasticsearch index name, it doesn’t control where big cache files are written, so it doesn’t have the same “No space left on device” impact.\r\n\r\nGiven that, the fingerprint temp directory is the only place where respecting TMPDIR is part of the user‑visible API for disk usage. If you’d like, I’m happy to also thread TMPDIR through the `search.py` name generation, but I think this PR should keep scoped to the place that actually writes large temporary files.\r\n",
"Thank you for checking the other instances of tempfile use, @ada-ggf25 "
] | 2025-11-29T20:35:24
| 2025-12-02T12:24:17
| 2025-12-02T12:24:17
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7890",
"html_url": "https://github.com/huggingface/datasets/pull/7890",
"diff_url": "https://github.com/huggingface/datasets/pull/7890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7890.patch",
"merged_at": null
}
|
# Fix: Auto-create TMPDIR directory when it doesn't exist
Fixes #7877
## Description
This PR fixes issue #7877 by implementing automatic creation of the `TMPDIR` directory when it is set but doesn't exist. Previously, `tempfile.mkdtemp()` would silently ignore `TMPDIR` and fall back to `/tmp` if the specified directory didn't exist, causing confusion for users experiencing "No space left on device" errors.
## Problem
When users set `TMPDIR` to a non-existent directory (e.g., `export TMPDIR=/some/big/storage`), Python's `tempfile` module silently ignores it and falls back to the default temporary directory (`/tmp`). This leads to:
- Users unable to use their specified temporary directory
- Silent failures that are difficult to debug
- Continued "No space left on device" errors even after setting `TMPDIR`
## Solution
The fix automatically creates the `TMPDIR` directory if it is set but doesn't exist, ensuring that:
1. Users' `TMPDIR` settings are respected
2. Clear logging is provided when the directory is created
3. Graceful fallback with warnings if directory creation fails
4. The fix works even if `tempfile.gettempdir()` was already called and cached
## Changes
### Implementation (`src/datasets/fingerprint.py`)
- Modified `_TempCacheDir.__init__()` to check if `TMPDIR` environment variable is set
- Added logic to auto-create the directory if it doesn't exist using `os.makedirs()`
- Added informative logging when directory is created
- Added warning logging when directory creation fails, with graceful fallback
- Added path normalisation to handle path resolution issues
- Explicitly pass `dir` parameter to `tempfile.mkdtemp()` to ensure TMPDIR is respected
### Tests (`tests/test_fingerprint.py`)
Added comprehensive test coverage:
- `test_temp_cache_dir_with_tmpdir_nonexistent`: Tests auto-creation of non-existent TMPDIR
- `test_temp_cache_dir_with_tmpdir_existing`: Tests behaviour when TMPDIR already exists
- `test_temp_cache_dir_without_tmpdir`: Tests default behaviour when TMPDIR is not set
- `test_temp_cache_dir_tmpdir_creation_failure`: Tests error handling when directory creation fails
Also fixed incomplete `test_fingerprint_in_multiprocessing` test that was missing implementation.
## Testing
- All existing tests pass
- New tests added for TMPDIR handling scenarios
- Code formatted with `make style`
- No linter errors
- Manual testing confirms the fix works as expected
## Example Usage
Before this fix:
```bash
$ export TMPDIR='/tmp/username' # Directory doesn't exist
$ python -c "import tempfile; print(tempfile.gettempdir())"
/tmp # Silently falls back, ignoring TMPDIR
```
After this fix:
```bash
$ export TMPDIR='/tmp/username' # Directory doesn't exist
$ python -c "from datasets.fingerprint import get_temporary_cache_files_directory; print(get_temporary_cache_files_directory())"
# Directory is automatically created and used
# Log: "Created TMPDIR directory: /tmp/username"
```
## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
---
**Note**: This implementation follows the approach suggested in the issue, automatically creating the TMPDIR directory when it doesn't exist, which provides the best user experience whilst maintaining security (we only create directories explicitly specified by the user via environment variable).
|
{
"login": "ada-ggf25",
"id": 133336746,
"node_id": "U_kgDOB_KOqg",
"avatar_url": "https://avatars.githubusercontent.com/u/133336746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ada-ggf25",
"html_url": "https://github.com/ada-ggf25",
"followers_url": "https://api.github.com/users/ada-ggf25/followers",
"following_url": "https://api.github.com/users/ada-ggf25/following{/other_user}",
"gists_url": "https://api.github.com/users/ada-ggf25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ada-ggf25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ada-ggf25/subscriptions",
"organizations_url": "https://api.github.com/users/ada-ggf25/orgs",
"repos_url": "https://api.github.com/users/ada-ggf25/repos",
"events_url": "https://api.github.com/users/ada-ggf25/events{/privacy}",
"received_events_url": "https://api.github.com/users/ada-ggf25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7890/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7889/events
|
https://github.com/huggingface/datasets/pull/7889
| 3,676,933,025
|
PR_kwDODunzps62Iykj
| 7,889
|
fix(tests): stabilize flaky Hub LFS integration test
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Maybe `push_to_hub()` can retry on error 400 when committing to the Hub instead ? This way we make sure `push_to_hub()` works without having to add the waiting step",
"Thanks for the pointer @lhoestq! Updated to add LFS 400 retry handling directly in `push_to_hub()` across all implementations (Dataset, DatasetDict, IterableDataset, IterableDatasetDict). Reverted the test-side waits."
] | 2025-11-29T17:13:18
| 2025-12-19T16:46:01
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7889",
"html_url": "https://github.com/huggingface/datasets/pull/7889",
"diff_url": "https://github.com/huggingface/datasets/pull/7889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7889.patch",
"merged_at": null
}
|
## Problem
`test_push_dataset_dict_to_hub_overwrite_files` intermittently fails with:
```
BadRequestError: LFS pointer pointed to a file that does not exist
```
This has been causing the `deps-latest` integration tests to fail on main (visible in recent CI runs). I ran into this while working on the BIDS loader PR and dug into the root cause.
## Root Cause
Two race conditions in the test:
1. **LFS propagation timing** - Rapid successive `push_to_hub` calls don't wait for Hub to fully propagate LFS objects between pushes
2. **Repo name reuse** - The second test scenario reused the same repo name from scenario 1, creating a race between deletion and recreation
## Solution
- Add `_wait_for_repo_ready()` helper that polls `list_repo_files` to ensure the repo is consistent before subsequent operations (a sketch follows after this list)
- Use a unique repo name (`ds_name_2`) for the second scenario, eliminating the delete/create race entirely
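A hypothetical sketch of the polling helper (the name comes from this description; timeout and interval values are made up):
```python
import time

from huggingface_hub import HfApi


def _wait_for_repo_ready(api: HfApi, repo_id: str, timeout: float = 30.0, interval: float = 1.0) -> None:
    """Poll list_repo_files until the dataset repo answers, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            api.list_repo_files(repo_id, repo_type="dataset")
            return
        except Exception:
            time.sleep(interval)
    raise TimeoutError(f"Repo '{repo_id}' was not ready after {timeout}s")
```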
## Testing
All 4 integration test variants now pass:
- ✅ `ubuntu-latest, deps-latest` (was failing)
- ✅ `ubuntu-latest, deps-minimum`
- ✅ `windows-latest, deps-latest` (was failing)
- ✅ `windows-latest, deps-minimum`
Validated on fork: https://github.com/The-Obstacle-Is-The-Way/datasets/pull/4
## Related
- #7600 (push_to_hub concurrency)
- #6392 (push_to_hub connection robustness)
cc @lhoestq - small fix but should help CI reliability
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7889/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7888
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7888/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7888/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7888/events
|
https://github.com/huggingface/datasets/pull/7888
| 3,676,407,260
|
PR_kwDODunzps62HOM4
| 7,888
|
Add type overloads to load_dataset for better static type inference
|
{
"login": "Aditya2755",
"id": 157872593,
"node_id": "U_kgDOCWjx0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/157872593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aditya2755",
"html_url": "https://github.com/Aditya2755",
"followers_url": "https://api.github.com/users/Aditya2755/followers",
"following_url": "https://api.github.com/users/Aditya2755/following{/other_user}",
"gists_url": "https://api.github.com/users/Aditya2755/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aditya2755/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aditya2755/subscriptions",
"organizations_url": "https://api.github.com/users/Aditya2755/orgs",
"repos_url": "https://api.github.com/users/Aditya2755/repos",
"events_url": "https://api.github.com/users/Aditya2755/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aditya2755/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7888). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-29T06:19:30
| 2025-12-08T12:06:57
| 2025-12-08T12:06:57
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7888",
"html_url": "https://github.com/huggingface/datasets/pull/7888",
"diff_url": "https://github.com/huggingface/datasets/pull/7888.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7888.patch",
"merged_at": "2025-12-08T12:06:57"
}
|
Fixes #7883
This PR adds @overload decorators to load_dataset() to help type checkers like Pylance and mypy correctly infer the return type based on the split and streaming parameters.
Changes:
- Added typing imports (Literal, overload) to load.py
- Added 4 @overload signatures that map argument combinations to specific return types:
* split=None, streaming=False -> DatasetDict
* split specified, streaming=False -> Dataset
* split=None, streaming=True -> IterableDatasetDict
* split specified, streaming=True -> IterableDataset
This resolves the Pylance error where to_csv() was not recognized on Dataset objects returned by load_dataset(..., split='train'), since the type checker previously saw the return type as a Union that included types without to_csv().
No runtime behavior changes - this is purely a static typing improvement.
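For illustration, the overload pattern looks roughly like this (signatures heavily simplified; the real function takes many more parameters):
```python
from typing import Literal, overload

from datasets import Dataset, DatasetDict, IterableDataset, IterableDatasetDict


@overload
def load_dataset(path: str, *, split: None = ..., streaming: Literal[False] = ...) -> DatasetDict: ...
@overload
def load_dataset(path: str, *, split: str, streaming: Literal[False] = ...) -> Dataset: ...
@overload
def load_dataset(path: str, *, split: None = ..., streaming: Literal[True]) -> IterableDatasetDict: ...
@overload
def load_dataset(path: str, *, split: str, streaming: Literal[True]) -> IterableDataset: ...
def load_dataset(path, *, split=None, streaming=False):
    ...  # runtime implementation is unchanged; the overloads only guide type checkers
```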
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7888/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7887
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7887/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7887/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7887/events
|
https://github.com/huggingface/datasets/pull/7887
| 3,676,203,387
|
PR_kwDODunzps62GltX
| 7,887
|
fix(nifti): enable lazy loading for Nifti1ImageWrapper
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7887). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-29T01:40:27
| 2025-12-19T14:14:35
| 2025-12-19T14:14:35
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7887",
"html_url": "https://github.com/huggingface/datasets/pull/7887",
"diff_url": "https://github.com/huggingface/datasets/pull/7887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7887.patch",
"merged_at": "2025-12-19T14:14:35"
}
|
## Summary
- **Single-line fix**: Change `dataobj=nifti_image.get_fdata()` → `dataobj=nifti_image.dataobj`
- Preserves nibabel's `ArrayProxy` for true lazy loading instead of eagerly loading entire NIfTI files into memory
- Improves error handling: corrupted files now fail at access time with clear context instead of silently during decode
## Problem
The `Nifti1ImageWrapper.__init__` was calling `get_fdata()` which immediately loads the entire image into memory. For large 4D fMRI files (often 1-2GB), this causes:
1. **Memory issues** - Full data loaded during decode, not on demand
2. **Poor error handling** - Corrupted files crash at access time with unclear error messages (e.g., `EOFError` with no file path)
3. **No graceful recovery** - Entire dataset iteration fails on one bad file
## Solution
Use `nifti_image.dataobj` which preserves the underlying `ArrayProxy`, deferring actual I/O to `get_fdata()` calls.
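To illustrate the difference, a standalone nibabel sketch (assumes a local `.nii.gz` file; this is not the wrapper code itself):
```python
import nibabel as nib

img = nib.load("sub-01_T1w.nii.gz")  # hypothetical path

# Eager: get_fdata() reads and scales the full volume into memory immediately
eager = nib.Nifti1Image(img.get_fdata(), img.affine, img.header)

# Lazy: passing dataobj keeps nibabel's ArrayProxy, so I/O is deferred
lazy = nib.Nifti1Image(img.dataobj, img.affine, img.header)
print(type(lazy.dataobj))   # nibabel.arrayproxy.ArrayProxy
data = lazy.get_fdata()     # the actual read happens here, on demand
```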
## Test Plan
- [x] Added `test_nifti_lazy_loading` to verify `ArrayProxy` is preserved
- [x] All 22 existing NIfTI tests pass
- [x] End-to-end tested with real OpenNeuro data (ds000102)
- [x] CodeRabbit approved: "Switch to dataobj correctly restores nibabel's lazy loading semantics... This looks solid"
## Related
- Discovered while testing BIDS loader PR #7886
- Complements the NIfTI + NiiVue viewer work from #7878 and #7874
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7887/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7886
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7886/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7886/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7886/events
|
https://github.com/huggingface/datasets/pull/7886
| 3,676,185,151
|
PR_kwDODunzps62Gh1c
| 7,886
|
feat(bids): Add BIDS dataset loader for neuroimaging data
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-29T01:22:06
| 2025-12-19T16:45:55
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7886",
"html_url": "https://github.com/huggingface/datasets/pull/7886",
"diff_url": "https://github.com/huggingface/datasets/pull/7886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7886.patch",
"merged_at": null
}
|
## Summary
Adds native BIDS (Brain Imaging Data Structure) dataset loading support using PyBIDS, enabling a `load_dataset('bids', data_dir='/path/to/bids')` workflow for neuroimaging researchers.
**Contributes to #7804** (Support scientific data formats) - BIDS is a widely-used standard for organizing neuroimaging data built on NIfTI files.
## Changes
### Core Implementation
- `src/datasets/packaged_modules/bids/bids.py` - GeneratorBasedBuilder implementation
- `src/datasets/packaged_modules/bids/__init__.py` - Module exports
- `src/datasets/packaged_modules/__init__.py` - Registration with module registry
- `src/datasets/config.py` - `PYBIDS_AVAILABLE` config flag
- `setup.py` - Optional `pybids>=0.21.0` + nibabel dependency
### Features
- Automatic BIDS structure validation
- Subject/session/datatype filtering via config
- JSON sidecar metadata extraction
- NIfTI file decoding via existing Nifti feature
### Documentation & Tests
- `docs/source/bids_dataset.mdx` - User guide with examples
- `tests/packaged_modules/test_bids.py` - Unit tests (4 tests)
## Usage
```python
from datasets import load_dataset
# Load entire BIDS dataset
ds = load_dataset('bids', data_dir='/path/to/bids_dataset')
# Filter by subject/session
ds = load_dataset('bids',
data_dir='/path/to/bids_dataset',
subjects=['01', '02'],
sessions=['baseline']
)
# Access samples
sample = ds['train'][0]
print(sample['subject']) # '01'
print(sample['nifti'].shape) # (176, 256, 256)
print(sample['metadata']) # JSON sidecar data
```
## Test plan
- [x] All 4 unit tests pass (`pytest tests/packaged_modules/test_bids.py`)
- [x] `make quality` passes (ruff check)
- [x] End-to-end tested with real OpenNeuro data (ds000102)
## Context
This PR is part of the neuroimaging initiative discussed with @TobiasPitters. Follows the BIDS 1.10.1 specification and leverages the existing Nifti feature for NIfTI file handling.
Related PRs:
- #7874 (Nifti visualization support)
- #7878 (Replace papaya with niivue)
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7886/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7885
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7885/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7885/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7885/events
|
https://github.com/huggingface/datasets/pull/7885
| 3,675,116,624
|
PR_kwDODunzps62DBlN
| 7,885
|
Add visualization paragraph to nifti readme
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-28T14:31:28
| 2025-12-10T10:25:14
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7885",
"html_url": "https://github.com/huggingface/datasets/pull/7885",
"diff_url": "https://github.com/huggingface/datasets/pull/7885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7885.patch",
"merged_at": null
}
|
Add small paragraph and video.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7885/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7884
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7884/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7884/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7884/events
|
https://github.com/huggingface/datasets/pull/7884
| 3,672,811,099
|
PR_kwDODunzps617Uk6
| 7,884
|
Fix 7846: add_column and add_item erroneously(?) require new_fingerprint parameter
|
{
"login": "sajmaru",
"id": 44121755,
"node_id": "MDQ6VXNlcjQ0MTIxNzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44121755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajmaru",
"html_url": "https://github.com/sajmaru",
"followers_url": "https://api.github.com/users/sajmaru/followers",
"following_url": "https://api.github.com/users/sajmaru/following{/other_user}",
"gists_url": "https://api.github.com/users/sajmaru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajmaru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajmaru/subscriptions",
"organizations_url": "https://api.github.com/users/sajmaru/orgs",
"repos_url": "https://api.github.com/users/sajmaru/repos",
"events_url": "https://api.github.com/users/sajmaru/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajmaru/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq can you help review this pull request?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7884). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-27T19:32:10
| 2025-12-04T16:09:51
| 2025-12-04T16:09:51
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7884",
"html_url": "https://github.com/huggingface/datasets/pull/7884",
"diff_url": "https://github.com/huggingface/datasets/pull/7884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7884.patch",
"merged_at": "2025-12-04T16:09:51"
}
|
Summary of the change:
- Made `new_fingerprint` optional (`Optional[str] = None`) in `Dataset.add_column` and `Dataset.add_item`
- Added a simple test to verify both methods work without providing a fingerprint
Why this change is safe:
- The `Dataset` constructor already handles `fingerprint=None` by generating a new fingerprint automatically
- No internal logic is broken: if a user provides a fingerprint, it is still used as before
- The change only affects the function signature, making it more user-friendly without changing any functionality
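For illustration, a minimal usage sketch with toy data (not taken from the PR's test) showing both calls without a fingerprint:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
# With new_fingerprint now optional, both calls work without passing one;
# a fingerprint is generated automatically by the Dataset constructor.
ds = ds.add_column("label", [0, 1])
ds = ds.add_item({"text": "c", "label": 2})
print(ds.num_rows)  # 3
```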
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7884/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7883
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7883/events
|
https://github.com/huggingface/datasets/issues/7883
| 3,668,182,561
|
I_kwDODunzps7apAYh
| 7,883
|
Data.to_csv() cannot be recognized by pylance
|
{
"login": "xi4ngxin",
"id": 154290630,
"node_id": "U_kgDOCTJJxg",
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xi4ngxin",
"html_url": "https://github.com/xi4ngxin",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "https://api.github.com/users/xi4ngxin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xi4ngxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi4ngxin/subscriptions",
"organizations_url": "https://api.github.com/users/xi4ngxin/orgs",
"repos_url": "https://api.github.com/users/xi4ngxin/repos",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"received_events_url": "https://api.github.com/users/xi4ngxin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-11-26T16:16:56
| 2025-12-08T12:06:58
| 2025-12-08T12:06:58
|
NONE
| null | null | null | null |
### Describe the bug
Hi, everyone! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. The result of reading the dataset shows success, and it can ultimately be correctly saved to CSV.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/s]
DatasetDict({
train: Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
})
```
However, Pylance gives me the following error:
```
Cannot access attribute "to_csv" for class "DatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)```
Cannot access attribute "to_csv" for class "IterableDatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
(method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int)
```
I ignored the error and continued executing to get the correct result:
```
Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
```
Since the data volume is small, I manually merged the CSV files, and the final result is consistent with what the program saved.
It looks like this:
<img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" />
### Steps to reproduce the bug
this is my code.
```
from datasets import load_dataset
def main():
url = "data/test.zip"
data_files = {"train": url}
dataset = load_dataset("csv", data_files=data_files,split="train", encoding="gbk", skiprows=2)
# print(dataset)
dataset.to_csv("data/test.csv")
if __name__ == "__main__":
main()
```
### Expected behavior
I want to know why this happens. Is there something wrong with my code?
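A sketch of a possible workaround, assuming the warning only comes from the union return type of `load_dataset` rather than from a bug: narrow the type before calling `to_csv`.
```python
from datasets import Dataset, load_dataset

dataset = load_dataset("csv", data_files={"train": "data/test.zip"},
                       split="train", encoding="gbk", skiprows=2)
# load_dataset is annotated to return a union of dataset types, so Pylance
# cannot tell statically that split="train" with streaming disabled yields a
# plain Dataset; an isinstance check narrows the type for the type checker.
assert isinstance(dataset, Dataset)
dataset.to_csv("data/test.csv")
```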
### Environment info
OS: Windows 11 (upgraded from Windows_NT x64 10.0.22631)
Editor:
VS Code Version: 1.106.2 (user setup)
"datasets" version = "4.4.1"
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7882
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7882/events
|
https://github.com/huggingface/datasets/issues/7882
| 3,667,664,527
|
I_kwDODunzps7anB6P
| 7,882
|
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
|
{
"login": "Oligou",
"id": 6270922,
"node_id": "MDQ6VXNlcjYyNzA5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oligou",
"html_url": "https://github.com/Oligou",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://api.github.com/users/Oligou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oligou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oligou/subscriptions",
"organizations_url": "https://api.github.com/users/Oligou/orgs",
"repos_url": "https://api.github.com/users/Oligou/repos",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oligou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"we are investigating the issue, I'll post updates here",
"It should be all good now, sorry for the inconvenience.\n\nWe found an error that happened during the migration of this repository from Git LFS to Xet and fixed the issue"
] | 2025-11-26T14:06:02
| 2025-12-19T14:09:45
| null |
NONE
| null | null | null | null |
### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet
Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"epfml/FineWeb-HQ",
data_files="data/CC-MAIN-2024-26/000_00003.parquet",
)
```
Error message:
```
HfHubHTTPError: 403 Forbidden: None.
Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/...
Make sure your token has the correct permissions.
...
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
### Expected behavior
It should load the dataset for all files.
### Environment info
- python 3.10
- datasets 4.4.1
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7881
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7881/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7881/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7881/events
|
https://github.com/huggingface/datasets/pull/7881
| 3,667,642,524
|
PR_kwDODunzps61qI8F
| 7,881
|
Fix spurious label column when directories match split names
|
{
"login": "neha222222",
"id": 132138786,
"node_id": "U_kgDOB-BHIg",
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neha222222",
"html_url": "https://github.com/neha222222",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"repos_url": "https://api.github.com/users/neha222222/repos",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T13:59:46
| 2025-12-08T12:21:54
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7881",
"html_url": "https://github.com/huggingface/datasets/pull/7881",
"diff_url": "https://github.com/huggingface/datasets/pull/7881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7881.patch",
"merged_at": null
}
|
Issue - https://github.com/huggingface/datasets/issues/7880
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7881/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7880/events
|
https://github.com/huggingface/datasets/issues/7880
| 3,667,561,864
|
I_kwDODunzps7amo2I
| 7,880
|
Spurious label column created when audiofolder/imagefolder directories match split names
|
{
"login": "neha222222",
"id": 132138786,
"node_id": "U_kgDOB-BHIg",
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neha222222",
"html_url": "https://github.com/neha222222",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"repos_url": "https://api.github.com/users/neha222222/repos",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T13:36:24
| 2025-11-26T13:36:24
| null |
NONE
| null | null | null | null |
## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```
from datasets import load_dataset
ds = load_dataset("datasets-examples/doc-audio-4")
print(ds["train"].features)
```
Shows a `label` column with `ClassLabel(names=['test', 'train'])` - incorrect!
## Root cause
In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`:
- `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True`
- Spurious label column is created with split names as class labels
## Expected behavior
No `label` column should be added when directory names match split names.
## Proposed fix
Skip label inference when inferred labels match split names.
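A rough sketch of that check, with hypothetical helper names (not the actual code in `folder_based_builder.py`):
```python
# Hypothetical helper: decide whether inferred directory names should be
# treated as class labels or ignored because they are really split names.
def should_add_labels(labels: set, split_names: set) -> bool:
    if labels and labels.issubset(split_names):
        # Directories like train/ and test/ encode splits, not classes.
        return False
    return len(labels) > 1

print(should_add_labels({"train", "test"}, {"train", "test"}))  # False
print(should_add_labels({"cat", "dog"}, {"train", "test"}))     # True
```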
cc @lhoestq
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7879/events
|
https://github.com/huggingface/datasets/issues/7879
| 3,657,249,446
|
I_kwDODunzps7Z_TKm
| 7,879
|
python core dump when downloading dataset
|
{
"login": "hansewetz",
"id": 5960219,
"node_id": "MDQ6VXNlcjU5NjAyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hansewetz",
"html_url": "https://github.com/hansewetz",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "https://api.github.com/users/hansewetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hansewetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hansewetz/subscriptions",
"organizations_url": "https://api.github.com/users/hansewetz/orgs",
"repos_url": "https://api.github.com/users/hansewetz/repos",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hansewetz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some threads created that handles the download that are still running when the program exits?\nHaven't had time yet to go through the code in ```iterable_dataset.py::IterableDataset```\n",
"Interesting, I was able to reproduce it, on a jupyter notebook the code runs just fine, as a Python script indeed it seems to never finish running (which is probably leading to the core dumped error). I'll try and take a look at the source code as well to see if I can figure it out.",
"Hi @hansewetz ,\nIf possible can I be assigned with this issue?\n\n",
"```If possible can I be assigned with this issue?```\nHi, I don't know how assignments work here and who can take decisions about assignments ... ",
"Hi @hansewetz and @Aymuos22, I have made some progress:\n\n1) Confirmed last working version is 3.1.0\n\n2) From 3.1.0 to 3.2.0, there was a change in how parquet files are read (see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py/#168).\n\nThe issue seems to be the following code:\n\n```\nparquet_fragment.to_batches(\n batch_size=batch_size,\n columns=self.config.columns,\n filter=filter_expr,\n batch_readahead=0,\n fragment_readahead=0,\n )\n```\n\nAdding a `use_threads=False` parameter to the `to_batches` call solves the bug. However, this seems far from an optimal solution, since we'd like to be able to use multiple threads for reading the fragments. \n\nI'll keep investigating to see if there's a better solution.",
"Hi @lhoestq, may I ask if the current behaviour was expected by you folks and you don't think it needs solving, or should I keep on investigating a compromise between using multithreading / avoid unexpected behaviour? Thanks in advance :) ",
"Having the same issue. the code never stops executing. Using datasets 4.4.1\nTried with \"islice\" as well. When the streaming flag is True, the code doesn't end execution. On vs-code.",
"The issue on pyarrow side is here: https://github.com/apache/arrow/issues/45214 and the original issue in `datasets` here: https://github.com/huggingface/datasets/issues/7357\n\nIt would be cool to have a fix on the pyarrow side",
"Thank you very much @lhoestq, I'm reading the issue thread in pyarrow and realizing you've been raising awareness around this for a long time now. When I have some time I'll look at @pitrou's PR to see if I can get a better understanding of what's going on on pyarrow. "
] | 2025-11-24T06:22:53
| 2025-11-25T20:45:55
| null |
NONE
| null | null | null | null |
### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Create python venv:
```bash
python -m venv venv
source venv/bin/activate
pip install datasets==4.4.1
```
Execute the following program:
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True)
for sample in ds:
break
```
### Expected behavior
Clean program exit
### Environment info
described above
**note**: the example works correctly when using ```datasets==3.1.0```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7878
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7878/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7878/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7878/events
|
https://github.com/huggingface/datasets/pull/7878
| 3,653,262,027
|
PR_kwDODunzps606R81
| 7,878
|
Replace papaya with niivue
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"@CloseChoice thanks for your work on this. As you mentioned, the prime developers for Papaya have moved on, so it is in maintenance mode, albeit it is mature and may fill all your requirements. \r\n\r\nPapaya does reflect the era of its creation, so it uses WebGL1 (which only supports 2D textures) for display and pako for decompression. In contrast, NiiVue uses WebGL2 (where 3D textures provide a native representation for volumes) and compression streams (x4 decoding speed). A major benefit of 3D textures is simple support for 3D volume rendering using ray casting. Note the Papaya README shows an isosurface rendering based on a triangulated mesh. In contrast, NiiVue can show both volume rendering (good for data with fuzzy boundaries) as well as surface rendering (good when a clean isosurface can be defined). I think the [gallery](https://niivue.com/gallery) provides a nice example of NiiVue capabilities as well as minimal recipes.\r\n\r\nI do agree that Papaya UI is more advanced: by design NiiVue is a graphic widget that can be embedded into a container that provides your preferred user interface (React, Angular, Vue, pure html, or even jupyter notebooks). \r\n\r\nI think DICOM support is a challenge for any tool for several reasons: the diversity of the implementations and compression methods (transfer syntaxes), the fact that in classic DICOM each 2D slice is saved as a separate file (though note modern enhanced DICOM can save an entire 3D volume or even 4D timeseries in a single file), and the rate that this format has evolved over time. Papaya uses [Daikon](https://github.com/rii-mango/Daikon) to handle DICOM images, and I think it is only one file at a time. In contrast, NiiVue provides plugins for complex image formats, so you can choose your desired tool. We do provide illustrate how to use [dcm2niix WASM](https://github.com/niivue/niivue-dcm2niix) as a DICOM loader, and it can extract coherent volumes from a random assortment of files or a manifest of files - see the [live demo](https://github.com/niivue/niivue-dcm2niix). Note that diakon development has halted, while dcm2niix is actively maintained, which impacts support for emerging compression methods (e.g. JPEG2000-HT). Having said that, if your primary focus is DICOM, [cornerstonejs](https://www.cornerstonejs.org/) is probably a better choice than NiiVue or Papaya.\r\n\r\nAnother feature that may or may not be worth noting is that NiiVue has a plugin model that allows you to use a lot of mature image processing tools. So you can do image conversion, image processing (itk-wasm, niimath), image registration (flirt, elastix) and edge-based AI models. [brainchop](https://brainchop.org/) illustrates edge-based AI model inference for brain segmentation, extraction and parcellation, though we provide minimal examples for ONNX, tensorflowjs and tinygrad. This would provide a convenient way for huggingface inference models to be shared. After training, the models could be converted to ONNX and deployed on a web page, allowing the user to drag-and-drop images and process them regardless of operating system or graphics card manufacturer. Since the AI model inference leverages the users own graphics card, the privacy issues and hardware scaling concerns of cloud distribution are mitigated.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"@neurolabusc thanks so much for the nuanced and informative reply.\r\nI am convinced that niivue is the better option here, having 3D support is huge and Papaya's UI features are actually not necessary at all, and AFAIS we can get what we need and more with some additional configuration for niivue as well.\r\nThanks a lot for words about DICOM, though the focus of this PR is not NifTI and not DICOM, I think having one tool being able to load both (and potentially more formats) is best, I'll definitely test the live demo. My primary interest in your thoughts about DICOM is to enable visualization as a follow-up to this PR #https://github.com/huggingface/datasets/pull/7835. Even for the DICOM case NiiVue seems like a great option using the [dcm2niix](https://github.com/niivue/niivue-dcm2niix) webassembly plugin, I think the main challenge is here how we let the user organize files in an intuitive way (e.g. provide DICOM folder class, and a DICOM document class where one folder can contain multiple documents and 3d visualization is on the folder level). \r\n\r\nGiven that NiiVue is a modern neuroimaging viewer, well maintained and widely used and we have @neurolabusc attention in case of questions/problems I think we should go ahead with NiiVue.\r\n\r\n@lhoestq your thoughts are highly appreciated.",
"Following the @neurolabusc 's suggestion I updated to [ipyniivue](https://github.com/niivue/ipyniivue?tab=readme-ov-file) which helps so that we don't need to bother with javascript and speeds up load times since ipyniivue comes with a bundled niivue version and therefore avoids to download. Since DICOM is out of the picture for now, I consider this ready to be reviewed.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-21T22:19:56
| 2025-11-27T20:37:04
| 2025-11-27T18:00:19
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7878",
"html_url": "https://github.com/huggingface/datasets/pull/7878",
"diff_url": "https://github.com/huggingface/datasets/pull/7878.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7878.patch",
"merged_at": "2025-11-27T18:00:19"
}
|
I was contacted by Chris Rorden whose group is developing NiiVue (see https://github.com/niivue/niivue), which leverages WebGL2 (in contrast to Papaya which is WebGL1 based). He also offered support in the implementation, which might come in handy in case of any questions later on (see DICOM implementation). I completely overlooked NiiVue when searching for frameworks.
Development speed, or the lack thereof, was already mentioned as a potential risk with Papaya. NiiVue is actively maintained; simply compare these two contribution charts:
NiiVue:
<img width="920" height="378" alt="image" src="https://github.com/user-attachments/assets/37a0a256-60aa-4758-bb07-97e421c68ae1" />
Papaya:
<img width="920" height="378" alt="image" src="https://github.com/user-attachments/assets/1e1cf0c9-ec0a-4ffc-ae03-a79ea12bcb3b" />
I gave NiiVue a try and it supports all the features Papaya does, though I find Papaya's UI slightly more appealing; that is just personal taste. There is also a 3D image of the scanned object included in the NiiVue UI, but that is possible for Papaya as well (at least in some way, check the image in their GitHub repo README.md).
```python
from datasets import load_dataset
# new dataset compared to papaya PR, this has more interesting images
ds = load_dataset("TobiasPitters/nifti-papaya-testdata",
split="train")
ds[1]['nifti'] # ds[2]['nifti'] is also interesting
```
Here's a brief video how this looks with NiiVue: https://github.com/user-attachments/assets/3f2a52d4-2109-45e2-aca8-e4a4b1e46b32
NOTE: I explicitly created this as a draft PR since I suspect DICOM support to be a crucial factor in deciding which of the two is better suited for our needs. DICOM is supported by Papaya, and by NiiVue as well via a plugin, but as far as I understand one DICOM file contains a single 2D slice, so support for loading a whole folder containing all 2D slices of a complete 3D image is desired. NiiVue supports this according to their docs; I am unsure about Papaya.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7878/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7877/events
|
https://github.com/huggingface/datasets/issues/7877
| 3,652,906,788
|
I_kwDODunzps7Zuu8k
| 7,877
|
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!"
] | 2025-11-21T19:51:48
| 2025-12-16T14:20:48
| 2025-12-16T14:20:48
|
CONTRIBUTOR
| null | null | null | null |
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage`
However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken. If the path doesn't exist it'd ignore it and fall back to using `/tmp`. Watch this:
```
$ export TMPDIR='/tmp/username'
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp
```
Now let's ensure the path exists:
```
$ export TMPDIR='/tmp/username'
$ mkdir -p $TMPDIR
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp/username
```
So I recommend `datasets` do either of the 2:
1. assert if `$TMPDIR` dir doesn't exist, telling the user to create it
2. auto-create it
The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir - perhaps there is some security implication? I will let you guys make the decision, but the key is not to let things silently fall through, leaving the user puzzling over why, no matter what they do, they can't get past `No space left on device` while using `datasets`.
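A minimal sketch of option (2), assuming it runs before anything calls `tempfile.gettempdir()` (which caches its result):
```python
import os
import tempfile

# Option (2) sketch: create the directory TMPDIR points to before tempfile
# first resolves (and caches) its temporary directory, so the setting is
# not silently ignored.
tmpdir = os.environ.get("TMPDIR")
if tmpdir and not os.path.isdir(tmpdir):
    os.makedirs(tmpdir, exist_ok=True)
print(tempfile.gettempdir())  # now points to TMPDIR if it was set
```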
Thank you.
I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague to solve this exact issue.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7876
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7876/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7876/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7876/events
|
https://github.com/huggingface/datasets/pull/7876
| 3,652,170,832
|
PR_kwDODunzps602lac
| 7,876
|
test: add verification for HuggingFaceM4/InterleavedWebDocuments
|
{
"login": "venkatsai2004",
"id": 122142345,
"node_id": "U_kgDOB0e-iQ",
"avatar_url": "https://avatars.githubusercontent.com/u/122142345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venkatsai2004",
"html_url": "https://github.com/venkatsai2004",
"followers_url": "https://api.github.com/users/venkatsai2004/followers",
"following_url": "https://api.github.com/users/venkatsai2004/following{/other_user}",
"gists_url": "https://api.github.com/users/venkatsai2004/gists{/gist_id}",
"starred_url": "https://api.github.com/users/venkatsai2004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkatsai2004/subscriptions",
"organizations_url": "https://api.github.com/users/venkatsai2004/orgs",
"repos_url": "https://api.github.com/users/venkatsai2004/repos",
"events_url": "https://api.github.com/users/venkatsai2004/events{/privacy}",
"received_events_url": "https://api.github.com/users/venkatsai2004/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-21T15:42:09
| 2025-11-21T15:42:09
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7876",
"html_url": "https://github.com/huggingface/datasets/pull/7876",
"diff_url": "https://github.com/huggingface/datasets/pull/7876.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7876.patch",
"merged_at": null
}
|
Adds an integration test for the `HuggingFaceM4/InterleavedWebDocuments` dataset.
- Gracefully skips if the dataset is not yet available on the Hub
- Checks basic loading and structure once it becomes available
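A hedged sketch of what such a graceful-skip test could look like (illustrative only, not the exact test in this PR):
```python
import pytest
from datasets import load_dataset
from datasets.exceptions import DatasetNotFoundError

def test_interleaved_web_documents_loads():
    try:
        ds = load_dataset("HuggingFaceM4/InterleavedWebDocuments", split="train", streaming=True)
    except DatasetNotFoundError:
        pytest.skip("HuggingFaceM4/InterleavedWebDocuments is not yet available on the Hub")
    sample = next(iter(ds))
    assert isinstance(sample, dict)
```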
Closes #7394
First-time contributor to `datasets` — really excited about this! Happy to make any adjustments needed. 🙂
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7876/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7875
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7875/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7875/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7875/events
|
https://github.com/huggingface/datasets/pull/7875
| 3,649,326,175
|
PR_kwDODunzps60s9my
| 7,875
|
Add quickstart example to datasets README
|
{
"login": "hajermabrouk",
"id": 101023542,
"node_id": "U_kgDOBgV_Ng",
"avatar_url": "https://avatars.githubusercontent.com/u/101023542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hajermabrouk",
"html_url": "https://github.com/hajermabrouk",
"followers_url": "https://api.github.com/users/hajermabrouk/followers",
"following_url": "https://api.github.com/users/hajermabrouk/following{/other_user}",
"gists_url": "https://api.github.com/users/hajermabrouk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hajermabrouk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hajermabrouk/subscriptions",
"organizations_url": "https://api.github.com/users/hajermabrouk/orgs",
"repos_url": "https://api.github.com/users/hajermabrouk/repos",
"events_url": "https://api.github.com/users/hajermabrouk/events{/privacy}",
"received_events_url": "https://api.github.com/users/hajermabrouk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-20T22:13:52
| 2025-11-20T22:13:52
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7875",
"html_url": "https://github.com/huggingface/datasets/pull/7875",
"diff_url": "https://github.com/huggingface/datasets/pull/7875.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7875.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7875/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7874
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7874/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7874/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7874/events
|
https://github.com/huggingface/datasets/pull/7874
| 3,644,558,046
|
PR_kwDODunzps60c4sg
| 7,874
|
Nifti visualization support
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tested in Colab and it works perfectly :) now I want to add `_repr_html_` everywhere xD\r\n\r\nRe: testing, I think it's fine to test manually such features"
] | 2025-11-19T21:56:56
| 2025-11-21T12:41:43
| 2025-11-21T12:31:18
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7874",
"html_url": "https://github.com/huggingface/datasets/pull/7874",
"diff_url": "https://github.com/huggingface/datasets/pull/7874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7874.patch",
"merged_at": "2025-11-21T12:31:18"
}
|
closes #7870
Leverages Papaya to visualize NIfTI images. For this I created a wrapper class for `nibabel.nifti1.Nifti1Image` that provides the same interface but exposes an additional `_repr_html_` method, which is needed to visualize the image in Jupyter (didn't test in Colab, but that should work equivalently).
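As a rough illustration of that wrapper idea (hypothetical class name, placeholder HTML instead of the actual Papaya embed):
```python
import nibabel as nib

class NiftiImageWrapper:
    """Hypothetical sketch: same interface as Nifti1Image plus _repr_html_."""

    def __init__(self, image: nib.Nifti1Image):
        self._image = image

    def __getattr__(self, name):
        # Delegate everything else to the wrapped Nifti1Image.
        return getattr(self._image, name)

    def _repr_html_(self) -> str:
        # The real implementation embeds the Papaya viewer; this placeholder
        # just shows the image shape so notebooks render something useful.
        return f"<b>NIfTI image</b> with shape {self._image.shape}"
```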
Code to test (execute in a notebook):
```python
from datasets import load_dataset
ds = load_dataset("TobiasPitters/nifti-nitest-extracted",
split="train")
image = ds[1]
image
```
Here a small video, not the most exciting scan though:
https://github.com/user-attachments/assets/1cca5f01-6fd2-48ef-a4d7-a92c1259c224
Am open to good ways to test this.
EDIT: papaya also supports dicom, didn't test it yet though
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7874/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7873
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7873/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7873/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7873/events
|
https://github.com/huggingface/datasets/pull/7873
| 3,643,993,705
|
PR_kwDODunzps60a_IZ
| 7,873
|
Fix chunk casting and schema unification in dataset
|
{
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"\r\n@lhoestq would like to hear from you!\r\n"
] | 2025-11-19T18:43:47
| 2025-11-22T19:51:30
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7873",
"html_url": "https://github.com/huggingface/datasets/pull/7873",
"diff_url": "https://github.com/huggingface/datasets/pull/7873.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7873.patch",
"merged_at": null
}
|
Updated chunk handling to cast to expected schema when features are provided or to unify schemas when not. This ensures proper schema alignment for the yielded batches.
fixes #7872
This PR fixes a bug where `IterableDataset` created from a generator with explicit `features` parameter would fail during arrow operations (like `.to_pandas()`) when the data contains missing or null values.
## Problem
When an `IterableDataset` is created with explicit features but the generator yields data with missing values (e.g., empty lists), PyArrow would infer different schemas for different batches based on the actual data rather than using the provided schema. This caused `ArrowInvalid` errors when trying to concatenate batches with mismatched schemas.
### Example error:
```python
pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
a: int64
b: list<item: null>
vs
a: int64
b: list<item: struct<c: int64>>
```
## Solution
Modified `RebatchedArrowExamplesIterable._iter_arrow()` to:
1. Cast chunks to the expected schema when explicit features are provided
2. Unify schemas across chunks when no explicit features are set
3. Gracefully handle cast failures by falling back to the original chunk
This ensures that the user-provided schema is respected throughout the iteration process.
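For illustration, a standalone sketch of the casting step described above (not the actual patch; `expected_schema` would come from the user-provided features):
```python
from typing import Optional
import pyarrow as pa

def cast_or_keep(table: pa.Table, expected_schema: Optional[pa.Schema]) -> pa.Table:
    # Cast to the expected schema when explicit features are provided,
    # and fall back to the original table if the cast is not possible.
    if expected_schema is None:
        return table
    try:
        return table.cast(expected_schema)
    except (pa.ArrowInvalid, pa.ArrowNotImplementedError):
        return table
```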
## Testing
Verified the fix with the following test case:
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
common_features = features.Features(
{
"a": features.Value("int64"),
"b": features.List({"c": features.Value("int64")}),
}
)
def row_generator():
data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
for row in data:
yield row
d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
print("Iterating…")
for _ in d.to_pandas():
pass
test_to_pandas_works_with_explicit_schema()
```
Before Patch -
```
@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py
Iterating…
Traceback (most recent call last):
File "/workspaces/datasets/test_arjun.py", line 24, in <module>
test_to_pandas_works_with_explicit_schema()
File "/workspaces/datasets/test_arjun.py", line 21, in test_to_pandas_works_with_explicit_schema
for _ in d.to_pandas():
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 3736, in to_pandas
table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 2596, in iter
for key, pa_table in iterator:
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 2111, in _iter_arrow
for key, pa_table in self.ex_iterable._iter_arrow():
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 632, in _iter_arrow
yield new_key, pa.Table.from_batches(chunks_buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 5039, in pyarrow.lib.Table.from_batches
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
a: int64
b: list<item: null>
vs
a: int64
b: list<item: struct<c: int64>>
```
After Patch -
```
@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py
Iterating…
@ArjunJagdale ➜ /workspaces/datasets (main) $
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7873/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7872/events
|
https://github.com/huggingface/datasets/issues/7872
| 3,643,681,893
|
I_kwDODunzps7ZLixl
| 7,872
|
IterableDataset does not use features information in to_pandas
|
{
"login": "bonext",
"id": 790640,
"node_id": "MDQ6VXNlcjc5MDY0MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bonext",
"html_url": "https://github.com/bonext",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api.github.com/users/bonext/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bonext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonext/subscriptions",
"organizations_url": "https://api.github.com/users/bonext/orgs",
"repos_url": "https://api.github.com/users/bonext/repos",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"received_events_url": "https://api.github.com/users/bonext/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n })\n\n def row_generator():\n yield {\"a\": 1, \"b\": []}\n yield {\"a\": 1, \"b\": [{\"c\": 1}]}\n\n d = datasets.IterableDataset.from_generator(row_generator, features=common_features)\n\n list(d.to_pandas()) # <-- this triggers the crash\n\n```"
] | 2025-11-19T17:12:59
| 2025-11-19T18:52:14
| null |
NONE
| null | null | null | null |
### Describe the bug
An `IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features description for certain operations, e.g. `.to_pandas(...)`, when the data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
common_features = features.Features(
{
"a": features.Value("int64"),
"b": features.List({"c": features.Value("int64")}),
}
)
def row_generator():
data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
for row in data:
yield row
d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
for _ in d.to_pandas():
pass
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas
# table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter
# for key, pa_table in iterator:
# ^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow
# for key, pa_table in self.ex_iterable._iter_arrow():
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow
# yield new_key, pa.Table.from_batches(chunks_buffer)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches
# ???
# pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status
# ???
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# > ???
# E pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
# E a: int64
# E b: list<item: null>
# E vs
# E a: int64
# E b: list<item: struct<c: int64>>
# pyarrow/error.pxi:92: ArrowInvalid
```
### Expected behavior
Arrow operations should use the schema provided through `features=`, not the one inferred from the data.
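A minimal sketch of a possible user-side workaround (assuming PyArrow can cast the empty `list<null>` column to the declared type): cast each Arrow batch to the schema declared in `features` before concatenating, so empty lists are not left inferred as `list<null>`.
```python
import pyarrow as pa
import datasets
from datasets import features

common_features = features.Features(
    {
        "a": features.Value("int64"),
        "b": features.List({"c": features.Value("int64")}),
    }
)

def row_generator():
    yield {"a": 1, "b": []}
    yield {"a": 1, "b": [{"c": 1}]}

d = datasets.IterableDataset.from_generator(row_generator, features=common_features)

# Cast every batch to the declared Arrow schema before concatenating,
# instead of relying on the per-batch inferred schemas.
target_schema = common_features.arrow_schema
tables = [t.cast(target_schema) for t in d.with_format("arrow").iter(batch_size=1000)]
df = pa.concat_tables(tables).to_pandas()
```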
### Environment info
- datasets version: 4.4.1
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- huggingface_hub version: 1.1.4
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- fsspec version: 2025.10.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7871/events
|
https://github.com/huggingface/datasets/issues/7871
| 3,643,607,371
|
I_kwDODunzps7ZLQlL
| 7,871
|
Reqwest Error: HTTP status client error (429 Too Many Requests)
|
{
"login": "yanan1116",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanan1116",
"html_url": "https://github.com/yanan1116",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`",
"Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using the `hf download` CLI command (from `huggingface_hub`), and the error occurs in `huggingface_hub/file_download.py` at line 571 in the `xet_get` function. The `datasets` library is not involved in this download at all.\n\nThe 429 error means the CAS (Content Addressable Storage) service at `https://cas-server.xethub.hf.co` is rate-limiting your requests. The `huggingface_hub` library currently doesn't have automatic retry logic for 429 errors from the CAS service.\n\nPlease reopen this issue at: https://github.com/huggingface/huggingface_hub/issues"
] | 2025-11-19T16:52:24
| 2025-11-30T13:38:32
| 2025-11-30T13:38:32
|
NONE
| null | null | null | null |
### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__
raise e
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__
return get_command(self)(*args, **kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main
return _main(
self,
...<6 lines>...
**extra,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main
rv = self.invoke(ctx)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper
return callback(**use_params)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download
_print_result(run_download())
~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download
return snapshot_download(
repo_id=repo_id,
...<10 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download
hf_hub_download( # type: ignore
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
repo_id,
^^^^^^^^
...<14 lines>...
dry_run=dry_run,
^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download
return _hf_hub_download_to_local_dir(
# Destination
...<16 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=paths.incomplete_path(etag),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move
xet_get(
~~~~~~~^
incomplete_path=incomplete_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get
download_files(
~~~~~~~~~~~~~~^
xet_download_info,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
progress_updater=[progress_updater],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710
```
### Steps to reproduce the bug
My command:
```bash
hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/
```
### Expected behavior
The data should download without errors.
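In the meantime, a minimal client-side retry sketch (not part of `huggingface_hub`; it assumes the 429 keeps surfacing as the `RuntimeError` shown in the traceback above):
```python
import time
from huggingface_hub import snapshot_download

for attempt in range(5):
    try:
        snapshot_download(
            repo_id="nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim",
            repo_type="dataset",
            allow_patterns=["single_panda_gripper.CoffeePressButton/**"],
            local_dir="/home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/",
        )
        break
    except RuntimeError as err:
        # Re-raise anything that is not a rate-limit error, or if retries are exhausted.
        if "429" not in str(err) or attempt == 4:
            raise
        time.sleep(30 * (attempt + 1))  # back off before retrying
```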
### Environment info
huggingface_hub 1.1.4
|
{
"login": "yanan1116",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanan1116",
"html_url": "https://github.com/yanan1116",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7870/events
|
https://github.com/huggingface/datasets/issues/7870
| 3,642,209,953
|
I_kwDODunzps7ZF7ah
| 7,870
|
Visualization for Medical Imaging Datasets
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the slices, so we don't need javascript."
] | 2025-11-19T11:05:39
| 2025-11-21T12:31:19
| 2025-11-21T12:31:19
|
CONTRIBUTOR
| null | null | null | null |
This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities for visualizing NIfTI (and potentially DICOM) files, and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!)
- https://github.com/rii-mango/Papaya, custom but BSD-style license that would require datasets to list the conditions somewhere in its README, last commit June 2024. I looked into this library and it looks mature and good enough for our use case; working on it only for a short time I wasn't able to get it running, but I'm sure we could, though it would probably require some JS on datasets' end. Available on jsdelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. It seems to be frequently loaded.
- https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option.
I think the only real option for us is Papaya, but there is also the risk that we'll end up with an unmaintained package after a while, since development seems to be slow or even halted.
I think conceptually we need to figure out how to build a good solution for visualizing medical image data. In shap, we have a separate JavaScript folder in which we render visualizations; this could be a blueprint, but it would require a bundler, etc. Alternatively, one could go with a naive approach and just write some HTML in a Python string that loads the package via jsdelivr.
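For comparison, here is a minimal sketch of an even simpler, JS-free fallback (assuming nibabel, matplotlib, and ffmpeg are available; the file path is hypothetical): render the volume slice by slice and dump it to a video.
```python
import nibabel as nib
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Load the NIfTI volume as a numpy array (hypothetical example file).
volume = nib.load("example.nii.gz").get_fdata()

fig, ax = plt.subplots()
im = ax.imshow(volume[:, :, 0].T, cmap="gray", origin="lower")
ax.set_axis_off()

def show_slice(i):
    # Update the image data with the i-th axial slice.
    im.set_data(volume[:, :, i].T)
    return [im]

anim = FuncAnimation(fig, show_slice, frames=volume.shape[2], interval=50)
anim.save("volume.mp4")  # requires ffmpeg on the system
```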
@lhoestq thoughts?
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7869/events
|
https://github.com/huggingface/datasets/issues/7869
| 3,636,808,734
|
I_kwDODunzps7YxUwe
| 7,869
|
Why does dataset merge fail when tools have different parameters?
|
{
"login": "hitszxs",
"id": 116297296,
"node_id": "U_kgDOBu6OUA",
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hitszxs",
"html_url": "https://github.com/hitszxs",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https://api.github.com/users/hitszxs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hitszxs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hitszxs/subscriptions",
"organizations_url": "https://api.github.com/users/hitszxs/orgs",
"repos_url": "https://api.github.com/users/hitszxs/repos",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"received_events_url": "https://api.github.com/users/hitszxs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_features_can_be_aligned`](https://github.com/huggingface/datasets/blob/main/src/datasets/features/features.py#L2297-L2316) function.\n\nTwo datasets can be merged if:\n1. Columns with the same name have the **same type**, OR\n2. One of them has `Value(\"null\")` (representing missing data)\n\nFor struct types (nested dictionaries like your tool schemas), **all fields must match exactly**. This ensures type safety and efficient columnar storage.\n\n## Workarounds for Your Use Case\n Store tools as JSON strings\n\nInstead of using nested struct types, store the tool definitions as JSON strings\n\n\n"
] | 2025-11-18T08:33:04
| 2025-11-30T03:52:07
| null |
NONE
| null | null | null | null |
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
...
'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
Any guidance or design rationale would be greatly appreciated. Thanks!
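For reference, a minimal sketch of the JSON-string workaround (serialize each tool schema to a string so both datasets share the same flat column type and can be concatenated); the example schemas below are made up:
```python
import json
from datasets import Dataset, concatenate_datasets

# Each dataset stores its tool definitions as JSON strings instead of nested structs.
ds1 = Dataset.from_dict({"tools": [json.dumps({"refundFee": {"type": "string"}})]})
ds2 = Dataset.from_dict({"tools": [json.dumps({"servicerId": {"type": "string"}})]})

merged = concatenate_datasets([ds1, ds2])  # both columns are plain strings, so this works
tool_schema = json.loads(merged[0]["tools"])  # parse back into a dict when needed
```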
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7868
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7868/events
|
https://github.com/huggingface/datasets/issues/7868
| 3,632,429,308
|
I_kwDODunzps7Ygnj8
| 7,868
|
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
|
{
"login": "ValMystletainn",
"id": 42485228,
"node_id": "MDQ6VXNlcjQyNDg1MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ValMystletainn",
"html_url": "https://github.com/ValMystletainn",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
"gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions",
"organizations_url": "https://api.github.com/users/ValMystletainn/orgs",
"repos_url": "https://api.github.com/users/ValMystletainn/repos",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"received_events_url": "https://api.github.com/users/ValMystletainn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @ValMystletainn ,\nCan I be assigned this issue?",
"> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?",
"Hi ! I think interleave_dataset() should take the distributed config into account. It should use .shard() to get the dataset that corresponds to a certain node according to the distributed config, and do that for every dataset passed to interleave_dataset()\n\nfeel free to open a PR and ping me for reviews if you'd like to try to fix this issue !"
] | 2025-11-17T09:15:24
| 2025-12-15T11:52:32
| null |
NONE
| null | null | null | null |
### Describe the bug
Data is duplicated across ranks when an iterable dataset is processed with `split_dataset_by_node` first and then `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node
path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
print(f"{dataset.n_shards=}")
dataset_rank0 = split_dataset_by_node(dataset, 0, 4)
dataset_rank1 = split_dataset_by_node(dataset, 1, 4)
dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0])
dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0])
print("print the first sample id from all datasets")
print("dataset", next(iter(dataset))['id'])
print("dataset_rank0", next(iter(dataset_rank0))['id'])
print("dataset_rank1", next(iter(dataset_rank1))['id'])
print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id'])
print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id'])
dataset_rank0_shard = dataset.shard(4, 0)
dataset_rank1_shard = dataset.shard(4, 1)
dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0])
dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0])
print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id'])
print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id'])
print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id'])
print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id'])
```
I just used a subfolder of C4 with 14 parquet files for a quick run and got:
```
dataset.n_shards=14
print the first sample id from all datasets
dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223>
dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
```
### Expected behavior
The first samples of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as they are for the other `rank0`/`rank1` pairs.
I have dug into the functions to see how the `split -> interleave` flow works.
`split_dataset_by_node` on an iterable dataset does not change its `._ex_iterable` attribute; it only sets the distributed config on the dataset, which is then used in the actual `__iter__` call to handle shard splitting or sample skipping.
However, `interleave_datasets` for iterable datasets copies out the `._ex_iterable` of each provided dataset and builds a new `_ex_iterable`, so the distributed config is not carried over, and this missing copy causes the data duplication across DP ranks.
So I would first ask: is this an unsupported order of calling these functions, meaning one should:
- always call `split_dataset_by_node` last rather than partway through (a sketch of this option is shown below), or
- use `dataset.shard(dp_size, dp_rank)` instead of `split_dataset_by_node` in cases like mine.
If this order is permitted, I think it is a bug, and I can open a PR to fix it.
(I met this bug in real training; the related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200 if it helps.)
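A minimal sketch of the first workaround (interleave first, then split by node), reusing `files` from the script above:
```python
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node

dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
mixed = interleave_datasets([dataset], probabilities=[1.0], seed=42)

# Split the already-interleaved dataset, so the distributed config is applied last
# and each rank reads different shards.
dataset_rank0 = split_dataset_by_node(mixed, rank=0, world_size=4)
dataset_rank1 = split_dataset_by_node(mixed, rank=1, world_size=4)
```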
### Environment info
datasets 4.4.1
ubuntu 20.04
python 3.11.4
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7867/events
|
https://github.com/huggingface/datasets/issues/7867
| 3,620,931,722
|
I_kwDODunzps7X0wiK
| 7,867
|
NonMatchingSplitsSizesError when loading partial dataset files
|
{
"login": "QingGo",
"id": 13678719,
"node_id": "MDQ6VXNlcjEzNjc4NzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QingGo",
"html_url": "https://github.com/QingGo",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://api.github.com/users/QingGo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QingGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingGo/subscriptions",
"organizations_url": "https://api.github.com/users/QingGo/orgs",
"repos_url": "https://api.github.com/users/QingGo/repos",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"received_events_url": "https://api.github.com/users/QingGo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwift/the_pile_books3_minus_gutenberg\",\n name=\"default\",\n data_files=\"data/train-00000-of-00213-312fd8d7a3c58a63.parquet\",\n split=\"train\",\n cache_dir=\"./data\",\n verification_mode='no_checks'\n)\n```",
"Thanks for the report and reproduction steps @QingGo \n@lhoestq which one of the following looks like a nicer way to handle this?\n\n1] Skip split-size validation entirely for partial loads\nIf the user passes data_files manually and it represents only a subset, then verify_splits() should simply not run, or skip validation only for that split.\n\n2] Replace the error with a warning\n\n3] Automatically detect partial-load cases(i mean we can try this out!)\n\nAssume this, \nIf data_files is provided AND\nthe number of provided files ≠ number of expected files in metadata,\nthen treat it as a partial load and disable strict verification.\n"
] | 2025-11-13T12:03:23
| 2025-11-16T15:39:23
| null |
NONE
| null | null | null | null |
### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a NonMatchingSplitsSizesError . This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to reproduce the bug
1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified
2. Ensure the dataset repository has split metadata defined in README.md
3. Observe the error when attempting to load a subset of files
```python
# Example code that triggers the error
from datasets import load_dataset
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
name="default",
data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
split="train",
cache_dir="./data"
)
```
### Error Message
```
Traceback (most recent call last):
File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module>
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
...
File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}]
```
### Expected behavior
When loading partial dataset files, the system should:
1. Skip the `NonMatchingSplitsSizesError` validation, OR
2. Only log a warning message instead of raising an error
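In the meantime, the existing escape hatch is to disable verification explicitly; a minimal sketch (the same call as above with one extra argument):
```python
from datasets import load_dataset

book_corpus_ds = load_dataset(
    "SaylorTwift/the_pile_books3_minus_gutenberg",
    name="default",
    data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
    split="train",
    cache_dir="./data",
    verification_mode="no_checks",  # skips the split size check entirely
)
```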
### Environment info
- `datasets` version: 4.3.0
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7866
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7866/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7866/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7866/events
|
https://github.com/huggingface/datasets/pull/7866
| 3,620,436,248
|
PR_kwDODunzps6zL7Sz
| 7,866
|
docs: add Python version requirement note to installation section
|
{
"login": "ananthasai-2006",
"id": 222381706,
"node_id": "U_kgDODUFGig",
"avatar_url": "https://avatars.githubusercontent.com/u/222381706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ananthasai-2006",
"html_url": "https://github.com/ananthasai-2006",
"followers_url": "https://api.github.com/users/ananthasai-2006/followers",
"following_url": "https://api.github.com/users/ananthasai-2006/following{/other_user}",
"gists_url": "https://api.github.com/users/ananthasai-2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ananthasai-2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ananthasai-2006/subscriptions",
"organizations_url": "https://api.github.com/users/ananthasai-2006/orgs",
"repos_url": "https://api.github.com/users/ananthasai-2006/repos",
"events_url": "https://api.github.com/users/ananthasai-2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/ananthasai-2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-13T09:54:35
| 2025-11-13T09:54:35
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7866",
"html_url": "https://github.com/huggingface/datasets/pull/7866",
"diff_url": "https://github.com/huggingface/datasets/pull/7866.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7866.patch",
"merged_at": null
}
|
Added note about Python version requirement for conda installation.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7866/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7865
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7865/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7865/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7865/events
|
https://github.com/huggingface/datasets/pull/7865
| 3,620,116,195
|
PR_kwDODunzps6zK2H_
| 7,865
|
[FEAT] MIDI feature support
|
{
"login": "frascuchon",
"id": 2518789,
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frascuchon",
"html_url": "https://github.com/frascuchon",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7865). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-13T08:31:51
| 2025-11-14T13:58:52
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7865",
"html_url": "https://github.com/huggingface/datasets/pull/7865",
"diff_url": "https://github.com/huggingface/datasets/pull/7865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7865.patch",
"merged_at": null
}
|
This PR adds a new `Midi` feature for reading and importing MIDI files into datasets.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7865/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7864
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7864/events
|
https://github.com/huggingface/datasets/issues/7864
| 3,619,137,823
|
I_kwDODunzps7Xt6kf
| 7,864
|
add_column and add_item erroneously(?) require new_fingerprint parameter
|
{
"login": "echthesia",
"id": 17151810,
"node_id": "MDQ6VXNlcjE3MTUxODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echthesia",
"html_url": "https://github.com/echthesia",
"followers_url": "https://api.github.com/users/echthesia/followers",
"following_url": "https://api.github.com/users/echthesia/following{/other_user}",
"gists_url": "https://api.github.com/users/echthesia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echthesia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echthesia/subscriptions",
"organizations_url": "https://api.github.com/users/echthesia/orgs",
"repos_url": "https://api.github.com/users/echthesia/repos",
"events_url": "https://api.github.com/users/echthesia/events{/privacy}",
"received_events_url": "https://api.github.com/users/echthesia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis is not desired since the fingerprint should be calculated based on the operations (and their arguments) to be unique. This is handled by the [fingerprint_transform](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6077) function which needs a \"new_fingerprint\" keyword argument and creates a unique hash if its value is not set, see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L432). So it is probably safer to not document this keyword, since one doesn't want the user to actually use it and it's only a feature in very limited cases for people really knowing what they are doing. The thing that might be bugging people who read the code is that `new_fingerprint` seems to be required for `add_item` and `add_column` but it is actually set by the decorator (in which's definition it is optional), so maybe changing the signature of `add_item` and `add_column` to `new_fingerprint: Optional[str] = None` would make sense, since this is also how it's handled in the other cases (created by claude):\n\n - [flatten](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2034)\n - [cast_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2165)\n - [remove_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2209)\n - [rename_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2263)\n - [rename_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2329)\n - [select_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2397)\n - [batch](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3760)\n - [filter](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3813)\n - [flatten_indices](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3959)\n - [select](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4038)\n - [_select_contiguous](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4128)\n - [sort](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4376)\n - [shuffle](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4506)\n - [train_test_split](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4641)\nSo as you mentioned, I believe the methods erronously require the `new_fingerprint` parameter and making them optional is a little consistency win.",
"Any updates on this? Current behavior suggests that user is supposed to pass in the `new_fingerprint` argument, while examples in documentation doesn't include that argument at all."
] | 2025-11-13T02:56:49
| 2025-12-07T14:41:40
| null |
NONE
| null | null | null | null |
### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason it shouldn't be optional in these methods as well?
### Steps to reproduce the bug
Reproduction steps:
1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078
2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336
### Expected behavior
`add_column` and `add_item` should either make the `new_fingerprint` parameter optional or include it in their docstrings.
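A sketch of the suggested signature change (not the current code), so the `fingerprint_transform` decorator computes the fingerprint when the caller does not pass one:
```python
from typing import Optional

from datasets.fingerprint import fingerprint_transform

@fingerprint_transform(inplace=False)
def add_column(self, name: str, column, new_fingerprint: Optional[str] = None) -> "Dataset":
    # Body unchanged: when new_fingerprint is None, the decorator fills it in
    # with a hash computed from the transform and its arguments.
    ...
```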
### Environment info
Not environment-dependent
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7863/events
|
https://github.com/huggingface/datasets/issues/7863
| 3,618,836,821
|
I_kwDODunzps7XsxFV
| 7,863
|
Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub
|
{
"login": "pavanramkumar",
"id": 3664715,
"node_id": "MDQ6VXNlcjM2NjQ3MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavanramkumar",
"html_url": "https://github.com/pavanramkumar",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/pavanramkumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavanramkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavanramkumar/subscriptions",
"organizations_url": "https://api.github.com/users/pavanramkumar/orgs",
"repos_url": "https://api.github.com/users/pavanramkumar/repos",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavanramkumar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"Kudos!",
"So cool ! Would love to see support for lance :)",
"@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingface_hub` is flexible enough to accommodate lance? I don't want to `open` and scan a file, I want to create generators with the `lance.dataset.to_batches()` from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nIdeally, something like this should just work:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nLooking at the huggingface blog post, I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions) cc @prrao87, @changhiskhan",
"> Do you know if HfFileSystem from huggingface_hub is flexible enough to accommodate lance?\n\nit provides file-like objects for files on HF, and works using range requests. PyArrow uses HfFileSystem for HF files already\n\nThough in the Parquet / PyArrow case the data is read generally row group per row group (using range requests with a minimum size `range_size_limit ` to optimize I/O in case of small row groups)\n\nPS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\n> I don't want to open and scan a file, I want to create generators with the lance.dataset.to_batches() from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nWe do something very similar for Parquet here: \n\nhttps://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/packaged_modules/parquet/parquet.py#L168-L169",
"Hi, I work on the Lance project. We'd be happy to see the format supported on huggingface hub.\n\nIt's not clear to me from this thread what is required for that. Could we clarify that? Are there examples we can point to?\n\n> I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions)\n\nCould you elaborate why a `FragmentScanOptions` subclass is required? Also, if it is, we could just define that as a subclass within the `pylance` module, unless I'm missing something.\n\nLance supports OpenDAL storage, so I think we could add support for huggingface's filesystem through that and make sure it's exposed in pylance. Could also help implement some write operations. Perhaps that's the main blocker? ",
"> PS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\nHi, I’m willing to add full-fledged support for the HF file system. This shouldn’t be considered a blocker. 🤟 ",
"Exposing the existing HF filesystem from OpenDAL in pylance would be great ! and a good first step\n\nExcited for write operations too",
"Thanks @lhoestq @wjones127 @Xuanwo ! I think we have all the necessary people on this thread now to make it happen :)\n\n> Could you elaborate why a FragmentScanOptions subclass is required? Also, if it is, we could just define that as a subclass within the pylance module, unless I'm missing something.\n\n@wjones127 I'm not actually sure this is needed but I'm guessing based on [this blog post](https://huggingface.co/blog/streaming-datasets) from a couple of weeks ago. Specifically, this section which allows creation of a dataset object with configurable prefetching:\n\n```\nimport pyarrow\nimport pyarrow.dataset\n\nfragment_scan_options = pyarrow.dataset.ParquetFragmentScanOptions(\n cache_options=pyarrow.CacheOptions(\n prefetch_limit=1,\n range_size_limit=128 << 20\n ),\n)\nds = load_dataset(parquet_dataset_id, streaming=True, fragment_scan_options=fragment_scan_options)\n```\n\nI might be completely wrong that we do need an equivalent `LanceFragmentScanOptions` PR into `pyarrow` and the `OpenDAL` path might be sufficient.\n\nI really just want something like this to work out of the box:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nIn the ideal case, I'd like to be able to control prefetch configuration via arguments to `to_batches()` like the ones that already exist for a lance dataset on any S3-compatible object store.\n\nWould a useful approach be to create a toy lance dataset on huggingface and see if this \"just works\"; then work backwards from there?\n\nAs for writing, I'm looking to migrate datasets from my own private S3-compatible object store bucket (Tigris Data) to huggingface datasets but ~~I'm 100% sure~~ I'm _not_ 100% sure whether we even need `hfFileSystem` compatible write capability\n\n\n",
"Here's a public dataset which could be a working example to work backwards from:\n\nhttps://huggingface.co/datasets/pavan-ramkumar/test-slaf\n\npylance currently looks for default object store backends and returns this `ValueError`\n\n```\n>>> import lance\n>>> hf_path = \"hf://datasets/pavan-ramkumar/test-slaf/tree/main/synthetic_50k_processed_v21.slaf/expression.lance\"\n>>> ds = lance.dataset(hf_path)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/__init__.py\", line 145, in dataset\n ds = LanceDataset(\n ^^^^^^^^^^^^^\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/dataset.py\", line 425, in __init__\n self._ds = _Dataset(\n ^^^^^^^^^\nValueError: Invalid user input: No object store provider found for scheme: 'hf'\nValid schemes: gs, memory, s3, az, file-object-store, file, oss, s3+ddb, /Users/runner/work/lance/lance/rust/lance-io/src/object_store/providers.rs:161:54\n```",
"@Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n\nDo let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub",
"> @Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n> \n> Do let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub\n\nI'm willing to work on this! Would you like to create an issue on lance side and ping me there?",
" > I'm willing to work on this! Would you like to create an issue on lance side and ping me there?\n\nDone! [Link](https://github.com/lance-format/lance/issues/5346)\n",
"@pavanramkumar pls check this out once it's merged! https://github.com/lance-format/lance/pull/5353"
] | 2025-11-13T00:51:07
| 2025-11-26T14:10:29
| null |
NONE
| null | null | null | null |
### Feature request
Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-grained control of streaming, so that I can stream at the partition / shard level
### Motivation
I work with very large `lance` datasets on S3 and often require random access for AI/ML applications like multi-node training. I was able to achieve high throughput dataloading on a lance dataset with ~150B rows by building distributed dataloaders that can be scaled both vertically (until i/o and CPU are saturated), and then horizontally (to workaround network bottlenecks).
Using this strategy I was able to achieve 10-20x the throughput of the streaming data loader from the `huggingface/datasets` library.
I realized that these would be great features for huggingface to support natively
### Your contribution
I'm not ready yet to make a PR but open to it with the right pointers!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions",
"total_count": 23,
"+1": 4,
"-1": 0,
"laugh": 2,
"hooray": 2,
"confused": 0,
"heart": 5,
"rocket": 8,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7862
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7862/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7862/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7862/events
|
https://github.com/huggingface/datasets/pull/7862
| 3,617,947,090
|
PR_kwDODunzps6zDjEj
| 7,862
|
Add flatten_indices option to save_to_disk method
|
{
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"as said by @KCKawalkar used below script to test - \r\n\r\nBEFORE PATCH - \r\nTEST.PY:\r\n```\r\nfrom datasets import Dataset\r\nimport time\r\n\r\ndataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})\r\n\r\n# Baseline save (no indices)\r\nstart = time.time()\r\ndataset.save_to_disk('baseline')\r\nbaseline_time = time.time() - start\r\n\r\n# Filtered save (creates indices)\r\nfiltered = dataset.filter(lambda x: True)\r\nstart = time.time()\r\nfiltered.save_to_disk('filtered')\r\nfiltered_time = time.time() - start\r\n\r\nprint(f\"Baseline: {baseline_time:.3f}s\")\r\nprint(f\"Filtered: {filtered_time:.3f}s\")\r\nprint(f\"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%\")\r\n```\r\nRESULTS:\r\n```\r\n@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py\r\nSaving the dataset (1/1 shards): 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 3030654.07 examples/s]\r\nFilter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 576296.61 examples/s]\r\nSaving the dataset (1/1 shards): 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 310565.19 examples/s]\r\nBaseline: 0.035s\r\nFiltered: 0.323s\r\nSlowdown: 813.4%\r\n```\r\n\r\nAFTER PATCH - \r\nTEST.PY:\r\n```\r\nfrom datasets import Dataset\r\nimport time\r\n\r\n# Create dataset\r\ndataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})\r\n\r\n# Baseline save (no indices)\r\nstart = time.time()\r\ndataset.save_to_disk('baseline')\r\nbaseline_time = time.time() - start\r\n\r\n# Filtered save (creates indices)\r\nfiltered = dataset.filter(lambda x: True)\r\nstart = time.time()\r\nfiltered.save_to_disk('filtered', flatten_indices=False)\r\nfiltered_time = time.time() - start\r\n\r\nprint(f\"Baseline: {baseline_time:.3f}s\")\r\nprint(f\"Filtered: {filtered_time:.3f}s\") \r\nprint(f\"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%\")\r\n```\r\n\r\nREESULT:\r\n```\r\n@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py\r\nSaving the dataset (1/1 shards): 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 3027482.12 examples/s]\r\nFilter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 468901.89 examples/s]\r\nSaving the dataset (1/1 shards): 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 324036.36 examples/s]\r\nBaseline: 0.036s\r\nFiltered: 0.310s\r\nSlowdown: 771.1%\r\n\r\n```"
] | 2025-11-12T19:38:51
| 2025-11-12T19:50:20
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7862",
"html_url": "https://github.com/huggingface/datasets/pull/7862",
"diff_url": "https://github.com/huggingface/datasets/pull/7862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7862.patch",
"merged_at": null
}
|
Added flatten_indices parameter to control index flattening during dataset saving.
Solves #7861
This PR introduces a new optional argument, flatten_indices, to the save_to_disk methods in both Dataset and DatasetDict.
The change allows users to skip the expensive index-flattening step when saving datasets that already use index mappings (e.g., after filter() or shuffle()), resulting in significant speed improvements for large datasets while maintaining backward compatibility.
While not a huge absolute difference at 100K rows, the improvement scales significantly with larger datasets (millions of rows).
This patch gives users control — they can disable flattening when they don’t need it, avoiding unnecessary rewrites.
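A minimal usage sketch of the proposed option (the `flatten_indices` argument only exists on this branch, so treat it as an assumption):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"sample {i}" for i in range(100_000)]})
filtered = ds.filter(lambda x: True)  # creates an indices mapping

# proposed: keep the indices mapping instead of rewriting the data on save
filtered.save_to_disk("filtered_fast", flatten_indices=False)

# default behaviour stays the same: indices are flattened before writing
filtered.save_to_disk("filtered_flat")
```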
@lhoestq WDYT?
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7862/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7861
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7861/events
|
https://github.com/huggingface/datasets/issues/7861
| 3,611,821,713
|
I_kwDODunzps7XSAaR
| 7,861
|
Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices()
|
{
"login": "KCKawalkar",
"id": 222552287,
"node_id": "U_kgDODUPg3w",
"avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KCKawalkar",
"html_url": "https://github.com/KCKawalkar",
"followers_url": "https://api.github.com/users/KCKawalkar/followers",
"following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/KCKawalkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KCKawalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KCKawalkar/subscriptions",
"organizations_url": "https://api.github.com/users/KCKawalkar/orgs",
"repos_url": "https://api.github.com/users/KCKawalkar/repos",
"events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/KCKawalkar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-11T11:05:38
| 2025-11-11T11:05:38
| null |
NONE
| null | null | null | null |
## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
```python
dataset = self.flatten_indices() if self._indices is not None else self
```
## 📊 Performance Impact
| Dataset Size | Operation | Save Time | Slowdown |
|-------------|-----------|-----------|----------|
| 100K | Baseline (no indices) | 0.027s | - |
| 100K | Filtered (with indices) | 0.146s | **+431%** |
| 100K | Shuffled (with indices) | 0.332s | **+1107%** |
| 250K | Shuffled (with indices) | 0.849s | **+1202%** |
## 🔄 Reproduction
```python
from datasets import Dataset
import time
# Create dataset
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
# Baseline save (no indices)
start = time.time()
dataset.save_to_disk('baseline')
baseline_time = time.time() - start
# Filtered save (creates indices)
filtered = dataset.filter(lambda x: True)
start = time.time()
filtered.save_to_disk('filtered')
filtered_time = time.time() - start
print(f"Baseline: {baseline_time:.3f}s")
print(f"Filtered: {filtered_time:.3f}s")
print(f"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%")
```
**Expected output**: Filtered dataset is 400-1000% slower than baseline
## 💡 Proposed Solution
Add optional parameter to control flattening:
```python
def save_to_disk(self, dataset_path, flatten_indices=True):
dataset = self.flatten_indices() if (self._indices is not None and flatten_indices) else self
# ... rest of save logic
```
**Benefits**:
- ✅ Immediate performance improvement for users who don't need flattening
- ✅ Backwards compatible (default behavior unchanged)
- ✅ Simple implementation
## 🌍 Environment
- **datasets version**: 2.x
- **Python**: 3.10+
- **OS**: Linux/macOS/Windows
## 📈 Impact
This affects **most ML preprocessing workflows** that filter/shuffle datasets before saving. The performance degradation grows with dataset size, making it a critical bottleneck for production systems.
## 🔗 Additional Resources
We have comprehensive test scripts demonstrating this across multiple scenarios if needed for further investigation.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7860
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7860/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7860/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7860/events
|
https://github.com/huggingface/datasets/pull/7860
| 3,610,706,034
|
PR_kwDODunzps6yrHQN
| 7,860
|
Support loading local arrow datasets via load_dataset
|
{
"login": "gstrat88",
"id": 16986130,
"node_id": "MDQ6VXNlcjE2OTg2MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/16986130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gstrat88",
"html_url": "https://github.com/gstrat88",
"followers_url": "https://api.github.com/users/gstrat88/followers",
"following_url": "https://api.github.com/users/gstrat88/following{/other_user}",
"gists_url": "https://api.github.com/users/gstrat88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gstrat88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gstrat88/subscriptions",
"organizations_url": "https://api.github.com/users/gstrat88/orgs",
"repos_url": "https://api.github.com/users/gstrat88/repos",
"events_url": "https://api.github.com/users/gstrat88/events{/privacy}",
"received_events_url": "https://api.github.com/users/gstrat88/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-11T04:58:33
| 2025-11-11T20:58:46
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7860",
"html_url": "https://github.com/huggingface/datasets/pull/7860",
"diff_url": "https://github.com/huggingface/datasets/pull/7860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7860.patch",
"merged_at": null
}
|
`load_dataset` will handle locally saved datasets this way
#7018
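A sketch of the intended usage; the exact behaviour is an assumption until this PR is merged:
```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds.save_to_disk("my_local_arrow_dataset")  # currently read back with load_from_disk()

# assumed behaviour of this PR: load_dataset() picks up the saved arrow files directly
reloaded = load_dataset("my_local_arrow_dataset")
```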
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7860/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7859
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7859/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7859/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7859/events
|
https://github.com/huggingface/datasets/pull/7859
| 3,608,586,063
|
PR_kwDODunzps6yj-aZ
| 7,859
|
fix some broken links
|
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7859). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-10T15:34:46
| 2025-11-10T17:11:07
| 2025-11-10T17:11:05
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7859",
"html_url": "https://github.com/huggingface/datasets/pull/7859",
"diff_url": "https://github.com/huggingface/datasets/pull/7859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7859.patch",
"merged_at": "2025-11-10T17:11:05"
}
|
Would be cool to automate finding those broken links as I think there might be many of them @lhoestq @albertvillanova
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7859/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7858
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7858/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7858/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7858/events
|
https://github.com/huggingface/datasets/pull/7858
| 3,605,471,548
|
PR_kwDODunzps6yZq4r
| 7,858
|
Support downloading specific splits in `load_dataset`
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"@CloseChoice This looks great! You're absolutely right about the missing comparison - that's a critical bug I missed. ",
"@lhoestq just resolved the conflicts inflicted by the latest changes. Thought it might be good giving this a shot now before more changes mess with this PR.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7858). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-09T20:44:00
| 2025-12-29T15:49:20
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7858",
"html_url": "https://github.com/huggingface/datasets/pull/7858",
"diff_url": "https://github.com/huggingface/datasets/pull/7858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7858.patch",
"merged_at": null
}
|
This PR builds on top of #7706 to revive the unfinished #6832, but it isn't just cleaning up things; here are some important changes:
- `download_mode="FORCE_REDOWNLOAD"` is interpreted as always creating a clean slate; that means that even if we already did:
```python
load_dataset("<name>")
load_dataset("<name>", split="train", download_mode="force_redownload")
```
This makes sure that only the train dataset is available after executing both. This was different in the original PR, which proposed that train and test would be available.
- `download_mode="REUSE_DATASET_IF_EXISTS"` is interpreted as only ever adding new data, never redownloading OR deleting other splits. This was different in the original PR, where
```python
load_dataset("<name>", split="test")
load_dataset("<name>", split="train")
```
resulted in only the train data being available, which I deem very unintuitive and probably not what users want. I also argue that this is just the first step towards more user-friendly partial loading when specifying percentages (or maybe even single instances) via the ReadInstructions, and then doing
```python
load_dataset("<name>", split="test[:10%]")
load_dataset("<name>", split="test[10%:]")
```
should result IMO in the whole dataset being cached locally without redownloads.
Furthermore, this PR fixes a couple of issues with the previous PR, e.g. a [missing comparison](https://github.com/huggingface/datasets/pull/7706/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R877), and adds tests for the proposed changes in behaviour, which would both fail on @ArjunJagdale's original PR.
Todo:
- [ ] update docs?
Future outlook (just my opinions and up for debate):
As mentioned before, I see this as just a step towards the feature of partial percentage loading (though how the API should behave in that case is not entirely clear to me yet). Maybe we could also introduce another `download_mode="FORCE_REDOWNLOAD_SPLIT"`, which makes sure that even if a split is specified, only the referenced split is redownloaded and everything else is left unchanged; this would give users more granular control over what they want to redownload.
@lhoestq very curious to get your opinion on this.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7858/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7858/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7856
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7856/events
|
https://github.com/huggingface/datasets/issues/7856
| 3,603,729,142
|
I_kwDODunzps7WzIr2
| 7,856
|
Missing transcript column when loading a local dataset with "audiofolder"
|
{
"login": "gweltou",
"id": 10166907,
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gweltou",
"html_url": "https://github.com/gweltou",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"repos_url": "https://api.github.com/users/gweltou/repos",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-metadata\n\nor simply, move your metadata into the splits folder and update the paths, in your case this would look like this:\n```bash\nmy_dataset/\n - data/\n - test/\n - 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\n - 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\n - metadata.jsonl\n```\n\nand the pahts in the jsonl should be relative to the metadata.json:\n```bash\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\", \"transcript\": \"Ata tudoù penaos e tro ar bed ?\"}\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\", \"transcript\": \"Ur gwir blijadur eo adkavout ac'hanoc'h hiziv.\"}\n...\n```\n\nSo I think this can be closed.",
"Thank you for your quick answer !\nI'm sorry I missed that in the documentation.\nEverything works fine again after following your recommendations.\nI'm closing the issue."
] | 2025-11-08T16:27:58
| 2025-11-09T12:13:38
| 2025-11-09T12:13:38
|
NONE
| null | null | null | null |
### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps to reproduce the bug
Dataset directory structure:
```
my_dataset/
- data/
- test/
- 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3
- 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3
- ...
- metadata.jsonl
```
`metadata.jsonl` file content:
```
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3", "transcript": "Ata tudoù penaos e tro ar bed ?"}
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3", "transcript": "Ur gwir blijadur eo adkavout ac'hanoc'h hiziv."}
...
```
```python3
my_dataset = load_dataset("audiofolder", data_dir="my_dataset")
print(my_dataset)
'''
DatasetDict({
test: Dataset({
features: ['audio'],
num_rows: 347
})
})
'''
print(my_dataset['test'][0])
'''
{'audio': <datasets.features._torchcodec.AudioDecoder object at 0x75ffcd172510>}
'''
```
### Expected behavior
Being able to access the `transcript` column in the loaded dataset.
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.13.9
- `huggingface_hub` version: 1.1.2
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
Note: same issue with `datasets` v3.6.0
|
{
"login": "gweltou",
"id": 10166907,
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gweltou",
"html_url": "https://github.com/gweltou",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"repos_url": "https://api.github.com/users/gweltou/repos",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7855
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7855/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7855/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7855/events
|
https://github.com/huggingface/datasets/pull/7855
| 3,602,216,153
|
PR_kwDODunzps6yPIRy
| 7,855
|
ArXiv -> HF Papers
|
{
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-07T22:16:36
| 2025-11-10T15:01:13
| 2025-11-10T15:01:13
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7855",
"html_url": "https://github.com/huggingface/datasets/pull/7855",
"diff_url": "https://github.com/huggingface/datasets/pull/7855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7855.patch",
"merged_at": "2025-11-10T15:01:12"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7855/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7854
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7854/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7854/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7854/events
|
https://github.com/huggingface/datasets/pull/7854
| 3,596,750,849
|
PR_kwDODunzps6x8yiy
| 7,854
|
[Distributed] split_dataset_by_node() gives the same number of examples for each node
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7854). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Making this work with multiple workers could create a lot of communication for not a lot of benefits, considering you can simply use `Join()` to let nodes shutdown when they run out of data while the other nodes continue training: https://docs.pytorch.org/docs/stable/distributed.algorithms.join.html"
] | 2025-11-06T17:14:18
| 2025-11-10T14:57:44
| null |
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7854",
"html_url": "https://github.com/huggingface/datasets/pull/7854",
"diff_url": "https://github.com/huggingface/datasets/pull/7854.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7854.patch",
"merged_at": null
}
|
this works:
```python
import torch.distributed as dist
from datasets import IterableDataset
from datasets.distributed import split_dataset_by_node
from collections import Counter
def g(shards):
for shard in shards:
# shards don't have the same length
num_examples = 3 + shard
for i in range(num_examples):
yield {"shard": f"{shard=}", "i": i}
if __name__ == "__main__":
dist.init_process_group(backend="gloo")
rank, world_size = dist.get_rank(), dist.get_world_size()
num_shards = 6
ds = IterableDataset.from_generator(g, gen_kwargs={"shards": list(range(num_shards))})
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
# Check that each rank has the same number of examples
# and show the number of examples per shard and per rank
counter = Counter(ds["shard"])
print(f"# {rank=}\ttotal={counter.total()}\t{counter}", flush=True)
# torchrun --nproc_per_node 2 script.py
# rank=0 total=16 Counter({'shard=4': 7, 'shard=2': 5, 'shard=0': 4})
# rank=1 total=16 Counter({'shard=3': 6, 'shard=5': 6, 'shard=1': 4})
```
TODO: make it work with DataLoader (communicate with main process to know when the node runs out of data ?)
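As an aside, a rough sketch of the `Join()` alternative mentioned in the comments (not part of this PR), assuming a standard CPU/gloo DDP loop where each rank has a different number of batches:
```python
import torch
import torch.distributed as dist
from torch.distributed.algorithms.join import Join
from torch.nn.parallel import DistributedDataParallel as DDP

# torchrun --nproc_per_node 2 join_sketch.py
dist.init_process_group(backend="gloo")
rank = dist.get_rank()

model = DDP(torch.nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# simulate uneven amounts of data per rank (rank 1 runs out of batches first)
num_batches = 10 if rank == 0 else 6
batches = [torch.randn(4, 8) for _ in range(num_batches)]

# Join() lets ranks that exhaust their data shut down while the others keep training,
# keeping the collective operations consistent across ranks
with Join([model]):
    for batch in batches:
        loss = model(batch).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```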
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7854/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7854/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7853
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7853/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7853/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7853/events
|
https://github.com/huggingface/datasets/pull/7853
| 3,596,232,275
|
PR_kwDODunzps6x7ARa
| 7,853
|
Fix embed storage nifti
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7853). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-06T15:07:58
| 2025-11-06T17:04:57
| 2025-11-06T16:20:36
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7853",
"html_url": "https://github.com/huggingface/datasets/pull/7853",
"diff_url": "https://github.com/huggingface/datasets/pull/7853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7853.patch",
"merged_at": "2025-11-06T16:20:36"
}
|
Fixes #7852
Adds an `embed_storage` function and allows gzipped files to be loaded correctly from local storage.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7853/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7852
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7852/events
|
https://github.com/huggingface/datasets/issues/7852
| 3,595,450,602
|
I_kwDODunzps7WTjjq
| 7,852
|
Problems with NifTI
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't",
"> > 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n> \n> what did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't\n\nI used `push_to_hub` but the problem is that the nifti feature does not have an `embed_storage` function"
] | 2025-11-06T11:46:33
| 2025-11-06T16:20:38
| 2025-11-06T16:20:38
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files:
```bash
table['nifti']
<pyarrow.lib.ChunkedArray object at 0x798245d37d60>
[
-- is_valid: all not null
-- child 0 type: binary
[
null,
null,
null,
null,
null,
null
]
-- child 1 type: string
[
"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii"
]
]
```
instead of containing the file bytes. The code was copy-pasted from the PDF feature, so I wonder what is going wrong here.
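For reference, a minimal sketch of what an `embed_storage()` for the Nifti feature could look like; the `bytes`/`path` struct layout is an assumption based on how other media features store their data, and the helper is simplified (plain `open` instead of a streaming-aware open):
```python
import os
import pyarrow as pa

def embed_storage(storage: pa.StructArray) -> pa.StructArray:
    """Replace local file paths with the actual NIfTI bytes before writing parquet."""
    rows = storage.to_pylist()
    embedded = []
    for row in rows:
        if row is None:
            embedded.append(None)
        elif row.get("bytes") is not None:
            embedded.append(row["bytes"])
        else:
            # a streaming-aware open would be needed here for remote paths
            with open(row["path"], "rb") as f:
                embedded.append(f.read())
    bytes_array = pa.array(embedded, type=pa.binary())
    # keep only the basename so absolute local paths don't leak into the uploaded files
    path_array = pa.array(
        [os.path.basename(row["path"]) if row and row.get("path") else None for row in rows],
        type=pa.string(),
    )
    return pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"])
```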
### Steps to reproduce the bug
see the linked comment
### Expected behavior
Downloading should work as smoothly as it does for the PDF feature.
### Environment info
- `datasets` version: 4.4.2.dev0
- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7851
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7851/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7851/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7851/events
|
https://github.com/huggingface/datasets/pull/7851
| 3,592,252,116
|
PR_kwDODunzps6xtvVj
| 7,851
|
Add fasta support
|
{
"login": "georgia-hf",
"id": 209551168,
"node_id": "U_kgDODH1_QA",
"avatar_url": "https://avatars.githubusercontent.com/u/209551168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/georgia-hf",
"html_url": "https://github.com/georgia-hf",
"followers_url": "https://api.github.com/users/georgia-hf/followers",
"following_url": "https://api.github.com/users/georgia-hf/following{/other_user}",
"gists_url": "https://api.github.com/users/georgia-hf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/georgia-hf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/georgia-hf/subscriptions",
"organizations_url": "https://api.github.com/users/georgia-hf/orgs",
"repos_url": "https://api.github.com/users/georgia-hf/repos",
"events_url": "https://api.github.com/users/georgia-hf/events{/privacy}",
"received_events_url": "https://api.github.com/users/georgia-hf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7851). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"A few comments:\r\n\r\n- Have you tried using this with longer sequences? @UriNeri developed something similar internally and used it with viral genomes. He got some Parquet errors due to genomes not fitting in a `utf8` column. This was fixed by using `large_utf8`.\r\n- If you're only using it to read FASTA files, I think that having BioPython as a dependency is overkill. The library is very large and the FASTA parser isn't particularly fast. I have an example of a fast parser with no external references [here](https://gist.github.com/apcamargo/d039aa04a2cbbcbb14e2d34a0963b862) (this is actually based on [`readfq.py`](https://github.com/lh3/readfq/blob/master/readfq.py), with a couple of extra functions that might not be useful in the context of this PR)",
"> * If you're only using it to read FASTA files, I think that having BioPython as a dependency is overkill. The library is very large and the FASTA parser isn't particularly fast. I have an example of a fast parser with no external references [here](https://gist.github.com/apcamargo/d039aa04a2cbbcbb14e2d34a0963b862) (this is actually based on [`readfq.py`](https://github.com/lh3/readfq/blob/master/readfq.py), with a couple of extra functions that might not be useful in the context of this PR)\r\n\r\nWhat @apcamargo said, plus FWIW in **our approach** (so might not be relevant here) we use polars (with custom fasta io parser) or polars-bio (that has a `scan_fasta` function) and we foudn out that the page size sometimes need to be adjusted:\r\n```\r\nenvs/default/lib/python3.9/site-packages/polars/lazyframe/frame.py:2422, in LazyFrame.collect(self, type_coercion, predicate_pushdown, projection_pushdown, simplify_expression, slice_pushdown, comm_subplan_elim, comm_subexpr_elim, cluster_with_columns, collapse_joins, no_optimization, engine, background, optimizations, **_kwargs)\r\n 2420 # Only for testing purposes\r\n 2421 callback = _kwargs.get(\"post_opt_callback\", callback)\r\n-> 2422 return wrap_df(ldf.collect(engine, callback))\r\nComputeError: parquet: File out of specification: A page can only contain i32::MAX uncompressed bytes. This one contains 4544943557\r\n```\r\n\r\nWhich in polars can be solved with:\r\n```\r\ndf.write_parquet(\r\n \"test1.patquet\",\r\n compression=\"zstd\",\r\n row_group_size=10_000, # smaller row groups\r\n data_page_size=1024*1024 # 1MB page size\r\n)\r\n```\r\n"
] | 2025-11-05T18:11:12
| 2025-11-15T00:51:53
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7851",
"html_url": "https://github.com/huggingface/datasets/pull/7851",
"diff_url": "https://github.com/huggingface/datasets/pull/7851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7851.patch",
"merged_at": null
}
|
This PR adds support for converting FASTA files to Parquet.
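A sketch of how loading could look once merged; the builder name and the resulting columns are assumptions:
```python
from datasets import load_dataset

# assumed packaged builder name "fasta"; columns could be e.g. a sequence id and the sequence string
ds = load_dataset("fasta", data_files="proteins.fasta", split="train")
print(ds[0])
```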
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7851/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7850/events
|
https://github.com/huggingface/datasets/pull/7850
| 3,591,758,675
|
PR_kwDODunzps6xsGi_
| 7,850
|
dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7850). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-05T16:02:23
| 2025-11-05T16:05:40
| 2025-11-05T16:02:32
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7850",
"html_url": "https://github.com/huggingface/datasets/pull/7850",
"diff_url": "https://github.com/huggingface/datasets/pull/7850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7850.patch",
"merged_at": "2025-11-05T16:02:32"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7850/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7849
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7849/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7849/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7849/events
|
https://github.com/huggingface/datasets/pull/7849
| 3,591,749,675
|
PR_kwDODunzps6xsEm0
| 7,849
|
release: 4.4.1
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7849). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-05T16:00:05
| 2025-11-05T16:03:06
| 2025-11-05T16:00:46
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7849",
"html_url": "https://github.com/huggingface/datasets/pull/7849",
"diff_url": "https://github.com/huggingface/datasets/pull/7849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7849.patch",
"merged_at": "2025-11-05T16:00:46"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7849/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7848
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7848/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7848/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7848/events
|
https://github.com/huggingface/datasets/pull/7848
| 3,590,024,849
|
PR_kwDODunzps6xmPYZ
| 7,848
|
DOC: remove mode parameter in docstring of pdf and video feature
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-11-05T09:11:46
| 2025-11-05T14:42:59
| 2025-11-05T14:04:03
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7848",
"html_url": "https://github.com/huggingface/datasets/pull/7848",
"diff_url": "https://github.com/huggingface/datasets/pull/7848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7848.patch",
"merged_at": "2025-11-05T14:04:03"
}
|
closes #7841
As mentioned in the issue `mode` has been copy-pasted but isn't used in these files.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7848/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7847
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7847/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7847/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7847/events
|
https://github.com/huggingface/datasets/pull/7847
| 3,586,135,727
|
PR_kwDODunzps6xZZb9
| 7,847
|
Better streaming retries (504 and 429)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7847). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-04T11:23:58
| 2025-11-04T13:52:25
| 2025-11-04T13:52:22
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7847",
"html_url": "https://github.com/huggingface/datasets/pull/7847",
"diff_url": "https://github.com/huggingface/datasets/pull/7847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7847.patch",
"merged_at": "2025-11-04T13:52:22"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7847/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7846
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7846/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7846/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7846/events
|
https://github.com/huggingface/datasets/pull/7846
| 3,585,966,335
|
PR_kwDODunzps6xYzny
| 7,846
|
set dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7846). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-04T10:44:27
| 2025-11-04T10:49:24
| 2025-11-04T10:44:37
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7846",
"html_url": "https://github.com/huggingface/datasets/pull/7846",
"diff_url": "https://github.com/huggingface/datasets/pull/7846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7846.patch",
"merged_at": "2025-11-04T10:44:37"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7846/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7845
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7845/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7845/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7845/events
|
https://github.com/huggingface/datasets/pull/7845
| 3,585,926,647
|
PR_kwDODunzps6xYq2r
| 7,845
|
Release: 4.4.0
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7845). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-04T10:35:33
| 2025-11-04T10:39:47
| 2025-11-04T10:36:37
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7845",
"html_url": "https://github.com/huggingface/datasets/pull/7845",
"diff_url": "https://github.com/huggingface/datasets/pull/7845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7845.patch",
"merged_at": "2025-11-04T10:36:37"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7845/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7844
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7844/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7844/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7844/events
|
https://github.com/huggingface/datasets/pull/7844
| 3,582,354,507
|
PR_kwDODunzps6xM9hd
| 7,844
|
support fsspec 2025.10.0
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-03T14:34:29
| 2025-11-03T14:51:33
| 2025-11-03T14:51:32
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7844",
"html_url": "https://github.com/huggingface/datasets/pull/7844",
"diff_url": "https://github.com/huggingface/datasets/pull/7844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7844.patch",
"merged_at": "2025-11-03T14:51:32"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7844/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7843
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7843/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7843/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7843/events
|
https://github.com/huggingface/datasets/pull/7843
| 3,582,311,403
|
PR_kwDODunzps6xM0sq
| 7,843
|
fix column with transform
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7843). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-03T14:23:01
| 2025-11-03T14:34:15
| 2025-11-03T14:34:12
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7843",
"html_url": "https://github.com/huggingface/datasets/pull/7843",
"diff_url": "https://github.com/huggingface/datasets/pull/7843.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7843.patch",
"merged_at": "2025-11-03T14:34:12"
}
|
fix https://github.com/huggingface/datasets/issues/7842
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7843/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7842
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7842/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7842/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7842/events
|
https://github.com/huggingface/datasets/issues/7842
| 3,582,182,995
|
I_kwDODunzps7Vg8ZT
| 7,842
|
Transform with columns parameter triggers on non-specified column access
|
{
"login": "mr-brobot",
"id": 18426892,
"node_id": "MDQ6VXNlcjE4NDI2ODky",
"avatar_url": "https://avatars.githubusercontent.com/u/18426892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mr-brobot",
"html_url": "https://github.com/mr-brobot",
"followers_url": "https://api.github.com/users/mr-brobot/followers",
"following_url": "https://api.github.com/users/mr-brobot/following{/other_user}",
"gists_url": "https://api.github.com/users/mr-brobot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mr-brobot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mr-brobot/subscriptions",
"organizations_url": "https://api.github.com/users/mr-brobot/orgs",
"repos_url": "https://api.github.com/users/mr-brobot/repos",
"events_url": "https://api.github.com/users/mr-brobot/events{/privacy}",
"received_events_url": "https://api.github.com/users/mr-brobot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-11-03T13:55:27
| 2025-11-03T14:34:13
| 2025-11-03T14:34:13
|
NONE
| null | null | null | null |
### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L695) and applies all formatting/transforms on each row, regardless of which column is being accessed. This causes an error when transforms depend on columns not present in the projection.
### Steps to reproduce the bug
### Load a dataset with multiple columns
```python
ds = load_dataset("mrbrobot/isic-2024", split="train")
```
### Define a transform that specifies an input column
```python
def image_transform(batch):
batch["image"] = batch["image"] # KeyError when batch doesn't contain "image"
return batch
# apply transform only to image column
ds = ds.with_format("torch")
ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True)
```
### Iterate over non-specified column
```python
# iterate over a different column, triggers the transform on each row, but batch doesn't contain "image"
for t in ds["target"]: # KeyError: 'image'
print(t)
```
### Expected behavior
If a user iterates over `ds["target"]` and the transform specifies `columns=["image"]`, the transform should be skipped.
### Environment info
`datasets`: 4.2.0
Python: 3.12.12
Linux: Debian 11.11
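In the meantime, a rough workaround sketch (not the upstream fix; behavior assumed from the formatting docs): resetting the format also drops the transform, so a column the transform does not cover can be read without triggering it.
```python
plain = ds.with_format(None)  # assumed to reset to plain python objects and drop the custom transform
for t in plain["target"]:
    print(t)
```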
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7842/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7841
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7841/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7841/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7841/events
|
https://github.com/huggingface/datasets/issues/7841
| 3,579,506,747
|
I_kwDODunzps7VWvA7
| 7,841
|
DOC: `mode` parameter on pdf and video features unused
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"They seem to be artefacts from a copy-paste of the Image feature ^^' we should remove them"
] | 2025-11-02T12:37:47
| 2025-11-05T14:04:04
| 2025-11-05T14:04:04
|
CONTRIBUTOR
| null | null | null | null |
Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found:
- mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the mode parameter on the pdf feature: https://github.com/huggingface/datasets/blob/main/src/datasets/features/pdf.py#L47-L48
I assume checking if these modes can be supported and otherwise removing them is the way to go here.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7841/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7840
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7840/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7840/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7840/events
|
https://github.com/huggingface/datasets/pull/7840
| 3,579,486,348
|
PR_kwDODunzps6xDsbG
| 7,840
|
Add num channels to audio
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7840). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-02T12:10:24
| 2025-11-03T17:37:48
| 2025-11-03T14:24:11
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7840",
"html_url": "https://github.com/huggingface/datasets/pull/7840",
"diff_url": "https://github.com/huggingface/datasets/pull/7840.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7840.patch",
"merged_at": "2025-11-03T14:24:11"
}
|
Fixes #7837
We currently have the [mono attribute for Audio documented](https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/audio.py#L52C1-L54C22) but it is not used anywhere, which confuses users. Since torchcodec does not know this attribute, I suggest using `num_channels` (currently supported: `None` (leave unchanged), mono: `1`, stereo: `2`).
I could also add a mono attribute, but found that to be more confusing for developers, and it would restrict us if more than 2 channels are supported at some point in the future.
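A hedged usage sketch, assuming the parameter lands as described above (the naming follows this PR, not a released API):
```python
from datasets import Dataset, Audio

ds = Dataset.from_dict({"audio": ["path/to/file.wav"]})
# num_channels=1 -> downmix to mono, 2 -> stereo, None -> leave the signal unchanged
ds = ds.cast_column("audio", Audio(sampling_rate=16_000, num_channels=1))
```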
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7840/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7839
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7839/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7839/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7839/events
|
https://github.com/huggingface/datasets/issues/7839
| 3,579,121,843
|
I_kwDODunzps7VVRCz
| 7,839
|
datasets doesn't work with python 3.14
|
{
"login": "zachmoshe",
"id": 4789087,
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zachmoshe",
"html_url": "https://github.com/zachmoshe",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for the report.\nHave you tried on main? This should work, there was recently a PR merged to address this problem, see #7817",
"Works on main 👍 \nWhat's the release schedule for `datasets`? Seems like a cadence of ~2weeks so I assume a real version is due pretty soon?",
"let's say we do a new release later today ? :)",
"Premium service! \n😂 👑 \nJust checked 4.4.0 - works as expected!"
] | 2025-11-02T09:09:06
| 2025-11-04T14:02:25
| 2025-11-04T14:02:25
|
NONE
| null | null | null | null |
### Describe the bug
Seems that `datasets` doesn't work with python==3.14. The root cause seems to be a `dill` API that changed.
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
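For context, a hedged sketch of the signature change involved (the actual fix is the PR referenced in the comments): Python 3.14's `pickle` passes the dict being pickled as an extra positional argument to `Pickler._batch_setitems`, so an override written for the old two-argument signature raises this `TypeError`.
```python
import sys

import dill


class CompatPickler(dill.Pickler):
    # sketch only: accept whichever _batch_setitems signature the running Python uses
    if sys.version_info >= (3, 14):
        def _batch_setitems(self, items, obj):
            return super()._batch_setitems(items, obj)
    else:
        def _batch_setitems(self, items):
            return super()._batch_setitems(items)
```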
### Steps to reproduce the bug
(on a new folder)
uv init
uv python pin 3.14
uv add datasets
uv run python
(in REPL)
import datasets
datasets.load_dataset("cais/mmlu", "all") # will fail on any dataset
```
>>> datasets.load_dataset("cais/mmlu", "all")
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
datasets.load_dataset("cais/mmlu", "all")
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 615, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 487, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Expected behavior
should work.
### Environment info
datasets==v4.3.0
python==3.14
|
{
"login": "zachmoshe",
"id": 4789087,
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zachmoshe",
"html_url": "https://github.com/zachmoshe",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7839/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7839/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7837/events
|
https://github.com/huggingface/datasets/issues/7837
| 3,575,454,726
|
I_kwDODunzps7VHRwG
| 7,837
|
mono parameter to the Audio feature is missing
|
{
"login": "ernestum",
"id": 1250234,
"node_id": "MDQ6VXNlcjEyNTAyMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ernestum",
"html_url": "https://github.com/ernestum",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernestum/subscriptions",
"organizations_url": "https://api.github.com/users/ernestum/orgs",
"repos_url": "https://api.github.com/users/ernestum/repos",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"received_events_url": "https://api.github.com/users/ernestum/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hey, we removed the misleading passage in the docstring and enabled support for `num_channels` as torchcodec does",
"thanks!"
] | 2025-10-31T15:41:39
| 2025-11-03T15:59:18
| 2025-11-03T14:24:12
|
NONE
| null | null | null | null |
According to the docs, there is a "mono" parameter to the Audio feature, which turns any stereo into mono. In practice the signal is not touched and the mono parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/audio.py#L52C1-L54C22
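Until a channel option exists, a possible workaround sketch is to downmix the decoded array yourself (`decoded_audio_array` below is a hypothetical stand-in for your decoded waveform):
```python
import numpy as np

stereo = np.asarray(decoded_audio_array)  # hypothetical: your decoded waveform
# assuming a (channels, samples) layout, average the channels to get mono
mono = stereo.mean(axis=0) if stereo.ndim == 2 else stereo
```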
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7837/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7836
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7836/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7836/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7836/events
|
https://github.com/huggingface/datasets/pull/7836
| 3,562,316,362
|
PR_kwDODunzps6wLuh9
| 7,836
|
Python 3.14
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7836). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-10-28T16:11:13
| 2025-10-31T17:27:17
| 2025-10-31T17:27:15
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7836",
"html_url": "https://github.com/huggingface/datasets/pull/7836",
"diff_url": "https://github.com/huggingface/datasets/pull/7836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7836.patch",
"merged_at": "2025-10-31T17:27:15"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7836/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7835
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7835/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7835/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7835/events
|
https://github.com/huggingface/datasets/pull/7835
| 3,560,909,796
|
PR_kwDODunzps6wHK9e
| 7,835
|
Add DICOM support
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Awesome ! For the docs should we rename https://huggingface.co/docs/datasets/nifti_dataset to medical_imaging_dataset and have both DICOM and NIfTI together or have separate pages in you opinion ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7835). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Awesome ! For the docs should we rename https://huggingface.co/docs/datasets/nifti_dataset to medical_imaging_dataset and have both DICOM and NIfTI together or have separate pages in you opinion ?\r\n\r\nMakes sense, is more intuitive for the user and the pages as proposed in this branch have a lot of overlap. I would then structure it in such a way to write some brief things about medical imaging, then introduce the formats (so basically concatenating the two pages together and removing duplicates).",
"Pls don't merge currently, since we'll need an `embed_storage` function in here as well. See\r\nhttps://github.com/huggingface/datasets/pull/7815#issuecomment-3494094692 and the following conversation",
"@lhoestq, this is ready for a first round of review."
] | 2025-10-28T10:41:05
| 2025-11-26T15:25:03
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7835",
"html_url": "https://github.com/huggingface/datasets/pull/7835",
"diff_url": "https://github.com/huggingface/datasets/pull/7835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7835.patch",
"merged_at": null
}
|
supports #7804
Add support for the dicom file format.
This PR follows PR #7815 and PR #7325 closely.
Notable differences:
I made sure that we can load all of pydicom's test data, and encountered the `force=True` parameter that we explicitly support here. This allows attempting to load corrupted DICOM files, which we explicitly test!
There is one dataset with all of pydicom's test data on Hugging Face, which can be loaded using this branch with the following script:
```python
from datasets import load_dataset
from datasets import Features, ClassLabel
from datasets.features import Dicom
features = Features({
"dicom": Dicom(force=True), # necessary to be able to load one corrupted file
"label": ClassLabel(num_classes=2)
})
ds = load_dataset("TobiasPitters/dicom-sample-dataset",
features=features)
error_count = 0
for idx, item in enumerate(ds["test"]):
dicom = item["dicom"]
try:
print(f"Type: {type(dicom)}")
if hasattr(dicom, 'PatientID'):
print(f"PatientID: {dicom.PatientID}")
if hasattr(dicom, 'StudyInstanceUID'):
print(f"StudyInstanceUID: {dicom.StudyInstanceUID}")
if hasattr(dicom, 'Modality'):
print(f"Modality: {dicom.Modality}")
except Exception as e:
error_count += 1
print(e)
print(f"Finished processing with {error_count} errors.")
```
todo:
- [x] add docs (will do so soon)
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7835/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7835/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7834
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7834/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7834/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7834/events
|
https://github.com/huggingface/datasets/issues/7834
| 3,558,802,959
|
I_kwDODunzps7UHwYP
| 7,834
|
Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc)
|
{
"login": "rachidio",
"id": 2559570,
"node_id": "MDQ6VXNlcjI1NTk1NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2559570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rachidio",
"html_url": "https://github.com/rachidio",
"followers_url": "https://api.github.com/users/rachidio/followers",
"following_url": "https://api.github.com/users/rachidio/following{/other_user}",
"gists_url": "https://api.github.com/users/rachidio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rachidio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rachidio/subscriptions",
"organizations_url": "https://api.github.com/users/rachidio/orgs",
"repos_url": "https://api.github.com/users/rachidio/repos",
"events_url": "https://api.github.com/users/rachidio/events{/privacy}",
"received_events_url": "https://api.github.com/users/rachidio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! `datasets` v4 uses `torchcodec` for audio decoding (previous versions were using `soundfile`). What is your `torchcodec` version ? Can you try other versions of `torchcodec` and see if it works ?",
"When I install `datasets` with `pip install datasets[audio]` it install this version of `torchcodec`:\n```\nName: torchcodec\nVersion: 0.8.1\n```\nCan you please point to a working version of `torchcodec`?\n\nThanks for your help",
"I believe you simply need to make sure the torchcodec and torch versions work together. Here is how to fix it:\n\n```python\n!pip install -U torchcodec torch\n```",
"I am also encountering this same issue when i run `print(ug_court[\"train\"][0])` to view the features of the first row of my audio data",
"the problem still goes on to when i force training with seeing these features",
"Thank you @lhoestq I've reinstalled the packages an the error is gone.\nMy new versions are:\n```\nName: torch\nVersion: 2.8.0\n---\nName: torchaudio\nVersion: 2.8.0\n---\nName: torchcodec\nVersion: 0.8.1\n```\n\nRegards",
"mine too has worked ",
"Hi,\n\nI encounter the same problem when trying to inspect the first element in the dataset. My environment is:\n```\nroot@3ac6f9f8c6c4:/workspace# pip3 list | grep torch\npytorch-lightning 2.5.6\npytorch-metric-learning 2.9.0\ntorch 2.8.0+cu126\ntorch-audiomentations 0.12.0\ntorch_pitch_shift 1.2.5\ntorchaudio 2.8.0+cu126\ntorchcodec 0.8.1\ntorchelastic 0.2.2\ntorchmetrics 1.8.2\ntorchvision 0.23.0+cu126\n```\nthe same as @rachidio 's new version that works.\n\nI am in a Docker container environment, and here is the code I am working with:\n\n<img width=\"1350\" height=\"388\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/4cf0400f-9ee7-47c7-ba57-c4ef3c1e7fd6\" />"
] | 2025-10-27T22:02:00
| 2025-11-15T16:28:04
| null |
NONE
| null | null | null | null |
### Describe the bug
When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure).
The crash happens even with a minimal code example and a valid .wav file that can be read successfully using soundfile.
Here is a sample Colab notebook to reproduce the problem.
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
code sample:
```
...
audio_dataset = audio_dataset.cast_column("audio", Audio(sampling_rate=16000))
# Accessing the first element crashes the Colab kernel
print(audio_dataset[0]["audio"])
```
Error log
```
WARNING what(): std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
```
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
Thanks in advance for helping me with this error, which I have been getting for approximately two weeks now after it was working before.
Regards
### Steps to reproduce the bug
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
### Expected behavior
Loading the audio and decoding it.
It should safely return:
{
"path": "path/filaname.wav",
"array": np.ndarray([...]),
"sampling_rate": 16000
}
### Environment info
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
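Based on the resolution in the comments, the crash looks like a torch/torchcodec version mismatch; a quick hedged check of the installed pair:
```python
from importlib.metadata import version

# torchcodec releases track specific torch versions; a mismatched pair can abort with std::bad_alloc
print("torch:", version("torch"), "| torchcodec:", version("torchcodec"))
```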
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7834/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/7834/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7833
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7833/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7833/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7833/events
|
https://github.com/huggingface/datasets/pull/7833
| 3,556,014,911
|
PR_kwDODunzps6v2gAI
| 7,833
|
Fix: Properly render [!TIP] block in stream.shuffle documentation
|
{
"login": "art-test-stack",
"id": 110672812,
"node_id": "U_kgDOBpi7rA",
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/art-test-stack",
"html_url": "https://github.com/art-test-stack",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T10:09:31
| 2025-10-28T15:57:33
| 2025-10-28T15:57:33
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7833",
"html_url": "https://github.com/huggingface/datasets/pull/7833",
"diff_url": "https://github.com/huggingface/datasets/pull/7833.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7833.patch",
"merged_at": "2025-10-28T15:57:33"
}
|
The second line starting with the bracket doesn't properly render on huggingface/docs. Added "> " to address it.
In the rendered documentation, the markdown 'TIP' paragraph in docs/stream#shuffle does not render properly, unlike the others on the same page, even though plain markdown renders it correctly.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle()%5D(/docs/datasets/v4.3.0/en/package_reference/main_classes%23datasets.IterableDataset.shuffle)%20will%20also%20shuffle%20the%20order%20of%20the%20shards%20if%20the%20dataset%20is%20sharded%20into%20multiple%20files.
Github source:
https://github.com/huggingface/datasets/blob/main/docs/source/stream.mdx#:~:text=Casting%20only%20works%20if%20the%20original%20feature%20type%20and%20new%20feature%20type%20are%20compatible.%20For%20example%2C%20you%20can%20cast%20a%20column%20with%20the%20feature%20type%20Value(%27int32%27)%20to%20Value(%27bool%27)%20if%20the%20original%20column%20only%20contains%20ones%20and%20zeros.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7833/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7832
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7832/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7832/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7832/events
|
https://github.com/huggingface/datasets/issues/7832
| 3,555,991,552
|
I_kwDODunzps7T9CAA
| 7,832
|
[DOCS][minor] TIPS paragraph not compiled in docs/stream
|
{
"login": "art-test-stack",
"id": 110672812,
"node_id": "U_kgDOBpi7rA",
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/art-test-stack",
"html_url": "https://github.com/art-test-stack",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T10:03:22
| 2025-10-27T10:10:54
| 2025-10-27T10:10:54
|
CONTRIBUTOR
| null | null | null | null |
In the rendered documentation, the markdown [!TIP] paragraph in docs/stream#shuffle is not displayed as a tip box, unlike the other tips on the same page, even though plain markdown renders it correctly.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle()%5D(/docs/datasets/v4.3.0/en/package_reference/main_classes%23datasets.IterableDataset.shuffle)%20will%20also%20shuffle%20the%20order%20of%20the%20shards%20if%20the%20dataset%20is%20sharded%20into%20multiple%20files.
Github source:
https://github.com/huggingface/datasets/blob/main/docs/source/stream.mdx#:~:text=Casting%20only%20works%20if%20the%20original%20feature%20type%20and%20new%20feature%20type%20are%20compatible.%20For%20example%2C%20you%20can%20cast%20a%20column%20with%20the%20feature%20type%20Value(%27int32%27)%20to%20Value(%27bool%27)%20if%20the%20original%20column%20only%20contains%20ones%20and%20zeros.
|
{
"login": "art-test-stack",
"id": 110672812,
"node_id": "U_kgDOBpi7rA",
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/art-test-stack",
"html_url": "https://github.com/art-test-stack",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
"gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions",
"organizations_url": "https://api.github.com/users/art-test-stack/orgs",
"repos_url": "https://api.github.com/users/art-test-stack/repos",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"received_events_url": "https://api.github.com/users/art-test-stack/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7832/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
|