Columns:
- id: int64 (values 959M to 2.55B)
- title: string (lengths 3 to 133)
- body: string (lengths 1 to 65.5k)
- description: string (lengths 5 to 65.6k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- closed_at: string (length 20)
- user: string (174 classes)
1,783,215,209
Adding retry to create duckdb index commit
Should fix HfHubHTTPError when creating commit for duckdb-index Part of https://github.com/huggingface/datasets-server/issues/1462
Adding retry to create duckdb index commit: Should fix HfHubHTTPError when creating commit for duckdb-index Part of https://github.com/huggingface/datasets-server/issues/1462
closed
2023-06-30T22:16:13Z
2023-07-03T12:56:23Z
2023-07-03T12:56:22Z
AndreaFrancis
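The retry described above can be sketched as a small backoff helper. This is a generic illustration, not the PR's actual code: the helper name, attempt count, and sleep schedule are assumptions; in the job runner it would wrap the `create_commit` call and retry on `HfHubHTTPError`.

```python
import time

def with_retry(fn, max_attempts=5, base_sleep=1.0, retry_on=(Exception,)):
    """Call fn(); on a listed exception, sleep with exponential backoff
    and try again, re-raising once the attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_sleep * 2 ** attempt)
```

Usage would look like `with_retry(lambda: create_commit(...), retry_on=(HfHubHTTPError,))`.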
1,783,075,231
Ensure parquet shards are sorted
fixes #1397
Ensure parquet shards are sorted: fixes #1397
closed
2023-06-30T19:44:37Z
2023-06-30T20:02:40Z
2023-06-30T20:02:13Z
severo
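Because shard indices in the parquet branch are zero-padded (`0000` to `9999`, per the naming scheme described further down this list), ensuring sorted shards only needs a lexicographic sort. A minimal sketch (the function name is illustrative):

```python
def sorted_shards(filenames):
    """Return parquet shard files in shard order; zero-padded indices
    make lexicographic order match numeric order."""
    return sorted(f for f in filenames if f.endswith(".parquet"))
```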
1,782,967,650
Change the way we represent ResponseAlreadyComputedError in the cache
When a "parallel" step has already been computed, an error is stored in the cache with the `ResponseAlreadyComputedError` error_code and HTTP status 500 (i.e. if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed). But it makes it hard to monitor the "true" errors....
Change the way we represent ResponseAlreadyComputedError in the cache: When a "parallel" step has already been computed, an error is stored in the cache with the `ResponseAlreadyComputedError` error_code and HTTP status 500 (i.e. if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need...
closed
2023-06-30T18:13:34Z
2024-02-23T09:56:05Z
2024-02-23T09:56:04Z
severo
1,782,821,542
Change the structure of parquet files
The parquet files will be stored in the `refs/convert/parquet` "branch" with the following structure: ``` [config]/[split]/[shard index: 0000 to 9999].parquet ``` Note that the "partially" converted datasets will use the following (See https://github.com/huggingface/datasets-server/pull/1448): ``` [config]/[...
Change the structure of parquet files: The parquet files will be stored in the `refs/convert/parquet` "branch" with the following structure: ``` [config]/[split]/[shard index: 0000 to 9999].parquet ``` Note that the "partially" converted datasets will use the following (See https://github.com/huggingface/dataset...
closed
2023-06-30T16:33:12Z
2023-08-17T20:41:16Z
2023-08-17T20:41:16Z
severo
1,782,478,740
split-duckdb-index many UnexpectedError in error_code
Updated query (Without errors from parent): ``` db.cachedResponsesBlue.aggregate([{$match: {error_code: "UnexpectedError", kind:"split-duckdb-index", "details.copied_from_artifact":{$exists:false}}},{$group: {_id: {cause: "$details.cause_exception"}, count: {$sum: 1}}},{$sort: {count: -1}}]) ``` From 128617 recor...
split-duckdb-index many UnexpectedError in error_code: Updated query (Without errors from parent): ``` db.cachedResponsesBlue.aggregate([{$match: {error_code: "UnexpectedError", kind:"split-duckdb-index", "details.copied_from_artifact":{$exists:false}}},{$group: {_id: {cause: "$details.cause_exception"}, count: {$sum...
closed
2023-06-30T12:52:15Z
2023-08-11T15:44:16Z
2023-08-11T15:44:16Z
AndreaFrancis
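The mongo-shell aggregation quoted above translates directly to a pymongo-style pipeline. A sketch; the collection and field names are taken verbatim from the issue text:

```python
# Group "UnexpectedError" cache entries for split-duckdb-index by their
# underlying cause exception, most frequent first.
pipeline = [
    {"$match": {
        "error_code": "UnexpectedError",
        "kind": "split-duckdb-index",
        "details.copied_from_artifact": {"$exists": False},
    }},
    {"$group": {"_id": {"cause": "$details.cause_exception"}, "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
# with a live connection: db.cachedResponsesBlue.aggregate(pipeline)
```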
1,781,341,194
Disable backfill k8s Job
null
Disable backfill k8s Job:
closed
2023-06-29T19:04:42Z
2023-06-29T19:06:03Z
2023-06-29T19:06:02Z
AndreaFrancis
1,781,072,941
feat: 🎸 backfill the datasets
because the dataset-is-valid step version has been increased. Using this to also fix possible issues (see https://github.com/huggingface/datasets-server/pull/1345, which we had put on hold waiting for #1346, which has since been fixed)
feat: 🎸 backfill the datasets: because the dataset-is-valid step version has been increased. Using this to also fix possible issues (see https://github.com/huggingface/datasets-server/pull/1345, which we had put on hold waiting for #1346, which has since been fixed)
closed
2023-06-29T15:30:58Z
2023-06-29T15:31:05Z
2023-06-29T15:31:04Z
severo
1,780,946,502
fix: split-duckdb-index error when indexing columns with spaces
After deploying split-duckdb-index, there are some errors because of column names with spaces like: ``` duckdb.BinderException: Binder Error: Referenced column "Mean" not found in FROM clause! Candidate bindings: "read_parquet.Model" LINE 1: ...f_index_id, Ranking,User,Model,Results,**Mean Reward**,Std Reward FROM ...
fix: split-duckdb-index error when indexing columns with spaces: After deploying split-duckdb-index, there are some errors because of column names with spaces like: ``` duckdb.BinderException: Binder Error: Referenced column "Mean" not found in FROM clause! Candidate bindings: "read_parquet.Model" LINE 1: ...f_inde...
closed
2023-06-29T14:25:59Z
2023-06-29T15:59:00Z
2023-06-29T15:54:39Z
AndreaFrancis
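The binder error above comes from interpolating unquoted column names into SQL; quoting each identifier avoids it. A minimal sketch of standard SQL identifier quoting (the helper and the column list are illustrative, not the PR's code):

```python
def quote_ident(name):
    """Quote a SQL identifier so names with spaces (e.g. "Mean Reward")
    parse correctly; embedded double quotes are doubled."""
    return '"' + name.replace('"', '""') + '"'

columns = ["Ranking", "User", "Model", "Results", "Mean Reward", "Std Reward"]
select_list = ", ".join(quote_ident(c) for c in columns)
```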
1,780,313,353
Improve metrics to hide duplicates
Yesterday, a new step (`split-duckdb-index`) was added and run over all the datasets. Here are the metrics: <img width="771" alt="Capture d’écran 2023-06-29 à 09 46 31" src="https://github.com/huggingface/datasets-server/assets/1676121/5daf2798-c79b-4ff9-a192-03ba38d4b149"> We can see that many new cache entrie...
Improve metrics to hide duplicates: Yesterday, a new step (`split-duckdb-index`) was added and run over all the datasets. Here are the metrics: <img width="771" alt="Capture d’écran 2023-06-29 à 09 46 31" src="https://github.com/huggingface/datasets-server/assets/1676121/5daf2798-c79b-4ff9-a192-03ba38d4b149"> W...
closed
2023-06-29T07:49:28Z
2024-02-06T14:48:11Z
2024-02-06T14:48:11Z
severo
1,780,284,502
feat: 🎸 reduce the number of workers back to 20
null
feat: 🎸 reduce the number of workers back to 20:
closed
2023-06-29T07:27:34Z
2023-06-29T07:28:07Z
2023-06-29T07:27:39Z
severo
1,779,744,022
Disable backfill - ACTION = skip
null
Disable backfill - ACTION = skip:
closed
2023-06-28T20:41:20Z
2023-06-28T20:42:36Z
2023-06-28T20:42:34Z
AndreaFrancis
1,779,350,448
Update quality target in Makefile for /chart
The path to staging environment in the Makefile was outdated (the name was changed from `dev` to `staging`)
Update quality target in Makefile for /chart: The path to staging environment in the Makefile was outdated (the name was changed from `dev` to `staging`)
closed
2023-06-28T17:00:24Z
2023-06-28T18:33:54Z
2023-06-28T18:33:53Z
polinaeterna
1,779,215,015
Enable backfill one time
After this PR is merged and deployed, I will rollback to action=skip to disable the k8s job.
Enable backfill one time: After this PR is merged and deployed, I will rollback to action=skip to disable the k8s job.
closed
2023-06-28T15:36:22Z
2023-06-28T15:53:08Z
2023-06-28T15:53:07Z
AndreaFrancis
1,779,204,041
Temporarily increase resources for new job runner split-duckdb-index
null
Temporarily increase resources for new job runner split-duckdb-index:
closed
2023-06-28T15:29:19Z
2023-06-28T15:31:06Z
2023-06-28T15:31:05Z
AndreaFrancis
1,779,167,702
Replace valid with preview and viewer
replaces #1450 and #1447 fixes #1446 and #1445 - [x] remove `valid` field from `/valid` endpoint - [x] replace `valid` with `viewer` and `preview` in `/is-valid` - [x] update the docs - [x] update openapi, <strike>rapidapi</strike> (I don't really understand rapidapi anymore), postman This change is breakin...
Replace valid with preview and viewer: replaces #1450 and #1447 fixes #1446 and #1445 - [x] remove `valid` field from `/valid` endpoint - [x] replace `valid` with `viewer` and `preview` in `/is-valid` - [x] update the docs - [x] update openapi, <strike>rapidapi</strike> (I don't really understand rapidapi anym...
closed
2023-06-28T15:09:37Z
2023-06-29T14:13:46Z
2023-06-29T14:13:14Z
severo
1,779,124,224
Change duckdb committer key
null
Change duckdb committer key:
closed
2023-06-28T14:47:37Z
2023-06-28T14:50:37Z
2023-06-28T14:50:36Z
AndreaFrancis
1,779,109,875
Add preview and viewer to is valid
null
Add preview and viewer to is valid:
closed
2023-06-28T14:41:05Z
2023-06-28T15:10:19Z
2023-06-28T15:09:50Z
severo
1,779,028,625
Adding debug logs for split-duckdb-index
When processing split-duckdb-index in staging env, it is showing this message: ``` DEBUG: 2023-06-28 13:42:51,345 - root - The dataset does not exist on the Hub. DEBUG: 2023-06-28 13:42:51,349 - root - Directory removed: /duckdb-index/21626898975922-split-duckdb-index-asoria-sample_glue-84f50613 DEBUG: 2023-06-28 1...
Adding debug logs for split-duckdb-index: When processing split-duckdb-index in staging env, it is showing this message: ``` DEBUG: 2023-06-28 13:42:51,345 - root - The dataset does not exist on the Hub. DEBUG: 2023-06-28 13:42:51,349 - root - Directory removed: /duckdb-index/21626898975922-split-duckdb-index-asoria...
closed
2023-06-28T14:04:54Z
2023-06-28T14:18:18Z
2023-06-28T14:18:17Z
AndreaFrancis
1,778,626,219
Stream convert to parquet
Allow datasets to be partially converted to parquet, like c4, refinedweb, oscar, etc. Datasets above 5GB are streamed to generate 5GB (uncompressed) of parquet files. ## Implementation details I implemented a context manager `limite_parquet_writes` that does some monkeypatching in the `datasets` lib to stop...
Stream convert to parquet: Allow datasets to be partially converted to parquet, like c4, refinedweb, oscar, etc. Datasets above 5GB are streamed to generate 5GB (uncompressed) of parquet files. ## Implementation details I implemented a context manager `limite_parquet_writes` that does some monkeypatching in...
closed
2023-06-28T10:06:31Z
2023-07-03T15:42:26Z
2023-07-03T15:40:32Z
lhoestq
1,778,601,816
docs: ✏️ add docs for fields viewer and preview in /valid
null
docs: ✏️ add docs for fields viewer and preview in /valid:
closed
2023-06-28T09:52:14Z
2023-06-28T15:10:41Z
2023-06-28T15:10:09Z
severo
1,778,545,555
Add fields `viewer` and `preview` to /is-valid
For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid. We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface...
Add fields `viewer` and `preview` to /is-valid: For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid. We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search...
closed
2023-06-28T09:19:56Z
2023-06-29T14:13:16Z
2023-06-29T14:13:16Z
severo
1,778,541,141
Remove `.valid` from `/valid` endpoint?
We recently added two fields to `/valid`: - `viewer`: all the datasets that have a valid dataset viewer - `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview And the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets. Shoul...
Remove `.valid` from `/valid` endpoint?: We recently added two fields to `/valid`: - `viewer`: all the datasets that have a valid dataset viewer - `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview And the Hub does not use the original field `valid` anymore. We still fill...
closed
2023-06-28T09:17:13Z
2023-07-26T15:47:35Z
2023-07-26T15:47:35Z
severo
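Keeping the legacy field while the Hub migrates amounts to filling `valid` with the union of the two new sets. A sketch of the payload construction (field names come from the issue; the real response shape may differ):

```python
def build_valid_response(viewer, preview):
    """`viewer` and `preview` are dataset lists; `valid` is kept for
    backward compatibility as the union of both."""
    return {
        "viewer": sorted(viewer),
        "preview": sorted(preview),
        "valid": sorted(set(viewer) | set(preview)),
    }
```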
1,778,524,649
Remove the useless mongodb indexes
Review the current indexes in the mongodb collections, and ensure all of them are required. Else, remove the redundant ones. It will allow us to reduce the size on the server
Remove the useless mongodb indexes: Review the current indexes in the mongodb collections, and ensure all of them are required. Else, remove the redundant ones. It will allow us to reduce the size on the server
closed
2023-06-28T09:07:19Z
2023-08-16T21:23:20Z
2023-08-16T21:23:20Z
severo
1,778,462,851
Raise specific errors (and error_code) instead of UnexpectedError
The following query on the production database gives the number of datasets with at least one cache entry with error_code "UnexpectedError", grouped by the underlying "cause_exception". For the most common ones (`DatasetGenerationError`, `HfHubHTTPError`, `OSError`, etc.) we would benefit from raising a specific er...
Raise specific errors (and error_code) instead of UnexpectedError: The following query on the production database gives the number of datasets with at least one cache entry with error_code "UnexpectedError", grouped by the underlying "cause_exception". For the most common ones (`DatasetGenerationError`, `HfHubHTTPE...
open
2023-06-28T08:28:06Z
2024-08-01T11:11:21Z
null
severo
1,778,427,146
Add new dependencies for the job runners
Based on statistics about the most needed dependencies (https://github.com/huggingface/datasets-server/issues/1281#issuecomment-1609455781), we should prioritize adding [ir-datasets](https://pypi.org/project/ir-datasets/), [bioc](https://pypi.org/project/bioc/) and [pytorch_ie](https://pypi.org/project/pytorch-ie/).
Add new dependencies for the job runners: Based on statistics about the most needed dependencies (https://github.com/huggingface/datasets-server/issues/1281#issuecomment-1609455781), we should prioritize adding [ir-datasets](https://pypi.org/project/ir-datasets/), [bioc](https://pypi.org/project/bioc/) and [pytorch_ie]...
closed
2023-06-28T08:04:49Z
2024-02-02T17:19:24Z
2024-02-02T17:19:23Z
severo
1,777,655,160
Adding other processing steps
Previously, `split-duckdb-index` was triggered only by `config-split-names-from-info`, but when that step finished with error 500 because of ResponseAlreadyComputedError, `split-duckdb-index` never started. Adding other parent steps to avoid skipping the job computation. Note that the issue with parallel processing steps will ...
Adding other processing steps: Previously, `split-duckdb-index` was triggered only by `config-split-names-from-info`, but when that step finished with error 500 because of ResponseAlreadyComputedError, `split-duckdb-index` never started. Adding other parent steps to avoid skipping the job computation. Note that the issue with ...
closed
2023-06-27T19:51:03Z
2023-06-30T09:04:08Z
2023-06-27T20:01:10Z
AndreaFrancis
1,777,404,581
Try to fix Duckdb extensions
Error when computing split-duckdb-index ``` "details": { "error": "IO Error: Extension \"//.duckdb/extensions/v0.8.1/linux_amd64_gcc4/httpfs.duckdb_extension\" not found.\nExtension \"httpfs\" is an existing extension.\n\nInstall it first using \"INSTALL httpfs\".", "cause...
Try to fix Duckdb extensions: Error when computing split-duckdb-index ``` "details": { "error": "IO Error: Extension \"//.duckdb/extensions/v0.8.1/linux_amd64_gcc4/httpfs.duckdb_extension\" not found.\nExtension \"httpfs\" is an existing extension.\n\nInstall it first using \"INSTALL httpfs\"....
closed
2023-06-27T17:08:56Z
2023-06-28T12:04:29Z
2023-06-27T19:12:46Z
AndreaFrancis
1,777,335,026
Set Duckdb extensions install directory
Error `duckdb.IOException: IO Error: Failed to create directory \"//.duckdb\"!\n ` is shown when computing duckdb index.
Set Duckdb extensions install directory: Error `duckdb.IOException: IO Error: Failed to create directory \"//.duckdb\"!\n ` is shown when computing duckdb index.
closed
2023-06-27T16:24:25Z
2023-06-27T16:45:38Z
2023-06-27T16:45:36Z
AndreaFrancis
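The two DuckDB issues above (unwritable `//.duckdb` extension directory, missing `httpfs`) suggest an explicit setup step before indexing. A hedged sketch of the statements involved; the directory path is illustrative, and `SET extension_directory` is the DuckDB setting these fixes appear to rely on:

```python
# Statements to run on a fresh DuckDB connection before building the index.
DUCKDB_SETUP = [
    "SET extension_directory='/tmp/duckdb-extensions';",  # a writable location
    "INSTALL httpfs;",
    "LOAD httpfs;",
]
# with duckdb installed:
#   con = duckdb.connect()
#   for statement in DUCKDB_SETUP:
#       con.execute(statement)
```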
1,777,242,709
Increase chart version
null
Increase chart version:
closed
2023-06-27T15:30:39Z
2023-06-27T15:31:42Z
2023-06-27T15:31:41Z
AndreaFrancis
1,777,018,800
Use specific stemmer by dataset according to the language
Currently, '`porter`' stemmer is used by default for duckdb indexing here https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR145 See https://duckdb.org/docs/extensions/full_text_search.html for more details about '`stemmer`' parameter....
Use specific stemmer by dataset according to the language: Currently, '`porter`' stemmer is used by default for duckdb indexing here https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR145 See https://duckdb.org/docs/extensions/full_tex...
open
2023-06-27T13:46:44Z
2024-08-22T00:45:07Z
null
AndreaFrancis
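Picking a stemmer per dataset language could be as simple as a lookup with `porter` as the current default. An assumed mapping: the stemmer names come from DuckDB's full_text_search documentation, but the language mapping itself is hypothetical:

```python
STEMMER_BY_LANGUAGE = {
    "en": "english",
    "fr": "french",
    "de": "german",
    "es": "spanish",
}

def stemmer_for(language_code):
    # Fall back to the currently hardcoded default.
    return STEMMER_BY_LANGUAGE.get(language_code, "porter")
```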
1,777,002,036
Prevent using cache_subdirectory=None on JobRunnerWithCache's children
Currently, all job runners that depend on `JobRunnerWithCache` and use the `cache_subdirectory` field, need to do validation before using the generated value like here https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR248 We need a be...
Prevent using cache_subdirectory=None on JobRunnerWithCache's children : Currently, all job runners that depend on `JobRunnerWithCache` and use the `cache_subdirectory` field, need to do validation before using the generated value like here https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7f...
open
2023-06-27T13:39:56Z
2023-08-07T16:35:04Z
null
AndreaFrancis
1,776,497,349
refactor: 💡 remove dead code
null
refactor: 💡 remove dead code:
closed
2023-06-27T09:38:35Z
2023-06-27T13:04:02Z
2023-06-27T13:04:01Z
severo
1,776,475,451
Unquote path and revision when copying parquet files
close https://github.com/huggingface/datasets-server/issues/1433
Unquote path and revision when copying parquet files: close https://github.com/huggingface/datasets-server/issues/1433
closed
2023-06-27T09:26:04Z
2023-06-27T12:12:21Z
2023-06-27T12:12:20Z
lhoestq
1,776,453,056
Can't copy parquet files with path that can be URL encoded
we get this error for bigcode/the-stack in config-parquet-and-info ``` huggingface_hub.utils._errors.EntryNotFoundError: Cannot copy data/c%2B%2B/train-00000-of-00214.parquet at revision 349a71353fd5868fb90b593ef09e311379da498a: file is missing on repo. ```
Can't copy parquet files with path that can be URL encoded: we get this error for bigcode/the-stack in config-parquet-and-info ``` huggingface_hub.utils._errors.EntryNotFoundError: Cannot copy data/c%2B%2B/train-00000-of-00214.parquet at revision 349a71353fd5868fb90b593ef09e311379da498a: file is missing on repo. `...
closed
2023-06-27T09:13:06Z
2023-06-27T12:12:21Z
2023-06-27T12:12:21Z
lhoestq
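The failing copy is explained by the path being percent-encoded: `data/c%2B%2B/...` is the encoded form of a `c++` directory. Decoding with the standard library shows the mismatch:

```python
from urllib.parse import unquote

encoded = "data/c%2B%2B/train-00000-of-00214.parquet"
decoded = unquote(encoded)  # the path as it actually exists on the repo
```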
1,775,140,577
feat: 🎸 don't insert a new lock when releasing
null
feat: 🎸 don't insert a new lock when releasing:
closed
2023-06-26T16:16:57Z
2023-06-26T16:17:04Z
2023-06-26T16:17:03Z
severo
1,775,102,927
fix: 🐛 remove the "required" constraint on created_at in Lock
the code does not rely on always having a created_at field. And it's not that easy to always ensure it's filled (see `update(upsert=True, ...`)
fix: 🐛 remove the "required" constraint on created_at in Lock: the code does not rely on always having a created_at field. And it's not that easy to always ensure it's filled (see `update(upsert=True, ...`)
closed
2023-06-26T15:56:40Z
2023-06-26T15:56:47Z
2023-06-26T15:56:46Z
severo
1,775,088,105
Fix auth in rows again
for real this time including a test
Fix auth in rows again: for real this time including a test
closed
2023-06-26T15:47:13Z
2023-06-26T17:23:14Z
2023-06-26T17:23:13Z
lhoestq
1,774,326,535
Ignore duckdb files in parquet and info
needed for https://github.com/huggingface/datasets-server/pull/1296 see https://github.com/huggingface/datasets-server/pull/1296#issuecomment-1604502957
Ignore duckdb files in parquet and info: needed for https://github.com/huggingface/datasets-server/pull/1296 see https://github.com/huggingface/datasets-server/pull/1296#issuecomment-1604502957
closed
2023-06-26T09:22:03Z
2023-06-26T10:40:26Z
2023-06-26T10:40:25Z
lhoestq
1,772,942,026
Rename classes to indicate inheritance
This commit adds `Document` suffix to classes to indicate inheritance. cc @severo Fixes https://github.com/huggingface/datasets-server/issues/1359
Rename classes to indicate inheritance: This commit adds `Document` suffix to classes to indicate inheritance. cc @severo Fixes https://github.com/huggingface/datasets-server/issues/1359
closed
2023-06-24T22:06:58Z
2023-07-01T16:20:40Z
2023-07-01T15:49:33Z
geethika-123
1,771,743,801
Add auth in rows
fix ``` aiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/bigcode/the-stack-dedup/resolve/refs%2Fconvert%2Fparquet/bigcode--the-stack-dedup/parquet-train-00000-of-05140.parquet') ``` when doing pagination on the bigcode/the-stack-dedup which is ga...
Add auth in rows: fix ``` aiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/bigcode/the-stack-dedup/resolve/refs%2Fconvert%2Fparquet/bigcode--the-stack-dedup/parquet-train-00000-of-05140.parquet') ``` when doing pagination on the bigcode/the-stack...
closed
2023-06-23T16:37:55Z
2023-06-26T08:21:51Z
2023-06-26T08:21:50Z
lhoestq
1,771,726,572
Move dtos to its own file
Currently, we have data transfer objects in the `utils.py` file, but it keeps growing, and we could end up with duplicated code if we don't check the existing models. This PR just moves all the response abstractions to a `dtos.py` file. Most of the file changes are to the `imports` definitions.
Move dtos to its own file: Currently, we have data transfer objects in the `utils.py` file, but it keeps growing, and we could end up with duplicated code if we don't check the existing models. This PR just moves all the response abstractions to a `dtos.py` file. Most of the file changes are to the
closed
2023-06-23T16:22:40Z
2023-06-26T11:04:14Z
2023-06-26T11:04:12Z
AndreaFrancis
1,771,645,506
Create missing Jobs when /rows cache does not exist yet
Should fix https://github.com/huggingface/datasets-server/issues/1341
Create missing Jobs when /rows cache does not exist yet: Should fix https://github.com/huggingface/datasets-server/issues/1341
closed
2023-06-23T15:22:59Z
2023-06-26T11:01:37Z
2023-06-26T11:01:36Z
AndreaFrancis
1,771,454,880
Fix regression: use parquet metadata when possible
I noticed some datasets have a slow pagination like https://huggingface.co/datasets/mlfoundations/datacomp_1b that times out. This is because there was a regression in https://github.com/huggingface/datasets-server/pull/1287 where it wouldn't use the parquet metadata because `get_best_response` returns the first suc...
Fix regression: use parquet metadata when possible: I noticed some datasets have a slow pagination like https://huggingface.co/datasets/mlfoundations/datacomp_1b that times out. This is because there was a regression in https://github.com/huggingface/datasets-server/pull/1287 where it wouldn't use the parquet metada...
closed
2023-06-23T13:20:08Z
2023-06-27T08:56:57Z
2023-06-23T16:01:53Z
lhoestq
1,771,416,315
Remove too restrictive __all__ definitions
Remove too restrictive `__all__` from `libcommon/simple_cache`: - it contained only one attribute: `DoesNotExist` - Python considers as private all the module attributes which are not defined in `__all__` ``` from libcommon.simple_cache import upsert_response Warning: Accessing a protected member of a class or...
Remove too restrictive __all__ definitions: Remove too restrictive `__all__` from `libcommon/simple_cache`: - it contained only one attribute: `DoesNotExist` - Python considers as private all the module attributes which are not defined in `__all__` ``` from libcommon.simple_cache import upsert_response Warning...
closed
2023-06-23T12:56:41Z
2023-06-26T13:01:27Z
2023-06-26T13:01:25Z
albertvillanova
1,771,026,707
Update to datasets 2.13.1
This should fix the parquet-and-info job for bigcode/the-stack-dedup. The patch release includes a fix that makes it ignore non data files (in the case of bigcode/the-stack-dedup there's a license.json file that shouldn't be taken into account in the data of course)
Update to datasets 2.13.1: This should fix the parquet-and-info job for bigcode/the-stack-dedup. The patch release includes a fix that makes it ignore non data files (in the case of bigcode/the-stack-dedup there's a license.json file that shouldn't be taken into account in the data of course)
closed
2023-06-23T08:15:55Z
2023-06-23T12:02:32Z
2023-06-23T12:02:30Z
lhoestq
1,769,777,048
/rows returns `null` images for some datasets
Reported in https://github.com/huggingface/datasets/issues/2526 See https://huggingface.co/datasets/lombardata/panoptic_2023_06_22 has a working /first-rows but /rows always returns `null` for images. edit: another one https://huggingface.co/datasets/jonathan-roberts1/RSSCN7
/rows returns `null` images for some datasets: Reported in https://github.com/huggingface/datasets/issues/2526 See https://huggingface.co/datasets/lombardata/panoptic_2023_06_22 has a working /first-rows but /rows always returns `null` for images. edit: another one https://huggingface.co/datasets/jonathan-roberts...
closed
2023-06-22T14:13:39Z
2023-07-28T12:42:23Z
2023-07-28T12:42:23Z
lhoestq
1,769,268,896
Ensure only one job is started for the same unicity_id
To avoid multiple job runners getting the same job at the same time, for a given unicity_id (identifies job type + parameters): - a lock is used during the update of the selected job - we ensure no other job is already started - we select the newest (in date order) job from all the waiting jobs and start it (status,...
Ensure only one job is started for the same unicity_id: To avoid multiple job runners getting the same job at the same time, for a given unicity_id (identifies job type + parameters): - a lock is used during the update of the selected job - we ensure no other job is already started - we select the newest (in date or...
closed
2023-06-22T09:10:26Z
2023-06-26T15:37:28Z
2023-06-26T15:37:27Z
severo
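The invariant (at most one started job per unicity_id) can be sketched in memory with a plain lock and a set; the real implementation enforces it with a MongoDB lock during the job update. Class and method names below are illustrative:

```python
import threading

class JobStarter:
    def __init__(self):
        self._lock = threading.Lock()
        self._started = set()

    def try_start(self, unicity_id):
        """Atomically mark the job as started; False if already started."""
        with self._lock:
            if unicity_id in self._started:
                return False  # another runner got there first
            self._started.add(unicity_id)
            return True
```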
1,769,120,320
Use external-secrets to read secrets from AWS
null
Use external-secrets to read secrets from AWS:
closed
2023-06-22T07:37:10Z
2023-06-22T07:51:08Z
2023-06-22T07:51:07Z
rtrompier
1,769,074,534
Create /filter endpoint
Create /filter endpoint. I have tried to follow roughly the same logic as the /rows endpoint. TODO: - [x] e2e tests - [x] chart: only increase the number of replicas - ~~docker-compose files~~ - [x] openapi specification - [x] documentation pages: draft of `filter.mdx` Subsequent PRs: - Index all datase...
Create /filter endpoint: Create /filter endpoint. I have tried to follow roughly the same logic as the /rows endpoint. TODO: - [x] e2e tests - [x] chart: only increase the number of replicas - ~~docker-compose files~~ - [x] openapi specification - [x] documentation pages: draft of `filter.mdx` Subsequent...
closed
2023-06-22T07:10:03Z
2023-10-05T06:49:47Z
2023-10-05T06:49:09Z
albertvillanova
1,768,066,940
keep image and audio untruncated
close https://github.com/huggingface/datasets-server/issues/1416
keep image and audio untruncated: close https://github.com/huggingface/datasets-server/issues/1416
closed
2023-06-21T17:15:27Z
2023-06-22T12:51:41Z
2023-06-22T12:51:40Z
lhoestq
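The fix amounts to exempting media columns from row truncation so their URLs are never cropped. A minimal sketch; the `column_types` mapping and the byte threshold are assumptions, not the service's actual cell-truncation logic:

```python
def truncate_cells(row, column_types, max_len=100):
    out = {}
    for name, value in row.items():
        if column_types.get(name) in ("Image", "Audio"):
            out[name] = value  # keep media untruncated: cropping breaks URLs
        elif isinstance(value, str) and len(value) > max_len:
            out[name] = value[:max_len]
        else:
            out[name] = value
    return out
```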
1,768,045,641
Truncated first-rows may crop image URLs
see https://huggingface.co/datasets/Antreas/TALI-base; because of that, the images are not shown in the UI
Truncated first-rows may crop image URLs: see https://huggingface.co/datasets/Antreas/TALI-base; because of that, the images are not shown in the UI
closed
2023-06-21T17:05:29Z
2023-06-22T12:51:41Z
2023-06-22T12:51:41Z
lhoestq
1,767,726,199
Test concurrency in parquet and info
It adds a test on the concurrency of job runners on the step `config-parquet-and-info` (which creates `refs/convert/parquet` and uploads parquet files), to detect when the lock is not respected, leading to `CreateCommitError`. Also: - group all the code that accesses the Hub inside the lock (create the "branch", sen...
Test concurrency in parquet and info: It adds a test on the concurrency of job runners on the step `config-parquet-and-info` (which creates `refs/convert/parquet` and uploads parquet files), to detect when the lock is not respected, leading to `CreateCommitError`. Also: - group all the code that accesses the Hub ins...
closed
2023-06-21T14:22:15Z
2023-06-21T15:06:16Z
2023-06-21T15:06:15Z
severo
1,767,622,888
Fix the lock
Several processes were able to acquire the lock at the same time
Fix the lock: Several processes were able to acquire the lock at the same time
closed
2023-06-21T13:36:15Z
2023-06-21T14:25:16Z
2023-06-21T14:25:15Z
severo
1,767,394,602
The e2e tests have implicit dependencies
As reported by @albertvillanova for example, the following test does not pass ``` $ TEST_PATH=tests/test_11_api.py::test_rows_endpoint make test ``` while this one passes: ``` $ TEST_PATH="tests/test_11_api.py::test_endpoint tests/test_11_api.py::test_rows_endpoint" make test ``` It's because `test_rows_...
The e2e tests have implicit dependencies: As reported by @albertvillanova for example, the following test does not pass ``` $ TEST_PATH=tests/test_11_api.py::test_rows_endpoint make test ``` while this one passes: ``` $ TEST_PATH="tests/test_11_api.py::test_endpoint tests/test_11_api.py::test_rows_endpoint" ...
open
2023-06-21T11:38:20Z
2023-08-15T15:13:24Z
null
severo
1,767,253,537
unblock DFKI-SLT/few-nerd
null
unblock DFKI-SLT/few-nerd:
closed
2023-06-21T10:12:26Z
2023-06-21T14:21:58Z
2023-06-21T14:21:57Z
lhoestq
1,767,202,799
Split-names-from-streaming is incorrect
e.g. only returns ["test"] for [Antreas/TALI-base](https://huggingface.co/datasets/Antreas/TALI-base) instead of ['train', 'test', 'val']
Split-names-from-streaming is incorrect: e.g. only returns ["test"] for [Antreas/TALI-base](https://huggingface.co/datasets/Antreas/TALI-base) instead of ['train', 'test', 'val']
closed
2023-06-21T09:44:29Z
2023-06-21T09:50:13Z
2023-06-21T09:50:13Z
lhoestq
1,766,207,317
fix: 🐛 try to ensure only one job can get the lock
As it's difficult to test on multiple MongoDB replicas (we should do it at one point), I'll try to: - merge to main - deploy on prod - refresh the dataset severo/flores_101 - see if all the configs could be processed for the parquet creation, or if we still have an error for "half" of them
fix: 🐛 try to ensure only one job can get the lock: As it's difficult to test on multiple MongoDB replicas (we should do it at one point), I'll try to: - merge to main - deploy on prod - refresh the dataset severo/flores_101 - see if all the configs could be processed for the parquet creation, or if we still hav...
closed
2023-06-20T21:20:23Z
2023-06-20T21:36:30Z
2023-06-20T21:26:59Z
severo
1,766,092,638
test: 💍 add a test on lock.git_branch
null
test: 💍 add a test on lock.git_branch:
closed
2023-06-20T20:19:16Z
2023-06-20T20:35:13Z
2023-06-20T20:35:11Z
severo
1,765,983,796
Add tests on create_commits
see #1396
Add tests on create_commits: see #1396
closed
2023-06-20T18:59:08Z
2023-06-20T19:08:41Z
2023-06-20T19:08:40Z
severo
1,765,621,130
Use EFS instead of NFS for datasets and parquet "local" cache
(and duckdb local cache in the PRs) related to https://github.com/huggingface/datasets-server/issues/1072 internal: see https://github.com/huggingface/infra/issues/605#issue-1758616648
Use EFS instead of NFS for datasets and parquet "local" cache: (and duckdb local cache in the PRs) related to https://github.com/huggingface/datasets-server/issues/1072 internal: see https://github.com/huggingface/infra/issues/605#issue-1758616648
closed
2023-06-20T15:12:02Z
2023-08-11T13:48:34Z
2023-08-11T13:48:34Z
severo
1,765,618,183
Use S3 + cloudfront for assets and cached-assets
related to #1072 internal: see https://github.com/huggingface/infra/issues/605#issue-1758616648
Use S3 + cloudfront for assets and cached-assets: related to #1072 internal: see https://github.com/huggingface/infra/issues/605#issue-1758616648
closed
2023-06-20T15:10:24Z
2023-10-09T17:54:03Z
2023-10-09T17:54:03Z
severo
1,765,617,336
Modify TTL index condition
Adding a condition to the TTL index in the Job document. It will only delete those records with a final state of SUCCESS, ERROR or CANCELLED
Modify TTL index condition: Adding a condition to the TTL index in the Job document. It will only delete those records with a final state of SUCCESS, ERROR or CANCELLED
closed
2023-06-20T15:09:53Z
2023-06-20T15:20:00Z
2023-06-20T15:19:59Z
AndreaFrancis
1,765,596,874
Retry get_parquet_file_and_size
In prod I got an ArrowInvalid when instantiating a pq.ParquetFile for bigcode/the-stack-dedup even though all the parquet files are valid (I ran a script and checked I could get all the pq.ParquetFile objects)
Retry get_parquet_file_and_size: In prod I got an ArrowInvalid when instantiating a pq.ParquetFile for bigcode/the-stack-dedup even though all the parquet files are valid (I ran a script and checked I could get all the pq.ParquetFile objects)
closed
2023-06-20T14:59:10Z
2023-06-21T09:54:13Z
2023-06-21T09:54:12Z
lhoestq
1,765,568,744
Temporarily remove TTL index
Like https://github.com/huggingface/datasets-server/commit/47ea65b2567db4482579cd7000393cf0a15b412e , the first step to modify the TTL index is to remove it from the code; then a deploy is needed.
Temporarily remove TTL index: Like https://github.com/huggingface/datasets-server/commit/47ea65b2567db4482579cd7000393cf0a15b412e , the first step to modify the TTL index is to remove it from the code; then a deploy is needed.
closed
2023-06-20T14:46:05Z
2023-06-20T14:59:09Z
2023-06-20T14:59:07Z
AndreaFrancis
1,765,455,044
Update docs with hub parquet endpoint
Wait for (internal)https://github.com/huggingface/moon-landing/pull/6695 to be merged and deployed close https://github.com/huggingface/datasets-server/issues/1400
Update docs with hub parquet endpoint: Wait for (internal)https://github.com/huggingface/moon-landing/pull/6695 to be merged and deployed close https://github.com/huggingface/datasets-server/issues/1400
closed
2023-06-20T13:49:22Z
2023-07-18T15:54:08Z
2023-07-18T15:53:36Z
lhoestq
1,765,413,674
fix: 🐛 split the default value to get a list of strings
null
fix: 🐛 split the default value to get a list of strings:
closed
2023-06-20T13:28:14Z
2023-06-20T13:28:28Z
2023-06-20T13:28:26Z
severo
1,765,400,845
Update docs for hf.co/api/datasets/<dataset>/parquet endpoint
To be used instead of the datasets server /parquet endpoint in examples following https://github.com/huggingface/moon-landing/pull/6695
Update docs for hf.co/api/datasets/<dataset>/parquet endpoint: To be used instead of the datasets server /parquet endpoint in examples following https://github.com/huggingface/moon-landing/pull/6695
closed
2023-06-20T13:21:04Z
2023-07-19T12:02:37Z
2023-07-19T12:02:37Z
lhoestq
1,765,398,521
admin-UI stuck for datasets with many configs/splits
For example, https://huggingface.co/spaces/datasets-maintainers/datasets-server-admin-ui with `gsarti/flores_101` on "Dataset status" tab takes a lot of time.
admin-UI stuck for datasets with many configs/splits: For example, https://huggingface.co/spaces/datasets-maintainers/datasets-server-admin-ui with `gsarti/flores_101` on "Dataset status" tab takes a lot of time.
closed
2023-06-20T13:19:48Z
2024-02-06T14:39:15Z
2024-02-06T14:39:14Z
severo
1,765,309,469
Rename parent job runners
Based on the new job runner for those that need a cached directory, from https://github.com/huggingface/datasets-server/pull/1388. We will need a new split job runner that inherits from the new JobRunnerWithCache to be used as part of https://github.com/huggingface/datasets-server/pull/1199 and https://github.com/hugging...
Rename parent job runners: Based on the new job runner for those that need a cached directory on https://github.com/huggingface/datasets-server/pull/1388 We will need a new split job runner that inherits from the new JobRunnerWithCache to be used as part of https://github.com/huggingface/datasets-server/pull/1199 and...
closed
2023-06-20T12:31:31Z
2023-06-20T18:43:09Z
2023-06-20T18:43:08Z
AndreaFrancis
1,765,303,711
Ensure the parquet files in /parquet are sorted by "shard" index
And tell it in the docs
Ensure the parquet files in /parquet are sorted by "shard" index: And tell it in the docs
closed
2023-06-20T12:28:36Z
2023-06-30T20:02:14Z
2023-06-30T20:02:14Z
severo
1,765,292,247
Avoid commit conflicts
See https://github.com/huggingface/datasets-server/issues/1163#issuecomment-1598504866 and following comments - [x] add tests to check if `parent_commit=parent_commit if not commit_infos else commit_infos[-1].oid` is correct. Yes; see https://github.com/huggingface/datasets-server/pull/1408 - [x] add write/read con...
Avoid commit conflicts: See https://github.com/huggingface/datasets-server/issues/1163#issuecomment-1598504866 and following comments - [x] add tests to check if `parent_commit=parent_commit if not commit_infos else commit_infos[-1].oid` is correct. Yes; see https://github.com/huggingface/datasets-server/pull/1408 ...
closed
2023-06-20T12:23:06Z
2023-06-21T15:04:04Z
2023-06-21T15:04:04Z
severo
1,765,084,367
Increase max job duration
Currently bigcode/the-stack-dedup seems to take more than 20min to copy the parquet files. This is mostly to test that it works - we can decide later if we keep this value or if we need to make this value depend on the job.
Increase max job duration: Currently bigcode/the-stack-dedup seems to take more than 20min to copy the parquet files. This is mostly to test that it works - we can decide later if we keep this value or if we need to make this value depend on the job.
closed
2023-06-20T10:11:37Z
2023-06-20T13:52:23Z
2023-06-20T13:52:22Z
lhoestq
1,765,010,399
Remove unused code from /rows API endpoint
Remove unused code from /rows API endpoint.
Remove unused code from /rows API endpoint: Remove unused code from /rows API endpoint.
closed
2023-06-20T09:26:25Z
2023-06-21T14:22:46Z
2023-06-21T14:22:44Z
albertvillanova
1,764,960,588
Raise retryable error on hfhubhttperror
see https://github.com/huggingface/datasets-server/issues/1163
Raise retryable error on hfhubhttperror: see https://github.com/huggingface/datasets-server/issues/1163
closed
2023-06-20T08:58:43Z
2023-06-20T12:36:46Z
2023-06-20T12:36:45Z
severo
1,763,437,683
feat: 🎸 10x the size of supported images
null
feat: 🎸 10x the size of supported images:
closed
2023-06-19T12:27:34Z
2023-06-19T12:36:01Z
2023-06-19T12:36:00Z
severo
1,762,949,715
Fix typo in error message
I already suggested this typo fix: - https://github.com/huggingface/datasets-server/pull/1371/files#r1230650727 while reviewing PR: - #1371 And normally it was taken into account with commit: - https://github.com/huggingface/datasets-server/pull/1371/commits/1c55c9e6d55e178062eb6b85c33e7c6e71dc13ec However ...
Fix typo in error message: I already suggested this typo fix: - https://github.com/huggingface/datasets-server/pull/1371/files#r1230650727 while reviewing PR: - #1371 And normally it was taken into account with commit: - https://github.com/huggingface/datasets-server/pull/1371/commits/1c55c9e6d55e178062eb6b85c...
closed
2023-06-19T07:45:23Z
2023-06-19T08:55:14Z
2023-06-19T08:55:12Z
albertvillanova
1,762,411,257
Add Docker internal to extra_hosts
This is required to connect to the local DB instance on Linux; it is already added to `tools/docker-compose-dev-datasets-server.yml`
Add Docker internal to extra_hosts: This is required to connect to the local DB instance on Linux; it is already added to `tools/docker-compose-dev-datasets-server.yml`
closed
2023-06-18T18:26:33Z
2023-06-19T10:39:36Z
2023-06-19T10:39:36Z
baskrahmer
1,762,407,543
Small typos
Fix closing brackets and GH action link
Small typos: Fix closing brackets and GH action link
closed
2023-06-18T18:19:07Z
2023-06-19T08:51:26Z
2023-06-19T08:51:25Z
baskrahmer
1,761,088,254
New parent job runner for cached data
Currently, we have datasets_based_job_runner, but we need a new one that only creates a cache folder without modifying the datasets library config. Context: https://github.com/huggingface/datasets-server/pull/1296#discussion_r1232512427
New parent job runner for cached data: Currently, we have datasets_based_job_runner, but we need a new one that only creates a cache folder without modifying the datasets library config. Context: https://github.com/huggingface/datasets-server/pull/1296#discussion_r1232512427
closed
2023-06-16T18:05:01Z
2023-06-20T12:21:33Z
2023-06-20T12:21:32Z
AndreaFrancis
1,761,052,990
fix: 🐛 support bigger images
fixes https://github.com/huggingface/datasets-server/issues/1361
fix: 🐛 support bigger images: fixes https://github.com/huggingface/datasets-server/issues/1361
closed
2023-06-16T17:40:57Z
2023-06-19T11:21:41Z
2023-06-19T11:21:40Z
severo
1,760,975,012
Detect flaky hosting platforms and propose to host on the Hub
Many datasets are hosted on Zenodo or GDrive, and loaded using a loading script. But we have a lot of issues with them, it's not very reliable. @albertvillanova has fixed a lot of them, see https://huggingface.co/datasets/medal/discussions/2#648856b01927b18ced79d8b7 for example. In case of errors, we could detect if...
Detect flaky hosting platforms and propose to host on the Hub: Many datasets are hosted on Zenodo or GDrive, and loaded using a loading script. But we have a lot of issues with them, it's not very reliable. @albertvillanova has fixed a lot of them, see https://huggingface.co/datasets/medal/discussions/2#648856b01927b18...
closed
2023-06-16T16:39:23Z
2024-06-19T14:15:38Z
2024-06-19T14:15:38Z
severo
1,760,678,709
Uncaught error on config-parquet-and-info on big datasets
For some datasets that require copying original parquet files to `refs/convert/parquet` in multiple commits under a lock (see https://github.com/huggingface/datasets-server/issues/1349), we get: https://datasets-server.huggingface.co/parquet?dataset=bigcode/the-stack&config=bigcode--the-stack https://datasets-serve...
Uncaught error on config-parquet-and-info on big datasets: For some datasets that require copying original parquet files to `refs/convert/parquet` in multiple commits under a lock (see https://github.com/huggingface/datasets-server/issues/1349), we get: https://datasets-server.huggingface.co/parquet?dataset=bigcode/...
closed
2023-06-16T13:49:29Z
2023-07-17T16:40:53Z
2023-07-17T16:40:52Z
severo
1,760,673,656
Uncaught error in /rows on big datasets
For some datasets, for which the parquet files have been uploaded (or copied) with multiple commits (see https://github.com/huggingface/datasets-server/issues/1349) like `atom-in-the-universe/zlib-books-1k-50k`: ``` tiiuae/falcon-refinedweb marianna13/zlib-books-1k-500k atom-in-the-universe/zlib-books-1k-50k ato...
Uncaught error in /rows on big datasets: For some datasets, for which the parquet files have been uploaded (or copied) with multiple commits (see https://github.com/huggingface/datasets-server/issues/1349) like `atom-in-the-universe/zlib-books-1k-50k`: ``` tiiuae/falcon-refinedweb marianna13/zlib-books-1k-500k at...
closed
2023-06-16T13:46:20Z
2023-07-17T16:40:28Z
2023-07-17T16:40:28Z
severo
1,760,254,971
Rename dev to staging, and use staging mongodb cluster
null
Rename dev to staging, and use staging mongodb cluster:
closed
2023-06-16T09:22:22Z
2023-06-19T12:12:20Z
2023-06-19T12:12:18Z
severo
1,760,249,622
Change "dev" environment to "staging"
It makes more sense to call it "staging". And it will use the mongo atlas staging cluster
Change "dev" environment to "staging": It makes more sense to call it "staging". And it will use the mongo atlas staging cluster
closed
2023-06-16T09:18:33Z
2023-06-20T18:05:49Z
2023-06-20T18:05:49Z
severo
1,760,105,155
Upgrade prod mongo from v5 to v6
This is needed for `$in` function in TTL index: https://github.com/huggingface/datasets-server/pull/1325/files#diff-44fa7cb2645881e55953db64dafa198b2e007a2e531f70acaeebfc50ffa67953R141 See https://www.mongodb.com/docs/manual/release-notes/6.0/#indexes --- to upgrade: https://www.mongodb.com/docs/atlas/tutorial...
Upgrade prod mongo from v5 to v6: This is needed for `$in` function in TTL index: https://github.com/huggingface/datasets-server/pull/1325/files#diff-44fa7cb2645881e55953db64dafa198b2e007a2e531f70acaeebfc50ffa67953R141 See https://www.mongodb.com/docs/manual/release-notes/6.0/#indexes --- to upgrade: https://w...
closed
2023-06-16T07:37:29Z
2023-06-20T15:08:11Z
2023-06-20T15:08:11Z
severo
1,760,089,716
Revert "Delete ttl index from queue.py code (#1378)"
This reverts commit 47ea65b2567db4482579cd7000393cf0a15b412e.
Revert "Delete ttl index from queue.py code (#1378)": This reverts commit 47ea65b2567db4482579cd7000393cf0a15b412e.
closed
2023-06-16T07:29:39Z
2023-06-16T07:29:55Z
2023-06-16T07:29:54Z
severo
1,759,657,559
Rollback TTL index
null
Rollback TTL index:
closed
2023-06-15T23:08:40Z
2023-06-15T23:20:25Z
2023-06-15T23:20:24Z
AndreaFrancis
1,759,573,668
Delete ttl index from queue.py code
First part of https://github.com/huggingface/datasets-server/issues/1326
Delete ttl index from queue.py code: First part of https://github.com/huggingface/datasets-server/issues/1326
closed
2023-06-15T21:28:16Z
2023-06-15T22:08:08Z
2023-06-15T22:08:07Z
AndreaFrancis
1,759,467,470
[docs] Add build notebook workflow
Enables the doc-builder to build Colab notebooks :)
[docs] Add build notebook workflow: Enables the doc-builder to build Colab notebooks :)
closed
2023-06-15T19:58:06Z
2023-06-15T20:22:51Z
2023-06-15T20:22:50Z
stevhliu
1,759,417,823
[docs] Improvements
Based on @mishig25's [feedback](https://huggingface.slack.com/archives/C0311GZ7R6K/p1684484245379859), this adds: - a response to the code snippets in the Quickstart - an end-to-end example of using `/parquet` to get a dataset, analyze it, and plot the results - button to open in a Colab notebook to run examples rig...
[docs] Improvements: Based on @mishig25's [feedback](https://huggingface.slack.com/archives/C0311GZ7R6K/p1684484245379859), this adds: - a response to the code snippets in the Quickstart - an end-to-end example of using `/parquet` to get a dataset, analyze it, and plot the results - button to open in a Colab noteboo...
closed
2023-06-15T19:23:55Z
2023-06-16T16:10:35Z
2023-06-16T16:10:04Z
stevhliu
1,759,223,139
Fix fill_builder_info
close https://github.com/huggingface/datasets-server/issues/1374 NamedSplit cannot be converted by orjson
Fix fill_builder_info: close https://github.com/huggingface/datasets-server/issues/1374 NamedSplit cannot be converted by orjson
closed
2023-06-15T16:57:35Z
2023-06-15T20:37:36Z
2023-06-15T20:37:35Z
lhoestq
1,759,049,916
Truncated cells seem to prevent conversion to parquet
See https://github.com/huggingface/datasets-server/pull/1372#issuecomment-1593249655 ``` Traceback (most recent call last): File "/src/services/worker/src/worker/job_manager.py", line 167, in process if len(orjson_dumps(content)) > self.worker_config.content_max_bytes: File "/src/libs/libcommon/src/libco...
Truncated cells seem to prevent conversion to parquet: See https://github.com/huggingface/datasets-server/pull/1372#issuecomment-1593249655 ``` Traceback (most recent call last): File "/src/services/worker/src/worker/job_manager.py", line 167, in process if len(orjson_dumps(content)) > self.worker_config.co...
closed
2023-06-15T15:11:19Z
2023-06-15T20:37:36Z
2023-06-15T20:37:36Z
severo
1,758,633,320
Refac hub_datasets fixture
This way there is no need to set up all the hub_datasets fixtures just to run a single test. close #921
Refac hub_datasets fixture: This way there is no need to set up all the hub_datasets fixtures just to run a single test. close #921
closed
2023-06-15T11:35:02Z
2023-06-15T20:53:44Z
2023-06-15T20:53:43Z
lhoestq
1,758,454,961
Update datasets dependency to 2.13.0 version
After 2.13.0 datasets release, update dependencies on it. Note that I have also removed the explicit dependency on `datasets` from `services/api`, - see commit: https://github.com/huggingface/datasets-server/commit/a2c0cd908b45a7065d936964c4d0477143146d6c This is analogous to what was previously done on `service...
Update datasets dependency to 2.13.0 version: After 2.13.0 datasets release, update dependencies on it. Note that I have also removed the explicit dependency on `datasets` from `services/api`, - see commit: https://github.com/huggingface/datasets-server/commit/a2c0cd908b45a7065d936964c4d0477143146d6c This is ana...
closed
2023-06-15T09:48:39Z
2023-06-15T20:49:23Z
2023-06-15T15:59:03Z
albertvillanova
1,757,309,822
Adding limit for number of configs
Closes https://github.com/huggingface/datasets-server/issues/1367
Adding limit for number of configs: Closes https://github.com/huggingface/datasets-server/issues/1367
closed
2023-06-14T16:56:59Z
2023-06-15T14:58:43Z
2023-06-15T14:58:42Z
AndreaFrancis
1,757,272,674
Update datasets to 2.13.0
https://github.com/huggingface/datasets/releases/tag/2.13.0 Related to the datasets server: - Better row group size in push_to_hub by @lhoestq in https://github.com/huggingface/datasets/pull/5935 - Make get_from_cache use custom temp filename that is locked by @albertvillanova in https://github.com/huggingface/dat...
Update datasets to 2.13.0: https://github.com/huggingface/datasets/releases/tag/2.13.0 Related to the datasets server: - Better row group size in push_to_hub by @lhoestq in https://github.com/huggingface/datasets/pull/5935 - Make get_from_cache use custom temp filename that is locked by @albertvillanova in https:/...
closed
2023-06-14T16:31:24Z
2023-06-15T15:59:05Z
2023-06-15T15:59:05Z
severo
1,757,265,256
Remove duplicates in cache
We have cache entries for `bigscience/P3` and `BigScience/P3` for example: they resolve to the same dataset.
Remove duplicates in cache: We have cache entries for `bigscience/P3` and `BigScience/P3` for example: they resolve to the same dataset.
closed
2023-06-14T16:26:07Z
2023-07-17T16:41:02Z
2023-07-17T16:41:02Z
severo
1,757,157,046
feat: 🎸 reduce the resources
null
feat: 🎸 reduce the resources:
closed
2023-06-14T15:23:53Z
2023-06-14T15:25:10Z
2023-06-14T15:25:09Z
severo
1,757,098,647
Set a limit on the number of configs
Dataset https://huggingface.co/datasets/Muennighoff/flores200 has more than 40,000 configs. It's too much for our infrastructure for now. We should set a limit on it.
Set a limit on the number of configs: Dataset https://huggingface.co/datasets/Muennighoff/flores200 has more than 40,000 configs. It's too much for our infrastructure for now. We should set a limit on it.
closed
2023-06-14T14:55:05Z
2023-06-15T14:58:44Z
2023-06-15T14:58:43Z
severo