Columns:
  id           int64     959M to 2.55B
  title        string    lengths 3 to 133
  body         string    lengths 1 to 65.5k
  description  string    lengths 5 to 65.6k
  state        string    2 classes
  created_at   string    length 20
  updated_at   string    length 20
  closed_at    string    length 20
  user         string    174 classes
1,821,824,876
Document setting up local dev env with same Poetry version as CI
Add to `DEVELOPER_GUIDE` docs how to set up local development environment with Poetry version 1.4.2, the same as in CI. Fix #1565.
closed
2023-07-26T08:11:06Z
2023-07-27T09:52:22Z
2023-07-27T09:52:21Z
albertvillanova
1,821,703,588
Install same poetry version in local development environment as in CI
Currently, the `DEVELOPER_GUIDE` instructs installing the latest Poetry version (currently 1.5.1) to set up the local development environment, unlike the CI, which uses 1.4.2.
closed
2023-07-26T06:49:39Z
2023-07-27T09:52:22Z
2023-07-27T09:52:22Z
albertvillanova
1,821,255,854
feat: 🎸 add heavy workers to help flush the queue
null
closed
2023-07-25T22:25:31Z
2023-07-25T22:26:25Z
2023-07-25T22:26:25Z
severo
1,821,104,475
Add information about the storage locations on app startup
For all the apps (services, jobs, workers), emit logs at startup that describe the storage locations (and statistics about the space and inodes? + is it accessible by the runtime user?)
closed
2023-07-25T20:34:19Z
2024-06-19T14:18:42Z
2024-06-19T14:18:41Z
severo
1,821,101,939
Mount all the storages in the "storage" pod
See https://www.notion.so/huggingface2/Disk-storage-0a4a8fcf27754c8cb7248b259dcc4b21 (internal)
closed
2023-07-25T20:32:29Z
2023-08-25T15:06:35Z
2023-08-25T15:06:35Z
severo
1,821,099,533
Check the disk usage of all the storages in metrics
See https://www.notion.so/huggingface2/Disk-storage-0a4a8fcf27754c8cb7248b259dcc4b21 (internal)
closed
2023-07-25T20:30:48Z
2023-08-11T20:51:45Z
2023-08-11T20:51:45Z
severo
1,821,070,069
The api and rows services cannot store datasets cache
The datasets cache for the api and rows services (they depend on datasets) is not set, and defaults to `/.cache/huggingface/datasets`. But this directory is not accessible by the Python user. I'm not sure if it's an issue, but I think we should: - set the datasets environment variable for these services (note that ...
open
2023-07-25T20:13:22Z
2023-09-04T11:42:23Z
null
severo
1,821,038,082
Fix storage configs
null
closed
2023-07-25T19:55:26Z
2023-07-25T20:22:19Z
2023-07-25T20:22:18Z
severo
1,820,707,381
feat: 🎸 request 4Gi per /rows pod
it will allocate fewer pods per node. We regularly have OOM errors, with pods that use 3, 5, 6, 7 GiB RAM, while the node does not have enough RAM left for them. <img width="305" alt="Capture d’écran 2023-07-25 à 12 29 26" src="https://github.com/huggingface/datasets-server/assets/1676121/6f237006-2bab-4604-82f7-2f0fa1a3ac...
closed
2023-07-25T16:31:45Z
2023-07-25T19:32:53Z
2023-07-25T16:40:07Z
severo
1,820,638,038
Update certifi to 2023.7.22
Update `certifi` to 2023.7.22 in poetry lock files Fix #1556
closed
2023-07-25T15:50:52Z
2023-07-25T16:05:54Z
2023-07-25T16:05:53Z
albertvillanova
1,820,614,095
Update certifi to 2023.7.22
Our CI pip audit finds 1 vulnerability in: https://github.com/huggingface/datasets-server/actions/runs/5658665095/job/15330470711?pr=1555
```
Found 1 known vulnerability in 1 package
Name    Version  ID                  Fix Versions
------- -------- ------------------- ------------
certifi 2023.5.7 GHSA-xqr8-7jwr-...
```
closed
2023-07-25T15:38:10Z
2023-07-25T16:05:54Z
2023-07-25T16:05:54Z
albertvillanova
1,820,582,354
Update poetry minor version in Dockerfiles and GH Actions
- From: 1.4.0
- To: 1.4.2

This way we integrate the bug fixes applied since 1.4.0.
closed
2023-07-25T15:20:23Z
2023-07-25T20:02:29Z
2023-07-25T17:30:34Z
albertvillanova
1,820,092,749
Align poetry version in all Docker files
Currently, the Poetry version set in jobs/cache_maintenance Docker file is different from the one set in all the other Docker files. This PR aligns the Poetry version in jobs/cache_maintenance Docker file with all the rest. Related PRs: - #1017 - #923
closed
2023-07-25T11:08:13Z
2023-07-25T13:17:15Z
2023-07-25T13:17:14Z
albertvillanova
1,820,057,841
Update locked cachecontrol yanked version in e2e
Update locked `cachecontrol` version from yanked 0.13.0 to 0.13.1 in `e2e` subpackage. Related to: - #1344
closed
2023-07-25T10:44:48Z
2023-07-25T16:14:23Z
2023-07-25T16:14:22Z
albertvillanova
1,820,022,880
Update huggingface-hub dependency to 0.16.4 version
After the 0.16 `huggingface-hub` release, update our dependencies on it. Note that we remove the dependency on an explicit commit from `services/worker`. Close #1487.
closed
2023-07-25T10:25:24Z
2023-07-25T16:13:40Z
2023-07-25T16:13:38Z
albertvillanova
1,819,072,365
feat: 🎸 reduce resources
the queue is empty
closed
2023-07-24T20:14:16Z
2023-07-24T20:15:14Z
2023-07-24T20:15:13Z
severo
1,818,736,032
upgrade datasets to 2.14
https://github.com/huggingface/datasets/releases/tag/2.14.0
Main changes:
- use `token` instead of `use_auth_token`
- the default config name is now `default` instead of `username--dataset_name`: we have to refresh all the datasets with only one config
TODO:
- [x] #1589
- [x] #1578
- [x] Refresh all the data...
closed
2023-07-24T16:15:34Z
2023-09-06T12:33:52Z
2023-09-06T00:20:25Z
severo
1,818,703,864
The /rows pods take too long to initialize
The pods for the /rows service can take up to 2 minutes to become available (i.e. respond on /healthcheck).
closed
2023-07-24T15:56:05Z
2023-08-24T15:13:56Z
2023-08-24T15:13:56Z
severo
1,818,687,092
fix(helm): update probes for services pods
null
closed
2023-07-24T15:46:01Z
2023-07-24T15:47:55Z
2023-07-24T15:47:54Z
rtrompier
1,818,686,624
Update prod.yaml
remove bigcode/the-stack from supportedDatasets, since it's supported anyway (copy of parquet files)
closed
2023-07-24T15:45:44Z
2023-07-24T15:46:25Z
2023-07-24T15:45:49Z
severo
1,818,643,276
fix: use dedicated nodes for rows pods
null
closed
2023-07-24T15:21:39Z
2023-07-24T15:22:45Z
2023-07-24T15:22:44Z
rtrompier
1,818,639,949
feat: 🎸 unblock all datasets but Graphcore (and echarlaix) ones
1aurent/icdar-2011,Abuelnour/json_1000_Scientific_Paper,Biomedical-TeMU/ProfNER_corpus_NER,Biomedical-TeMU/ProfNER_corpus_classification,Biomedical-TeMU/SPACCC_Sentence-Splitter,Carlisle/msmarco-passage-non-abs,Champion/vpc2020_clear_anon_speech,CristianaLazar/librispeech500,CristianaLazar/librispeech5k_train,DTU54DL/l...
closed
2023-07-24T15:19:50Z
2023-07-24T15:21:46Z
2023-07-24T15:21:45Z
severo
1,818,585,100
fix: resources allocation and use dedicated nodes for worker light
null
closed
2023-07-24T14:50:29Z
2023-07-24T14:56:23Z
2023-07-24T14:56:22Z
rtrompier
1,818,585,045
feat: 🎸 unblock datasets with 200 downloads or more
GEM/BiSECT,GEM/references,GEM/xsum,HuggingFaceM4/charades,Karavet/ILUR-news-text-classification-corpus,Lacito/pangloss,SaulLu/Natural_Questions_HTML_reduced_all,SetFit/mnli,Tevatron/beir-corpus,Tevatron/wikipedia-curated-corpus,Tevatron/wikipedia-squad,Tevatron/wikipedia-squad-corpus,Tevatron/wikipedia-trivia-corpus,Te...
closed
2023-07-24T14:50:27Z
2023-07-24T14:51:19Z
2023-07-24T14:51:18Z
severo
1,818,511,087
feat: 🎸 unblock datasets with at least 3 likes
DelgadoPanadero/Pokemon,HuggingFaceM4/COCO,HuggingFaceM4/FairFace,HuggingFaceM4/VQAv2,HuggingFaceM4/cm4-synthetic-testing,Muennighoff/flores200,VIMA/VIMA-Data,alkzar90/CC6204-Hackaton-Cub-Dataset,asapp/slue,ashraf-ali/quran-data,biglam/brill_iconclass,ccdv/cnn_dailymail,ccdv/mediasum,chrisjay/mnist-adversarial-dataset,...
closed
2023-07-24T14:10:03Z
2023-07-24T14:11:58Z
2023-07-24T14:11:57Z
severo
1,818,069,042
Add auth to first_rows_from_parquet
related to [ivrit-ai/audio-base](https://huggingface.co/datasets/ivrit-ai/audio-base) it works the same way as in /rows
closed
2023-07-24T09:56:56Z
2023-07-24T10:17:31Z
2023-07-24T10:17:30Z
lhoestq
1,816,197,649
feat: 🎸 unblock 26 datasets (5 likes or more)
Unblocked datasets: CodedotAI/code_clippy, HuggingFaceM4/TGIF, SLPL/naab-raw, SocialGrep/ten-million-reddit-answers, ami, backslashlim/LoRA-Datasets, biglam/nls_chapbook_illustrations, cats_vs_dogs, common_language, cornell_movie_dialog, dalle-mini/YFCC100M_OpenAI_subset, joelito/lextreme, lj_speech, mozilla-foundat...
closed
2023-07-21T18:11:34Z
2023-07-21T18:12:25Z
2023-07-21T18:12:24Z
severo
1,816,046,131
feat: 🎸 unblock impactful datasets
reazon-research/reazonspeech, tapaco, ccdv/arxiv-summarization, competition_math, mozilla-foundation/common_voice_7_0, ds4sd/DocLayNet, beyond/chinese_clean_passages_80m, xglue, miracl/miracl, superb done with https://observablehq.com/@huggingface/blocked-datasets
closed
2023-07-21T16:01:02Z
2023-07-21T16:02:41Z
2023-07-21T16:02:40Z
severo
1,814,793,743
Update aiohttp
Fix aiohttp in admin_ui and libapi
closed
2023-07-20T21:01:42Z
2023-07-20T21:17:44Z
2023-07-20T21:17:43Z
AndreaFrancis
1,814,420,492
Update aiohttp dependency version
null
closed
2023-07-20T16:48:10Z
2023-07-20T17:03:40Z
2023-07-20T17:03:39Z
AndreaFrancis
1,814,410,347
K8s job to periodically remove indexes
Cron Job to delete downloaded files on https://github.com/huggingface/datasets-server/pull/1516
closed
2023-07-20T16:41:29Z
2023-08-04T16:03:00Z
2023-08-04T16:02:59Z
AndreaFrancis
1,814,369,068
chore(deps): bump aiohttp from 3.8.4 to 3.8.5 in /libs/libcommon
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.4 to 3.8.5. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p> <blockquote> <h2>3.8.5</h2> <h2>Security bugfixes</h2> <ul> <li> <p>Upgraded the vendored copy ...
closed
2023-07-20T16:18:56Z
2023-07-20T17:55:12Z
2023-07-20T17:55:08Z
dependabot[bot]
1,814,324,677
Remove datasets from the blocklist
The analysis is here: https://observablehq.com/@huggingface/blocked-datasets
We remove:
- the datasets that do not exist anymore on the Hub or are private
- the 5 most liked datasets: bigscience/P3, google/fleurs, mc4, bigscience/xP3, allenai/nllb
closed
2023-07-20T15:53:57Z
2023-07-20T16:24:25Z
2023-07-20T16:24:24Z
severo
1,813,537,760
Separate parquet metadata by split
Since we added partial conversion to parquet, we introduced the new config/split/ssss.parquet paths, but the parquet metadata worker was not following it, and therefore splits could overwrite each other. This affects any dataset with partial conversion and multiple splits, e.g. c4. Related to https://github.com/huggi...
closed
2023-07-20T09:16:19Z
2023-07-20T14:03:08Z
2023-07-20T13:17:35Z
lhoestq
1,812,647,671
provide one "partial" field per entry in aggregated responses
For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field, and the response does not make explicit that the "train" split is partial while the "test" one is complete. Every entry in `configs` and `splits` should also include its own `partial` field, to be able to sho...
open
2023-07-19T20:01:58Z
2024-05-16T09:36:20Z
null
severo
1,811,756,650
Fix libapi and rows in dev docker
null
closed
2023-07-19T11:34:39Z
2023-07-19T11:35:11Z
2023-07-19T11:35:10Z
lhoestq
1,810,669,838
Moving some /rows shared utils
Some classes and functions from /rows will be used in https://github.com/huggingface/datasets-server/pull/1516 and https://github.com/huggingface/datasets-server/pull/1418; to avoid duplicated code, this moves some of them to dedicated utils or to existing files.
closed
2023-07-18T20:30:41Z
2023-07-18T20:49:56Z
2023-07-18T20:49:55Z
AndreaFrancis
1,810,545,680
feat: 🎸 unblock allenai/c4
also: sort the list, and remove 4 duplicates
closed
2023-07-18T19:11:15Z
2023-07-18T19:12:07Z
2023-07-18T19:12:04Z
severo
1,810,529,045
Reduce the number of manually blocked datasets
327 datasets (+ 4 duplicates) are currently blocked https://github.com/huggingface/datasets-server/blob/902d9ac2cc951ed1a132086fc71d0aa70dc020fa/chart/env/prod.yaml#L116 With the latest improvements, we should be able to remove many of them. See https://github.com/huggingface/datasets-server/issues/14...
closed
2023-07-18T19:01:22Z
2023-07-24T15:41:17Z
2023-07-24T15:41:17Z
severo
1,810,369,144
/rows: raise an error if a dataset has too big row groups
It can happen if a dataset was converted to parquet before the recent row group size optimization, e.g. garythung/trashnet Currently it makes the worker crash. We could also refresh the parquet export of the dataset when this happens
closed
2023-07-18T17:07:39Z
2023-09-05T17:33:37Z
2023-09-05T17:33:37Z
lhoestq
1,810,242,251
add Hub API convenience endpoint in parquet docs
close https://github.com/huggingface/datasets-server/issues/1400
closed
2023-07-18T16:00:15Z
2023-07-19T12:03:07Z
2023-07-19T12:02:36Z
lhoestq
1,808,675,183
Update dependencies cryptography and scipy
Updating dependencies to try to fix CI:
- scipy: 1.10.1 -> 1.11.1
- cryptography: 41.0.1 -> 41.0.2
closed
2023-07-17T21:44:16Z
2023-07-18T20:20:02Z
2023-07-18T20:20:00Z
AndreaFrancis
1,808,266,851
feat: 🎸 reduce resources
null
closed
2023-07-17T17:44:24Z
2023-07-17T17:44:59Z
2023-07-17T17:44:30Z
severo
1,808,208,578
Ignore scipy in pip audit
...to fix the CI.
closed
2023-07-17T17:10:07Z
2023-07-17T17:40:32Z
2023-07-17T17:20:32Z
lhoestq
1,808,205,135
Use `CONSTANT_LIST.copy` in list config fields
See https://github.com/huggingface/datasets-server/pull/1508#discussion_r1265658458 In particular `get_empty_str_list` should not be used anymore. Same with `default_factory=list`
open
2023-07-17T17:08:34Z
2023-08-17T15:44:09Z
null
severo
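The pattern requested in this issue can be sketched as follows (a minimal sketch; the constant and field names here are hypothetical, not the repository's actual config fields):

```python
from dataclasses import dataclass, field

DEFAULT_SPLITS = ["train", "test"]  # hypothetical shared constant


@dataclass
class SplitConfig:
    # field(default_factory=DEFAULT_SPLITS.copy) gives every instance its
    # own list seeded from the constant, instead of a shared mutable default
    # or helpers like get_empty_str_list / default_factory=list.
    splits: list = field(default_factory=DEFAULT_SPLITS.copy)


a, b = SplitConfig(), SplitConfig()
a.splits.append("validation")
print(b.splits)  # → ['train', 'test']
```

Mutating one instance leaves the other (and the constant) untouched, which is the point of binding `copy` as the factory.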
1,808,172,649
Create a new endpoint with info on size + parquet metadata
See https://github.com/huggingface/datasets-server/pull/1503#issuecomment-1625161886
closed
2023-07-17T16:50:50Z
2024-02-09T10:22:01Z
2024-02-09T10:22:01Z
severo
1,808,146,416
Always verify parquet before copying
close https://github.com/huggingface/datasets-server/issues/1519
closed
2023-07-17T16:35:22Z
2023-07-18T17:47:35Z
2023-07-18T17:17:41Z
lhoestq
1,808,120,939
Some datasets are converted to parquet with too big row groups, which makes the viewer crash
... and workers to OOM eg IDEA-CCNL/laion2B-multi-chinese-subset ```python In [1]: import fsspec; import pyarrow.parquet as pq In [2]: url = "https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-00000-of-00013.parquet" In [3]: pf = pq.ParquetFile(fsspec.open(url).o...
closed
2023-07-17T16:21:05Z
2023-07-18T17:17:42Z
2023-07-18T17:17:42Z
lhoestq
1,804,224,185
https://huggingface.co/datasets/ccmusic-database/vocal_range/discussions/1
Could HF please print more detailed error messages, pointing to the code line in our own code instead of the generic framework message? It would be hard for us to debug when an error like this happens.
closed
2023-07-14T05:50:20Z
2023-07-17T17:17:41Z
2023-07-17T17:17:41Z
monetjoe
1,801,569,193
Reduce resources
null
closed
2023-07-12T18:48:52Z
2023-07-12T18:49:58Z
2023-07-12T18:49:57Z
AndreaFrancis
1,801,219,006
feat: /search endpoint
Second part of FTS implementation using duckdb for https://github.com/huggingface/datasets-server/issues/629 This PR introduces a new endpoint `/search` in a new project **search** service with the following parameters: - dataset - config - split - query - offset (by default 0) - length (by default 100) The...
closed
2023-07-12T15:20:42Z
2023-08-02T18:52:48Z
2023-08-02T18:52:47Z
AndreaFrancis
1,799,131,197
Fix optional download_size
Should fix https://huggingface.co/datasets/Open-Orca/OpenOrca dataset viewer. The dataset was stream converted completely so partial is False but the download_size is still None because streaming was used.
closed
2023-07-11T14:54:19Z
2023-07-11T15:47:04Z
2023-07-11T15:47:03Z
lhoestq
1,799,011,327
Adding partial ttl index to locks
Adding a 10 min TTL index to locks collections.
closed
2023-07-11T13:59:34Z
2023-07-11T15:05:08Z
2023-07-11T15:05:06Z
AndreaFrancis
1,798,939,875
(minor) rename noMaxSizeLimitDatasets
Just better naming, following https://github.com/huggingface/datasets-server/pull/1508
closed
2023-07-11T13:24:01Z
2023-07-11T14:47:43Z
2023-07-11T14:47:18Z
lhoestq
1,797,587,010
Last index sync and Increase resources
It looks like db performance has been improved, try to increase resources to flush jobs queue.
closed
2023-07-10T21:05:54Z
2023-07-10T22:45:25Z
2023-07-10T22:45:24Z
AndreaFrancis
1,797,532,756
Sync advised indexes by Atlas
Syncing indexes advised by Mongo Atlas and removing an unused index (replaced with a new one).
closed
2023-07-10T20:27:02Z
2023-07-10T20:39:48Z
2023-07-10T20:39:46Z
AndreaFrancis
1,797,169,668
Add priority param to force refresh endpoint
Right now I always have to manually set the priority in mongo to "normal"
closed
2023-07-10T17:00:14Z
2023-07-11T11:31:48Z
2023-07-11T11:31:46Z
lhoestq
1,796,790,972
Try to improve index usage for Job collection
Removing some indexes already covered by other existing ones, to speed up query plan decisions.

Index | Proposal | Index Alternative
-- | -- | --
"dataset", | DELETE: No query using only dataset |
("dataset", "revision", "status"), | DELETE | ("type", "dataset", "revision", "config", "split", "status", "priorit...
closed
2023-07-10T13:30:07Z
2023-07-10T17:02:53Z
2023-07-10T17:02:52Z
AndreaFrancis
1,796,535,091
Add fully converted datasets
To fully convert https://huggingface.co/datasets/Open-Orca/OpenOrca to parquet (top 1 trending dataset right now)
closed
2023-07-10T11:14:40Z
2023-07-17T17:19:27Z
2023-07-10T13:47:44Z
lhoestq
1,796,383,767
Reduce rows lru cache
It was causing the workers memory to keep increasing and finally OOM. If there are still memory errors after that I might remove the LRU cache altogether
closed
2023-07-10T09:44:14Z
2023-07-17T17:27:04Z
2023-07-10T09:49:00Z
lhoestq
1,793,852,790
Memory efficient config-parquet-metadata
I moved some code to make the job write the parquet metadata files to disk as they are downloaded, instead of keeping them all in RAM and writing them all at the end. Should help for https://github.com/huggingface/datasets-server/issues/1502
closed
2023-07-07T16:51:19Z
2023-07-10T11:14:46Z
2023-07-10T11:14:45Z
lhoestq
1,793,642,507
Sync mongodb indexes
Syncing some existing and helpful indexes db->code (these already exist):
- ("priority", "status", "namespace", "type", "created_at"),
- ("priority", "status", "created_at", "namespace", "-difficulty"),

Removing some useless ones like:
- "status"
- ("type", "status")
- ("priority", "status", "created_at",...
closed
2023-07-07T14:25:31Z
2023-07-18T19:26:43Z
2023-07-10T10:33:00Z
AndreaFrancis
1,792,196,768
Validate source Parquet files before linking in refs/convert/parquet
While trying to read a parquet file from dataset revision refs/convert/parquet (generated by datasets-server) with duckdb, it throws the following error: ``` D select * from 'https://huggingface.co/datasets/Pavithra/sampled-code-parrot-train-100k/resolve/refs%2Fconvert%2Fparquet/Pavithra--sampled-code-parrot-train...
open
2023-07-06T20:35:45Z
2024-06-19T14:17:51Z
null
AndreaFrancis
1,791,866,859
Remove parquet index without metadata
The parquet index without metadata is too slow and causes the majority of the OOMs on /rows atm. I think it's best to remove it completely. For now it would cause some datasets like [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [Open-Orca/OpenOrca](https://huggingface.co...
closed
2023-07-06T16:23:30Z
2023-07-17T16:51:08Z
2023-07-07T09:50:36Z
lhoestq
1,791,860,052
Parquet metadata OOM for tiiuae/falcon-refinedweb
Logs show nothing but the kube YAML of the pod shows OOMKilled. The job should be improved to be more memory efficient. This issue prevents us from having pagination for this dataset.
Parquet metadata OOM for tiiuae/falcon-refinedweb: Logs show nothing but the kube YAML of the pod shows OOMKilled. The job should be improved to be more memory efficient. This issue prevents us from having pagination for this dataset.
closed
2023-07-06T16:18:25Z
2023-07-13T16:57:19Z
2023-07-13T16:57:19Z
lhoestq
1,791,745,222
rollback: Exclude parquet volume from EFS
null
rollback: Exclude parquet volume from EFS:
closed
2023-07-06T15:05:34Z
2023-07-06T15:13:17Z
2023-07-06T15:13:13Z
AndreaFrancis
1,791,703,730
Fix - Call volumeCache
null
Fix - Call volumeCache:
closed
2023-07-06T14:42:06Z
2023-07-06T14:43:15Z
2023-07-06T14:43:13Z
AndreaFrancis
1,791,665,812
Fix volume refs in volumeMount
null
Fix volume refs in volumeMount:
closed
2023-07-06T14:21:06Z
2023-07-06T14:26:04Z
2023-07-06T14:26:02Z
AndreaFrancis
1,791,434,379
Delete `/config-names` endpoint
Part of https://github.com/huggingface/datasets-server/issues/1086
Delete `/config-names` endpoint: Part of https://github.com/huggingface/datasets-server/issues/1086
closed
2023-07-06T12:09:37Z
2023-07-07T13:44:24Z
2023-07-07T13:44:23Z
polinaeterna
1,790,007,590
Convert if too big row groups for copy
Before copying the parquet files I check that the row groups are not too big. Otherwise it can cause OOM for users that could like to use the parquet export, and also because it would make the dataset viewer too slow. To do that, I check the first row group of the first parquet files and check their size. If one ro...
Convert if too big row groups for copy: Before copying the parquet files I check that the row groups are not too big. Otherwise it can cause OOM for users that could like to use the parquet export, and also because it would make the dataset viewer too slow. To do that, I check the first row group of the first parque...
closed
2023-07-05T17:41:49Z
2023-07-06T16:12:28Z
2023-07-06T16:12:27Z
lhoestq
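The copy-or-convert decision described above boils down to a threshold check on the first row group's size. A sketch under stated assumptions: the 100 MiB default threshold and the function name are illustrative, not values from the PR:

```python
def should_copy_as_is(first_row_group_byte_size, max_row_group_byte_size=100 * 1024 * 1024):
    # Threshold value is an assumption for illustration; the real limit may
    # differ. A first row group above it forces rewriting the data with
    # smaller row groups instead of copying the source parquet files unchanged.
    return first_row_group_byte_size <= max_row_group_byte_size
```

Checking only the first row group of the first files keeps the probe cheap, at the cost of missing datasets whose later row groups are the oversized ones.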
1,789,913,319
feat: 🎸 reduce the number of workers
null
feat: 🎸 reduce the number of workers:
closed
2023-07-05T16:37:41Z
2023-07-05T16:38:16Z
2023-07-05T16:37:46Z
severo
1,789,898,795
Adding EFS volumes for cache, parquet and duckdb storage
Related to https://github.com/huggingface/datasets-server/issues/1407 Adding new volumes for: cache (datasets library), parquet and duckdb Based on https://github.com/huggingface/infra/pull/607, the PersistentVolumeClaims should be: - datasets-server-cache-pvc - datasets-server-parquet-pvc - datasets-server-duc...
Adding EFS volumes for cache, parquet and duckdb storage: Related to https://github.com/huggingface/datasets-server/issues/1407 Adding new volumes for: cache (datasets library), parquet and duckdb Based on https://github.com/huggingface/infra/pull/607, the PersistentVolumeClaims should be: - datasets-server-cache...
closed
2023-07-05T16:26:36Z
2023-07-06T14:04:46Z
2023-07-06T14:04:45Z
AndreaFrancis
1,789,874,274
feat: 🎸 increase resources to flush the jobs
null
feat: 🎸 increase resources to flush the jobs:
closed
2023-07-05T16:09:43Z
2023-07-05T16:10:16Z
2023-07-05T16:09:48Z
severo
1,789,859,611
feat: 🎸 avoid adding filters on difficulty when not needed
only add a filter if min > 0 or max < 100
feat: 🎸 avoid adding filters on difficulty when not needed: only add a filter if min > 0 or max < 100
closed
2023-07-05T16:02:10Z
2023-07-05T16:07:47Z
2023-07-05T16:07:45Z
severo
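The conditional filter described above (only constrain `difficulty` when the bounds exclude something) can be sketched as a small query builder. The `"status": "waiting"` clause and the function name are illustrative assumptions, not the actual implementation:

```python
def build_difficulty_filter(min_difficulty=0, max_difficulty=100):
    # "status": "waiting" is illustrative; the point is that bounds which
    # exclude nothing (min=0, max=100) add no difficulty clause at all,
    # keeping the default query simpler and cheaper for MongoDB.
    query = {"status": "waiting"}
    difficulty = {}
    if min_difficulty > 0:
        difficulty["$gte"] = min_difficulty
    if max_difficulty < 100:
        difficulty["$lte"] = max_difficulty
    if difficulty:
        query["difficulty"] = difficulty
    return query
```

With this shape, a light worker capped at difficulty 40 sends `{"difficulty": {"$lte": 40}}`, while a default worker sends no difficulty clause at all.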
1,789,808,996
fix: 🐛 ensure the env vars are int
note that the limits (min and max) will always be set in the mongo queries. I also added an index to make it quick but let's see if it works well.
fix: 🐛 ensure the env vars are int: note that the limits (min and max) will always be set in the mongo queries. I also added an index to make it quick but let's see if it works well.
closed
2023-07-05T15:32:01Z
2023-07-05T15:35:19Z
2023-07-05T15:35:17Z
severo
1,789,729,737
Use stream to parquet for slow parquet datasets
Use stream_to_parquet() for parquet datasets with too big row groups to rewrite the parquet data like https://huggingface.co/datasets/Open-Orca/OpenOrca in refs/convert/parquet
Use stream to parquet for slow parquet datasets: Use stream_to_parquet() for parquet datasets with too big row groups to rewrite the parquet data like https://huggingface.co/datasets/Open-Orca/OpenOrca in refs/convert/parquet
closed
2023-07-05T14:49:39Z
2023-07-06T16:12:28Z
2023-07-06T16:12:28Z
lhoestq
1,789,718,074
Don't run config-parquet-metadata in light workers
Because it causes an OOM for all the big parquet datasets like the-stack, refinedweb etc. causing their pagination to hang because it uses the parquet index without metadata which is too slow for big datasets
Don't run config-parquet-metadata in light workers: Because it causes an OOM for all the big parquet datasets like the-stack, refinedweb etc. causing their pagination to hang because it uses the parquet index without metadata which is too slow for big datasets
closed
2023-07-05T14:43:45Z
2023-07-05T14:51:03Z
2023-07-05T14:51:02Z
lhoestq
1,789,502,766
feat: 🎸 add "difficulty" field to JobDocument
Difficulty is an integer between 0 (easy) and 100 (hard). It aims at filtering the jobs in a specific worker deployment, i.e., light workers will only run jobs with difficulty <= 40. It should make the query to MongoDB quicker than the current one (a filter `type: {$in: ALLOW_LIST}`). See #1486. For now, all the jobs for...
feat: 🎸 add "difficulty" field to JobDocument: Difficulty is an integer between 0 (easy) and 100 (hard). It aims at filtering the jobs in a specific worker deployment, ie, light workers will only run jobs with difficulty <= 40. It should make the query to MongoDB quicker than currently (a filter `type: {$in: ALLOW_...
closed
2023-07-05T12:57:49Z
2023-07-05T15:06:01Z
2023-07-05T15:06:00Z
severo
1,789,378,839
Delete `/parquet-and-dataset-info` endpoint
part of https://github.com/huggingface/datasets-server/issues/1086
Delete `/parquet-and-dataset-info` endpoint: part of https://github.com/huggingface/datasets-server/issues/1086
closed
2023-07-05T11:43:48Z
2023-07-05T12:45:40Z
2023-07-05T12:45:38Z
polinaeterna
1,789,375,763
upgrade huggingface_hub to 0.16
Needed because we currently rely on a specific commit. Better to depend on a released version
upgrade huggingface_hub to 0.16: Needed because we currently rely on a specific commit. Better to depend on a released version
closed
2023-07-05T11:41:37Z
2023-07-25T16:13:40Z
2023-07-25T16:13:40Z
severo
1,789,346,549
feat: 🎸 query operation $in is faster than $nin
null
feat: 🎸 query operation $in is faster than $nin:
closed
2023-07-05T11:22:55Z
2023-07-05T11:23:44Z
2023-07-05T11:23:33Z
severo
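The change above amounts to inverting a `$nin` block-list into an `$in` allow-list, which MongoDB can usually serve more efficiently from an index. A hedged sketch with hypothetical job type names:

```python
def rewrite_nin_as_in(all_types, excluded_types):
    # Build the allow-list equivalent of {"type": {"$nin": excluded_types}}.
    # Sorting keeps the filter deterministic; the set difference removes
    # the excluded types from the full list of known job types.
    allowed = sorted(set(all_types) - set(excluded_types))
    return {"type": {"$in": allowed}}
```

This is only equivalent when `all_types` really covers every value present in the collection, so the full list of job types must be kept in sync with the code.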
1,789,326,871
More logging for /rows
I'd like to understand better why specific requests take so long (eg refinedweb, the-stack). Locally they work fine but take too much time in prod.
More logging for /rows: I'd like to understand better why specific requests take so long (eg refinedweb, the-stack). Locally they work fine but take too much time in prod.
closed
2023-07-05T11:10:08Z
2023-07-05T13:00:49Z
2023-07-05T13:00:48Z
lhoestq
1,788,259,401
feat: 🎸 increase RAM for /rows service
Is it the right value @lhoestq ?
feat: 🎸 increase RAM for /rows service: Is it the right value @lhoestq ?
closed
2023-07-04T17:21:55Z
2023-07-04T17:27:27Z
2023-07-04T17:27:26Z
severo
1,788,233,793
C4 pagination failing
current error is an ApiError ``` config size could not be parsed: ValidationError: "size.config.num_bytes_original_files" must be a number ```
C4 pagination failing: current error is an ApiError ``` config size could not be parsed: ValidationError: "size.config.num_bytes_original_files" must be a number ```
closed
2023-07-04T16:52:54Z
2023-07-18T19:01:32Z
2023-07-05T10:44:18Z
lhoestq
1,788,182,042
diagnose why the mongo server uses so much CPU
we have many alerts on the use of CPU on the mongo server. ``` System: CPU (User) % has gone above 95 ``` Why?
diagnose why the mongo server uses so much CPU: we have many alerts on the use of CPU on the mongo server. ``` System: CPU (User) % has gone above 95 ``` Why?
closed
2023-07-04T16:04:06Z
2024-02-06T14:49:20Z
2024-02-06T14:49:19Z
severo
1,788,051,826
chore: update pypdf dependency in worker
Should fix https://github.com/huggingface/datasets-server/security/dependabot/200. But I'm not sure where exactly we use this library; according to the PyPI page, PyPDF2 is now pypdf (not sure if this will break something): `NOTE: The PyPDF2 project is going back to its roots. PyPDF2==3.0.X will be the last version of PyPDF2. ...
chore: update pypdf dependency in worker: Should fix https://github.com/huggingface/datasets-server/security/dependabot/200. But I'm not sure where exactly we use this library; according to the PyPI page, PyPDF2 is now pypdf (not sure if this will break something): `NOTE: The PyPDF2 project is going back to its roots. PyPDF2==...
closed
2023-07-04T14:31:22Z
2023-07-05T15:18:30Z
2023-07-05T15:18:29Z
AndreaFrancis
1,788,045,153
Minor fix in update_last_modified_date_of_rows_in_assets_dir
FileNotFoundError can happen because of concurrent api calls, and we can ignore it (found this error while checking some logs today)
Minor fix in update_last_modified_date_of_rows_in_assets_dir: FileNotFoundError can happen because of concurrent api calls, and we can ignore it (found this error while checking some logs today)
closed
2023-07-04T14:28:32Z
2023-07-04T20:31:31Z
2023-07-04T20:31:30Z
lhoestq
1,787,964,180
Move the /rows endpoint to its own service
We create a new service, services/rows, which handles the /rows endpoint. services/api now serves the rest of the endpoints, but not /rows.
Move the /rows endpoint to its own service: We create a new service, services/rows, which handles the /rows endpoint. services/api now serves the rest of the endpoints, but not /rows.
closed
2023-07-04T13:44:24Z
2023-07-05T06:55:58Z
2023-07-04T16:26:15Z
severo
1,787,581,084
Optional num_bytes_original_files
Because `download_size` is `None` in `config-and-parquet-info` if the dataset is >5GB
Optional num_bytes_original_files: Because `download_size` is `None` in `config-and-parquet-info` if the dataset is >5GB
closed
2023-07-04T09:49:29Z
2023-07-04T12:30:10Z
2023-07-04T12:30:09Z
lhoestq
1,787,544,567
Unblock OSCAR
Now it can be converted to parquet (max 5GB) I'll manually refresh it
Unblock OSCAR: Now it can be converted to parquet (max 5GB) I'll manually refresh it
closed
2023-07-04T09:32:30Z
2023-07-04T09:38:23Z
2023-07-04T09:38:22Z
lhoestq
1,787,512,364
More workers
Following #1448 note that `cache-maintenance` took care of running a backfill for the datasets >5GB: ``` "DatasetTooBigFromDatasetsError,DatasetTooBigFromHubError,DatasetWithTooBigExternalFilesError,DatasetWithTooManyExternalFilesError" ```
More workers: Following #1448 note that `cache-maintenance` took care of running a backfill for the datasets >5GB: ``` "DatasetTooBigFromDatasetsError,DatasetTooBigFromHubError,DatasetWithTooBigExternalFilesError,DatasetWithTooManyExternalFilesError" ```
closed
2023-07-04T09:15:45Z
2023-07-04T09:38:15Z
2023-07-04T09:38:14Z
lhoestq
1,787,510,453
Create libapi
This PR prepares the creation of a new API service: services/rows
Create libapi: This PR prepares the creation of a new API service: services/rows
closed
2023-07-04T09:14:34Z
2023-07-04T13:32:20Z
2023-07-04T13:32:18Z
severo
1,786,901,380
split-duckdb-index config
Moving config to worker folder since it is not used from any other project. Also adding doc in the readme file.
split-duckdb-index config: Moving config to worker folder since it is not used from any other project. Also adding doc in the readme file.
closed
2023-07-03T22:55:48Z
2023-07-04T13:32:38Z
2023-07-04T13:32:37Z
AndreaFrancis
1,786,615,494
fix stream_convert_to_parquet for GeneratorBasedBuilder
got ``` "_prepare_split() missing 1 required positional argument: 'check_duplicate_keys'" ``` when converting C4 (config="en") This query should return the jobs to re-run: ``` {kind: "config-parquet-and-info", http_status: 500, "details.error": "_prepare_split() missing 1 required positional argument: 'c...
fix stream_convert_to_parquet for GeneratorBasedBuilder: got ``` "_prepare_split() missing 1 required positional argument: 'check_duplicate_keys'" ``` when converting C4 (config="en") This query should return the jobs to re-run: ``` {kind: "config-parquet-and-info", http_status: 500, "details.error": "_p...
closed
2023-07-03T18:10:44Z
2023-07-03T18:31:47Z
2023-07-03T18:31:46Z
lhoestq
1,786,521,126
How to show fan-in jobs' results in response ("pending" and "failed" keys)
In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key): ```python { "parquet_files": [ { "dataset": "duorc", "config": "ParaphraseRC", "split": "test", ...
How to show fan-in jobs' results in response ("pending" and "failed" keys): In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key): ```python { "parquet_files": [ { "dataset": "du...
open
2023-07-03T16:49:10Z
2023-08-11T15:26:24Z
null
polinaeterna
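A minimal sketch of how a fan-in aggregation could collect the `pending` and `failed` keys described above. The function and field names are illustrative, not the actual implementation; it assumes one cache entry per config, `None` when not yet computed:

```python
def aggregate_fanin(config_entries):
    # config_entries: config name -> cache entry (None if not computed yet,
    # an entry with an "error" key if the config-level job failed).
    merged, pending, failed = [], [], []
    for config, entry in config_entries.items():
        if entry is None:
            pending.append(config)
        elif entry.get("error"):
            failed.append(config)
        else:
            merged.extend(entry["parquet_files"])
    return {"parquet_files": merged, "pending": pending, "failed": failed}
```

Exposing `pending`/`failed` this way lets clients tell a partial response apart from a complete one, which is the usability question the issue raises.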
1,786,497,393
Add partial to subsequent parquet-and-info jobs
Following #1448 we need all subsequent jobs to `config-parquet-and-info` to have the "partial" field: - "config-parquet-and-info" - "config-parquet" - "dataset-parquet" - "config-parquet-metadata" - "config-info" - "dataset-info" For dataset level jobs, "partial" is True if there is at least one config with ...
Add partial to subsequent parquet-and-info jobs: Following #1448 we need all subsequent jobs to `config-parquet-and-info` to have the "partial" field: - "config-parquet-and-info" - "config-parquet" - "dataset-parquet" - "config-parquet-metadata" - "config-info" - "dataset-info" For dataset level jobs, "parti...
closed
2023-07-03T16:30:59Z
2023-07-03T17:20:31Z
2023-07-03T17:20:29Z
lhoestq
1,786,295,177
feat: 🎸 use Normal priority only for API
webhook and jobs created when a requested cache entry is missing are the only ones with Priority.NORMAL. All the jobs created by administrative tasks (/force-refresh, /backfill, backfill job...) will use Priority.LOW.
feat: 🎸 use Normal priority only for API: webhook and jobs created when a requested cache entry is missing are the only ones with Priority.NORMAL. All the jobs created by administrative tasks (/force-refresh, /backfill, backfill job...) will use Priority.LOW.
closed
2023-07-03T14:28:50Z
2023-07-03T15:26:56Z
2023-07-03T15:26:55Z
severo
1,786,026,117
feat: 🎸 backfill cache entries older than 90 days
See https://github.com/huggingface/datasets-server/issues/1219. The idea is to have a general limitation on the duration of the cache. It will make it easier to delete unused resources (assets, cached assets, etc) later: everything older than 120 days (eg) can be deleted.
feat: 🎸 backfill cache entries older than 90 days: See https://github.com/huggingface/datasets-server/issues/1219. The idea is to have a general limitation on the duration of the cache. It will make it easier to delete unused resources (assets, cached assets, etc) later: everything older than 120 days (eg) can be d...
closed
2023-07-03T11:55:11Z
2023-07-03T15:55:06Z
2023-07-03T15:55:05Z
severo
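The 90-day staleness check described above reduces to a date comparison; a sketch with an assumed helper name:

```python
from datetime import datetime, timedelta

def is_stale(updated_at, now, max_age_days=90):
    # Entries older than max_age_days are candidates for backfilling; a
    # longer cutoff (e.g. 120 days, as the description suggests) could then
    # safely drive deletion of unused assets and cached assets.
    return now - updated_at > timedelta(days=max_age_days)
```

Passing `now` explicitly keeps the check deterministic and easy to test.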
1,786,014,299
Rename `/dataset-info` endpoint to `/info`
Question: do we want to show results of steps that have `pending` and `failed` keys? I assume it might not be clear for users what that means; they also sound a bit too technical. Should we explain in the docs, or just not allow access to the dataset-level aggregations (but if so, why do we even need these cache entri...
Rename `/dataset-info` endpoint to `/info`: Question: do we want to show results of steps that have `pending` and `failed` keys? I assume it might not be clear for users what that means; they also sound a bit too technical. Should we explain in the docs, or just not allow access to the dataset-level aggregations (but ...
closed
2023-07-03T11:48:06Z
2023-07-03T17:11:35Z
2023-07-03T17:11:05Z
polinaeterna
1,785,704,254
Some jobs have a "finished_at" date, but are still started or waiting
``` db.jobsBlue.count({"finished_at": {"$exists": true}, "status": {"$nin": ["success", "error", "cancelled"]}}) 24 ``` For example: ``` { _id: ObjectId("649f417b849c36335817cfa7"), type: 'dataset-size', dataset: 'knowrohit07/know_cot', revision: 'f89e138e31115fd5b144aa0c52888316e710f752', uni...
Some jobs have a "finished_at" date, but are still started or waiting: ``` db.jobsBlue.count({"finished_at": {"$exists": true}, "status": {"$nin": ["success", "error", "cancelled"]}}) 24 ``` For example: ``` { _id: ObjectId("649f417b849c36335817cfa7"), type: 'dataset-size', dataset: 'knowrohit07/kno...
closed
2023-07-03T09:05:37Z
2023-08-29T14:07:03Z
2023-08-29T14:07:03Z
severo
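The inconsistency described above can be expressed as a reusable filter document, the Python counterpart of the mongo shell query in the issue body:

```python
FINAL_STATUSES = ("success", "error", "cancelled")

# Filter matching the inconsistent jobs: "finished_at" is set, yet the
# status is not one of the final ones. Usable as-is with pymongo's
# count_documents() or find() on the jobs collection.
INCONSISTENT_JOBS_FILTER = {
    "finished_at": {"$exists": True},
    "status": {"$nin": list(FINAL_STATUSES)},
}
```

Keeping the filter as a shared constant makes it easy to both monitor the anomaly and later fix the affected jobs with a single `update_many`.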