id
int64
959M
2.55B
title
stringlengths
3
133
body
stringlengths
1
65.5k
βŒ€
description
stringlengths
5
65.6k
state
stringclasses
2 values
created_at
stringlengths
20
20
updated_at
stringlengths
20
20
closed_at
stringlengths
20
20
βŒ€
user
stringclasses
174 values
1,568,224,345
Locally use volumes for workers code
This way in a local dev env, one can restart a single worker container to apply their code changes instead of running `make stop` and `make start` again
Locally use volumes for workers code: This way in a local dev env, one can restart a single worker container to apply their code changes instead of running `make stop` and `make start` again
closed
2023-02-02T15:05:51Z
2023-02-03T11:09:16Z
2023-02-03T11:09:14Z
lhoestq
1,568,158,540
/sizes doesn't work in local dev env
it always returns ```json { "error": "The response is not ready yet. Please retry later." } ```
/sizes doesn't work in local dev env : it always returns ```json { "error": "The response is not ready yet. Please retry later." } ```
closed
2023-02-02T14:26:29Z
2023-02-06T13:56:40Z
2023-02-06T13:56:40Z
lhoestq
1,568,143,315
Calling /api endpoints create unnecessary jobs
For example calling /config-names does create a /config-names job even though the result is cached. The job is then skipped since the result is cached.
Calling /api endpoints create unnecessary jobs: For example calling /config-names does create a /config-names job even though the result is cached. The job is then skipped since the result is cached.
closed
2023-02-02T14:17:23Z
2023-02-14T16:17:53Z
2023-02-14T16:17:53Z
lhoestq
1,567,772,063
Check if the database exists/is accessible in the migration job
In some cases, the migration job can be run (in k8s) while the mongo database does not exist. In that case, it fails and the install/upgrade is blocked. As the database does not need to be migrated if it does not exist yet, we should test if we can access the database, and if not, return with a success without doing...
Check if the database exists/is accessible in the migration job: In some cases, the migration job can be run (in k8s) while the mongo database does not exist. In that case, it fails and the install/upgrade is blocked. As the database does not need to be migrated if it does not exist yet, we should test if we can acc...
closed
2023-02-02T10:30:08Z
2023-02-10T17:20:42Z
2023-02-10T17:20:42Z
severo
1,567,764,415
Handle the case where the DatasetInfo is too big
In the /parquet-and-dataset-info processing step, if DatasetInfo is over 16MB, we will not be able to store it in MongoDB (https://pymongo.readthedocs.io/en/stable/api/pymongo/errors.html#pymongo.errors.DocumentTooLarge). We have to handle this case, and return a clear error to the user. See https://huggingface.slac...
Handle the case where the DatasetInfo is too big: In the /parquet-and-dataset-info processing step, if DatasetInfo is over 16MB, we will not be able to store it in MongoDB (https://pymongo.readthedocs.io/en/stable/api/pymongo/errors.html#pymongo.errors.DocumentTooLarge). We have to handle this case, and return a clear ...
closed
2023-02-02T10:25:19Z
2023-02-13T13:48:06Z
2023-02-13T13:48:05Z
severo
1,567,611,663
update the logic to skip a job
Instead of retrying for any non-successful response in the cache, we only retry if the error is in the list of "retry-able" errors. Also: refactor the logic and add complete tests
update the logic to skip a job: Instead of retrying for any non-successful response in the cache, we only retry if the error is in the list of "retry-able" errors. Also: refactor the logic and add complete tests
closed
2023-02-02T09:02:44Z
2023-02-02T12:55:58Z
2023-02-02T12:55:07Z
severo
1,566,555,843
Add refresh dataset ui
Allow to force refresh some datasets <img width="1440" alt="image" src="https://user-images.githubusercontent.com/42851186/216124200-59076b70-b910-4242-964d-5e528c7196d0.png">
Add refresh dataset ui: Allow to force refresh some datasets <img width="1440" alt="image" src="https://user-images.githubusercontent.com/42851186/216124200-59076b70-b910-4242-964d-5e528c7196d0.png">
closed
2023-02-01T17:56:38Z
2023-02-02T19:41:19Z
2023-02-02T19:41:18Z
lhoestq
1,566,447,700
test: πŸ’ ensure the database is ready in the tests
add a dependency to the app_config fixture to be sure to have access to the database when running an individual test, with `TEST_PATH="tests/test_worker.py" make test`
test: πŸ’ ensure the database is ready in the tests: add a dependency to the app_config fixture to be sure to have access to the database when running an individual test, with `TEST_PATH="tests/test_worker.py" make test`
closed
2023-02-01T16:43:23Z
2023-02-02T09:01:19Z
2023-02-02T09:01:17Z
severo
1,566,352,167
ci: 🎑 only run on PR and on main
currently, the actions are run twice in the PRs. See https://github.com/huggingface/datasets-server/pull/757/commits/c80eeacfdb2839149ad8b7b81cdaf6b0b4fcb944 for example.
ci: 🎑 only run on PR and on main: currently, the actions are run twice in the PRs. See https://github.com/huggingface/datasets-server/pull/757/commits/c80eeacfdb2839149ad8b7b81cdaf6b0b4fcb944 for example.
closed
2023-02-01T15:46:20Z
2023-02-02T09:03:20Z
2023-02-02T09:03:18Z
severo
1,566,340,449
refactor: πŸ’‘ remove dead code
null
refactor: πŸ’‘ remove dead code:
closed
2023-02-01T15:39:02Z
2023-02-01T15:47:26Z
2023-02-01T15:47:24Z
severo
1,566,326,962
Skip the job depending on the type of error
Currently, we never skip a job if the previous run has returned an error. But it should be conditional: only retry for an allowlist of errors (e.g., ConnectionError), not the other way: for the same commit, we will get the same result and just lose time and resources. Here: https://github.com/huggingface/datasets-se...
Skip the job depending on the type of error: Currently, we never skip a job if the previous run has returned an error. But it should be conditional: only retry for an allowlist of errors (e.g., ConnectionError), not the other way: for the same commit, we will get the same result and just lose time and resources. Her...
closed
2023-02-01T15:30:31Z
2023-02-13T11:15:56Z
2023-02-13T11:15:56Z
severo
1,566,302,997
Fill /split-names and /first-rows from /parquet-and-dataset-info if needed
/split-names and /first-rows are created using streaming. They might fail for this reason while /parquet-and-dataset-info works for the same dataset. In this case, we should fill /split-names (from /dataset-info) and /first-rows (from /parquet). Note that we would generate a concurrence between two execution path...
Fill /split-names and /first-rows from /parquet-and-dataset-info if needed: /split-names and /first-rows are created using streaming. They might fail for this reason while /parquet-and-dataset-info works for the same dataset. In this case, we should fill /split-names (from /dataset-info) and /first-rows (from /parqu...
closed
2023-02-01T15:16:30Z
2023-04-03T16:15:41Z
2023-04-03T16:08:30Z
severo
1,566,250,070
Improve error messages
Related to: - #745 - #718
Improve error messages: Related to: - #745 - #718
closed
2023-02-01T14:48:56Z
2023-02-24T13:24:53Z
2023-02-24T13:22:29Z
albertvillanova
1,566,199,878
Delete extracted the downloaded files just after being extracted
``` from datasets import load_dataset, DownloadConfig download_config = DownloadConfig(delete_extracted=True) dataset = load_dataset("./codeparrot", split="train", download_config=download_config) ``` internal ref: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675260906599029?thread_ts=1675260896.849119...
Delete extracted the downloaded files just after being extracted: ``` from datasets import load_dataset, DownloadConfig download_config = DownloadConfig(delete_extracted=True) dataset = load_dataset("./codeparrot", split="train", download_config=download_config) ``` internal ref: https://huggingface.slack.com/...
closed
2023-02-01T14:17:22Z
2023-02-22T18:49:26Z
2023-02-22T18:49:26Z
severo
1,565,049,472
remove docker-images.yaml, and fix dev.yaml
null
remove docker-images.yaml, and fix dev.yaml:
closed
2023-01-31T21:48:22Z
2023-02-01T10:09:14Z
2023-02-01T10:09:13Z
severo
1,565,030,653
Refactor to have only one app accessing a mongodb database
As assessed by @rtrompier, it's generally a bad idea to have multiple apps accessing the same database concurrently. Ideally, one app should have access to the database, and expose a REST API for the other apps.
Refactor to have only one app accessing a mongodb database : As assessed by @rtrompier, it's generally a bad idea to have multiple apps accessing the same database concurrently. Ideally, one app should have access to the database, and expose a REST API for the other apps.
closed
2023-01-31T21:29:42Z
2023-04-06T15:04:08Z
2023-04-06T15:04:08Z
severo
1,564,737,479
Allow list some datasets for some jobs
This way when adding a new job we can apply it on a small amount of datasets, before spending too much time making it work for all the datasets
Allow list some datasets for some jobs: This way when adding a new job we can apply it on a small amount of datasets, before spending too much time making it work for all the datasets
closed
2023-01-31T17:34:00Z
2023-04-08T15:04:13Z
2023-04-08T15:04:13Z
lhoestq
1,564,688,334
Adding custom exception when cache insert fails because of too many columns
Adding custom exception to fix scenarios like https://github.com/huggingface/datasets-server/issues/731
Adding custom exception when cache insert fails because of too many columns: Adding custom exception to fix scenarios like https://github.com/huggingface/datasets-server/issues/731
closed
2023-01-31T17:00:08Z
2023-02-03T09:49:20Z
2023-02-02T18:18:33Z
AndreaFrancis
1,564,628,030
feat: 🎸 update docker images
null
feat: 🎸 update docker images:
closed
2023-01-31T16:21:45Z
2023-01-31T16:25:02Z
2023-01-31T16:25:00Z
severo
1,564,547,805
fix: πŸ› fix the migration scripts to be able to run on new base
fixes #744
fix: πŸ› fix the migration scripts to be able to run on new base: fixes #744
closed
2023-01-31T15:32:24Z
2023-01-31T16:20:52Z
2023-01-31T15:36:42Z
severo
1,564,545,755
Add HF_TOKEN env var for admin ui
this way we can deploy in a private Space and don't ask for the token
Add HF_TOKEN env var for admin ui: this way we can deploy in a private Space and don't ask for the token
closed
2023-01-31T15:31:03Z
2023-01-31T15:40:02Z
2023-01-31T15:40:00Z
lhoestq
1,564,494,320
Improve the error messages in the dataset viewer
The error messages are shown in the dataset viewer. We should try to improve them: - help the user understand what is going on - help the user fix the issue - show that we understand what is going on, and that it's not a bug. See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1675171833108199 (internal) and ...
Improve the error messages in the dataset viewer: The error messages are shown in the dataset viewer. We should try to improve them: - help the user understand what is going on - help the user fix the issue - show that we understand what is going on, and that it's not a bug. See https://huggingface.slack.com/arch...
closed
2023-01-31T15:07:54Z
2023-04-06T15:04:10Z
2023-04-06T15:04:10Z
severo
1,564,463,708
Migration job fails on a new database
When run on a new database, where the migrations have never been run, the migration script that adds a "force" field fails with: ``` ERROR: 2023-01-31 14:36:28,825 - root - Migration failed: The fields "{'priority'}" do not exist on the document "JobSnapshot" ``` Reported by @rtrompier
Migration job fails on a new database: When run on a new database, where the migrations have never been run, the migration script that adds a "force" field fails with: ``` ERROR: 2023-01-31 14:36:28,825 - root - Migration failed: The fields "{'priority'}" do not exist on the document "JobSnapshot" ``` Reported ...
closed
2023-01-31T14:52:11Z
2023-01-31T15:36:44Z
2023-01-31T15:36:44Z
severo
1,564,459,321
fix: πŸ› disable the mongodbMigration job for now
it breaks on a new database. Disabling it until we fix the issue.
fix: πŸ› disable the mongodbMigration job for now: it breaks on a new database. Disabling it until we fix the issue.
closed
2023-01-31T14:49:41Z
2023-01-31T14:53:10Z
2023-01-31T14:53:09Z
severo
1,564,452,845
fix admin ui requirements.txt
null
fix admin ui requirements.txt:
closed
2023-01-31T14:46:03Z
2023-01-31T15:26:38Z
2023-01-31T15:26:37Z
lhoestq
1,564,325,655
Detect the "zombie" jobs, and kill them 🧟 πŸ”«
Sometimes the pods crash: ``` prod-datasets-server-worker-first-rows-8579994756-vgpkg 0/1 OutOfmemory 0 92m prod-datasets-server-worker-first-rows-8579994756-vmvk7 0/1 OutOfmemory 0 92m prod-datasets-server-worker-first-rows-8579994...
Detect the "zombie" jobs, and kill them 🧟 πŸ”«: Sometimes the pods crash: ``` prod-datasets-server-worker-first-rows-8579994756-vgpkg 0/1 OutOfmemory 0 92m prod-datasets-server-worker-first-rows-8579994756-vmvk7 0/1 OutOfmemory 0 92m ...
closed
2023-01-31T13:37:32Z
2023-02-17T18:22:49Z
2023-02-17T18:15:07Z
severo
1,564,307,540
Convert the /backfill endpoint to a kubernetes Job run periodically
See https://github.com/huggingface/datasets-server/pull/708#issuecomment-1406319901 and following comments. Maybe we should not have this endpoint as such. See the related issue: https://github.com/huggingface/datasets-server/issues/736. Maybe #736 should be fixed first, then see how we implement a backfill trigger.
Convert the /backfill endpoint to a kubernetes Job run periodically: See https://github.com/huggingface/datasets-server/pull/708#issuecomment-1406319901 and following comments. Maybe we should not have this endpoint as such. See the related issue: https://github.com/huggingface/datasets-server/issues/736. Maybe #736...
closed
2023-01-31T13:25:52Z
2023-05-09T07:50:12Z
2023-05-09T07:50:12Z
severo
1,564,230,254
feat: merge helm lint action with publish
null
feat: merge helm lint action with publish:
closed
2023-01-31T12:35:16Z
2023-01-31T12:44:24Z
2023-01-31T12:44:23Z
rtrompier
1,564,209,389
fix: remove mongo migration job execution on pre-install hook
null
fix: remove mongo migration job execution on pre-install hook:
closed
2023-01-31T12:23:09Z
2023-01-31T12:28:15Z
2023-01-31T12:28:14Z
rtrompier
1,564,189,257
Use a generic worker
All the workers use the same codebase, but we assign only one processing step to each worker at startup. We could instead allow a worker to handle all (or part of) the processing steps and let the queue manager give it the most adequate job. By the way, the current way of doing is a particular case, where the set of...
Use a generic worker: All the workers use the same codebase, but we assign only one processing step to each worker at startup. We could instead allow a worker to handle all (or part of) the processing steps and let the queue manager give it the most adequate job. By the way, the current way of doing is a particular ...
closed
2023-01-31T12:10:06Z
2023-02-13T15:29:25Z
2023-02-13T15:29:25Z
severo
1,564,140,661
Avoid creating jobs as much as possible
We are often facing a full queue that blocks the viewer on the Hub from being available (https://github.com/huggingface/datasets-server/issues/725, https://github.com/huggingface/datasets-server/issues/704, etc.) It's generally the /first-rows queue. There are several ideas to improve this. One of them is to dete...
Avoid creating jobs as much as possible: We are often facing a full queue that blocks the viewer on the Hub from being available (https://github.com/huggingface/datasets-server/issues/725, https://github.com/huggingface/datasets-server/issues/704, etc.) It's generally the /first-rows queue. There are several idea...
closed
2023-01-31T11:33:50Z
2023-02-14T15:07:06Z
2023-02-14T15:07:05Z
severo
1,564,130,989
Change /parquet-and-dataset-info processing step
For now, /parquet-and-dataset-info is run for a dataset. But as it's a loop on all the configs, we could instead decide to turn it into a "config" job, ie. make it depend on the result of /config-names, and launch a job for each of the configs. This means doing the same for the dependent processing steps: /dataset-i...
Change /parquet-and-dataset-info processing step: For now, /parquet-and-dataset-info is run for a dataset. But as it's a loop on all the configs, we could instead decide to turn it into a "config" job, ie. make it depend on the result of /config-names, and launch a job for each of the configs. This means doing the s...
closed
2023-01-31T11:26:18Z
2023-05-09T12:34:18Z
2023-05-09T12:34:18Z
severo
1,564,093,033
Modify /splits worker
The /splits endpoint is now redundant with the /config-names and the /split-names endpoints and can be computed from them. One benefit is getting a list of configs and splits, even if some configs are erroneous. See #208 and #701. We want to keep the /splits endpoint (used by the Hub, for example) but compute it fr...
Modify /splits worker: The /splits endpoint is now redundant with the /config-names and the /split-names endpoints and can be computed from them. One benefit is getting a list of configs and splits, even if some configs are erroneous. See #208 and #701. We want to keep the /splits endpoint (used by the Hub, for exa...
closed
2023-01-31T11:02:22Z
2023-04-10T12:17:57Z
2023-04-10T12:17:56Z
severo
1,563,909,876
feat: 🎸 adapt number of replicas to flush the queues
<img width="394" alt="Capture d’écran 2023-01-31 aΜ€ 09 59 04" src="https://user-images.githubusercontent.com/1676121/215714662-1ede53b7-f50f-4a10-a367-115d49728e9b.png"> Current status: https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-15m&to=now&refresh=1m&var-slo_quantile_99_5=0.995&var-...
feat: 🎸 adapt number of replicas to flush the queues: <img width="394" alt="Capture d’écran 2023-01-31 aΜ€ 09 59 04" src="https://user-images.githubusercontent.com/1676121/215714662-1ede53b7-f50f-4a10-a367-115d49728e9b.png"> Current status: https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now...
closed
2023-01-31T08:59:53Z
2023-01-31T09:32:01Z
2023-01-31T09:31:59Z
severo
1,563,027,428
Add gradio admin interface
A simple Gradio app to show the pending jobs. Contrary to the observable notebook, this can run locally and connect to your local dev environment. I added a feature to see the number of jobs in the queue, and a simple SQL query feature that uses DuckDB. The app can be extended to support more admin features e....
Add gradio admin interface: A simple Gradio app to show the pending jobs. Contrary to the observable notebook, this can run locally and connect to your local dev environment. I added a feature to see the number of jobs in the queue, and a simple SQL query feature that uses DuckDB. The app can be extended to su...
closed
2023-01-30T19:02:36Z
2023-01-31T14:28:56Z
2023-01-31T14:28:55Z
lhoestq
1,563,010,995
Dataset Viewer issue for jonas/undp_jobs_raw
### Link https://huggingface.co/datasets/jonas/undp_jobs_raw ### Description When going to the preview panel, this message is shown: ``` 'update' command document too large Error code: UnexpectedError ``` Other datasets with same issue: - SamAct/medium_cleaned split train - grasshoff--lhc_sents split tr...
Dataset Viewer issue for jonas/undp_jobs_raw: ### Link https://huggingface.co/datasets/jonas/undp_jobs_raw ### Description When going to the preview panel, this message is shown: ``` 'update' command document too large Error code: UnexpectedError ``` Other datasets with same issue: - SamAct/medium_cleane...
closed
2023-01-30T18:48:56Z
2023-02-08T18:56:43Z
2023-02-08T18:56:42Z
AndreaFrancis
1,562,941,763
fix: πŸ› fix two labels
null
fix: πŸ› fix two labels:
closed
2023-01-30T18:13:26Z
2023-01-30T18:23:56Z
2023-01-30T18:23:55Z
severo
1,562,784,943
feat: publish helm chart on HF internal registry
null
feat: publish helm chart on HF internal registry:
closed
2023-01-30T16:37:33Z
2023-01-30T16:44:51Z
2023-01-30T16:44:50Z
rtrompier
1,562,685,337
feat: 🎸 add indexes, based on recommendations from mongo cloud
null
feat: 🎸 add indexes, based on recommendations from mongo cloud:
closed
2023-01-30T15:39:48Z
2023-01-31T12:31:49Z
2023-01-31T12:23:36Z
severo
1,562,545,783
Dataset Viewer issue for jmdu/summ-without-dialogs-t5
### Link https://huggingface.co/datasets/jmdu/summ-without-dialogs-t5 ### Description The dataset viewer is not working for dataset jmdu/summ-without-dialogs-t5. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for jmdu/summ-without-dialogs-t5: ### Link https://huggingface.co/datasets/jmdu/summ-without-dialogs-t5 ### Description The dataset viewer is not working for dataset jmdu/summ-without-dialogs-t5. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-30T14:23:48Z
2023-01-31T09:45:19Z
2023-01-31T09:45:19Z
jmdu99
1,562,544,141
Dataset Viewer issue for jmdu/summ-with-dialogs-t5
### Link https://huggingface.co/datasets/jmdu/summ-with-dialogs-t5 ### Description The dataset viewer is not working for dataset jmdu/summ-with-dialogs-t5. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for jmdu/summ-with-dialogs-t5: ### Link https://huggingface.co/datasets/jmdu/summ-with-dialogs-t5 ### Description The dataset viewer is not working for dataset jmdu/summ-with-dialogs-t5. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-30T14:23:11Z
2023-01-31T09:45:50Z
2023-01-31T09:45:50Z
jmdu99
1,562,537,122
Dataset Viewer issue for tkurtulus/thycomments
### Link https://huggingface.co/datasets/tkurtulus/thycomments ### Description The dataset viewer is not working for dataset tkurtulus/thycomments. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for tkurtulus/thycomments: ### Link https://huggingface.co/datasets/tkurtulus/thycomments ### Description The dataset viewer is not working for dataset tkurtulus/thycomments. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-30T14:19:20Z
2023-02-03T10:17:14Z
2023-02-03T10:17:13Z
tolgakurtuluss
1,562,524,965
Simplify the deployment to kubernetes
For now, there is no way to know if a job that has been started is still alive or if it has crashed silently (if the pod had an OOM error, for example). When deploying a new version, it's annoying because we have to first stop all the workers (replicas=0), then cancel the jobs (STARTED => WAITING), then deploy. I...
Simplify the deployment to kubernetes: For now, there is no way to know if a job that has been started is still alive or if it has crashed silently (if the pod had an OOM error, for example). When deploying a new version, it's annoying because we have to first stop all the workers (replicas=0), then cancel the jobs ...
closed
2023-01-30T14:12:21Z
2023-02-13T11:19:40Z
2023-02-13T11:19:40Z
severo
1,562,464,777
feat: 🎸 update docker images
null
feat: 🎸 update docker images:
closed
2023-01-30T13:35:02Z
2023-01-30T13:46:19Z
2023-01-30T13:46:17Z
severo
1,562,446,861
fix: πŸ› add a missing default value for org name in admin/
Thanks @lhoestq for spotting the bug
fix: πŸ› add a missing default value for org name in admin/: Thanks @lhoestq for spotting the bug
closed
2023-01-30T13:24:31Z
2023-01-30T13:46:30Z
2023-01-30T13:46:29Z
severo
1,562,260,708
Allow codecov update to fail
null
Allow codecov update to fail:
closed
2023-01-30T11:26:33Z
2023-01-30T11:29:38Z
2023-01-30T11:29:37Z
severo
1,562,221,443
fix: πŸ› don't check if dataset is supported when we know it is
As we are running a loop of updates on supported datasets, it's useless to check if the dataset is supported inside the `update_dataset` method.
fix: πŸ› don't check if dataset is supported when we know it is: As we are running a loop of updates on supported datasets, it's useless to check if the dataset is supported inside the `update_dataset` method.
closed
2023-01-30T11:06:49Z
2023-01-30T13:28:14Z
2023-01-30T13:28:13Z
severo
1,561,954,470
Refactoring for Private hub
null
Refactoring for Private hub :
closed
2023-01-30T08:18:13Z
2023-01-30T15:03:40Z
2023-01-30T15:03:39Z
rtrompier
1,561,020,630
Dataset Viewer issue for rfernand/basic_sentence_transforms
### Link https://huggingface.co/datasets/rfernand/basic_sentence_transforms ### Description The dataset viewer is not working for dataset rfernand/basic_sentence_transforms. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for rfernand/basic_sentence_transforms: ### Link https://huggingface.co/datasets/rfernand/basic_sentence_transforms ### Description The dataset viewer is not working for dataset rfernand/basic_sentence_transforms. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-28T21:30:01Z
2023-02-28T15:42:37Z
2023-02-28T15:42:36Z
rfernand2
1,560,513,742
ci: 🎑 build and push the docker images only on push to main
See #712
ci: 🎑 build and push the docker images only on push to main: See #712
closed
2023-01-27T22:32:23Z
2023-01-27T22:32:40Z
2023-01-27T22:32:39Z
severo
1,559,882,783
ci: 🎑 build the images before running the e2e tests
See https://github.com/huggingface/datasets-server/issues/712#issuecomment-1406530448. It removes the need to edit the chart/docker-images.yaml file.
ci: 🎑 build the images before running the e2e tests: See https://github.com/huggingface/datasets-server/issues/712#issuecomment-1406530448. It removes the need to edit the chart/docker-images.yaml file.
closed
2023-01-27T14:43:12Z
2023-01-27T22:21:05Z
2023-01-27T22:21:03Z
severo
1,559,839,662
Update datasets to 2.9.0
Close #709.
Update datasets to 2.9.0: Close #709.
closed
2023-01-27T14:14:42Z
2023-01-30T09:14:26Z
2023-01-30T09:14:25Z
albertvillanova
1,559,679,277
Update poetry lock file format to 2.0
This PR locks poetry files with format 2.0. Close #710, close #711. Supersede and duplicate (but not in fork) of: - #711
Update poetry lock file format to 2.0: This PR locks poetry files with format 2.0. Close #710, close #711. Supersede and duplicate (but not in fork) of: - #711
closed
2023-01-27T12:24:06Z
2023-01-27T13:52:00Z
2023-01-27T13:48:56Z
albertvillanova
1,559,519,260
Trigger CI by PRs from forks
Fix #712. @severo could you please check we have all the required workflows (and no more than the required ones) to be triggered by PRs from forks?
Trigger CI by PRs from forks: Fix #712. @severo could you please check we have all the required workflows (and no more than the required ones) to be triggered by PRs from forks?
closed
2023-01-27T10:29:09Z
2023-01-30T13:38:20Z
2023-01-30T13:38:20Z
albertvillanova
1,559,497,733
Fix CI for PR from a fork
The CI does not run properly when a PR is made from a fork.
Fix CI for PR from a fork: The CI does not run properly when a PR is made from a fork.
closed
2023-01-27T10:14:01Z
2023-01-30T13:38:21Z
2023-01-30T13:38:21Z
albertvillanova
1,559,409,605
Update poetry lock file format to 2.0
This PR locks poetry files with format 2.0. Close #710.
Update poetry lock file format to 2.0: This PR locks poetry files with format 2.0. Close #710.
closed
2023-01-27T09:10:18Z
2023-01-27T13:37:32Z
2023-01-27T13:34:59Z
albertvillanova
1,559,343,044
Update poetry lock file format to 2.0
Since `poetry` 1.3.0 (9 Dec 2022), a new lock file format is used (version 2.0). See release notes: https://github.com/python-poetry/poetry/releases/tag/1.3.0 We should update our poetry lock files to the new format.
Update poetry lock file format to 2.0: Since `poetry` 1.3.0 (9 Dec 2022), a new lock file format is used (version 2.0). See release notes: https://github.com/python-poetry/poetry/releases/tag/1.3.0 We should update our poetry lock files to the new format.
closed
2023-01-27T08:11:36Z
2023-01-27T13:48:58Z
2023-01-27T13:48:57Z
albertvillanova
1,559,320,736
Update datasets to 2.9.0
After 2.9.0 `datasets` release, update dependencies on it.
Update datasets to 2.9.0: After 2.9.0 `datasets` release, update dependencies on it.
closed
2023-01-27T07:50:55Z
2023-01-30T09:14:26Z
2023-01-30T09:14:26Z
albertvillanova
1,558,682,470
feat: 🎸 add a /backfill admin endpoint
The logic is very basic: it updates all the datasets of the Hub, with a low priority. Note that most of the jobs will be skipped, because the response will already be in the cache. We might want to take a more detailed approach later to reduce the number of unnecessary jobs by specifically creating jobs for the miss...
feat: 🎸 add a /backfill admin endpoint: The logic is very basic: it updates all the datasets of the Hub, with a low priority. Note that most of the jobs will be skipped, because the response will already be in the cache. We might want to take a more detailed approach later to reduce the number of unnecessary jobs b...
closed
2023-01-26T19:45:15Z
2023-01-30T13:48:13Z
2023-01-27T10:20:53Z
severo
1,558,594,213
fix: πŸ› fix migration script
See https://github.com/huggingface/datasets-server/pull/705#issuecomment-1405433714
fix: πŸ› fix migration script: See https://github.com/huggingface/datasets-server/pull/705#issuecomment-1405433714
closed
2023-01-26T18:38:59Z
2023-01-26T19:10:39Z
2023-01-26T18:46:04Z
severo
1,558,464,920
feat: 🎸 make /first-rows depend on /split-names, not /splits
`/splits` fails if any config is broken. By depending on `/split-names`, the working configs will have `/first-rows`. Follow-up to #702
feat: 🎸 make /first-rows depend on /split-names, not /splits: `/splits` fails if any config is broken. By depending on `/split-names`, the working configs will have `/first-rows`. Follow-up to #702
closed
2023-01-26T16:58:15Z
2023-01-26T17:52:23Z
2023-01-26T17:52:21Z
severo
1,558,382,002
Add priority field to queue
In this PR: - the jobs have a new field called `priority`: `normal` (default) or `low` - the `normal` jobs are fetched before the `low` ones (but respecting `max_jobs_per_namespace`) - when updating a job, i.e., if a dataset has been updated again, the priority is inherited, never lowered, even if the priority field...
Add priority field to queue: In this PR: - the jobs have a new field called `priority`: `normal` (default) or `low` - the `normal` jobs are fetched before the `low` ones (but respecting `max_jobs_per_namespace`) - when updating a job, i.e., if a dataset has been updated again, the priority is inherited, never lowere...
closed
2023-01-26T16:03:13Z
2023-01-26T18:57:25Z
2023-01-26T18:12:57Z
severo
1,558,169,289
Dataset Viewer issue for matchbench/semi-text-w
null
Dataset Viewer issue for matchbench/semi-text-w:
closed
2023-01-26T13:49:06Z
2023-01-31T15:38:12Z
2023-01-31T15:38:12Z
ScHh0625
1,557,786,300
ci: 🎑 launch CI when libcommon has been modified
null
ci: 🎑 launch CI when libcommon has been modified:
closed
2023-01-26T08:39:53Z
2023-01-26T08:59:36Z
2023-01-26T08:59:35Z
severo
1,557,063,929
Configs and splits
See https://github.com/huggingface/datasets-server/issues/701 This PR does: - create a new endpoint `/config-names`. It gives the list of config names for a dataset - create a new endpoint `/split-names`. It gives the list of split names for a config (not for a dataset, which is the difference with /splits for now...
Configs and splits: See https://github.com/huggingface/datasets-server/issues/701 This PR does: - create a new endpoint `/config-names`. It gives the list of config names for a dataset - create a new endpoint `/split-names`. It gives the list of split names for a config (not for a dataset, which is the difference ...
closed
2023-01-25T17:59:32Z
2023-01-27T10:01:52Z
2023-01-26T14:14:32Z
severo
1,556,783,498
Dataset Viewer issue for bigbio/pubmed_qa
### Link https://huggingface.co/datasets/bigbio/pubmed_qa ### Description this config fails to return the split names: pubmed_qa_labeled_fold6_bigbio_qa but we get an error even though the other configs work
Dataset Viewer issue for bigbio/pubmed_qa: ### Link https://huggingface.co/datasets/bigbio/pubmed_qa ### Description this config fails to return the split names: pubmed_qa_labeled_fold6_bigbio_qa but we get an error even though the other configs work
closed
2023-01-25T14:50:07Z
2023-02-13T12:38:23Z
2023-02-13T12:38:22Z
lhoestq
1,556,698,931
Update hfh
In particular: we now use the new [`list_repo_refs`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs) method, and the new [`revision`](https://github.com/huggingface/huggingface_hub/pull/1293) parameter when we create the `refs/convert/parquet` branch.
Update hfh: In particular: we now use the new [`list_repo_refs`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.list_repo_refs) method, and the new [`revision`](https://github.com/huggingface/huggingface_hub/pull/1293) parameter when we create the `refs/convert/parque...
closed
2023-01-25T13:54:41Z
2023-01-25T15:28:32Z
2023-01-25T14:22:01Z
severo
1,556,392,791
refactor: πŸ’‘ set libcommon as an "editable" dependency
All the jobs, services and workers share the same version of libs/libcommon, which is the current version. Before, we had to update the libcommon version, run poetry build, update the version in the other projects' pyproject.toml and poetry update it. The workflow when updating libcommon will be a lot simpler now...
refactor: πŸ’‘ set libcommon as an "editable" dependency: All the jobs, services and workers share the same version of libs/libcommon, which is the current version. Before, we had to update the libcommon version, run poetry build, update the version in the other projects' pyproject.toml and poetry update it. The wo...
closed
2023-01-25T10:18:52Z
2023-01-25T10:33:38Z
2023-01-25T10:33:37Z
severo
1,556,292,696
feat: 🎸 block more datasets in /parquet-and-dataset-info
null
feat: 🎸 block more datasets in /parquet-and-dataset-info:
closed
2023-01-25T09:10:09Z
2023-01-25T09:10:28Z
2023-01-25T09:10:27Z
severo
1,556,286,349
feat: 🎸 reduce logs level from DEBUG to INFO
cc @co42
feat: 🎸 reduce logs level from DEBUG to INFO: cc @co42
closed
2023-01-25T09:05:30Z
2023-01-25T09:05:48Z
2023-01-25T09:05:46Z
severo
1,553,947,576
Add a new route: /cache-reports-with-content
Also: add a missing field in an index
Add a new route: /cache-reports-with-content: Also: add a missing field in an index
closed
2023-01-23T22:41:38Z
2023-01-23T23:10:01Z
2023-01-23T23:10:00Z
severo
1,553,438,978
feat: 🎸 launch children jobs even when skipped
if we re-run a "DAG", all the steps will be processed, even if the first ones are skipped because the result is already in the cache. It will fix the issue with https://github.com/huggingface/datasets-server/pull/694#issuecomment-1400568759 (we will update the datasets in the queue, and remove the duplicates, withou...
feat: 🎸 launch children jobs even when skipped: if we re-run a "DAG", all the steps will be processed, even if the first ones are skipped because the result is already in the cache. It will fix the issue with https://github.com/huggingface/datasets-server/pull/694#issuecomment-1400568759 (we will update the dataset...
closed
2023-01-23T16:59:08Z
2023-01-23T17:40:20Z
2023-01-23T17:40:18Z
severo
1,553,239,965
feat: 🎸 replace Queue.add_job with Queue.upsert_job
upsert_job ensures only one waiting job for the same set of parameters. On every call to upsert_job, all the previous waiting jobs for the same set of parameters are canceled, and a new one is created with a new "created_at" date, which means it will be put at the end of the queue. It will help to fight against dataset...
feat: 🎸 replace Queue.add_job with Queue.upsert_job: upsert_job ensures only one waiting job for the same set of parameters. On every call to upsert_job, all the previous waiting jobs for the same set of parameters are canceled, and a new one is created with a new "created_at" date, which means it will be put at the e...
closed
2023-01-23T14:54:56Z
2023-01-23T17:39:58Z
2023-01-23T15:17:23Z
severo
1,552,545,760
Update index.mdx
updated the following text's linke: "give the [Datasets Server repository] a ⭐️ if you're interested in the latest updates!" To correct link address that corresponds to repo: (https://github.com/huggingface/datasets-server)
Update index.mdx: updated the following text's linke: "give the [Datasets Server repository] a ⭐️ if you're interested in the latest updates!" To correct link address that corresponds to repo: (https://github.com/huggingface/datasets-server)
closed
2023-01-23T05:36:32Z
2023-01-26T16:21:59Z
2023-01-26T16:19:21Z
keleffew
1,551,873,576
Space Viewer issue for [dataset name]
### Link https://huggingface.co/spaces/ivelin/ui-refexp ### Description Hello HF team, Not sure if this issue belongs here, but I could not find a dedicatred repo for Spaces Server issue. My space referenced in the link has been working fine for a few weeks. However this morning it started erroring with the me...
Space Viewer issue for [dataset name]: ### Link https://huggingface.co/spaces/ivelin/ui-refexp ### Description Hello HF team, Not sure if this issue belongs here, but I could not find a dedicatred repo for Spaces Server issue. My space referenced in the link has been working fine for a few weeks. However this ...
closed
2023-01-21T18:13:59Z
2023-01-23T14:04:52Z
2023-01-23T10:16:12Z
ivelin
1,551,417,698
feat: 🎸 add support for pdf2image
βœ… Closes: #688
feat: 🎸 add support for pdf2image: βœ… Closes: #688
closed
2023-01-20T20:33:52Z
2023-01-23T10:17:22Z
2023-01-23T10:17:20Z
severo
1,551,373,294
feat: 🎸 block more datasets, and allow more /first-rows per ns
null
feat: 🎸 block more datasets, and allow more /first-rows per ns:
closed
2023-01-20T19:59:36Z
2023-01-20T20:10:52Z
2023-01-20T20:10:51Z
severo
1,551,231,814
Dataset Viewer issue for dadosdq/wallbed_dataset
### Link https://huggingface.co/datasets/dadosdq/wallbed_dataset ### Description The dataset viewer is not working for dataset dadosdq/wallbed_dataset. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for dadosdq/wallbed_dataset: ### Link https://huggingface.co/datasets/dadosdq/wallbed_dataset ### Description The dataset viewer is not working for dataset dadosdq/wallbed_dataset. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-20T17:43:45Z
2023-01-20T19:48:37Z
2023-01-20T19:48:37Z
dadobtx
1,551,132,436
Add pdf2image as preinstalled package
Needed for: https://huggingface.co/datasets/jordyvl/unit-test_PDFfolder (the installation instructions for pdf2image can be found there)
Add pdf2image as preinstalled package: Needed for: https://huggingface.co/datasets/jordyvl/unit-test_PDFfolder (the installation instructions for pdf2image can be found there)
closed
2023-01-20T16:25:49Z
2023-01-23T21:28:50Z
2023-01-23T16:06:03Z
mariosasko
1,550,933,877
feat: 🎸 quick and dirty POC for the random rows endpoint
null
feat: 🎸 quick and dirty POC for the random rows endpoint:
closed
2023-01-20T14:22:57Z
2023-03-02T08:58:06Z
2023-03-01T10:36:26Z
severo
1,550,835,078
chore: πŸ€– update resources
the splits queue is now empty, we can reduce the number of workers. Block new datasets for parquet for now.
chore: πŸ€– update resources: the splits queue is now empty, we can reduce the number of workers. Block new datasets for parquet for now.
closed
2023-01-20T13:18:49Z
2023-01-20T13:19:11Z
2023-01-20T13:19:10Z
severo
1,549,887,151
Dataset Viewer issue for ivelin/rico_sca_refexp_synthetic_saved
### Link https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic_saved ### Description The dataset viewer is not working for dataset ivelin/rico_sca_refexp_synthetic_saved. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for ivelin/rico_sca_refexp_synthetic_saved: ### Link https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic_saved ### Description The dataset viewer is not working for dataset ivelin/rico_sca_refexp_synthetic_saved. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-19T20:18:02Z
2023-01-20T14:45:30Z
2023-01-20T08:56:41Z
ivelin
1,549,455,221
fix: πŸ› fix memory specification + increase pods in /parquet
error was: Warning: spec.template.spec.containers[0].resources.requests[memory]: fractional byte value "107374182400m" is invalid, must be an integer
fix: πŸ› fix memory specification + increase pods in /parquet: error was: Warning: spec.template.spec.containers[0].resources.requests[memory]: fractional byte value "107374182400m" is invalid, must be an integer
closed
2023-01-19T16:00:52Z
2023-01-19T16:23:52Z
2023-01-19T16:23:51Z
severo
1,549,403,811
feat: 🎸 increase resources`
null
feat: 🎸 increase resources`:
closed
2023-01-19T15:33:43Z
2023-01-19T15:34:21Z
2023-01-19T15:34:20Z
severo
1,549,286,654
feat: 🎸 increase resources
null
feat: 🎸 increase resources:
closed
2023-01-19T14:35:13Z
2023-01-19T14:35:49Z
2023-01-19T14:35:48Z
severo
1,549,251,776
feat: 🎸 increase number of workers for a moment
null
feat: 🎸 increase number of workers for a moment:
closed
2023-01-19T14:18:04Z
2023-01-19T14:18:22Z
2023-01-19T14:18:20Z
severo
1,548,903,615
chore: πŸ€– add --no-cache (poetry) and --no-cache-dir (pip)
to reduce the size of the docker images Thanks @XciD
chore: πŸ€– add --no-cache (poetry) and --no-cache-dir (pip): to reduce the size of the docker images Thanks @XciD
closed
2023-01-19T10:39:24Z
2023-01-19T13:24:16Z
2023-01-19T13:24:14Z
severo
1,548,224,075
feat: 🎸 add /sizes
null
feat: 🎸 add /sizes:
closed
2023-01-18T22:30:26Z
2023-01-19T10:34:10Z
2023-01-19T10:34:09Z
severo
1,534,681,151
ci: 🎑 fix app token
see https://github.com/huggingface/moon-landing/pull/5106 (internal)
ci: 🎑 fix app token: see https://github.com/huggingface/moon-landing/pull/5106 (internal)
closed
2023-01-16T10:26:01Z
2023-01-16T12:30:04Z
2023-01-16T12:30:03Z
severo
1,532,968,510
Create children in generic worker
Extracting (from https://github.com/huggingface/datasets-server/pull/670) the logic to create the children jobs
Create children in generic worker: Extracting (from https://github.com/huggingface/datasets-server/pull/670) the logic to create the children jobs
closed
2023-01-13T22:02:51Z
2023-01-16T12:53:55Z
2023-01-16T12:53:54Z
severo
1,529,177,289
fix: πŸ› only check webhook payload for what we are interested in
checking the payload in details (eg if the URL field is well formed) while not using it afterward is not useful. It even lead to breaking the webhook after https://github.com/huggingface/moon-landing/pull/4477 (internal link) where the "url" field format had changed. cc @SBrandeis @coyotte508 fyi
fix: πŸ› only check webhook payload for what we are interested in: checking the payload in details (eg if the URL field is well formed) while not using it afterward is not useful. It even lead to breaking the webhook after https://github.com/huggingface/moon-landing/pull/4477 (internal link) where the "url" field format...
closed
2023-01-11T14:39:43Z
2023-01-11T15:01:12Z
2023-01-11T15:01:11Z
severo
1,520,419,410
feat: 🎸 allow more concurrent jobs fo the same namespace
needed today for allenai, see #674
feat: 🎸 allow more concurrent jobs fo the same namespace: needed today for allenai, see #674
closed
2023-01-05T09:41:23Z
2023-01-05T09:41:36Z
2023-01-05T09:41:35Z
severo
1,520,382,528
Dataset Viewer issue for allenai/soda
### Link https://huggingface.co/datasets/allenai/soda ### Description The dataset viewer is not working for dataset allenai/soda. Error details: ``` Error code: ResponseNotReady ``` Please help! πŸ™πŸ»
Dataset Viewer issue for allenai/soda: ### Link https://huggingface.co/datasets/allenai/soda ### Description The dataset viewer is not working for dataset allenai/soda. Error details: ``` Error code: ResponseNotReady ``` Please help! πŸ™πŸ»
closed
2023-01-05T09:16:14Z
2023-01-05T09:43:58Z
2023-01-05T09:40:20Z
skywalker023
1,518,706,736
Dataset Viewer issue for xfact/nq-dpr
### Link https://huggingface.co/datasets/xfact/nq-dpr ### Description The dataset viewer is not working for dataset xfact/nq-dpr. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for xfact/nq-dpr: ### Link https://huggingface.co/datasets/xfact/nq-dpr ### Description The dataset viewer is not working for dataset xfact/nq-dpr. Error details: ``` Error code: ResponseNotReady ```
closed
2023-01-04T10:21:02Z
2023-01-05T08:46:44Z
2023-01-05T08:46:44Z
euiyulsong
1,517,129,352
feat: 🎸 create orchestrator service to run the DAGs
null
feat: 🎸 create orchestrator service to run the DAGs:
closed
2023-01-03T09:28:39Z
2024-01-26T09:01:25Z
2023-02-28T16:12:32Z
severo
1,516,555,986
feat: 🎸 update the HF webhook content
We don't use the new fields for now.
feat: 🎸 update the HF webhook content: We don't use the new fields for now.
closed
2023-01-02T16:31:58Z
2023-01-02T17:21:53Z
2023-01-02T17:21:52Z
severo
1,509,802,334
Create endpoint /dataset-info
null
Create endpoint /dataset-info:
closed
2022-12-23T21:45:59Z
2023-01-18T21:26:26Z
2023-01-18T21:26:25Z
severo
1,509,232,092
chore: πŸ€– speed-up docker build
in the next builds, the cache will be used if only src/ has been modified, which should help mostly with the workers/datasets_based image, because the poetry install of dependencies takes a lot of time.
chore: πŸ€– speed-up docker build: in the next builds, the cache will be used if only src/ has been modified, which should help mostly with the workers/datasets_based image, because the poetry install of dependencies takes a lot of time.
closed
2022-12-23T11:23:47Z
2022-12-23T13:14:19Z
2022-12-23T13:14:19Z
severo
1,506,436,148
Split Worker into WorkerLoop, WorkerFactory and Worker
null
Split Worker into WorkerLoop, WorkerFactory and Worker:
closed
2022-12-21T14:59:31Z
2022-12-23T10:55:05Z
2022-12-23T10:55:04Z
severo
1,505,361,123
feat: 🎸 give each worker its own version + upgrade to 2.0.0
as datasets library has been upgraded, we want to upgrade the major version of the workers so that jobs are not skipped if re-run on the same commit (erroneous ones can be OK now)
feat: 🎸 give each worker its own version + upgrade to 2.0.0: as datasets library has been upgraded, we want to upgrade the major version of the workers so that jobs are not skipped if re-run on the same commit (erroneous ones can be OK now)
closed
2022-12-20T21:51:54Z
2022-12-20T22:10:59Z
2022-12-20T22:10:58Z
severo