| id | title | body | description | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|---|
1,845,990,087 | full_scan is a boolean; it should not be assigned None | https://github.com/huggingface/datasets-server/blob/4a70eba13cc7c17be613aad88450c713c51c059f/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py#L193 | full_scan is a boolean; it should not be assigned None: https://github.com/huggingface/datasets-server/blob/4a70eba13cc7c17be613aad88450c713c51c059f/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py#L193 | open | 2023-08-10T22:49:18Z | 2023-08-10T22:49:31Z | null | severo |
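The fix above can be sketched in a few lines; the helper name and signature below are hypothetical, not the actual function in opt_in_out_urls_scan_from_streaming.py:

```python
# Hypothetical sketch: compute full_scan as a real boolean instead of
# leaving it as None when the scan stops early.
def scan_urls(urls: list, max_urls: int) -> tuple:
    scanned = urls[:max_urls]
    full_scan = len(urls) <= max_urls  # True only when every URL was scanned
    return scanned, full_scan
```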
1,845,881,472 | The parameters of an endpoint should not change the response format | The optional parameters should only change the response's content, not structure.
For example, the `length` parameter in /rows reduces the number of returned rows.
But for /parquet, for example, if we ask for the config level (https://datasets-server.huggingface.co/parquet?dataset=mnist), we get the list of featu... | The parameters of an endpoint should not change the response format: The optional parameters should only change the response's content, not structure.
For example, the `length` parameter in /rows reduces the number of returned rows.
But for /parquet, for example, if we ask for the config level (https://datasets-s... | open | 2023-08-10T20:49:05Z | 2023-11-10T15:10:08Z | null | severo |
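The principle can be sketched with hypothetical data: optional parameters should narrow the same `parquet_files` list rather than returning a differently shaped object per level:

```python
from typing import Optional

# Hypothetical sketch: /parquet always returns {"parquet_files": [...]};
# the config parameter only filters the content, never the structure.
def parquet_endpoint(files: list, dataset: str, config: Optional[str] = None) -> dict:
    return {
        "parquet_files": [
            f for f in files
            if f["dataset"] == dataset and (config is None or f["config"] == config)
        ]
    }
```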
1,845,839,070 | Add a section for the missing endpoints in the doc | Missing documentation:
- [x] /size (dataset, config)
- [x] /info (dataset, config)
- [x] /statistics (split)
- [x] /search (split) - see #1663
- [ ] /opt-in-out-urls (dataset, config, split)
| Add a section for the missing endpoints in the doc: Missing documentation:
- [x] /size (dataset, config)
- [x] /info (dataset, config)
- [x] /statistics (split)
- [x] /search (split) - see #1663
- [ ] /opt-in-out-urls (dataset, config, split)
| open | 2023-08-10T20:19:54Z | 2024-02-06T14:57:33Z | null | severo |
1,845,501,120 | Add a section for /search in the docs | As the endpoint is public, we should have a section in https://huggingface.co/docs/datasets-server.
For https://huggingface.co/docs/hub/datasets-viewer, let's wait to have the search integrated into the Hub dataset viewer. | Add a section for /search in the docs: As the endpoint is public, we should have a section in https://huggingface.co/docs/datasets-server.
For https://huggingface.co/docs/hub/datasets-viewer, let's wait to have the search integrated into the Hub dataset viewer. | closed | 2023-08-10T16:16:40Z | 2023-09-22T11:22:04Z | 2023-09-11T15:44:58Z | severo |
1,845,472,246 | Should we change 500 to another status code when the error comes from the dataset? | See #1661 for example.
Same for the "retry later" error: is 500 the most appropriate status code? | Should we change 500 to another status code when the error comes from the dataset?: See #1661 for example.
Same for the "retry later" error: is 500 the most appropriate status code? | open | 2023-08-10T15:57:03Z | 2023-08-14T15:36:27Z | null | severo |
1,845,452,736 | rows returns 404 instead of 500 on dataset error | For example, https://datasets-server.huggingface.co/rows?dataset=atomic&config=atomic&split=train returns 404, Not found. It should instead return a detailed error which helps the user debug, as it's done on all the cached responses. /rows is special, as it's created on the fly, but it should stick with the same logic:... | rows returns 404 instead of 500 on dataset error: For example, https://datasets-server.huggingface.co/rows?dataset=atomic&config=atomic&split=train returns 404, Not found. It should instead return a detailed error which helps the user debug, as it's done on all the cached responses. /rows is special, as it's created on... | closed | 2023-08-10T15:45:20Z | 2023-09-04T14:26:39Z | 2023-09-04T14:26:39Z | severo |
1,844,760,755 | Revert datasets authentication with DownloadConfig | Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620
Fix #1659. | Revert datasets authentication with DownloadConfig: Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620
Fix #1659. | closed | 2023-08-10T09:17:27Z | 2023-08-10T14:51:06Z | 2023-08-10T14:51:05Z | albertvillanova |
1,844,750,240 | Revert datasets authentication tweaks | Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620 | Revert datasets authentication tweaks: Once we update `datasets` to version 2.14.4, we no longer need the authentication tweaks (where we had to use `download_config` instead of `token`) introduced by:
- #1620 | closed | 2023-08-10T09:11:13Z | 2023-08-10T14:51:07Z | 2023-08-10T14:51:06Z | albertvillanova |
1,844,097,492 | Incremental cache metrics | Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in Mongo Atlas (ratio > 1000).
Alert message:
`The ratio of documents scanned to returned exceeded 1000.0 on datasets-server-prod-shard-00-00.u... | Incremental cache metrics: Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in Mongo Atlas (ratio > 1000).
Alert message:
`The ratio of documents scanned to returned exceeded 1000.0 on dataset... | closed | 2023-08-09T22:10:37Z | 2023-08-11T16:59:05Z | 2023-08-11T16:59:04Z | AndreaFrancis |
1,843,925,658 | Private and inexistent datasets should return 404, not 401 | Try
https://datasets-server.huggingface.co/splits?dataset=severo/test_private_datasets
https://datasets-server.huggingface.co/splits?dataset=severo/inexistent
in a private window. It returns 401, not 404. | Private and inexistent datasets should return 404, not 401: Try
https://datasets-server.huggingface.co/splits?dataset=severo/test_private_datasets
https://datasets-server.huggingface.co/splits?dataset=severo/inexistent
in a private window. It returns 401, not 404. | open | 2023-08-09T20:03:01Z | 2023-08-09T20:03:13Z | null | severo |
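A minimal sketch of the intended behavior (hypothetical helper, not the actual auth code): unauthorized access to a private dataset and a request for a nonexistent dataset should be indistinguishable:

```python
def response_status(exists: bool, authorized: bool) -> int:
    # Map both "missing" and "not allowed to see it" to 404, so the response
    # does not disclose whether a private dataset exists.
    if not exists or not authorized:
        return 404
    return 200
```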
1,843,920,147 | Gated datasets without authentication header return 404 | It should return 401 (Unauthorized).
See for example https://datasets-server.huggingface.co/splits?dataset=severo/bigcode/the-stack from a private window.
Or https://datasets-server.huggingface.co/splits?dataset=JosephusCheung/GuanacoDataset (if you do not have access to it), while passing credentials (opening it wh... | Gated datasets without authentication header return 404: It should return 401 (Unauthorized).
See for example https://datasets-server.huggingface.co/splits?dataset=severo/bigcode/the-stack from a private window.
Or https://datasets-server.huggingface.co/splits?dataset=JosephusCheung/GuanacoDataset (if you have no... | open | 2023-08-09T19:58:49Z | 2023-08-11T16:23:59Z | null | severo |
1,843,912,198 | Give a better error message for private datasets | When accessing a private dataset without credentials, or with the wrong credentials, we get the same error response as for inexistent datasets, which prevents disclosing the names of private datasets:
```
{"error":"The dataset does not exist, or is not accessible without authentication (private or gated). Please chec... | Give a better error message for private datasets: When accessing a private dataset without credentials, or with the wrong credentials, we get the same error response as for inexistent datasets, which prevents disclosing the names of private datasets:
```
{"error":"The dataset does not exist, or is not accessible with... | closed | 2023-08-09T19:52:45Z | 2024-02-02T12:29:54Z | 2024-02-02T12:29:54Z | severo |
1,843,131,265 | move cache metrics inc to orchestrator | Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in Mongo Atlas (ratio > 1000).
The ratio of documents scanned to returned exceeded 1000.0 on datasets-server-prod-shard-00-00.ujrd0.mongodb.net,... | move cache metrics inc to orchestrator: Currently, we calculate cache metrics in a job that runs every X minutes, but this job queries the full cache collection every time, leading to Mongo query-targeting issues in Mongo Atlas (ratio > 1000).
The ratio of documents scanned to returned exceeded 1000.0 on datasets-se... | closed | 2023-08-09T12:27:01Z | 2023-10-10T13:29:28Z | 2023-08-09T15:14:19Z | AndreaFrancis |
1,842,845,132 | Update datasets to 2.14.4 | Update datasets to 2.14.4.
Fix #1652.
Fix partially #1550. | Update datasets to 2.14.4: Update datasets to 2.14.4.
Fix #1652.
Fix partially #1550. | closed | 2023-08-09T09:30:36Z | 2023-08-09T15:56:27Z | 2023-08-09T15:56:26Z | albertvillanova |
1,842,829,405 | Update datasets to 2.14.4 | Update `datasets` to 2.14.4: https://github.com/huggingface/datasets/releases/tag/2.14.4
> Fix authentication issues by @albertvillanova in https://github.com/huggingface/datasets/pull/6127
We will be able to remove some authentication tweaks, where we had to use `download_config` instead of `token`. See:
- #16... | Update datasets to 2.14.4: Update `datasets` to 2.14.4: https://github.com/huggingface/datasets/releases/tag/2.14.4
> Fix authentication issues by @albertvillanova in https://github.com/huggingface/datasets/pull/6127
We will be able to remove some authentication tweaks, where we had to use `download_config` inste... | closed | 2023-08-09T09:21:17Z | 2023-08-09T15:56:27Z | 2023-08-09T15:56:27Z | albertvillanova |
1,841,843,039 | fix: 🐛 TypeError: can't subtract offset-naive and offset-aware | https://github.com/huggingface/datasets-server/actions/runs/5798651863/job/15716898067 | fix: 🐛 TypeError: can't subtract offset-naive and offset-aware: https://github.com/huggingface/datasets-server/actions/runs/5798651863/job/15716898067 | closed | 2023-08-08T18:45:41Z | 2023-08-08T18:45:47Z | 2023-08-08T18:45:45Z | severo |
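This error class can be reproduced and fixed in a few lines; the snippet is a generic illustration, not the code from the linked CI run:

```python
from datetime import datetime, timezone

naive = datetime(2023, 8, 8, 18, 45)                       # no tzinfo
aware = datetime(2023, 8, 8, 18, 45, tzinfo=timezone.utc)  # offset-aware

try:
    aware - naive  # TypeError: can't subtract offset-naive and offset-aware datetimes
except TypeError:
    pass

# Fix: make both operands offset-aware before subtracting (here, assume UTC).
delta = aware - naive.replace(tzinfo=timezone.utc)
```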
1,840,500,190 | fix: 🐛 add missing volume declaration | null | fix: 🐛 add missing volume declaration: | closed | 2023-08-08T03:31:07Z | 2023-08-08T03:31:38Z | 2023-08-08T03:31:11Z | severo |
1,840,272,977 | feat: 🎸 use EFS instead of NFS for parquet-metadata files | beware: only deploy once (1) the workers have been stopped and (2) the sync has been done from NFS to EFS | feat: 🎸 use EFS instead of NFS for parquet-metadata files: beware: only deploy once (1) the workers have been stopped and (2) the sync has been done from NFS to EFS | closed | 2023-08-07T22:09:12Z | 2023-08-08T18:30:43Z | 2023-08-08T03:25:45Z | severo |
1,840,238,792 | feat: 🎸 install latest version of rclone | null | feat: 🎸 install latest version of rclone: | closed | 2023-08-07T21:35:34Z | 2023-08-07T21:35:41Z | 2023-08-07T21:35:40Z | severo |
1,840,225,959 | feat: 🎸 reduce the number of cpus for storage admin | because no nodes can provide this in the current setup | feat: 🎸 reduce the number of cpus for storage admin: because no nodes can provide this in the current setup | closed | 2023-08-07T21:22:58Z | 2023-08-07T21:23:37Z | 2023-08-07T21:23:35Z | severo |
1,840,221,377 | build and sync only the required containers | For example, if the worker has been updated, the API service should not be rebuilt and resynchronized. | build and sync only the required containers: For example, if the worker has been updated, the API service should not be rebuilt and resynchronized. | open | 2023-08-07T21:18:23Z | 2023-08-07T21:18:34Z | null | severo |
1,840,212,718 | fix: 🐛 fix the dockerfile | to avoid an interactive question while configuring the apt packages | fix: 🐛 fix the dockerfile: to avoid an interactive question while configuring the apt packages | closed | 2023-08-07T21:11:04Z | 2023-08-07T21:11:09Z | 2023-08-07T21:11:08Z | severo |
1,840,190,949 | feat: 🎸 use rclone on storage admin with multiple cores | null | feat: 🎸 use rclone on storage admin with multiple cores: | closed | 2023-08-07T20:52:17Z | 2023-08-07T20:53:05Z | 2023-08-07T20:53:04Z | severo |
1,840,161,522 | feat: 🎸 remove /admin/cancel-jobs/{job_type} | it's never used | feat: 🎸 remove /admin/cancel-jobs/{job_type}: it's never used | closed | 2023-08-07T20:28:40Z | 2023-08-07T20:45:06Z | 2023-08-07T20:45:05Z | severo |
1,840,146,159 | feat: 🎸 add RAM to the storage admin machine | otherwise rsync crashes for lack of memory | feat: 🎸 add RAM to the storage admin machine: otherwise rsync crashes for lack of memory | closed | 2023-08-07T20:16:19Z | 2023-08-07T20:17:06Z | 2023-08-07T20:16:24Z | severo |
1,840,138,301 | Reduce ram for rows and search | null | Reduce ram for rows and search: | closed | 2023-08-07T20:09:53Z | 2023-08-07T20:10:25Z | 2023-08-07T20:10:14Z | severo |
1,840,117,873 | feat: 🎸 reduce RAM from 8 to 7GiB for rows and search services | because nodes have only 16 GiB of RAM -> we want two pods per node | feat: 🎸 reduce RAM from 8 to 7GiB for rows and search services: because nodes have only 16 GiB of RAM -> we want two pods per node | closed | 2023-08-07T19:54:13Z | 2023-08-07T19:54:41Z | 2023-08-07T19:54:18Z | severo |
1,840,110,116 | refactor: 💡 change labels to lowercase programmatically | instead of requiring the maintainer to lowercase manually | refactor: 💡 change labels to lowercase programmatically: instead of requiring the maintainer to lowercase manually | closed | 2023-08-07T19:48:04Z | 2023-08-07T19:48:13Z | 2023-08-07T19:48:10Z | severo |
1,840,088,218 | feat: 🎸 reduce workers, and assign more RAM to /rows | null | feat: 🎸 reduce workers, and assign more RAM to /rows: | closed | 2023-08-07T19:29:38Z | 2023-08-07T19:31:45Z | 2023-08-07T19:31:44Z | severo |
1,840,005,885 | remove locks when finishing a job | should fix the issue with old remaining locks, when a job is killed (too long job, after 40 minutes) while it's uploading files to the Hub (lock created with git_branch()).
also: add environment variables in docker compose and helm, add the description in readme, and fix test value | remove locks when finishing a job: should fix the issue with old remaining locks, when a job is killed (too long job, after 40 minutes) while it's uploading files to the Hub (lock created with git_branch()).
also: add environment variables in docker compose and helm, add the description in readme, and fix test value | closed | 2023-08-07T18:27:32Z | 2023-08-07T19:31:27Z | 2023-08-07T19:17:51Z | severo |
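The core of the fix can be sketched with a toy in-memory lock store (names hypothetical; the real implementation uses persistent locks around git_branch()):

```python
class LockStore:
    """Toy in-memory stand-in for the real lock collection."""
    def __init__(self) -> None:
        self.held = set()

    def acquire(self, key: str) -> str:
        if key in self.held:
            raise RuntimeError(f"lock already held: {key}")
        self.held.add(key)
        return key

    def release(self, key: str) -> None:
        self.held.discard(key)


def run_with_lock(store: LockStore, key: str, work) -> None:
    lock = store.acquire(key)
    try:
        work()
    finally:
        # Released even if work() raises, so a failed or interrupted job
        # no longer leaves a stale lock behind.
        store.release(lock)
```

Note that `finally` cannot run if the process is hard-killed; a TTL or expiry on the lock itself remains the safety net for that case.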
1,839,852,312 | pass "endpoint" to hfh.hf_hub_download and hfh.hf_hub_url | once https://github.com/huggingface/huggingface_hub/pull/1580 is released.
`hf_hub_download`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server%20hf_hub_download&type=code
`hf_hub_url`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server+hf_hub_url&type=code (our local `hf_hub_url` funct... | pass "endpoint" to hfh.hf_hub_download and hfh.hf_hub_url: once https://github.com/huggingface/huggingface_hub/pull/1580 is released.
`hf_hub_download`: https://github.com/search?q=repo%3Ahuggingface%2Fdatasets-server%20hf_hub_download&type=code
`hf_hub_url`: https://github.com/search?q=repo%3Ahuggingface%2Fdataset... | closed | 2023-08-07T16:38:42Z | 2024-02-06T14:53:46Z | 2024-02-06T14:53:46Z | severo |
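What passing `endpoint` changes can be sketched with a stand-in for our local `hf_hub_url` helper (hypothetical; the real `hfh` functions take more parameters):

```python
# Hypothetical stand-in showing why `endpoint` matters: it swaps the base URL
# of generated Hub links instead of hard-coding https://huggingface.co.
def hf_hub_url(repo_id: str, filename: str, revision: str = "main",
               endpoint: str = "https://huggingface.co") -> str:
    return f"{endpoint}/datasets/{repo_id}/resolve/{revision}/{filename}"
```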
1,839,841,865 | ci: 🎡 fix the stale bot | The issue tags must be lowercase. P0, P1 and P2 were ignored since they were uppercase. I also refactored to make the code a bit clearer. | ci: 🎡 fix the stale bot: The issue tags must be lowercase. P0, P1 and P2 were ignored since they were uppercase. I also refactored to make the code a bit clearer. | closed | 2023-08-07T16:32:19Z | 2023-08-07T16:32:31Z | 2023-08-07T16:32:30Z | severo |
1,837,278,922 | Set default env values for staging and prod - delete indexes | - delete indexes job will run at 00:00 for staging and prod (default in values.yaml)
- expiredTimeIntervalSeconds: 259_200 # 3 days for prod | Set default env values for staging and prod - delete indexes: - delete indexes job will run at 00:00 for staging and prod (default in values.yaml)
- expiredTimeIntervalSeconds: 259_200 # 3 days for prod | closed | 2023-08-04T20:07:53Z | 2023-08-04T20:08:39Z | 2023-08-04T20:08:38Z | AndreaFrancis |
1,837,263,160 | test reduce index interval time - prod | null | test reduce index interval time - prod: | closed | 2023-08-04T19:51:59Z | 2023-08-04T19:53:19Z | 2023-08-04T19:53:18Z | AndreaFrancis |
1,837,126,961 | Init duckdb storage only for delete-indexes action | It should not be initialized for other actions like backfill or collect-metrics. | Init duckdb storage only for delete-indexes action: It should not be initialized for other actions like backfill or collect-metrics. | closed | 2023-08-04T17:43:14Z | 2023-08-04T17:45:29Z | 2023-08-04T17:45:28Z | AndreaFrancis |
1,837,118,018 | Fix delete indexes volume | null | Fix delete indexes volume: | closed | 2023-08-04T17:34:39Z | 2023-08-04T17:37:40Z | 2023-08-04T17:37:39Z | AndreaFrancis |
1,837,112,526 | Fix container definition for delete-indexes job | null | Fix container definition for delete-indexes job: | closed | 2023-08-04T17:29:14Z | 2023-08-04T17:29:55Z | 2023-08-04T17:29:54Z | AndreaFrancis |
1,837,104,736 | Add duckdb volume to delete-indexes k8s job | null | Add duckdb volume to delete-indexes k8s job: | closed | 2023-08-04T17:21:46Z | 2023-08-04T17:24:10Z | 2023-08-04T17:24:09Z | AndreaFrancis |
1,837,095,986 | fix: unquoted env vars | null | fix: unquoted env vars: | closed | 2023-08-04T17:14:11Z | 2023-08-04T17:16:02Z | 2023-08-04T17:15:50Z | rtrompier |
1,837,079,857 | Fix delete indexes job - fix cron | null | Fix delete indexes job - fix cron : | closed | 2023-08-04T16:59:06Z | 2023-08-04T17:00:27Z | 2023-08-04T17:00:26Z | AndreaFrancis |
1,837,065,148 | Try to fix delete indexes job | Staging deploy fails with message:
```
one or more objects failed to apply, reason: CronJob in version "v1" cannot be handled as a CronJob: json: cannot unmarshal number into Go struct field EnvVar.spec.jobTemplate.spec.template.spec.containers.env.value of type string
``` | Try to fix delete indexes job: Staging deploy fails with message:
```
one or more objects failed to apply, reason: CronJob in version "v1" cannot be handled as a CronJob: json: cannot unmarshal number into Go struct field EnvVar.spec.jobTemplate.spec.template.spec.containers.env.value of type string
``` | closed | 2023-08-04T16:45:27Z | 2023-08-04T16:52:57Z | 2023-08-04T16:52:56Z | AndreaFrancis |
1,837,048,151 | Try to fix delete indexes job | null | Try to fix delete indexes job: | closed | 2023-08-04T16:30:38Z | 2023-08-04T16:31:31Z | 2023-08-04T16:31:30Z | AndreaFrancis |
1,837,035,385 | Increase chart version because of new job | null | Increase chart version because of new job: | closed | 2023-08-04T16:22:53Z | 2023-08-04T16:23:45Z | 2023-08-04T16:23:44Z | AndreaFrancis |
1,837,023,674 | Enable delete-indexes job to run every 10 minutes | In order to verify that the delete-indexes job works correctly, I would like to test every 10 minutes.
Then I will remove this schedule and keep the default (once a day). | Enable delete-indexes job to run every 10 minutes: In order to verify that the delete-indexes job works correctly, I would like to test every 10 minutes.
Then I will remove this schedule and keep the default (once a day). | closed | 2023-08-04T16:13:33Z | 2023-08-04T16:16:17Z | 2023-08-04T16:16:16Z | AndreaFrancis |
1,836,574,648 | Fix e2e test_16_statistics | Fix e2e `test_16_statistics`, as this file was added after the branch (https://github.com/huggingface/datasets-server/tree/update-datasets-2.14) was created from main.
This fix is necessary after the refactoring introduced by:
- #1616
| Fix e2e test_16_statistics: Fix e2e `test_16_statistics`, as this file was added after the branch (https://github.com/huggingface/datasets-server/tree/update-datasets-2.14) was created from main.
This fix is necessary after the refactoring introduced by:
- #1616
| closed | 2023-08-04T11:26:45Z | 2023-08-04T11:40:55Z | 2023-08-04T11:40:54Z | albertvillanova |
1,836,527,064 | Update datasets dependency to 2.14 | Update datasets dependency to 2.14 and fix related issues.
Fix #1589. | Update datasets dependency to 2.14: Update datasets dependency to 2.14 and fix related issues.
Fix #1589. | closed | 2023-08-04T10:58:41Z | 2023-08-04T15:47:34Z | 2023-08-04T12:57:08Z | albertvillanova |
1,836,456,603 | Fix authentication with DownloadConfig | Fix authentication by passing `DownloadConfig` with `token`.
Fix partially #1589. | Fix authentication with DownloadConfig: Fix authentication by passing `DownloadConfig` with `token`.
Fix partially #1589. | closed | 2023-08-04T10:08:52Z | 2023-08-04T10:55:44Z | 2023-08-04T10:55:44Z | albertvillanova |
1,836,352,537 | Fix HfFileSystem | Fix usage of `HfFileSystem` (instead of `HTTPFileSystem`) and filename format of `data_files`.
Additionally, fix `fill_builder_info` with additional builder information: `builder_name`, `dataset_name`, `config_name` and `version`.
Fix partially #1589. | Fix HfFileSystem: Fix usage of `HfFileSystem` (instead of `HTTPFileSystem`) and filename format of `data_files`.
Additionally, fix `fill_builder_info` with additional builder information: `builder_name`, `dataset_name`, `config_name` and `version`.
Fix partially #1589. | closed | 2023-08-04T08:58:41Z | 2023-08-04T09:36:41Z | 2023-08-04T09:36:40Z | albertvillanova |
1,835,781,976 | feat: add search field to /is-valid | Adding a new `search` field to the /is-valid response; this field should help the UI identify whether search is available for the split viewer.
Previously, /is-valid was only available at the dataset level; we add the split and config levels for better granularity.
New job runners:
- split-is-valid
- config-is-valid
| feat: add search field to /is-valid: Adding a new `search` field to the /is-valid response; this field should help the UI identify whether search is available for the split viewer.
Previously, /is-valid was only available at the dataset level; we add the split and config levels for better granularity.
New job runners:
- split-is-vali... | closed | 2023-08-03T21:54:39Z | 2023-08-08T13:57:04Z | 2023-08-08T13:57:03Z | AndreaFrancis |
1,835,708,656 | Fix cached filenames | Fix the filename of cached files, so that it contains the `builder.dataset_name` instead of `builder.name`.
Fix partially #1589. | Fix cached filenames: Fix the filename of cached files, so that it contains the `builder.dataset_name` instead of `builder.name`.
Fix partially #1589. | closed | 2023-08-03T20:41:29Z | 2023-08-04T06:55:54Z | 2023-08-04T06:55:53Z | albertvillanova |
1,835,440,915 | Fix default config name | Fix default config name and refactor:
- the function no longer uses the argument `dataset`
- it returns a 2-tuple
Fix partially #1589. | Fix default config name: Fix default config name and refactor:
- the function no longer uses the argument `dataset`
- it returns a 2-tuple
Fix partially #1589. | closed | 2023-08-03T17:15:04Z | 2023-08-03T19:46:19Z | 2023-08-03T19:46:18Z | albertvillanova |
1,835,299,373 | Increase replicas for all worker | null | Increase replicas for all worker: | closed | 2023-08-03T15:36:56Z | 2023-08-03T15:37:56Z | 2023-08-03T15:37:55Z | AndreaFrancis |
1,835,278,772 | Update datasets 2.14.3 | Update `datasets` dependency to version 2.14.3, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
- https://github.com/huggingface/datasets/pull/6105
- https://github.com/huggingface/datasets/pull/6107
We are ... | Update datasets 2.14.3: Update `datasets` dependency to version 2.14.3, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
- https://github.com/huggingface/datasets/pull/6105
- https://github.com/huggingface/datas... | closed | 2023-08-03T15:24:11Z | 2023-08-03T16:30:56Z | 2023-08-03T16:18:54Z | albertvillanova |
1,835,234,117 | In config-parquet-metadata, delete the old files before uploading new ones | I'm currently moving the parquet-metadata files to a new storage, and I see strange numbering for shards. See for example, in the same directory (35456 parquet files for this one):
```
root@prod-datasets-server-storage-admin-786cfbf44-4ncpv:/storage# ls parquet-metadata-new/Antreas/TALI-large-2/--/Antreas--TALI-lar... | In config-parquet-metadata, delete the old files before uploading new ones: I'm currently moving the parquet-metadata files to a new storage, and I see strange numbering for shards. See for example, in the same directory (35456 parquet files for this one):
```
root@prod-datasets-server-storage-admin-786cfbf44-4ncpv... | closed | 2023-08-03T14:57:37Z | 2024-02-02T17:05:46Z | 2024-02-02T17:05:45Z | severo |
1,833,974,974 | test: add basic e2e for /statistics | null | test: add basic e2e for /statistics: | closed | 2023-08-02T22:19:23Z | 2023-08-03T15:12:25Z | 2023-08-03T15:12:24Z | AndreaFrancis |
1,833,924,953 | Increase resources | Currently we have 365K jobs waiting; this might help flush the queue | Increase resources: Currently we have 365K jobs waiting; this might help flush the queue | closed | 2023-08-02T21:26:40Z | 2023-08-02T21:27:39Z | 2023-08-02T21:27:38Z | AndreaFrancis |
1,833,900,598 | fix: 🐛 fix docker image name | null | fix: 🐛 fix docker image name: | closed | 2023-08-02T21:02:48Z | 2023-08-02T21:02:54Z | 2023-08-02T21:02:53Z | severo |
1,833,887,301 | feat: 🎸 build a Docker image for storageAdmin to have rsync | I also add curl and wget | feat: 🎸 build a Docker image for storageAdmin to have rsync: I also add curl and wget | closed | 2023-08-02T20:50:50Z | 2023-08-02T20:51:57Z | 2023-08-02T20:51:56Z | severo |
1,833,879,955 | fix: /search - set cache directory when download index | Currently /search is throwing error:
```
File "/src/services/search/src/search/routes/search.py", line 91, in download_index_file
hf_hub_download(
File "/src/services/search/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/src/servi... | fix: /search - set cache directory when download index: Currently /search is throwing error:
```
File "/src/services/search/src/search/routes/search.py", line 91, in download_index_file
hf_hub_download(
File "/src/services/search/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in... | closed | 2023-08-02T20:43:54Z | 2023-08-02T21:00:26Z | 2023-08-02T21:00:24Z | AndreaFrancis |
1,833,814,325 | feat: 🎸 mount EFS storage for parquet-metadata on storage-admin | null | feat: 🎸 mount EFS storage for parquet-metadata on storage-admin: | closed | 2023-08-02T19:49:03Z | 2023-08-02T19:51:42Z | 2023-08-02T19:51:41Z | severo |
1,833,754,005 | Terminate worker pods quicker | Sometimes, when we deploy to prod, the sync is blocked by the worker pods termination, which can take up to 30 minutes! See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1690990164606589?thread_ts=1690989017.253219&cid=C04L6P8KNQ5 (internal)
Ideally, when the pod receives SIGKILL, it should exit in the next fe... | Terminate worker pods quicker: Sometimes, when we deploy to prod, the sync is blocked by the worker pods termination, which can take up to 30 minutes! See https://huggingface.slack.com/archives/C04L6P8KNQ5/p1690990164606589?thread_ts=1690989017.253219&cid=C04L6P8KNQ5 (internal)
Ideally, when the pod receives SIGKILL... | closed | 2023-08-02T19:06:50Z | 2024-02-06T14:52:28Z | 2024-02-06T14:52:28Z | severo |
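A sketch of quicker termination (generic, not the actual worker loop): Kubernetes first sends SIGTERM — SIGKILL cannot be trapped — so the loop can watch a stop flag and exit between jobs well within the grace period:

```python
import signal

class Worker:
    def __init__(self) -> None:
        self.stopping = False
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame) -> None:
        self.stopping = True  # checked between jobs, so exit is prompt

    def process(self, jobs):
        done = []
        for job in jobs:
            if self.stopping:
                break  # stop picking up new jobs once termination is requested
            done.append(job)
        return done
```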
1,833,406,655 | feat: 🎸 optimize the computation of metrics | fixes #1604.
It seems like I had to first sort by the fields. | feat: 🎸 optimize the computation of metrics: fixes #1604.
It seems like I had to first sort by the fields. | closed | 2023-08-02T15:27:00Z | 2023-08-02T16:08:27Z | 2023-08-02T16:08:26Z | severo |
1,833,373,493 | Optimize mongo query | The metrics about the cache entries are a list of tuples `(kind, http_status, error_code, count)`.
Currently we compute with:
https://github.com/huggingface/datasets-server/blob/deb708ae737a2f8da51b74c1ca4a489c4ff39b51/libs/libcommon/src/libcommon/simple_cache.py#L490-L506
These queries are very slow (e.g. each `... | Optimize mongo query: The metrics about the cache entries are a list of tuples `(kind, http_status, error_code, count)`.
Currently we compute with:
https://github.com/huggingface/datasets-server/blob/deb708ae737a2f8da51b74c1ca4a489c4ff39b51/libs/libcommon/src/libcommon/simple_cache.py#L490-L506
These queries ar... | closed | 2023-08-02T15:07:44Z | 2023-08-02T16:08:27Z | 2023-08-02T16:08:27Z | severo |
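The direction of the fix can be sketched as a single aggregation over the cache collection instead of one count query per `(kind, http_status, error_code)` combination; the pipeline below is illustrative, and its `$group` stage is emulated on plain dicts so the sketch runs without a Mongo instance:

```python
from collections import Counter

# Illustrative pipeline: one pass, grouped server-side.
# With pymongo this would be: cache_collection.aggregate(pipeline)
pipeline = [
    {"$sort": {"kind": 1, "http_status": 1, "error_code": 1}},  # let Mongo use the index
    {"$group": {
        "_id": {"kind": "$kind", "http_status": "$http_status", "error_code": "$error_code"},
        "count": {"$sum": 1},
    }},
]

def group_counts(entries):
    """Pure-Python emulation of the $group stage above."""
    return Counter((e["kind"], e["http_status"], e.get("error_code")) for e in entries)
```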
1,833,251,270 | fix: 🐛 fix vulnerability in cryptography | null | fix: 🐛 fix vulnerability in cryptography: | closed | 2023-08-02T14:00:42Z | 2023-08-02T14:16:24Z | 2023-08-02T14:16:23Z | severo |
1,833,221,964 | Parallel steps update incoherence | See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6
Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` ... | Parallel steps update incoherence: See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6
Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, i... | closed | 2023-08-02T13:44:35Z | 2024-02-06T14:52:06Z | 2024-02-06T14:52:05Z | severo |
1,832,155,278 | refactor: 💡 clean the chart variables | - ensure coherence in the names of the chart variables
- don't provide cached-assets to admin service (it does not use it)
- remove code related to cached assets in api service (it has moved to rows)
Note: I'm not sure we're allowed to put helper templates in subdirectories (_volumes/, etc). When deployed I'll try... | refactor: 💡 clean the chart variables: - ensure coherence in the names of the chart variables
- don't provide cached-assets to admin service (it does not use it)
- remove code related to cached assets in api service (it has moved to rows)
Note: I'm not sure we're allowed to put helper templates in subdirectories ... | closed | 2023-08-01T23:12:20Z | 2023-08-02T15:09:48Z | 2023-08-02T15:09:47Z | severo |
1,832,074,393 | feat: 🎸 increment the chart version | null | feat: 🎸 increment the chart version: | closed | 2023-08-01T21:43:07Z | 2023-08-01T21:43:53Z | 2023-08-01T21:43:52Z | severo |
1,832,071,814 | fix: 🐛 fix statistics volume | also (unrelated) increase resources | fix: 🐛 fix statistics volume: also (unrelated) increase resources | closed | 2023-08-01T21:40:45Z | 2023-08-01T21:41:29Z | 2023-08-01T21:41:28Z | severo |
1,832,042,159 | feat: 🎸 give more RAM to backfill script | null | feat: 🎸 give more RAM to backfill script: | closed | 2023-08-01T21:12:28Z | 2023-08-01T21:13:11Z | 2023-08-01T21:12:33Z | severo |
1,832,005,634 | feat: 🎸 fix temporarily the backfill cron | null | feat: 🎸 fix temporarily the backfill cron: | closed | 2023-08-01T20:46:15Z | 2023-08-01T20:53:05Z | 2023-08-01T20:53:04Z | severo |
1,831,996,035 | Fix backfill job | null | Fix backfill job: | closed | 2023-08-01T20:38:44Z | 2023-08-01T20:41:22Z | 2023-08-01T20:41:20Z | severo |
1,831,961,844 | Fix descriptive statistics env var | null | Fix descriptive statistics env var: | closed | 2023-08-01T20:11:49Z | 2023-08-01T20:38:49Z | 2023-08-01T20:38:48Z | severo |
1,831,951,591 | split-duckdb-index fix: id from 0 and enable parquet 5G | - Fix serial minvalue for comment https://github.com/huggingface/datasets-server/pull/1516#discussion_r1276696053
- Enable index datasets with parquet under 5G | split-duckdb-index fix: id from 0 and enable parquet 5G: - Fix serial minvalue for comment https://github.com/huggingface/datasets-server/pull/1516#discussion_r1276696053
- Enable index datasets with parquet under 5G | closed | 2023-08-01T20:05:48Z | 2023-08-01T20:27:01Z | 2023-08-01T20:27:00Z | AndreaFrancis |
1,831,878,513 | feat: 🎸 cron every 4 hours (my calculation was wrong) | null | feat: 🎸 cron every 4 hours (my calculation was wrong): | closed | 2023-08-01T19:17:14Z | 2023-08-01T19:17:42Z | 2023-08-01T19:17:19Z | severo |
1,831,869,462 | feat: 🎸 increase rate of backfill | null | feat: 🎸 increase rate of backfill: | closed | 2023-08-01T19:10:22Z | 2023-08-01T19:10:52Z | 2023-08-01T19:10:32Z | severo |
1,831,331,518 | Should we convert the datasets to other formats than parquet? | One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c | Should we convert the datasets to other formats than parquet?: One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c | closed | 2023-08-01T13:47:12Z | 2024-06-19T14:19:01Z | 2024-06-19T14:19:01Z | severo |
1,830,070,827 | feat: 🎸 adapt examples to new format of the API response | null | feat: 🎸 adapt examples to new format of the API response: | closed | 2023-07-31T21:35:38Z | 2023-07-31T21:41:14Z | 2023-07-31T21:40:44Z | severo |
1,828,545,166 | Update datasets dependency to 2.14 | TODO:
- [x] #1614
- [x] #1616
- [x] #1617
- [x] #1619
- [x] #1620 | Update datasets dependency to 2.14: TODO:
- [x] #1614
- [x] #1616
- [x] #1617
- [x] #1619
- [x] #1620 | closed | 2023-07-31T07:06:37Z | 2023-08-04T12:57:09Z | 2023-08-04T12:57:09Z | albertvillanova |
1,828,539,095 | Update datasets dependency to 2.14.2 version | Update `datasets` dependency to version 2.14.2, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
Fix #1589.
Fix partially #1550.
Supersede and close #1577. | Update datasets dependency to 2.14.2 version: Update `datasets` dependency to version 2.14.2, instead of 2.14.1 because there were issues. See:
- https://github.com/huggingface/datasets/pull/6094
- https://github.com/huggingface/datasets/pull/6095
Fix #1589.
Fix partially #1550.
Supersede and close #1577. | closed | 2023-07-31T07:02:18Z | 2024-01-26T09:07:29Z | 2023-08-07T09:07:37Z | albertvillanova |
1,826,848,253 | All the workers were blocked because of a single lock entry | Unfortunately, I deleted the faulty lock entry, and I don't remember its value.
Every worker was trying to start the same job, but none could acquire the lock, so they all looped on the same job. | All the workers were blocked because of a single lock entry: Unfortunately, I deleted the faulty lock entry, and I don't remember its value.
Every worker was trying to start the same job, but none could acquire the lock, so they all looped on the same job. | closed | 2023-07-28T18:01:40Z | 2023-08-07T19:40:51Z | 2023-08-07T19:40:51Z | severo |
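The starvation pattern this issue describes (one stale lock entry blocking every worker) is commonly avoided by giving locks a time-to-live. Below is a minimal, hypothetical in-memory sketch of that idea; it is not the repository's actual lock implementation, and all names are illustrative.

```python
from __future__ import annotations

import time

LOCK_TTL_SECONDS = 600.0
_locks: dict[str, tuple[str, float]] = {}  # key -> (owner, acquired_at)


def try_acquire(key: str, owner: str, now: float | None = None) -> bool:
    """Return True if `owner` gets the lock; a stale entry expires after the TTL."""
    now = time.time() if now is None else now
    held = _locks.get(key)
    if held is not None:
        held_by, acquired_at = held
        # A live lock held by someone else blocks us; an expired one does not,
        # so a single faulty entry cannot starve every worker forever.
        if held_by != owner and now - acquired_at < LOCK_TTL_SECONDS:
            return False
    _locks[key] = (owner, now)
    return True
```

With a TTL, the faulty entry mentioned above would have aged out on its own instead of requiring a manual delete.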
1,826,842,280 | Revert logs and 'dns config revert' | null | Revert logs and 'dns config revert': | closed | 2023-07-28T17:56:14Z | 2023-07-28T17:56:56Z | 2023-07-28T17:56:21Z | severo |
1,826,831,733 | feat: 🎸 set log level to debug in prod | null | feat: 🎸 set log level to debug in prod: | closed | 2023-07-28T17:46:38Z | 2023-07-28T17:47:17Z | 2023-07-28T17:46:44Z | severo |
1,826,807,446 | Revert "feat: 🎸 reduce the number of DNS requests (#1581)" | This reverts commit ad754cda0c26bf7d609292853f0c1681380a882e. | Revert "feat: 🎸 reduce the number of DNS requests (#1581)": This reverts commit ad754cda0c26bf7d609292853f0c1681380a882e. | closed | 2023-07-28T17:28:04Z | 2023-07-28T17:28:33Z | 2023-07-28T17:28:11Z | severo |
1,826,801,599 | Rights error for statistics step | ```
INFO: 2023-07-28 16:40:10,399 - root - [split-descriptive-statistics] compute JobManager(job_id=64c3ef6a6c181e70c1093bb7 dataset=Melanit/testsetneuraluma job_info={'job_id': '64c3ef6a6c181e70c1093bb7', 'type': 'split-descriptive-statistics', 'params': {'dataset': 'Melanit/testsetneuraluma', 'revision': '406baec5a6... | Rights error for statistics step: ```
INFO: 2023-07-28 16:40:10,399 - root - [split-descriptive-statistics] compute JobManager(job_id=64c3ef6a6c181e70c1093bb7 dataset=Melanit/testsetneuraluma job_info={'job_id': '64c3ef6a6c181e70c1093bb7', 'type': 'split-descriptive-statistics', 'params': {'dataset': 'Melanit/testsetn... | closed | 2023-07-28T17:24:51Z | 2023-09-04T11:36:30Z | 2023-09-04T11:36:30Z | severo |
1,826,745,616 | PreviousStepFormatError on sil-ai/bloom-speech | For a lot of configs in https://huggingface.co/datasets/sil-ai/bloom-speech, we get PreviousStepFormatError.
<img width="1013" alt="Capture d’écran 2023-07-28 à 12 46 09" src="https://github.com/huggingface/datasets-server/assets/1676121/f2160866-ce78-4654-8e3d-ea1396d1b23e">
| PreviousStepFormatError on sil-ai/bloom-speech: For a lot of configs in https://huggingface.co/datasets/sil-ai/bloom-speech, we get PreviousStepFormatError.
<img width="1013" alt="Capture d’écran 2023-07-28 à 12 46 09" src="https://github.com/huggingface/datasets-server/assets/1676121/f2160866-ce78-4654-8e3d-ea139... | closed | 2023-07-28T16:47:03Z | 2023-11-03T21:56:40Z | 2023-11-03T21:56:40Z | severo |
1,826,642,786 | feat: 🎸 reduce the number of DNS requests | null | feat: 🎸 reduce the number of DNS requests: | closed | 2023-07-28T15:35:23Z | 2023-07-28T15:50:33Z | 2023-07-28T15:50:31Z | severo |
1,826,627,545 | Add num_rows_total to /rows response | null | Add num_rows_total to /rows response: | closed | 2023-07-28T15:24:32Z | 2023-07-28T16:16:14Z | 2023-07-28T16:16:13Z | severo |
1,826,584,792 | The metrics jobs are lasting too long | <img width="306" alt="Capture d’écran 2023-07-28 à 11 01 37" src="https://github.com/huggingface/datasets-server/assets/1676121/0f1c0120-b2e2-4133-b5da-312b7981c4f1">
 | The metrics jobs are lasting too long: <img width="306" alt="Capture d’écran 2023-07-28 à 11 01 37" src="https://github.com/huggingface/datasets-server/assets/1676121/0f1c0120-b2e2-4133-b5da-312b7981c4f1">
| closed | 2023-07-28T15:02:16Z | 2023-07-28T15:53:16Z | 2023-07-28T15:52:44Z | severo |
1,826,231,444 | Replace deprecated use_auth_token with token | Fix partially #1550.
Requires:
- [x] #1589 | Replace deprecated use_auth_token with token: Fix partially #1550.
Requires:
- [x] #1589 | closed | 2023-07-28T11:04:45Z | 2023-08-08T15:20:32Z | 2023-08-08T15:20:31Z | albertvillanova |
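The deprecation handled in this PR is a keyword rename (`use_auth_token` → `token`) across `datasets`/`huggingface_hub` calls. A hedged sketch of the usual compatibility shim; the helper name and behavior are illustrative, not the repository's actual code:

```python
import warnings
from typing import Optional, Union


def resolve_token(
    token: Optional[Union[str, bool]] = None,
    use_auth_token: Optional[Union[str, bool]] = None,
) -> Optional[Union[str, bool]]:
    """Prefer the new `token` keyword; accept the legacy one with a warning."""
    if use_auth_token is not None:
        warnings.warn(
            "`use_auth_token` is deprecated; pass `token` instead.", FutureWarning
        )
        if token is None:
            return use_auth_token
    return token
```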
1,826,191,548 | Update datasets dependency to 2.14.1 version | Fix #1589.
Fix partially #1550. | Update datasets dependency to 2.14.1 version: Fix #1589.
Fix partially #1550. | closed | 2023-07-28T10:35:20Z | 2024-01-26T09:07:40Z | 2023-08-07T09:07:12Z | albertvillanova |
1,825,271,369 | /rows should return numTotalRows as /search | Needed to implement the search on the Hub
`num_total_rows` | /rows should return numTotalRows as /search: Needed to implement the search on the Hub
`num_total_rows` | closed | 2023-07-27T21:48:18Z | 2023-07-28T16:16:14Z | 2023-07-28T16:16:14Z | severo |
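The harmonization requested here concerns the response payload shape. A minimal sketch of a `/rows` payload carrying the same total as `/search`; the `num_rows_total` key comes from the issue, while the rest of the shape is illustrative:

```python
def build_rows_response(rows: list, offset: int, num_rows_total: int) -> dict:
    # Same pagination contract as /search: callers can tell whether more
    # pages exist by checking offset + len(rows) < num_rows_total.
    return {
        "rows": [{"row_idx": offset + i, "row": row} for i, row in enumerate(rows)],
        "num_rows_total": num_rows_total,
    }
```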
1,824,991,934 | Skip real test | Related to https://github.com/huggingface/datasets-server/issues/1085.
Temporarily disabling real test for spawning API to unblock external PRs like https://github.com/huggingface/datasets-server/pull/1570
| Skip real test: Related to https://github.com/huggingface/datasets-server/issues/1085.
Temporarily disabling real test for spawning API to unblock external PRs like https://github.com/huggingface/datasets-server/pull/1570
| closed | 2023-07-27T18:58:34Z | 2023-07-27T19:20:27Z | 2023-07-27T19:20:26Z | AndreaFrancis |
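Temporarily disabling a test in pytest is a one-line marker; a hedged sketch of the pattern this PR applies (the test name and reason string are illustrative, not the repository's actual code):

```python
import pytest


@pytest.mark.skip(reason="Spawning API is flaky; see issue #1085")
def test_real_spawning_api() -> None:
    ...  # body never runs while the marker is in place
```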
1,824,713,758 | Fix dev admin auth | was needed to use the admin endpoint locally in dev mode | Fix dev admin auth: was needed to use the admin endpoint locally in dev mode | closed | 2023-07-27T16:09:35Z | 2023-07-27T16:36:04Z | 2023-07-27T16:17:29Z | lhoestq |
1,824,713,551 | Use cached features in /rows | /rows needs the cached `features` since they're not always available in the parquet metadata.
This was causing some `Image` columns to be seen as a struct of binary data, which are not supported in the viewer (shown as "null").
Therefore I'm now passing the `features` from `config-parquet-and-info` to `config-parqu... | Use cached features in /rows: /rows needs the cached `features` since they're not always available in the parquet metadata.
This was causing some `Image` columns to be seen as a struct of binary data, which are not supported in the viewer (shown as "null").
Therefore I'm now passing the `features` from `config-parq... | closed | 2023-07-27T16:09:28Z | 2023-07-28T12:42:23Z | 2023-07-28T12:42:22Z | lhoestq |
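The fix described here boils down to preferring the cached `features` over types inferred from the parquet metadata alone. A pure-Python sketch of that fallback, with all names illustrative:

```python
from typing import Optional


def resolve_column_types(
    parquet_types: dict, cached_features: Optional[dict]
) -> dict:
    """Prefer cached feature types; fall back to the parquet schema."""
    if cached_features is None:
        return dict(parquet_types)  # best effort: parquet metadata only
    # In the parquet schema an Image column is just struct<bytes, path>,
    # which the viewer would otherwise render as "null"; the cached
    # features identify it as an Image so it can be decoded and displayed.
    return {col: cached_features.get(col, t) for col, t in parquet_types.items()}
```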
1,824,548,414 | feat: 🎸 reduce overcommitment for rows service | it will reduce the number of crashes due to missing RAM | feat: 🎸 reduce overcommitment for rows service: it will reduce the number of crashes due to missing RAM | closed | 2023-07-27T14:51:19Z | 2023-07-27T14:51:49Z | 2023-07-27T14:51:29Z | severo |
1,824,289,506 | Fix torch on macos | This fixes local deployment using docker compose on macos
fixes this error when building the `worker` docker image
```
> [stage-0 13/13] RUN --mount=type=cache,target=/home/.cache/pypoetry/cache --mount=type=cache,target=/home/.cache/pypoetry/artifacts poetry install --no-root: ... | Fix torch on macos: This fixes local deployment using docker compose on macos
fixes this error when building the `worker` docker image
```
> [stage-0 13/13] RUN --mount=type=cache,target=/home/.cache/pypoetry/cache --mount=type=cache,target=/home/.cache/pypoetry/artifacts poetry install --no-root: ... | closed | 2023-07-27T12:39:51Z | 2023-07-27T16:47:48Z | 2023-07-27T16:47:47Z | lhoestq |
1,823,390,707 | remove redundant indices | Removal of redundant indices from simple_cache.py. | remove redundant indices: Removal of redundant indices from simple_cache.py. | closed | 2023-07-27T00:15:52Z | 2023-08-01T19:21:59Z | 2023-07-31T13:18:47Z | geethika-123 |
1,822,741,480 | feat: 🎸 update the modification date of root dataset dir | in cached assets. It will help delete old directories. | feat: 🎸 update the modification date of root dataset dir: in cached assets. It will help delete old directories. | closed | 2023-07-26T16:08:11Z | 2023-07-26T20:06:43Z | 2023-07-26T20:06:42Z | severo |
1,822,651,285 | feat: 🎸 only issues with label P0, P1 or P2 cannot be stale | null | feat: 🎸 only issues with label P0, P1 or P2 cannot be stale: | closed | 2023-07-26T15:22:01Z | 2023-07-26T15:22:15Z | 2023-07-26T15:22:14Z | severo |
1,822,580,246 | Fix flaky executor test on long jobs | Sometimes the executor doesn't have a chance to kill the long job before finishing
close https://github.com/huggingface/datasets-server/issues/1156 | Fix flaky executor test on long jobs: Sometimes the executor doesn't have a chance to kill the long job before finishing
close https://github.com/huggingface/datasets-server/issues/1156 | closed | 2023-07-26T14:43:33Z | 2023-07-26T15:28:17Z | 2023-07-26T15:28:16Z | lhoestq |
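The de-flaking idea alluded to here is the standard poll-with-deadline pattern: instead of a single fixed sleep, retry the check until a timeout, so the executor always gets a chance to kill the long job before the assertion runs. A generic sketch (not the repository's actual test code):

```python
import time
from typing import Callable


def wait_until(
    predicate: Callable[[], bool], timeout: float = 5.0, interval: float = 0.05
) -> bool:
    """Poll `predicate` until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one last check at the deadline
```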