| id | title | body | description | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|---|
1,881,422,809 | Outdated documentation | The /splits endpoint does not return the number of samples anymore: https://huggingface.co/docs/datasets-server/splits | Outdated documentation: The /splits endpoint does not return the number of samples anymore: https://huggingface.co/docs/datasets-server/splits | closed | 2023-09-05T07:58:22Z | 2023-09-06T08:33:35Z | 2023-09-06T08:33:35Z | severo |
1,880,885,797 | feat: 🎸 add resources | null | feat: 🎸 add resources: | closed | 2023-09-04T21:33:37Z | 2023-09-04T21:34:10Z | 2023-09-04T21:33:41Z | severo |
1,880,745,138 | Queue metrics every 2 minutes | null | Queue metrics every 2 minutes: | closed | 2023-09-04T18:49:01Z | 2023-09-04T18:51:35Z | 2023-09-04T18:51:34Z | AndreaFrancis |
1,880,722,127 | feat: 🎸 reduce the number of workers (/4) | null | feat: 🎸 reduce the number of workers (/4): | closed | 2023-09-04T18:23:20Z | 2023-09-04T18:24:02Z | 2023-09-04T18:23:24Z | severo |
1,880,621,967 | feat: 🎸 reduce the max ram | because the nodes have 32GB RAM, and we cannot fit 2x 16GB due to the small overhead Kubernetes requires alongside the pods.
cc @lhoestq | feat: 🎸 reduce the max ram: because the nodes have 32GB RAM, and we cannot fit 2x 16GB due to the small overhead Kubernetes requires alongside the pods.
cc @lhoestq | closed | 2023-09-04T16:44:15Z | 2023-09-04T16:44:53Z | 2023-09-04T16:44:23Z | severo |
1,880,611,419 | fix: 🐛 upgrade gitpython | null | fix: 🐛 upgrade gitpython: | closed | 2023-09-04T16:34:18Z | 2023-09-04T16:38:48Z | 2023-09-04T16:38:47Z | severo |
1,880,602,086 | feat: 🎸 increase resources for admin service | also: yamlformat | feat: 🎸 increase resources for admin service: also: yamlformat | closed | 2023-09-04T16:28:03Z | 2023-09-04T16:29:52Z | 2023-09-04T16:29:51Z | severo |
1,880,533,496 | More memory for workers | 8GB -> 16GB
Fixes (hopefully) https://github.com/huggingface/datasets-server/issues/1758
Idk what kind of pods we have though, we might have to double check it's not too much
It should be enough to perform the indexing of datasets (up to 5GB which is the max) | More memory for workers: 8GB -> 16GB
Fixes (hopefully) https://github.com/huggingface/datasets-server/issues/1758
Idk what kind of pods we have though, we might have to double check it's not too much
It should be enough to perform the indexing of datasets (up to 5GB which is the max) | closed | 2023-09-04T15:44:48Z | 2023-09-04T16:33:04Z | 2023-09-04T16:31:45Z | lhoestq |
1,880,487,270 | [split-duckdb-index] OOM for big datasets | e.g. dataset=c4, config=en, split=train has a partial parquet export of 5GB
There also seem to be more than 12k cache entries with the same error code, over more than 3k datasets. This includes lots of big datasets (c4, oscar, wikipedia and all the variants by users) as well as many image and audio datasets. | [split-duckdb-index] OOM for big datasets: e.g. dataset=c4, config=en, split=train has a partial parquet export of 5GB
There also seem to be more than 12k cache entries with the same error code, over more than 3k datasets. This includes lots of big datasets (c4, oscar, wikipedia and all the variants by users) as we... | closed | 2023-09-04T15:13:29Z | 2023-10-05T15:30:03Z | 2023-10-05T15:30:02Z | lhoestq |
1,880,367,756 | ci: 🎡 upgrade github action | fixes https://github.com/huggingface/datasets-server/issues/1756? | ci: 🎡 upgrade github action: fixes https://github.com/huggingface/datasets-server/issues/1756? | closed | 2023-09-04T14:06:20Z | 2023-09-04T14:11:12Z | 2023-09-04T14:09:54Z | severo |
1,880,349,116 | The CI is broken. Due to actions/checkout release? | See https://github.com/actions/checkout/releases. They released v4 one hour ago
See errors in the CI here: https://github.com/huggingface/datasets-server/actions/runs/6074148922 | The CI is broken. Due to actions/checkout release?: See https://github.com/actions/checkout/releases. They released v4 one hour ago
See errors in the CI here: https://github.com/huggingface/datasets-server/actions/runs/6074148922 | closed | 2023-09-04T13:57:25Z | 2023-09-04T14:09:56Z | 2023-09-04T14:09:55Z | severo |
1,880,314,164 | [refactor] extract split name from the URL | We use the same code [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-b31b94865cf356d2b241fc7f38f98bd9484d61464a991d06364bdf82c2c5ba4eR292), [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-51b62bca3846b24d6554358a34bd39d0db4db7f24e378182fde2d549e2d1268bR353) and [her... | [refactor] extract split name from the URL: We use the same code [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-b31b94865cf356d2b241fc7f38f98bd9484d61464a991d06364bdf82c2c5ba4eR292), [here](https://github.com/huggingface/datasets-server/pull/1750/files#diff-51b62bca3846b24d6554358a34bd39d0db... | closed | 2023-09-04T13:39:09Z | 2024-01-24T10:01:29Z | 2024-01-24T10:01:29Z | severo |
1,880,286,071 | Remove PARQUET_AND_INFO_BLOCKED_DATASETS | See https://github.com/huggingface/datasets-server/pull/1751#pullrequestreview-1609519374
> should we remove the other block list (only used in config-parquet-and-info) ?
| Remove PARQUET_AND_INFO_BLOCKED_DATASETS: See https://github.com/huggingface/datasets-server/pull/1751#pullrequestreview-1609519374
> should we remove the other block list (only used in config-parquet-and-info) ?
| closed | 2023-09-04T13:24:21Z | 2024-02-06T15:06:40Z | 2024-02-06T15:06:40Z | severo |
1,880,283,005 | Delete existing cache entries for blocked datasets | Maybe in a cronjob, or in a migration every time the block list has changed?
See https://github.com/huggingface/datasets-server/pull/1751#pullrequestreview-1609519374 | Delete existing cache entries for blocked datasets: Maybe in a cronjob, or in a migration every time the block list has changed?
See https://github.com/huggingface/datasets-server/pull/1751#pullrequestreview-1609519374 | closed | 2023-09-04T13:22:46Z | 2024-02-06T15:03:03Z | 2024-02-06T15:03:02Z | severo |
1,880,273,596 | Dataset viewer fails if there is a split with no examples | See for example https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ast/train
This is due to the dataset info not being filled for the empty splits
Should be fixed in `datasets` with https://github.com/huggingface/datasets/pull/6211
| Dataset viewer fails if there is a split with no examples: See for example https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ast/train
This is due to the dataset info not being filled for the empty splits
Should be fixed in `datasets` with https://github.com/huggingface/datasets/pull/621... | closed | 2023-09-04T13:17:53Z | 2023-09-25T09:09:14Z | 2023-09-25T09:09:14Z | lhoestq |
1,880,232,015 | Add blockedDatasets variable | and block `alexandrainst/nota`, see [internal slack thread](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1693548446825959)
The idea is to never process anything for blocked datasets. I also chose to not store anything in the cache in this case | Add blockedDatasets variable: and block `alexandrainst/nota`, see [internal slack thread](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1693548446825959)
The idea is to never process anything for blocked datasets. I also chose to not store anything in the cache in this case | closed | 2023-09-04T12:56:30Z | 2023-09-04T13:32:07Z | 2023-09-04T13:32:06Z | lhoestq |
1,879,934,806 | Index partial parquet | It was failing to get the parquet files because it was not using the "partial-" split directory prefix for partially exported data
Fix https://github.com/huggingface/datasets-server/issues/1749 | Index partial parquet: It was failing to get the parquet files because it was not using the "partial-" split directory prefix for partially exported data
Fix https://github.com/huggingface/datasets-server/issues/1749 | closed | 2023-09-04T10:03:19Z | 2023-09-04T16:00:42Z | 2023-09-04T12:54:57Z | lhoestq |
1,879,827,999 | Indexing fails for partial splits | because it tries to get the parquet files from e.g. the `config/train` directory instead of `config/partial-train`.
Some impacted datasets: common voice 11, c4 | Indexing fails for partial splits: because it tries to get the parquet files from e.g. the `config/train` directory instead of `config/partial-train`.
Some impacted datasets: common voice 11, c4 | closed | 2023-09-04T09:00:31Z | 2023-09-04T13:36:36Z | 2023-09-04T12:54:59Z | lhoestq |
1,877,133,876 | Don't call the Hub datasets /tree endpoint with expand=True | This puts a lot of pressure on the Hub, and can even break it for big datasets
This is because the Hub gets the lastCommit for each file, and somehow the implementation is sort of n^2 apparently.
Calling /tree with `expand=True` can happen in this cascade of events involving `hffs` (aka `huggingface_hub.hf_file_sy... | Don't call the Hub datasets /tree endpoint with expand=True: This puts a lot of pressure on the Hub, and can even break it for big datasets
This is because the Hub gets the lastCommit for each file, and somehow the implementation is sort of n^2 apparently.
Calling /tree with `expand=True` can happen in this casca... | closed | 2023-09-01T10:04:59Z | 2024-02-06T15:04:04Z | 2024-02-06T15:04:04Z | lhoestq |
1,875,919,722 | fix: Error response in rows when cache is failed | Fix for rows issue in https://github.com/huggingface/datasets-server/issues/1661
When the parquet cache failed, it returned 404 instead of 500. Now, it returns the detailed error as in /search.
| fix: Error response in rows when cache is failed: Fix for rows issue in https://github.com/huggingface/datasets-server/issues/1661
When the parquet cache failed, it returned 404 instead of 500. Now, it returns the detailed error as in /search.
| closed | 2023-08-31T17:03:44Z | 2023-09-04T12:52:28Z | 2023-09-04T12:52:27Z | AndreaFrancis |
1,875,212,109 | Ignore gitpython vuln again | same as https://github.com/huggingface/datasets-server/pull/1744 | Ignore gitpython vuln again: same as https://github.com/huggingface/datasets-server/pull/1744 | closed | 2023-08-31T10:09:19Z | 2023-09-04T18:10:57Z | 2023-08-31T10:18:19Z | lhoestq |
1,873,854,978 | Improve error messages content in `split-descriptive-statistics` | I've found errors in statistics computation in the cache database and want to debug them, but the error message doesn't contain feature names, so it's harder | Improve error messages content in `split-descriptive-statistics`: I've found errors in statistics computation in the cache database and want to debug them, but the error message doesn't contain feature names, so it's harder | closed | 2023-08-30T14:56:35Z | 2023-09-01T12:53:10Z | 2023-09-01T12:53:09Z | polinaeterna |
1,873,753,399 | ignore gitpython vuln | null | ignore gitpython vuln: | closed | 2023-08-30T14:03:15Z | 2023-08-31T10:03:41Z | 2023-08-31T10:03:39Z | lhoestq |
1,873,695,358 | Support audio in rows and search | Using this code I can run `to_rows_list` in less than 2sec on 100 wav files of 1.5MB (from 20sec before).
I chose to stop converting to WAV and only send the MP3 file to the user, and to parallelize the audio conversions.
This should enable the viewer on audio datasets for both /rows and /search :)
Though to try t... | Support audio in rows and search: Using this code I can run `to_rows_list` in less than 2sec on 100 wav files of 1.5MB (from 20sec before).
I chose to stop converting to WAV and only send the MP3 file to the user, and to parallelize the audio conversions.
This should enable the viewer on audio datasets for both /ro... | closed | 2023-08-30T13:34:25Z | 2023-09-04T16:37:33Z | 2023-09-01T10:24:24Z | lhoestq |
1,871,944,668 | Support Search for datasets on the first 5GB of big datasets | If a dataset is in parquet format the viewer shows the full dataset but search is disabled.
In this case it would be nice to support search anyway, at least on the first 5GB. | Support Search for datasets on the first 5GB of big datasets: If a dataset is in parquet format the viewer shows the full dataset but search is disabled.
In this case it would be nice to support search anyway, at least on the first 5GB. | closed | 2023-08-29T15:41:42Z | 2023-11-07T09:52:28Z | 2023-11-07T09:52:28Z | lhoestq |
1,871,752,474 | Add error message in admin app | when the dataset status query returns errors (eg today when whoami-v2 was down) | Add error message in admin app: when the dataset status query returns errors (eg today when whoami-v2 was down) | closed | 2023-08-29T14:05:22Z | 2023-08-29T14:16:45Z | 2023-08-29T14:16:44Z | lhoestq |
1,871,720,950 | Audio feature is not displayed correctly on the first page (not in pagination) | It says `Not supported with pagination yet`, example: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
As far as I understand it is an expected behavior for pages starting from page 2 because audio was [intentionally disabled](https://github.com/huggingface/datasets-server/issues/1255) for `/r... | Audio feature is not displayed correctly on the first page (not in pagination): It says `Not supported with pagination yet`, example: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
As far as I understand it is an expected behavior for pages starting from page 2 because audio was [intentional... | closed | 2023-08-29T13:49:28Z | 2023-09-08T09:27:44Z | 2023-09-08T09:27:44Z | polinaeterna |
1,870,640,562 | Add endpoint /hub-cache | The Hub's backend will use it to get the status of all the datasets. It will replace /valid, with the benefit that it's paginated, so the response should not time out.
Also: the idea is to call this endpoint only at Hub's backend startup. Then, we plan to update the statuses with server-sent events (https://github.co... | Add endpoint /hub-cache: The Hub's backend will use it to get the status of all the datasets. It will replace /valid, with the benefit that it's paginated, so the response should not time out.
Also: the idea is to call this endpoint only at Hub's backend startup. Then, we plan to update the statuses with server-sent ... | closed | 2023-08-28T23:23:20Z | 2023-09-05T15:46:21Z | 2023-09-05T15:45:49Z | severo |
1,870,085,369 | Raise custom disk error in job runners with cache when `PermissionError` is raised | related to https://github.com/huggingface/datasets-server/issues/1583 | Raise custom disk error in job runners with cache when `PermissionError` is raised: related to https://github.com/huggingface/datasets-server/issues/1583 | closed | 2023-08-28T16:32:21Z | 2023-08-29T09:23:16Z | 2023-08-29T09:23:14Z | polinaeterna |
1,869,914,037 | fix tailscale | null | fix tailscale: | closed | 2023-08-28T14:47:24Z | 2023-08-28T15:26:39Z | 2023-08-28T14:49:34Z | glegendre01 |
1,869,890,323 | feat: 🎸 increase the number of pods for /search | The search is now in production in the Hub, so we need to increase the number of pods | feat: 🎸 increase the number of pods for /search: The search is now in production in the Hub, so we need to increase the number of pods | closed | 2023-08-28T14:34:14Z | 2023-08-28T14:54:40Z | 2023-08-28T14:54:39Z | severo |
1,869,694,508 | Search for nested data | Duckdb does support indexing nested data so we're all good on that side already.
Therefore I simply improved the indexable column detection to check for nested data.
The old code used to index all the columns as long as there was at least one non-nested string column though, so we just need to refresh the datasets w... | Search for nested data: Duckdb does support indexing nested data so we're all good on that side already.
Therefore I simply improved the indexable column detection to check for nested data.
The old code used to index all the columns as long as there was at least one non-nested string column though, so we just need t... | closed | 2023-08-28T12:45:17Z | 2023-09-04T18:11:19Z | 2023-08-31T22:57:01Z | lhoestq |
1,869,282,369 | Block open-llm-leaderboard | The 800+ datasets with 60+ configs each have been updated which has filled up the queue to the point that the other datasets are not processed as fast as they should.
Blocking them for now, until we have a better way to handle them | Block open-llm-leaderboard: The 800+ datasets with 60+ configs each have been updated which has filled up the queue to the point that the other datasets are not processed as fast as they should.
Blocking them for now, until we have a better way to handle them | closed | 2023-08-28T08:36:21Z | 2023-08-28T16:14:58Z | 2023-08-28T16:14:57Z | lhoestq |
1,867,717,638 | Add API fuzzer to the tests? | Tools exist, see https://openapi.tools/ | Add API fuzzer to the tests?: Tools exist, see https://openapi.tools/ | closed | 2023-08-25T21:44:10Z | 2023-10-04T15:04:16Z | 2023-10-04T15:04:16Z | severo |
1,867,702,913 | feat: 🎸 create step dataset-hub-cache | A new step, specific to the Hub (i.e. it will not be backward-compatible), to help the Hub have a cache of the information it needs for each dataset
Note that it's the first step and endpoint that is specific to the Hub. I think we should have more of them (we can use the `-hub` prefix to make it clear in the code).... | feat: 🎸 create step dataset-hub-cache: A new step, specific to the Hub (i.e. it will not be backward-compatible), to help the Hub have a cache of the information it needs for each dataset
Note that it's the first step and endpoint that is specific to the Hub. I think we should have more of them (we can use the `-hu... | closed | 2023-08-25T21:24:31Z | 2023-08-28T18:48:03Z | 2023-08-28T16:16:39Z | severo |
1,867,526,809 | add `truncated` field to /first-rows | On the Hub's side, we have to guess if the result has been truncated. For example, if the result has 35 rows: maybe it has not been truncated because the split only had 35 rows, or maybe it has been truncated because the cells contain heavy data -> *we cannot detect*.
It would be more explicit to return `truncated: ... | add `truncated` field to /first-rows: On the Hub's side, we have to guess if the result has been truncated. For example, if the result has 35 rows: maybe it has not been truncated because the split only had 35 rows, or maybe it has been truncated because the cells contain heavy data -> *we cannot detect*.
It would b... | closed | 2023-08-25T18:39:57Z | 2023-09-21T15:22:43Z | 2023-09-21T15:22:43Z | severo |
1,867,450,765 | fix: Change score alias name in FST query | Fix for https://github.com/huggingface/datasets-server/issues/1729 | fix: Change score alias name in FST query: Fix for https://github.com/huggingface/datasets-server/issues/1729 | closed | 2023-08-25T17:38:42Z | 2023-08-25T19:38:34Z | 2023-08-25T19:38:33Z | AndreaFrancis |
1,867,289,353 | /search num_rows_total field is incoherent | In the following example, num_rows_total=1 (ie. the total number of results for that search) while the `rows` field is an array of 100 rows
https://datasets-server.huggingface.co/search?dataset=loubnabnl/gpt4-1k-annotations&config=default&split=train&query=pokemon&offset=0&limit=100 | /search num_rows_total field is incoherent: In the following example, num_rows_total=1 (ie. the total number of results for that search) while the `rows` field is an array of 100 rows
https://datasets-server.huggingface.co/search?dataset=loubnabnl/gpt4-1k-annotations&config=default&split=train&query=pokemon&offset=0... | closed | 2023-08-25T15:55:46Z | 2023-08-25T20:28:30Z | 2023-08-25T20:28:30Z | severo |
1,867,180,294 | Add TTL for unicity_id locks | Added the `ttl` parameter to `lock()`.
(the only supported value is 600 though, which is the value of the TTL index in mongo - but this is extendable for later if needed)
Close https://github.com/huggingface/datasets-server/issues/1727 | Add TTL for unicity_id locks: Added the `ttl` parameter to `lock()`.
(the only supported value is 600 though, which is the value of the TTL index in mongo - but this is extendable for later if needed)
Close https://github.com/huggingface/datasets-server/issues/1727 | closed | 2023-08-25T14:41:15Z | 2023-08-27T16:11:22Z | 2023-08-27T16:11:21Z | lhoestq |
1,866,947,915 | Locks sometimes block all the workers | It can happen that the job from `Queue().get_next_waiting_job()` is wrongly locked, which can make the workers fail to start a new job.
A way to fix this is to find a way to not have wrong locks or simply to ignore locked jobs in `Queue().get_next_waiting_job()`. | Locks sometimes block all the workers: It can happen that the job from `Queue().get_next_waiting_job()` is wrongly locked, which can make the workers fail to start a new job.
A way to fix this is to find a way to not have wrong locks or simply to ignore locked jobs in `Queue().get_next_waiting_job()`. | closed | 2023-08-25T12:15:10Z | 2023-08-27T16:11:22Z | 2023-08-27T16:11:22Z | lhoestq |
1,865,782,924 | Cached assets to s3 | The first part of https://github.com/huggingface/datasets-server/issues/1406
Migration for cached-assets enabled only for a list of datasets initially (I added the "asoria/image" dataset for initial testing; later, we can remove this logic and apply it to all datasets).
Note that the new logic: validates if the file... | Cached assets to s3: The first part of https://github.com/huggingface/datasets-server/issues/1406
Migration for cached-assets enabled only for a list of datasets initially (I added the "asoria/image" dataset for initial testing; later, we can remove this logic and apply it to all datasets).
Note that the new logic: ... | closed | 2023-08-24T19:47:56Z | 2023-09-27T12:39:46Z | 2023-09-27T12:39:45Z | AndreaFrancis |
1,865,396,120 | Use features in search | The search endpoint was using the feature types from the arrow table returned by duckdb, which doesn't contain any metadata about the Image type.
So I added a `features` field to the `split-duckdb-index` job to store the feature types that the search endpoint can use to correctly load the image data.
I added a mi... | Use features in search: The search endpoint was using the feature types from the arrow table returned by duckdb, which doesn't contain any metadata about the Image type.
So I added a `features` field to the `split-duckdb-index` job to store the feature types that the search endpoint can use to correctly load the ima... | closed | 2023-08-24T15:23:13Z | 2023-09-07T11:06:21Z | 2023-08-25T10:20:04Z | lhoestq |
1,864,797,501 | Block KakologArchives/KakologArchives | Has tons of data files and is updated every day.
12k commits already in
https://huggingface.co/datasets/KakologArchives/KakologArchives/tree/refs%2Fconvert%2Fparquet
Let's block it until we have a better way of handling big datasets with frequent updates | Block KakologArchives/KakologArchives: Has tons of data files and is updated every day.
12k commits already in
https://huggingface.co/datasets/KakologArchives/KakologArchives/tree/refs%2Fconvert%2Fparquet
Let's block it until we have a better way of handling big datasets with frequent updates | closed | 2023-08-24T09:48:23Z | 2023-08-24T15:16:45Z | 2023-08-24T15:16:44Z | lhoestq |
1,864,044,303 | fix: 🐛 expose X-Error-Code and X-Revision headers to browser | Fixes #1722 | fix: 🐛 expose X-Error-Code and X-Revision headers to browser: Fixes #1722 | closed | 2023-08-23T21:32:54Z | 2023-08-23T21:41:58Z | 2023-08-23T21:41:57Z | severo |
1,864,027,896 | Set the `Access-Control-Expose-Headers` header to allow access to X-Error-Code in the browser | Since the dataset viewer on the Hub lets us navigate in the pages of rows, and search, all in the browser, we need to let the browser code access the X-Error-Code header, to be able to handle the errors adequately.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers | Set the `Access-Control-Expose-Headers` header to allow access to X-Error-Code in the browser: Since the dataset viewer on the Hub lets us navigate in the pages of rows, and search, all in the browser, we need to let the browser code access the X-Error-Code header, to be able to handle the errors adequately.
See htt... | closed | 2023-08-23T21:20:09Z | 2023-08-23T21:41:58Z | 2023-08-23T21:41:58Z | severo |
1,863,931,685 | Increase resources and change queue metrics time | null | Increase resources and change queue metrics time: | closed | 2023-08-23T19:59:03Z | 2023-08-23T20:00:13Z | 2023-08-23T20:00:12Z | AndreaFrancis |
1,863,714,834 | [docs] ClickHouse integration | A first draft for querying Hub datasets with ClickHouse :)
- [x] Link to the blog post once it is published | [docs] ClickHouse integration: A first draft for querying Hub datasets with ClickHouse :)
- [x] Link to the blog post once it is published | closed | 2023-08-23T17:09:54Z | 2023-09-24T20:13:27Z | 2023-09-05T15:51:37Z | stevhliu |
1,863,678,539 | Download parquet files with huggingface_hub instead of duckdb in `split-descriptive-statistics` | will fix https://github.com/huggingface/datasets-server/issues/1712#issuecomment-1690029285 | Download parquet files with huggingface_hub instead of duckdb in `split-descriptive-statistics`: will fix https://github.com/huggingface/datasets-server/issues/1712#issuecomment-1690029285 | closed | 2023-08-23T16:42:38Z | 2023-08-25T14:45:28Z | 2023-08-25T14:45:27Z | polinaeterna |
1,862,142,672 | Implement Server-sent events to update the Hub cache | The Hub needs to know which datasets have a viewer, or only a preview. Currently, we publish the /valid endpoint, which returns a list of all the dataset names that have the search capability, the viewer, or just the preview.
It has two drawbacks:
1. it gives information about the gated datasets
2. it does not scale ... | Implement Server-sent events to update the Hub cache: The Hub needs to know which datasets have a viewer, or only a preview. Currently, we publish the /valid endpoint, which returns a list of all the dataset names that have the search capability, the viewer, or just the preview.
It has two drawbacks:
1. it gives infor... | closed | 2023-08-22T20:18:25Z | 2023-10-19T11:48:07Z | 2023-10-19T11:48:06Z | severo |
1,862,093,724 | Download parquet in split-duckdb-index | Fixes https://github.com/huggingface/datasets-server/issues/1686
Using hf_download instead of loading parquet files to duckdb directly.
| Download parquet in split-duckdb-index: Fixes https://github.com/huggingface/datasets-server/issues/1686
Using hf_download instead of loading parquet files to duckdb directly.
| closed | 2023-08-22T19:44:53Z | 2023-08-23T13:52:41Z | 2023-08-23T13:52:40Z | AndreaFrancis |
1,861,723,307 | Start job only if waiting | related to https://github.com/huggingface/datasets-server/issues/1467#issuecomment-1687104152 | Start job only if waiting: related to https://github.com/huggingface/datasets-server/issues/1467#issuecomment-1687104152 | closed | 2023-08-22T15:36:37Z | 2023-08-22T19:50:54Z | 2023-08-22T19:50:53Z | lhoestq |
1,859,664,571 | Reduce resources | null | Reduce resources : | closed | 2023-08-21T15:45:37Z | 2023-08-21T15:47:05Z | 2023-08-21T15:47:04Z | AndreaFrancis |
1,859,099,518 | Update admin app requirements.txt | was already updated in pyproject.toml | Update admin app requirements.txt: was already updated in pyproject.toml | closed | 2023-08-21T10:43:06Z | 2023-08-21T10:43:36Z | 2023-08-21T10:43:35Z | lhoestq |
1,859,080,092 | Search doesn't always use Image type | It seems the feature type is not loaded correctly, resulting in a binary type that is ignored in the viewer
e.g.
https://datasets-server.huggingface.co/search?dataset=lambdalabs/pokemon-blip-captions&config=lambdalabs--pokemon-blip-captions&split=train&query=red&offset=0&limit=100
```
"features": [
{... | Search doesn't always use Image type: It seems the feature type is not loaded correctly, resulting in a binary type that is ignored in the viewer
e.g.
https://datasets-server.huggingface.co/search?dataset=lambdalabs/pokemon-blip-captions&config=lambdalabs--pokemon-blip-captions&split=train&query=red&offset=0&limi... | closed | 2023-08-21T10:30:33Z | 2023-08-25T15:37:36Z | 2023-08-25T10:20:05Z | lhoestq |
1,858,969,064 | Missing auth for `split-descriptive-statistics` | ```
| INFO: 2023-08-21 09:23:35,871 - root - [split-descriptive-statistics] compute JobManager(job_id=64dd8b2e4833e19c97d7c0db dataset=mozilla-foundation/common_voice_9_0 job_info={'job_id': '64dd8b2e4833e19c97d7c0db', 'type': 'split-descriptiv │
│ INFO: 2023-08-21 09:23:35,879 - root - Compute descriptive statistics... | Missing auth for `split-descriptive-statistics`: ```
| INFO: 2023-08-21 09:23:35,871 - root - [split-descriptive-statistics] compute JobManager(job_id=64dd8b2e4833e19c97d7c0db dataset=mozilla-foundation/common_voice_9_0 job_info={'job_id': '64dd8b2e4833e19c97d7c0db', 'type': 'split-descriptiv │
│ INFO: 2023-08-21 09:... | closed | 2023-08-21T09:25:14Z | 2023-08-25T14:45:28Z | 2023-08-25T14:45:28Z | lhoestq |
1,857,300,627 | New attempt jwt array | I restored the list of public keys for JWT. But now, we test at startup that the format of the keys is correct | New attempt jwt array: I restored the list of public keys for JWT. But now, we test at startup that the format of the keys is correct | closed | 2023-08-18T21:16:25Z | 2023-08-18T21:23:11Z | 2023-08-18T21:23:10Z | severo |
1,856,954,754 | Enable Duckdb index on nested texts | E.g. for dialog datasets with features
```python
Features({
"conversations": [{"from": Value("string"), "value": Value("string")}]
})
```
like https://huggingface.co/datasets/LDJnr/Puffin | Enable Duckdb index on nested texts: E.g. for dialog datasets with features
```python
Features({
"conversations": [{"from": Value("string"), "value": Value("string")}]
})
```
like https://huggingface.co/datasets/LDJnr/Puffin | closed | 2023-08-18T16:01:39Z | 2023-08-31T22:57:02Z | 2023-08-31T22:57:02Z | lhoestq |
1,856,950,887 | Revert "Create jwt array again (#1708)" | This reverts commit 45cd1298b62f8f923eb6bb7763ef8824a397e242. | Revert "Create jwt array again (#1708)": This reverts commit 45cd1298b62f8f923eb6bb7763ef8824a397e242. | closed | 2023-08-18T15:58:33Z | 2023-08-18T15:59:08Z | 2023-08-18T15:58:38Z | severo |
1,856,874,379 | Create jwt array again | A Helm chart had a bad indentation | Create jwt array again: A Helm chart had a bad indentation | closed | 2023-08-18T15:03:15Z | 2023-08-18T15:03:54Z | 2023-08-18T15:03:54Z | severo |
1,856,854,450 | Revert both | null | Revert both: | closed | 2023-08-18T14:50:29Z | 2023-08-18T14:51:15Z | 2023-08-18T14:50:34Z | severo |
1,856,850,156 | Revert "Add unique compound index to cache metric (#1703)" | This reverts commit ab99a259cbf9f961a5286a583239db8d50677e8e. | Revert "Add unique compound index to cache metric (#1703)": This reverts commit ab99a259cbf9f961a5286a583239db8d50677e8e. | closed | 2023-08-18T14:48:19Z | 2023-08-18T14:48:26Z | 2023-08-18T14:48:25Z | severo |
1,855,734,092 | Redirect the API root to the docs | ie. https://datasets-server.huggingface.co/ => https://huggingface.co/docs/datasets-server | Redirect the API root to the docs: ie. https://datasets-server.huggingface.co/ => https://huggingface.co/docs/datasets-server | open | 2023-08-17T21:27:51Z | 2024-02-06T15:02:04Z | null | severo |
1,855,726,623 | Use multiple keys for jwt decoding | See https://github.com/huggingface/moon-landing/pull/7202 (internal)
Note that I created the secrets in the infra. | Use multiple keys for jwt decoding: See https://github.com/huggingface/moon-landing/pull/7202 (internal)
Note that I created the secrets in the infra. | closed | 2023-08-17T21:20:06Z | 2023-08-18T13:58:17Z | 2023-08-18T13:58:16Z | severo |
1,855,696,306 | Add unique compound index to cache metric | null | Add unique compound index to cache metric: | closed | 2023-08-17T20:53:13Z | 2023-08-17T21:14:00Z | 2023-08-17T21:13:58Z | AndreaFrancis |
1,855,581,642 | Temporarily delete cache metric index | It will be added again in the next deploy as:
```
{
"fields":["kind", "http_status", "error_code"],
"unique": True
}
```
(In another PR) | Temporarily delete cache metric index: It will be added again in the next deploy as:
```
{
"fields":["kind", "http_status", "error_code"],
"unique": True
}
```
(In another PR) | closed | 2023-08-17T19:30:04Z | 2023-08-17T20:32:20Z | 2023-08-17T20:32:19Z | AndreaFrancis |
1,855,558,409 | Set collect cache metrics as default schedule | null | Set collect cache metrics as default schedule: | closed | 2023-08-17T19:10:54Z | 2023-08-17T19:15:47Z | 2023-08-17T19:15:46Z | AndreaFrancis |
1,855,400,835 | Remove unique index in cacheTotalMetric | null | Remove unique index in cacheTotalMetric: | closed | 2023-08-17T17:18:46Z | 2023-08-17T17:20:21Z | 2023-08-17T17:20:20Z | AndreaFrancis |
1,855,335,729 | Fix start job lock owner | Related to https://github.com/huggingface/datasets-server/pull/1420
fixes (hopefully) https://github.com/huggingface/datasets-server/issues/1467 | Fix start job lock owner: Related to https://github.com/huggingface/datasets-server/pull/1420
fixes (hopefully) https://github.com/huggingface/datasets-server/issues/1467 | closed | 2023-08-17T16:34:01Z | 2023-08-17T18:05:11Z | 2023-08-17T18:05:10Z | lhoestq |
1,855,309,223 | Rollback queue incremental metrics | Currently, queue metrics are getting wrong values and sometimes negative.
It could be related to an issue with jobs being processed more than one time by different workers https://github.com/huggingface/datasets-server/issues/1467.
Rolling back incremental queue metrics until job processing issues have been solved.
!... | Rollback queue incremental metrics: Currently, queue metrics are getting wrong values and sometimes negative.
It could be related to an issue with jobs being processed more than one time by different workers https://github.com/huggingface/datasets-server/issues/1467.
Rolling back incremental queue metrics until job pr... | closed | 2023-08-17T16:16:52Z | 2023-08-17T16:31:51Z | 2023-08-17T16:31:51Z | AndreaFrancis |
1,855,162,947 | Add unique constraint to CacheTotalMetric | null | Add unique constraint to CacheTotalMetric: | closed | 2023-08-17T14:50:04Z | 2023-08-17T15:10:34Z | 2023-08-17T15:10:33Z | AndreaFrancis |
1,853,962,474 | delete obsolete cache records | When running dataset-config-names force-refresh for datasets with only one config for https://github.com/huggingface/datasets-server/issues/1550 , I found that for dataset `triple-t/dummy`, the previous config remains in the db `triple-t/dummy.`
All cache records related to `triple--t/dummy` should have been removed.... | delete obsolete cache records: When running dataset-config-names force-refresh for datasets with only one config for https://github.com/huggingface/datasets-server/issues/1550 , I found that for dataset `triple-t/dummy`, the previous config remains in the db `triple-t/dummy.`
All cache records related to `triple--t/d... | closed | 2023-08-16T22:03:29Z | 2023-09-15T13:47:57Z | 2023-09-15T13:47:18Z | AndreaFrancis |
1,853,860,310 | Lock queue metrics while update | null | Lock queue metrics while update: | closed | 2023-08-16T20:27:39Z | 2023-08-17T13:56:35Z | 2023-08-17T13:56:34Z | AndreaFrancis |
1,853,635,877 | Load a parquet export with `pyarrow.parquet.ParquetDataset` | `read()` fails because it tries to load the `index.duckdb` file as a parquet file
```python
from huggingface_hub import HfFileSystem
import pyarrow.parquet as pq
ds = pq.ParquetDataset("datasets/squad@~parquet", filesystem=HfFileSystem()).read()
```
raises
```
ArrowInvalid: Could not open Parquet input ... | Load a parquet export with `pyarrow.parquet.ParquetDataset`: `read()` fails because it tries to load the `index.duckdb` file as a parquet file
```python
from huggingface_hub import HfFileSystem
import pyarrow.parquet as pq
ds = pq.ParquetDataset("datasets/squad@~parquet", filesystem=HfFileSystem()).read()
```
... | closed | 2023-08-16T17:21:54Z | 2024-06-19T14:21:23Z | 2024-06-19T14:21:23Z | lhoestq |
1,853,558,203 | feat: 🎸 allow passing JWT on authorization header + raise error is invalid | The authorization header must use the "jwt:" prefix, ie: `authorization: Bearer jwt:....token....`
Fixes #1690 and #934.
Tasks:
- [x] allow jwt on authorization header
- [x] return an error if the JWT is invalid
- [x] add docs + openapi
- [x] <strike>add e2e tests</strike> as we run the e2e against the CI Hub... | feat: 🎸 allow passing JWT on authorization header + raise error is invalid: The authorization header must use the "jwt:" prefix, ie: `authorization: Bearer jwt:....token....`
Fixes #1690 and #934.
Tasks:
- [x] allow jwt on authorization header
- [x] return an error if the JWT is invalid
- [x] add docs + opena... | closed | 2023-08-16T16:22:22Z | 2023-08-17T16:09:23Z | 2023-08-17T16:08:48Z | severo |
1,853,502,496 | Increase config-parquet-and-info version | following #1685
this will update all the datasets and require lots of time and workers :) | Increase config-parquet-and-info version: following #1685
this will update all the datasets and require lots of time and workers :) | closed | 2023-08-16T15:47:03Z | 2023-08-16T15:55:53Z | 2023-08-16T15:55:52Z | lhoestq |
1,853,485,504 | Parquet renames docs | Following https://github.com/huggingface/datasets-server/pull/1685 | Parquet renames docs: Following https://github.com/huggingface/datasets-server/pull/1685 | closed | 2023-08-16T15:36:27Z | 2023-08-17T18:42:02Z | 2023-08-17T18:41:31Z | lhoestq |
1,853,463,919 | Return an error when the JWT is not valid | Currently, we silently ignore errors in the JWT and try other authentication mechanisms.
Instead, we should return an error when the JWT is not valid. It will help trigger a JWT renewal, in particular.
We should give the reason as the error_code (passed in the X-Error-Code header) to be able to discriminate betwe... | Return an error when the JWT is not valid: Currently, we silently ignore errors in the JWT and try other authentication mechanisms.
Instead, we should return an error when the JWT is not valid. It will help trigger a JWT renewal, in particular.
We should give the reason as the error_code (passed in the X-Error-Co... | closed | 2023-08-16T15:24:03Z | 2023-08-17T16:08:49Z | 2023-08-17T16:08:49Z | severo |
1,853,370,773 | Handle breaking change in google dependency? | See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616
Should we downgrade the dependency, or fix the datasets? | Handle breaking change in google dependency?: See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616
Should we downgrade the dependency, or fix the datasets? | closed | 2023-08-16T14:31:28Z | 2024-02-06T14:59:59Z | 2024-02-06T14:59:59Z | severo |
1,852,252,236 | feat: 🎸 add num_rows_per_page in /rows and /search responses | also rename num_total_rows in num_rows_total
BREAKING CHANGE: 🧨 field num_total_rows in /rows and /search has been renamed num_rows_total
fixes #1687 | feat: 🎸 add num_rows_per_page in /rows and /search responses: also rename num_total_rows in num_rows_total
BREAKING CHANGE: 🧨 field num_total_rows in /rows and /search has been renamed num_rows_total
fixes #1687 | closed | 2023-08-15T22:52:17Z | 2023-08-16T15:21:37Z | 2023-08-16T15:20:57Z | severo |
1,852,063,494 | The /rows and /search responses should return the maximum number of rows per page | To be auto-sufficient, the response of /rows and /search should return the (maximum) number of rows per page. For now, we hardcode 100 in the client.
It could be `max_rows_per_page` (and rename `num_total_rows` to `total_rows`?) | The /rows and /search responses should return the maximum number of rows per page: To be auto-sufficient, the response of /rows and /search should return the (maximum) number of rows per page. For now, we hardcode 100 in the client.
It could be `max_rows_per_page` (and rename `num_total_rows` to `total_rows`?) | closed | 2023-08-15T20:15:20Z | 2023-08-16T15:20:58Z | 2023-08-16T15:20:58Z | severo |
1,851,641,126 | Enable duckdb index on gated datasets | > Currently, duckdb index is not supported for gated/private datasets. I opened a question in duckdb foundations but didn't receive a response yet, I think I will open an issue in the repo https://github.com/duckdb/foundation-discussions/discussions/16
from [Slack](https://huggingface.slack.com/archives/C04L6P8KNQ5/... | Enable duckdb index on gated datasets: > Currently, duckdb index is not supported for gated/private datasets. I opened a question in duckdb foundations but didn't receive a response yet, I think I will open an issue in the repo https://github.com/duckdb/foundation-discussions/discussions/16
from [Slack](https://hugg... | closed | 2023-08-15T15:21:46Z | 2023-08-23T13:52:42Z | 2023-08-23T13:52:42Z | severo |
1,851,143,583 | Rename parquet files | from `{config}/{dataset_name}-{split}{sharded_suffix}.parquet` to `{config}/{split}/{shard_idx:04d}.parquet` | Rename parquet files: from `{config}/{dataset_name}-{split}{sharded_suffix}.parquet` to `{config}/{split}/{shard_idx:04d}.parquet` | closed | 2023-08-15T09:22:54Z | 2023-08-18T14:23:10Z | 2023-08-16T15:35:22Z | lhoestq |
1,850,677,026 | Incremental queue metrics | null | Incremental queue metrics: | closed | 2023-08-14T23:01:42Z | 2023-08-15T20:14:20Z | 2023-08-15T20:14:18Z | AndreaFrancis |
1,850,552,533 | Set Access-Control-Allow-Origin to huggingface.co when a cookie is used for authentication | See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
Currently, we always set `Access-Control-Allow-Origin: *`. It's wrong. When a request passes a cookie, and when the user is authorized to get access thanks to that cookie, we should return: `Access-Control-Allow-Origin: hugging... | Set Access-Control-Allow-Origin to huggingface.co when a cookie is used for authentication: See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
Currently, we always set `Access-Control-Allow-Origin: *`. It's wrong. When a request passes a cookie, and when the user is authorized ... | closed | 2023-08-14T21:16:13Z | 2023-09-15T07:51:46Z | 2023-09-15T07:51:45Z | severo |
1,850,304,398 | feat: 🎸 be more specific in OpenAPI type | the failed configs format is a CustomError | feat: 🎸 be more specific in OpenAPI type: the failed configs format is a CustomError | closed | 2023-08-14T18:16:33Z | 2023-08-14T18:51:25Z | 2023-08-14T18:51:00Z | severo |
1,850,106,500 | fix: 🐛 fix the optional types in OpenAPI | I made these changes by looking at how the Typescript types are created with https://github.com/oazapfts/oazapfts (that we use on the Hub). | fix: 🐛 fix the optional types in OpenAPI: I made these changes by looking at how the Typescript types are created with https://github.com/oazapfts/oazapfts (that we use on the Hub). | closed | 2023-08-14T16:13:57Z | 2023-08-14T16:56:49Z | 2023-08-14T16:56:18Z | severo |
1,850,097,086 | Fix parquet filename regex | for https://huggingface.co/datasets/GalaktischeGurke/full_dataset_1509_lines_invoice_contract_mail_GPT3.5_test/discussions/1#64da13ff3a7ab21ea7c45e63 | Fix parquet filename regex: for https://huggingface.co/datasets/GalaktischeGurke/full_dataset_1509_lines_invoice_contract_mail_GPT3.5_test/discussions/1#64da13ff3a7ab21ea7c45e63 | closed | 2023-08-14T16:07:27Z | 2023-08-14T22:31:47Z | 2023-08-14T22:31:47Z | lhoestq |
1,849,829,703 | Some refactors | fixes review comments from https://github.com/huggingface/datasets-server/pull/1674. Thanks @AndreaFrancis! | Some refactors: fixes review comments from https://github.com/huggingface/datasets-server/pull/1674. Thanks @AndreaFrancis! | closed | 2023-08-14T13:57:35Z | 2023-08-14T16:56:34Z | 2023-08-14T16:56:32Z | severo |
1,849,658,133 | Fix disk metrics | null | Fix disk metrics: | closed | 2023-08-14T12:13:11Z | 2023-08-14T13:56:17Z | 2023-08-14T13:56:16Z | AndreaFrancis |
1,847,399,369 | fix: 🐛 fix vulnerability in gitpython | null | fix: 🐛 fix vulnerability in gitpython: | closed | 2023-08-11T20:37:26Z | 2023-08-11T20:42:27Z | 2023-08-11T20:42:26Z | severo |
1,847,397,826 | build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /libs/libcommon | Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>v3.1.32 - with another security update</h2>
... | build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /libs/libcommon: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</... | closed | 2023-08-11T20:35:58Z | 2023-08-11T20:52:10Z | 2023-08-11T20:52:00Z | dependabot[bot] |
1,847,397,153 | build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /e2e | Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>v3.1.32 - with another security update</h2>
... | build(deps-dev): bump gitpython from 3.1.31 to 3.1.32 in /e2e: Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p... | closed | 2023-08-11T20:35:17Z | 2023-08-11T20:52:08Z | 2023-08-11T20:52:05Z | dependabot[bot] |
1,847,385,841 | feat: 🎸 add metrics for all the volumes | fixes #1561 | feat: 🎸 add metrics for all the volumes: fixes #1561 | closed | 2023-08-11T20:25:41Z | 2023-08-14T13:43:29Z | 2023-08-11T20:51:44Z | severo |
1,847,325,777 | Set cache metrics cron schedule to default value | It will be `schedule: "13 00 * * *"` as default in values.yaml | Set cache metrics cron schedule to default value: It will be `schedule: "13 00 * * *"` as default in values.yaml | closed | 2023-08-11T19:32:42Z | 2023-08-11T19:33:39Z | 2023-08-11T19:33:38Z | AndreaFrancis |
1,847,223,883 | feat: 🎸 move openapi.json to the docs | note that we cannot serve from the deployed docs (see https://github.com/huggingface/doc-builder/issues/312#issuecomment-1675099444). We thus redirect to github. Also: we fix the github action that checks openapi (it was written opanapi) | feat: 🎸 move openapi.json to the docs: note that we cannot serve from the deployed docs (see https://github.com/huggingface/doc-builder/issues/312#issuecomment-1675099444). We thus redirect to github. Also: we fix the github action that checks openapi (it was written opanapi) | closed | 2023-08-11T18:03:30Z | 2023-08-11T18:28:58Z | 2023-08-11T18:18:05Z | severo |
1,847,109,250 | Document all the X-Error-Code in OpenAPI | and also in the docs, maybe, ie: a page with all the error types.
related to #1670 (I think we first want to generate the OpenAPI spec automatically, before documenting all the error codes)
| Document all the X-Error-Code in OpenAPI: and also in the docs, maybe, ie: a page with all the error types.
related to #1670 (I think we first want to generate the OpenAPI spec automatically, before documenting all the error codes)
| open | 2023-08-11T16:26:42Z | 2023-08-11T16:26:50Z | null | severo |
1,847,108,089 | Generate OpenAPI specification from the code | It would help to:
- ensure the OpenAPI is always up to date
- reduce the maintenance burden
- allow contract testing | Generate OpenAPI specification from the code: It would help to:
- ensure the OpenAPI is always up to date
- reduce the maintenance burden
- allow contract testing | open | 2023-08-11T16:25:52Z | 2023-08-11T16:28:36Z | null | severo |
1,847,097,928 | Adding StreamingRowsError to backfill | Temporary adding StreamingRowsError to error_codes_to_retry in order to backfill datasets from https://github.com/huggingface/datasets-server/issues/1550 | Adding StreamingRowsError to backfill: Temporary adding StreamingRowsError to error_codes_to_retry in order to backfill datasets from https://github.com/huggingface/datasets-server/issues/1550 | closed | 2023-08-11T16:17:31Z | 2023-08-11T16:38:58Z | 2023-08-11T16:38:58Z | AndreaFrancis |
1,846,715,345 | Delete empty folders from downloaded duckdb indexes in /search | See comment https://github.com/huggingface/datasets-server/pull/1536#discussion_r1283612238
Currently, the job deletes expired files (more than 3 days in prod) but if the folders remain empty those will continue existing, we should remove them. | Delete empty folders from downloaded duckdb indexes in /search: See comment https://github.com/huggingface/datasets-server/pull/1536#discussion_r1283612238
Currently, the job deletes expired files (more than 3 days in prod) but if the folders remain empty those will continue existing, we should remove them. | closed | 2023-08-11T12:10:29Z | 2023-11-07T13:46:44Z | 2023-11-07T13:46:43Z | AndreaFrancis |
1,846,009,794 | fix: 🐛 update the OpenAPI spec | Missing:
- [x] ensure we documented all the status codes: missing: <strike>400 (BAD_REQUEST)</strike> (removed 400 from the code with [d4ee7a5](https://github.com/huggingface/datasets-server/pull/1667/commits/d4ee7a5bd32b9c1666e0f1293c8c0292a265e133)), 501 (NOT_IMPLEMENTED)
- [x] ensure the OpenAPI spec is correct (I... | fix: 🐛 update the OpenAPI spec: Missing:
- [x] ensure we documented all the status codes: missing: <strike>400 (BAD_REQUEST)</strike> (removed 400 from the code with [d4ee7a5](https://github.com/huggingface/datasets-server/pull/1667/commits/d4ee7a5bd32b9c1666e0f1293c8c0292a265e133)), 501 (NOT_IMPLEMENTED)
- [x] ensu... | closed | 2023-08-10T23:19:07Z | 2023-08-11T19:30:30Z | 2023-08-11T19:29:58Z | severo |