| id | title | body | description | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|---|
2,142,617,746 | Remove codecov from CI | Remove `codecov` from CI.
Fix #2470. | Remove codecov from CI: Remove `codecov` from CI.
Fix #2470. | closed | 2024-02-19T15:21:46Z | 2024-02-20T15:23:12Z | 2024-02-20T15:23:11Z | albertvillanova |
2,142,529,563 | Remove codecov from CI | Remove `codecov` from CI.
See: https://github.com/huggingface/datasets-server/pull/2464#issuecomment-1952372916
> I personally don't really use it, and I surely didn't configure it well, because due to the monorepo structure of our repo (I think) the notifications are meaningless | Remove codecov from CI: Remove `codecov` from CI.
See: https://github.com/huggingface/datasets-server/pull/2464#issuecomment-1952372916
> I personally don't really use it, and I surely didn't configure it well, because due to the monorepo structure of our repo (I think) the notifications are meaningless | closed | 2024-02-19T14:43:04Z | 2024-02-20T15:23:12Z | 2024-02-20T15:23:12Z | albertvillanova |
2,142,387,593 | Change Prometheus histogram buckets for duration of `split-descriptive-statistics` job | I want to increase the time range to track stats worker running time in Grafana, but I'm not sure I'm doing it right.
And I'm not sure it's much needed, but I believe it would be useful for me to monitor | Change Prometheus histogram buckets for duration of `split-descriptive-statistics` job: I want to increase the time range to track stats worker running time in Grafana, but I'm not sure I'm doing it right.
And I'm not sure it's much needed, but I believe it would be useful for me to monitor | closed | 2024-02-19T13:33:33Z | 2024-02-29T12:09:57Z | 2024-02-29T11:49:43Z | polinaeterna |
2,142,303,090 | increase resources to 10/80/10 | we have 5K long jobs (statistics and duckdb), so we're increasing the values here in the code, instead of scaling manually, to be sure to keep them if we deploy again in the next hours or days | increase resources to 10/80/10: we have 5K long jobs (statistics and duckdb), so we're increasing the values here in the code, instead of scaling manually, to be sure to keep them if we deploy again in the next hours or days | closed | 2024-02-19T12:51:19Z | 2024-02-19T12:51:55Z | 2024-02-19T12:51:24Z | severo |
2,142,158,544 | Node.js 16 GitHub Actions are deprecated | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets-server/actions/runs/7958659954
```
Node.js 16 actions are deprec... | Node.js 16 GitHub Actions are deprecated: `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets-server/actions/runs/7958... | closed | 2024-02-19T11:32:39Z | 2024-02-19T14:44:26Z | 2024-02-19T14:44:26Z | albertvillanova |
2,142,125,407 | Update datasets to 2.17.1 | Update `datasets` to 2.17.1.
Fix #2465. | Update datasets to 2.17.1: Update `datasets` to 2.17.1.
Fix #2465. | closed | 2024-02-19T11:14:20Z | 2024-02-20T08:30:14Z | 2024-02-20T08:30:13Z | albertvillanova |
2,142,119,756 | Update datasets to 2.17.1 | Update datasets to 2.17.1: https://github.com/huggingface/datasets/releases/tag/2.17.1 | Update datasets to 2.17.1: Update datasets to 2.17.1: https://github.com/huggingface/datasets/releases/tag/2.17.1 | closed | 2024-02-19T11:11:19Z | 2024-02-20T08:30:14Z | 2024-02-20T08:30:14Z | albertvillanova |
2,142,042,771 | Update GitHub Actions to Node 20 | Update GitHub Actions to Node 20.
See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
Fix #2467. | Update GitHub Actions to Node 20: Update GitHub Actions to Node 20.
See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
Fix #2467. | closed | 2024-02-19T10:31:06Z | 2024-02-19T14:44:26Z | 2024-02-19T14:44:25Z | albertvillanova |
2,139,041,952 | Update duckdb to 0.10.0 | Will fix https://github.com/huggingface/datasets-server/issues/2413
This PR will include the following changes:
- Add a new job runner that will compute duckdb files with `0.10.0` -> Temporary cache kind `split-duckdb-index-010`
- Still have the old job runner that will compute duckdb files with `0.8.1` (Will use d... | Update duckdb to 0.10.0: Will fix https://github.com/huggingface/datasets-server/issues/2413
This PR will include the following changes:
- Add a new job runner that will compute duckdb files with `0.10.0` -> Temporary cache kind `split-duckdb-index-010`
- Still have the old job runner that will compute duckdb files... | closed | 2024-02-16T17:19:14Z | 2024-02-21T12:16:30Z | 2024-02-21T12:16:29Z | AndreaFrancis |
2,138,771,899 | Remove `SplitWithTooBigParquetError` from list of retryable errors | A reminder to remove this line after the next successful backfill
https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/constants.py#L40 | Remove `SplitWithTooBigParquetError` from list of retryable errors: A reminder to remove this line after the next successful backfill
https://github.com/huggingface/datasets-server/blob/main/libs/libcommon/src/libcommon/constants.py#L40 | closed | 2024-02-16T14:46:15Z | 2024-02-19T21:53:23Z | 2024-02-19T21:53:23Z | polinaeterna |
2,138,738,010 | Unexpected error when a split file has a bad format | See https://huggingface.co/datasets/severo/test-one-split-broken/viewer/default/works
https://datasets-server.huggingface.co/rows?dataset=severo/test-one-split-broken&config=default&split=works returns `{"error":"Unexpected error."}`.
The reason is that `config-parquet-and-info` fails because the other split ('br... | Unexpected error when a split file has a bad format: See https://huggingface.co/datasets/severo/test-one-split-broken/viewer/default/works
https://datasets-server.huggingface.co/rows?dataset=severo/test-one-split-broken&config=default&split=works returns `{"error":"Unexpected error."}`.
The reason is that `config... | open | 2024-02-16T14:29:29Z | 2024-03-18T12:34:54Z | null | severo |
2,138,351,336 | Order of the columns is not consistent between /first-rows and /rows | See /first-rows:
<img width="1038" alt="Capture d’écran 2024-02-16 à 11 52 25" src="https://github.com/huggingface/datasets-server/assets/1676121/017faa01-7e2a-4b7f-a4ba-594ebb6a253d">
/rows
<img width="1888" alt="Capture d’écran 2024-02-16 à 11 52 32" src="https://github.com/huggingface/datasets-server/... | Order of the columns is not consistent between /first-rows and /rows: See /first-rows:
<img width="1038" alt="Capture d’écran 2024-02-16 à 11 52 25" src="https://github.com/huggingface/datasets-server/assets/1676121/017faa01-7e2a-4b7f-a4ba-594ebb6a253d">
/rows
<img width="1888" alt="Capture d’écran 2024-0... | closed | 2024-02-16T10:52:53Z | 2024-02-22T16:00:48Z | 2024-02-22T14:03:59Z | severo |
2,138,312,652 | remove obsolete error code | Follows #2227
To be merged next week, once all the occurrences have been recomputed.
Currently:
```
use datasets_server_cache
db.cachedResponsesBlue.countDocuments({kind: "split-descriptive-statistics", error_code: "SplitWithTooBigParquetError"})
2937
```
| remove obsolete error code: Follows #2227
To be merged next week, once all the occurrences have been recomputed.
Currently:
```
use datasets_server_cache
db.cachedResponsesBlue.countDocuments({kind: "split-descriptive-statistics", error_code: "SplitWithTooBigParquetError"})
2937
```
| closed | 2024-02-16T10:34:48Z | 2024-02-19T21:53:23Z | 2024-02-19T21:53:22Z | severo |
2,138,275,639 | add missing migration | follow-up to #2227 | add missing migration: follow-up to #2227 | closed | 2024-02-16T10:13:25Z | 2024-02-16T10:16:08Z | 2024-02-16T10:16:07Z | severo |
2,138,248,684 | remove .md | null | remove .md: | closed | 2024-02-16T09:56:53Z | 2024-02-16T10:00:01Z | 2024-02-16T09:56:58Z | severo |
2,136,256,453 | Link to the endpoint doc page in case of error? | eg. https://datasets-server.huggingface.co/parquet
could return
```json
{"error":"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet"}
```
or
```json
{"error":"Parameter 'dataset' is required.", "docs": "https://huggingface.co/docs/datasets-server/parqu... | Link to the endpoint doc page in case of error?: eg. https://datasets-server.huggingface.co/parquet
could return
```json
{"error":"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet"}
```
or
```json
{"error":"Parameter 'dataset' is required.", "docs": "... | open | 2024-02-15T11:11:44Z | 2024-02-15T11:12:12Z | null | severo |
2,135,258,571 | Add "modality" tags in /hub-cache | Could be: `image` and `audio`, to start with.
Related issues:
- https://github.com/huggingface/datasets-server/pull/2454 (libraries).
- https://github.com/huggingface/datasets-server/issues/2408 (tasks).
- internal: https://github.com/huggingface/moon-landing/issues/7557
Previous issue:
- https://github.com/h... | Add "modality" tags in /hub-cache: Could be: `image` and `audio`, to start with.
Related issues:
- https://github.com/huggingface/datasets-server/pull/2454 (libraries).
- https://github.com/huggingface/datasets-server/issues/2408 (tasks).
- internal: https://github.com/huggingface/moon-landing/issues/7557
Prev... | closed | 2024-02-14T21:48:57Z | 2024-06-19T15:47:15Z | 2024-06-19T15:47:14Z | severo |
2,134,440,400 | Add loading tags and code (pandas, dask, wds) | I added example code for loading with these libraries:
- mlcroissant
- datasets
- pandas or dask or webdataset
The new associated `libraries` tags are: "pandas", "dask", "datasets", "mlcroissant", "webdataset"
I also added a new field `loading_codes`. It is a list (currently with a maximum of three methods) of loading metho... | Add loading tags and code (pandas, dask, wds): I added example code for loading with these libraries:
- mlcroissant
- datasets
- pandas or dask or webdataset
The new associated `libraries` tags are: "pandas", "dask", "datasets", "mlcroissant", "webdataset"
I also added a new field `loading_codes`. It is a list (current... | closed | 2024-02-14T14:08:33Z | 2024-03-01T17:31:44Z | 2024-03-01T17:31:44Z | lhoestq |
2,134,437,283 | Statistics for list feature | Return distribution of lengths for lists (independently of what's inside).
_Note_: an empty list is not counted as an `na` value (so it does not increment `na_count`); it has len=0.
I haven't done a proper testing of speed in an isolated environment (because i'm fighting with dev dockers locally, idk why), but a worker running l... | Statistics for list feature: Return distribution of lengths for lists (independently of what's inside).
_Note_: an empty list is not counted as an `na` value (so it does not increment `na_count`); it has len=0.
I haven't done a proper testing of speed in an isolated environment (because i'm fighting with dev dockers locally, idk... | closed | 2024-02-14T14:06:54Z | 2024-03-01T15:37:54Z | 2024-03-01T12:01:38Z | polinaeterna |
2,134,025,417 | Add croissant page | also add mention of support for the private datasets | Add croissant page: also add mention of support for the private datasets | closed | 2024-02-14T10:34:27Z | 2024-02-16T09:47:21Z | 2024-02-16T09:47:21Z | severo |
2,132,501,771 | Croissant: specify the splits in RecordSet? | eg https://datasets-server.huggingface.co/croissant?dataset=mnist&full=true
See https://github.com/mlcommons/croissant/blob/main/docs/howto/specify-splits.md
The splits could be specified at the RecordSet level. | Croissant: specify the splits in RecordSet?: eg https://datasets-server.huggingface.co/croissant?dataset=mnist&full=true
See https://github.com/mlcommons/croissant/blob/main/docs/howto/specify-splits.md
The splits could be specified at the RecordSet level. | open | 2024-02-13T14:49:03Z | 2024-02-14T09:55:00Z | null | severo |
2,132,492,927 | Croissant: fix sha256 field | We currently return:
```
"sha256": "https://github.com/mlcommons/croissant/issues/80"
```
See https://github.com/mlcommons/croissant/issues/80. cc @marcenacp | Croissant: fix sha256 field: We currently return:
```
"sha256": "https://github.com/mlcommons/croissant/issues/80"
```
See https://github.com/mlcommons/croissant/issues/80. cc @marcenacp | open | 2024-02-13T14:44:32Z | 2024-08-02T19:22:17Z | null | severo |
2,132,413,475 | nit: uniformize logs and method names | null | nit: uniformize logs and method names: | closed | 2024-02-13T14:10:02Z | 2024-02-13T14:11:17Z | 2024-02-13T14:11:16Z | severo |
2,132,376,983 | Log more details when worker crashes | Also, random updates:
- fix names: we don't "cancel" jobs anymore, we "delete" them
- remove obsolete catch of `DatasetInBlockListError` exception when running `finish_job()` -> it's not checked anymore | Log more details when worker crashes: Also, random updates:
- fix names: we don't "cancel" jobs anymore, we "delete" them
- remove obsolete catch of `DatasetInBlockListError` exception when running `finish_job()` -> it's not checked anymore | closed | 2024-02-13T13:52:35Z | 2024-02-13T14:30:23Z | 2024-02-13T14:30:22Z | severo |
2,132,235,122 | requirements.txt is used by spaces, pyproject.yaml is ignored | follows #2445. cc @albertvillanova fyi. I already made the same mistake... We should find a way to have only one source of truth for the packages, instead of ... 3! | requirements.txt is used by spaces, pyproject.yaml is ignored: follows #2445. cc @albertvillanova fyi. I already made the same mistake... We should find a way to have only one source of truth for the packages, instead of ... 3! | closed | 2024-02-13T12:38:16Z | 2024-02-13T14:30:10Z | 2024-02-13T14:30:09Z | severo |
2,131,920,697 | fix copy/paste error | null | fix copy/paste error: | closed | 2024-02-13T09:56:20Z | 2024-02-13T10:37:48Z | 2024-02-13T10:37:47Z | severo |
2,131,719,023 | Update gradio | Update `gradio` to ^4.17.0.
Fix #2444. | Update gradio: Update `gradio` to ^4.17.0.
Fix #2444. | closed | 2024-02-13T08:19:07Z | 2024-02-13T09:41:38Z | 2024-02-13T09:41:37Z | albertvillanova |
2,131,072,826 | Upgrade Gradio to be able to copy/paste from dataframe cells | > Allow selecting texts in dataframe cells
was introduced in Gradio 4.17.0
https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md | Upgrade Gradio to be able to copy/paste from dataframe cells: > Allow selecting texts in dataframe cells
was introduced in Gradio 4.17.0
https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md | closed | 2024-02-12T22:10:02Z | 2024-02-13T09:41:38Z | 2024-02-13T09:41:38Z | severo |
2,130,730,150 | split 'updated' stats in untouched and backfilled | null | split 'updated' stats in untouched and backfilled: | closed | 2024-02-12T18:37:17Z | 2024-02-12T20:48:50Z | 2024-02-12T20:48:49Z | severo |
2,130,637,741 | Bump the pip group across 2 directories with 1 update | Bumps the pip group with 1 update in the /front/admin_ui directory: [python-multipart](https://github.com/andrew-d/python-multipart).
Updates `python-multipart` from 0.0.6 to 0.0.7
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md">p... | Bump the pip group across 2 directories with 1 update: Bumps the pip group with 1 update in the /front/admin_ui directory: [python-multipart](https://github.com/andrew-d/python-multipart).
Updates `python-multipart` from 0.0.6 to 0.0.7
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.... | closed | 2024-02-12T17:42:58Z | 2024-02-12T18:28:37Z | 2024-02-12T18:28:36Z | dependabot[bot] |
2,130,342,517 | two tweaks to the docs | null | two tweaks to the docs: | closed | 2024-02-12T15:12:39Z | 2024-02-12T15:12:51Z | 2024-02-12T15:12:50Z | severo |
2,130,099,869 | Update grpcio to 1.60.1 to fix vulnerability | Update `grpcio` to 1.60.1 to fix vulnerability:
- Affected versions: >= 1.54.0, < 1.54.3
This will fix 1 dependabot alert. | Update grpcio to 1.60.1 to fix vulnerability: Update `grpcio` to 1.60.1 to fix vulnerability:
- Affected versions: >= 1.54.0, < 1.54.3
This will fix 1 dependabot alert. | closed | 2024-02-12T13:04:59Z | 2024-02-12T14:47:39Z | 2024-02-12T14:42:36Z | albertvillanova |
2,129,994,045 | "Errors to retry" only retried once a day | It's strange: `JobManagerCrashedError` is listed in the errors to retry, but I witnessed a dataset where it was not retried (`failed_runs: 0`, no jobs).
See discussion opened here: https://huggingface.co/datasets/Samuel-Martineau/kryptik/discussions/1
<img width="1485" alt="Capture d’écran 2024-02-12 à 12 53 36... | "Errors to retry" only retried once a day: It's strange: `JobManagerCrashedError` is listed in the errors to retry, but I witnessed a dataset where it was not retried (`failed_runs: 0`, no jobs).
See discussion opened here: https://huggingface.co/datasets/Samuel-Martineau/kryptik/discussions/1
<img width="1485" a... | closed | 2024-02-12T12:03:39Z | 2024-02-28T18:08:14Z | 2024-02-28T18:08:14Z | severo |
2,129,913,115 | Update fastapi to fix vulnerability | Update `fastapi` to fix vulnerability. Note there is a version constraint between fastapi and starlette (starlette: >=0.36.3,<0.37.0). Bump:
- fastapi-0.109.2
- starlette-0.36.3
Note this PR fixes 2 simultaneous vulnerabilities: :crossed_fingers:
- Vulnerability in fastapi <= 0.109.0
- Vulnerability in: starlett... | Update fastapi to fix vulnerability: Update `fastapi` to fix vulnerability. Note there is a version constraint between fastapi and starlette (starlette: >=0.36.3,<0.37.0). Bump:
- fastapi-0.109.2
- starlette-0.36.3
Note this PR fixes 2 simultaneous vulnerabilities: :crossed_fingers:
- Vulnerability in fastapi <= ... | closed | 2024-02-12T11:14:32Z | 2024-02-12T12:56:45Z | 2024-02-12T12:56:44Z | albertvillanova |
2,129,792,632 | Bump the pip group across 3 directories with 1 update | Bumps the pip group with 1 update in the /front/admin_ui directory: [fastapi](https://github.com/tiangolo/fastapi).
Updates `fastapi` from 0.1.17 to 0.109.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tiangolo/fastapi/releases">fastapi's releases</a>.</em></p>
<blockquote... | Bump the pip group across 3 directories with 1 update: Bumps the pip group with 1 update in the /front/admin_ui directory: [fastapi](https://github.com/tiangolo/fastapi).
Updates `fastapi` from 0.1.17 to 0.109.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tiangolo/fastapi... | closed | 2024-02-12T10:08:13Z | 2024-02-12T11:30:25Z | 2024-02-12T11:29:51Z | dependabot[bot] |
2,129,690,903 | Update datasets to 2.17.0 | Update `datasets` to 2.17.0.
Fix #2428. | Update datasets to 2.17.0: Update `datasets` to 2.17.0.
Fix #2428. | closed | 2024-02-12T09:08:25Z | 2024-02-12T10:18:21Z | 2024-02-12T10:18:20Z | albertvillanova |
2,129,664,878 | Update starlette minor version to 0.37.1 | Update `starlette` minor version to 0.37.1: https://github.com/encode/starlette/releases/tag/0.37.1
Related to:
- #2400
- #2401 | Update starlette minor version to 0.37.1: Update `starlette` minor version to 0.37.1: https://github.com/encode/starlette/releases/tag/0.37.1
Related to:
- #2400
- #2401 | closed | 2024-02-12T08:51:27Z | 2024-02-12T10:05:00Z | 2024-02-12T10:04:59Z | albertvillanova |
2,127,232,690 | Create a new step: `config-features`? | See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.
We could:
- add a new /features endpoint
- or add a `features... | Create a new step: `config-features`?: See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.
We could:
- add a new /... | open | 2024-02-09T14:13:10Z | 2024-02-15T10:26:35Z | null | severo |
2,127,153,168 | Fix all `PreviousStepFormatError` cache entries | And monitor regularly this error (as well as any non-expected error, related to https://github.com/huggingface/datasets-server/issues/1443) | Fix all `PreviousStepFormatError` cache entries: And monitor regularly this error (as well as any non-expected error, related to https://github.com/huggingface/datasets-server/issues/1443) | closed | 2024-02-09T13:26:29Z | 2024-06-20T20:42:21Z | 2024-06-20T20:42:21Z | severo |
2,127,031,941 | Hardcode 100 rows max in /first-rows | fixes #1921 | Hardcode 100 rows max in /first-rows: fixes #1921 | closed | 2024-02-09T12:23:45Z | 2024-02-13T10:04:19Z | 2024-02-13T10:04:18Z | severo |
2,126,926,645 | show image in /splits page + change link | https://moon-ci-docs.huggingface.co/docs/datasets-server/pr_2431/en/splits
https://github.com/huggingface/datasets-server/assets/1676121/414e04d0-95b6-4677-8c19-778daab05cff
| show image in /splits page + change link: https://moon-ci-docs.huggingface.co/docs/datasets-server/pr_2431/en/splits
https://github.com/huggingface/datasets-server/assets/1676121/414e04d0-95b6-4677-8c19-778daab05cff
| closed | 2024-02-09T11:14:42Z | 2024-02-09T18:02:04Z | 2024-02-09T18:02:04Z | severo |
2,126,906,931 | use .md extension instead of .mdx | fixes #2429 | use .md extension instead of .mdx: fixes #2429 | closed | 2024-02-09T11:01:16Z | 2024-02-09T11:07:44Z | 2024-02-09T11:07:44Z | severo |
2,126,899,842 | Use .md instead of .mdx in the docs | One file uses .md extension: https://github.com/huggingface/datasets-server/blob/main/docs/source/clickhouse.md, the others use .mdx.
.md helps getting syntax highlighting and linting in the editor. But maybe .mdx is the correct way of handling the docs.
All the files in the Hub's doc use .md (https://github.com/... | Use .md instead of .mdx in the docs: One file uses .md extension: https://github.com/huggingface/datasets-server/blob/main/docs/source/clickhouse.md, the others use .mdx.
.md helps getting syntax highlighting and linting in the editor. But maybe .mdx is the correct way of handling the docs.
All the files in the H... | closed | 2024-02-09T10:56:24Z | 2024-02-09T11:07:45Z | 2024-02-09T11:07:45Z | severo |
2,126,839,418 | Upgrade to datasets@2.17.0 | See https://github.com/huggingface/datasets/releases/tag/2.17.0 | Upgrade to datasets@2.17.0: See https://github.com/huggingface/datasets/releases/tag/2.17.0 | closed | 2024-02-09T10:16:44Z | 2024-02-12T10:18:21Z | 2024-02-12T10:18:21Z | severo |
2,126,233,135 | add statistics to the backfill cron job logs | fixes #2381 | add statistics to the backfill cron job logs: fixes #2381 | closed | 2024-02-08T23:32:41Z | 2024-02-09T18:00:43Z | 2024-02-09T18:00:42Z | severo |
2,126,159,114 | delete only waiting jobs | follow-up to #2403 | delete only waiting jobs: follow-up to #2403 | closed | 2024-02-08T22:16:45Z | 2024-02-09T12:25:00Z | 2024-02-09T12:24:59Z | severo |
2,126,091,316 | the package must exist | null | the package must exist: | closed | 2024-02-08T21:23:46Z | 2024-02-08T21:23:52Z | 2024-02-08T21:23:52Z | severo |
2,125,903,005 | rename X-Revision to X-Repo-Commit | fixes #2417 | rename X-Revision to X-Repo-Commit: fixes #2417 | closed | 2024-02-08T19:15:16Z | 2024-02-12T09:43:47Z | 2024-02-12T09:43:18Z | severo |
2,125,464,141 | give more details in the discussions opened by parquet-converter | fixes #2419
It could be too verbose now.
Also: we might add a section or a sentence to handle the case where the dataset was already in Parquet (until we properly detect if the dataset was already in Parquet format: https://github.com/huggingface/datasets-server/issues/2301) | give more details in the discussions opened by parquet-converter: fixes #2419
It could be too verbose now.
Also: we might add a section or a sentence to handle the case where the dataset was already in Parquet (until we properly detect if the dataset was already in Parquet format: https://github.com/huggingface/d... | closed | 2024-02-08T15:31:55Z | 2024-02-12T15:13:01Z | 2024-02-09T16:50:00Z | severo |
2,125,325,392 | add information about Parquet | fixes #2420. | add information about Parquet: fixes #2420. | closed | 2024-02-08T14:34:57Z | 2024-02-09T09:31:32Z | 2024-02-09T09:31:31Z | severo |
2,124,868,813 | Show parquet-converter bot as part of HF (official discussion) | from https://github.com/huggingface/datasets-server/issues/2349#issuecomment-1915644060
> So it's really about the user. If it is a legit account then they should offer a bit information about who they are and what their intentions are.
| Show parquet-converter bot as part of HF (official discussion): from https://github.com/huggingface/datasets-server/issues/2349#issuecomment-1915644060
> So it's really about the user. If it is a legit account then they should offer a bit information about who they are and what their intentions are.
| closed | 2024-02-08T10:50:17Z | 2024-02-27T15:02:53Z | 2024-02-27T15:02:53Z | severo |
2,124,857,426 | Mention the advantages of Parquet in the docs | reported here: https://huggingface.slack.com/archives/C02V51Q3800/p1707388935912969 | Mention the advantages of Parquet in the docs: reported here: https://huggingface.slack.com/archives/C02V51Q3800/p1707388935912969 | closed | 2024-02-08T10:46:05Z | 2024-02-09T09:31:32Z | 2024-02-09T09:31:32Z | severo |
2,124,855,985 | Improve the message in Parquet conversion discussions | see https://huggingface.slack.com/archives/C02V51Q3800/p1707388935912969 (internal)
> Would it make sense to add some message in Parquet PRs (e.g. https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset/discussions/2#65c41c1580497543ca3f8a5a) for less technical people?
> As an aside, I don't think the... | Improve the message in Parquet conversion discussions: see https://huggingface.slack.com/archives/C02V51Q3800/p1707388935912969 (internal)
> Would it make sense to add some message in Parquet PRs (e.g. https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset/discussions/2#65c41c1580497543ca3f8a5a) for l... | closed | 2024-02-08T10:45:31Z | 2024-02-09T16:50:01Z | 2024-02-09T16:50:01Z | severo |
2,124,759,156 | Rows and First rows with audio cannot be generated | See https://huggingface.co/datasets/1rsh/speech-rj-hi/discussions/2
First rows gives:
```
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f272148b950>: Format not recognised.
```
In the trace we also see:
```
Decoding failed. ffmpeg return error code: 1.
...
moov atom not found
/tmp... | Rows and First rows with audio cannot be generated: See https://huggingface.co/datasets/1rsh/speech-rj-hi/discussions/2
First rows gives:
```
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f272148b950>: Format not recognised.
```
In the trace we also see:
```
Decoding failed. ffmpeg r... | closed | 2024-02-08T09:56:37Z | 2024-07-30T16:11:57Z | 2024-07-30T16:11:57Z | severo |
2,123,214,894 | Rename X-Revision header to X-Repo-Commit | Align with the convention on the Hub: https://github.com/huggingface/moon-landing/issues/8169#issuecomment-1824620102
| Rename X-Revision header to X-Repo-Commit: Align with the convention on the Hub: https://github.com/huggingface/moon-landing/issues/8169#issuecomment-1824620102
| closed | 2024-02-07T14:53:22Z | 2024-02-12T09:43:40Z | 2024-02-12T09:43:39Z | severo |
2,122,467,870 | Support vectorial geospatial columns | Requires https://github.com/huggingface/datasets/issues/6438, to support GeoParquet. We could support more formats.
Possibly requires geopandas as a dependency. | Support vectorial geospatial columns: Requires https://github.com/huggingface/datasets/issues/6438, to support GeoParquet. We could support more formats.
Possibly requires geopandas as a dependency. | open | 2024-02-07T08:33:36Z | 2024-06-26T13:32:49Z | null | severo |
2,121,788,034 | catch ValueError before ImportError | fixes #2385
I didn't increment the job runner version, because it only happens in 235 datasets. We will update them manually.
```
db.cachedResponsesBlue.countDocuments({error_code: "DatasetWithScriptNotSupportedError", kind: "dataset-config-names"})
235
``` | catch ValueError before ImportError: fixes #2385
I didn't increment the job runner version, because it only happens in 235 datasets. We will update them manually.
```
db.cachedResponsesBlue.countDocuments({error_code: "DatasetWithScriptNotSupportedError", kind: "dataset-config-names"})
235
``` | closed | 2024-02-06T22:41:51Z | 2024-02-07T10:08:05Z | 2024-02-07T10:08:05Z | severo |
2,121,327,165 | Loading methods and arguments | We can extend the `dataset-loading-tags` job to also store loading arguments that we can display on dataset pages in code snippets.
For example with the info to build the code snippet like
```json
{"loader": "webdataset", "url": "pipe:curl <hf url>"}
```
This way the front end can translate it to
```python
imp... | Loading methods and arguments: We can extend the `dataset-loading-tags` job to also store loading arguments that we can display on dataset pages in code snippets.
For example with the info to build the code snippet like
```json
{"loader": "webdataset", "url": "pipe:curl <hf url>"}
```
This way the front end can ... | closed | 2024-02-06T17:24:02Z | 2024-07-30T16:11:26Z | 2024-07-30T16:11:25Z | lhoestq |
2,121,275,038 | Update duckdb to 0.10.0 | It will be the first version that will ensure backward-compatibility of the .duckdb format | Update duckdb to 0.10.0: It will be the first version that will ensure backward-compatibility of the .duckdb format | closed | 2024-02-06T16:58:09Z | 2024-02-21T12:16:30Z | 2024-02-21T12:16:30Z | severo |
2,121,219,794 | Missing DatasetLoadingTagsJobRunner in factory | null | Missing DatasetLoadingTagsJobRunner in factory: | closed | 2024-02-06T16:33:18Z | 2024-02-06T16:39:36Z | 2024-02-06T16:39:35Z | lhoestq |
2,121,150,864 | Update Poetry to 1.7.1 version to align with that of Dependabot | Note that once this PR is merged, all developers will have to update their local Poetry version.
Fix #2405. | Update Poetry to 1.7.1 version to align with that of Dependabot: Note that once this PR is merged, all developers will have to update their local Poetry version.
Fix #2405. | closed | 2024-02-06T16:04:26Z | 2024-02-07T09:05:18Z | 2024-02-07T09:05:17Z | albertvillanova |
2,121,145,131 | unblock two datasets | fixes #2125 | unblock two datasets: fixes #2125 | closed | 2024-02-06T16:02:35Z | 2024-02-06T21:25:12Z | 2024-02-06T21:25:12Z | severo |
2,121,116,563 | retry on ConnectionError | fixes #2049
(we retry ConnectionError, not ClientConnection, btw) | retry on ConnectionError: fixes #2049
(we retry ConnectionError, not ClientConnection, btw) | closed | 2024-02-06T15:51:28Z | 2024-02-06T17:29:45Z | 2024-02-06T17:29:44Z | severo |
2,120,536,385 | Add task tags in /hub-cache? | On the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags to a dataset to describe the tasks it can be used for.
Previously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425 | Add task tags in /hub-cache?: On the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags to a dataset to describe the tasks it can be used for.
Previously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425 | closed | 2024-02-06T11:17:19Z | 2024-06-19T15:43:15Z | 2024-06-19T15:43:15Z | severo |
2,120,523,132 | Remove env var HF_ENDPOINT? | Is it still required to set HF_ENDPOINT as an environment variable?
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45
| Remove env var HF_ENDPOINT?: Is it still required to set HF_ENDPOINT as an environment variable?
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45
| closed | 2024-02-06T11:11:24Z | 2024-02-06T14:53:12Z | 2024-02-06T14:53:08Z | severo |
2,120,051,362 | Update cryptography dependency to 42.0 to fix vulnerability | This should fix 11 dependabot alerts. | Update cryptography dependency to 42.0 to fix vulnerability: This should fix 11 dependabot alerts. | closed | 2024-02-06T06:20:21Z | 2024-02-06T10:19:04Z | 2024-02-06T10:19:03Z | albertvillanova |
2,120,019,654 | Update Poetry to the latest version to align with that of Dependabot | What about aligning Poetry version with the one used by Dependabot (currently 1.7.1)?
- Advantage: we will be able to directly merge the security PRs proposed by Dependabot
- Disadvantage: we will have to align with Dependabot version each time they update their Poetry
- Poetry installation instructions explain ho... | Update Poetry to the latest version to align with that of Dependabot: What about aligning Poetry version with the one used by Dependabot (currently 1.7.1)?
- Advantage: we will be able to directly merge the security PRs proposed by Dependabot
- Disadvantage: we will have to align with Dependabot version each time the... | closed | 2024-02-06T05:56:43Z | 2024-02-07T09:05:18Z | 2024-02-07T09:05:18Z | albertvillanova |
2,119,862,099 | Bump cryptography from 41.0.7 to 42.0.0 in /libs/libcommon | Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.7 to 42.0.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>42.0.0 - 2024-01-22</p>
<pre><code>
* **BACKWARDS INC... | Bump cryptography from 41.0.7 to 42.0.0 in /libs/libcommon: Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.7 to 42.0.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquo... | closed | 2024-02-06T03:17:01Z | 2024-02-06T10:20:25Z | 2024-02-06T10:20:24Z | dependabot[bot] |
2,119,715,333 | delete only waiting jobs | Related to https://github.com/huggingface/datasets-server/issues/2387
As per [internal conversation](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1707031392098359), only waiting jobs should be deleted on backfill to avoid started processes without a related Job record.
| delete only waiting jobs: Related to https://github.com/huggingface/datasets-server/issues/2387
As per [internal conversation](https://huggingface.slack.com/archives/C04L6P8KNQ5/p1707031392098359), only waiting jobs should be deleted on backfill to avoid started processes without a related Job record.
| closed | 2024-02-06T00:33:01Z | 2024-02-06T12:12:19Z | 2024-02-06T12:12:19Z | AndreaFrancis |
2,119,509,862 | Reduce resources for /filter and /search? | They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now
Should we reduce the number of pods? How to configure the right level? | Reduce resources for /filter and /search?: They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now
Should we reduce the number of pods? How to configure the right level? | closed | 2024-02-05T21:44:56Z | 2024-02-28T17:55:50Z | 2024-02-28T17:55:50Z | severo |
2,119,157,996 | upgrade starlette everywhere | #2400 was not complete
---
<strike>still wip, because of starlette middleware types (https://github.com/encode/starlette/pull/2180, https://github.com/encode/starlette/pull/2381).</strike>
it was a matter of upgrading mypy ahah | upgrade starlette everywhere: #2400 was not complete
---
<strike>still wip, because of starlette middleware types (https://github.com/encode/starlette/pull/2180, https://github.com/encode/starlette/pull/2381).</strike>
it was a matter of upgrading mypy ahah | closed | 2024-02-05T18:04:46Z | 2024-02-05T21:35:51Z | 2024-02-05T21:35:50Z | severo |
2,119,109,081 | upgrade starlette | fixes #2399 | upgrade starlette: fixes #2399 | closed | 2024-02-05T17:41:10Z | 2024-02-05T17:54:18Z | 2024-02-05T17:54:17Z | severo |
2,119,095,738 | upgrade starlette | null | upgrade starlette: | closed | 2024-02-05T17:34:08Z | 2024-02-05T17:54:18Z | 2024-02-05T17:54:18Z | severo |
2,119,091,310 | Bump starlette from 0.28.0 to 0.36.2 in /services/worker | Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2</h2>
<h2>Fixed</h2>
<ul>
<li>Upgrade <code>python-multipa... | Bump starlette from 0.28.0 to 0.36.2 in /services/worker: Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2... | closed | 2024-02-05T17:31:46Z | 2024-02-05T21:36:34Z | 2024-02-05T21:36:32Z | dependabot[bot] |
2,119,088,004 | Bump starlette from 0.28.0 to 0.36.2 in /jobs/cache_maintenance | Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2</h2>
<h2>Fixed</h2>
<ul>
<li>Upgrade <code>python-multipa... | Bump starlette from 0.28.0 to 0.36.2 in /jobs/cache_maintenance: Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version... | closed | 2024-02-05T17:30:08Z | 2024-02-05T17:55:19Z | 2024-02-05T17:55:17Z | dependabot[bot] |
2,119,087,368 | Bump starlette from 0.28.0 to 0.36.2 in /libs/libcommon | Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2</h2>
<h2>Fixed</h2>
<ul>
<li>Upgrade <code>python-multipa... | Bump starlette from 0.28.0 to 0.36.2 in /libs/libcommon: Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2<... | closed | 2024-02-05T17:29:49Z | 2024-02-05T17:55:02Z | 2024-02-05T17:55:00Z | dependabot[bot] |
... | The API service uses too much RAM | https://huggingface.slack.com/archives/C04L6P8KNQ5/p1706978013568999 | ...
2,119,084,825 | Bump starlette from 0.28.0 to 0.36.2 in /jobs/mongodb_migration | Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2</h2>
<h2>Fixed</h2>
<ul>
<li>Upgrade <code>python-multipa... | Bump starlette from 0.28.0 to 0.36.2 in /jobs/mongodb_migration: Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version... | closed | 2024-02-05T17:28:03Z | 2024-02-05T17:55:05Z | 2024-02-05T17:55:03Z | dependabot[bot] |
2,119,083,609 | Bump starlette from 0.28.0 to 0.36.2 in /libs/libapi | Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2</h2>
<h2>Fixed</h2>
<ul>
<li>Upgrade <code>python-multipa... | Bump starlette from 0.28.0 to 0.36.2 in /libs/libapi: Bumps [starlette](https://github.com/encode/starlette) from 0.28.0 to 0.36.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p>
<blockquote>
<h2>Version 0.36.2</h2... | closed | 2024-02-05T17:27:48Z | 2024-02-05T21:36:32Z | 2024-02-05T21:36:30Z | dependabot[bot] |
2,119,069,380 | Bump fastapi from 0.103.2 to 0.109.1 in /front/admin_ui | Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.103.2 to 0.109.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tiangolo/fastapi/releases">fastapi's releases</a>.</em></p>
<blockquote>
<h2>0.109.1</h2>
<h3>Security fixes</h3>
<ul>
<li>⬆️ Upgrade minimum version o... | Bump fastapi from 0.103.2 to 0.109.1 in /front/admin_ui: Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.103.2 to 0.109.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tiangolo/fastapi/releases">fastapi's releases</a>.</em></p>
<blockquote>
<h2>0.109.1</h2>
<h3>... | closed | 2024-02-05T17:20:04Z | 2024-02-05T17:26:18Z | 2024-02-05T17:26:17Z | dependabot[bot] |
2,118,531,851 | return list of tags in /hub-cache | Add a list of tags in /hub-cache used to refresh the Hub's datasets cache. For now, we add the "croissant" tag if `dataset-info` exists and is complete (all the configs).
We can further build on this to add other tags, like webdataset. | return list of tags in /hub-cache: Add a list of tags in /hub-cache used to refresh the Hub's datasets cache. For now, we add the "croissant" tag if `dataset-info` exists and is complete (all the configs).
We can further build on this to add other tags, like webdataset. | closed | 2024-02-05T13:15:56Z | 2024-02-06T10:58:13Z | 2024-02-06T10:58:10Z | severo |
2,118,442,170 | Store the repo visibility (public/private) to filter webhooks | See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050
Not sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets. | Store the repo visibility (public/private) to filter webhooks: See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050
Not sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets. | closed | 2024-02-05T12:37:30Z | 2024-06-19T15:37:36Z | 2024-06-19T15:37:36Z | severo |
2,118,221,553 | handle visibility (public/private) changes | side-effect : any configuration change recreates everything in the datasets-server.
Possibly, the Hub will implement "scope": "repo.config.visibility", but for now, it's the way to react to visibility changes.
fixes #2379 | handle visibility (public/private) changes: side-effect : any configuration change recreates everything in the datasets-server.
Possibly, the Hub will implement "scope": "repo.config.visibility", but for now, it's the way to react to visibility changes.
fixes #2379 | closed | 2024-02-05T10:48:22Z | 2024-02-05T12:37:58Z | 2024-02-05T12:37:57Z | severo |
2,118,153,054 | Backfill cron job generates exceptions | Last backfill job finished with:
```
DEBUG: 2024-02-05 02:45:53,249 - root - 99521 analyzed datasets (total: 99521 datasets): 59 datasets have been deleted (0.06%), 864 datasets raised an exception (0.87%)
```
An exception was raised in backfill_dataset() for 864 datasets.
https://kibana.elastic.huggingface.... | Backfill cron job generates exceptions: Last backfill job finished with:
```
DEBUG: 2024-02-05 02:45:53,249 - root - 99521 analyzed datasets (total: 99521 datasets): 59 datasets have been deleted (0.06%), 864 datasets raised an exception (0.87%)
```
An exception was raised in backfill_dataset() for 864 datasets... | closed | 2024-02-05T10:13:27Z | 2024-02-07T08:25:45Z | 2024-02-07T08:25:45Z | severo |
2,118,091,388 | Simple jobs have difficulty 100 | After a backfill on 2024/02/03, we got a lot of new jobs, and in particular, a lot of them have a high difficulty, much higher than what they should reach.
<img width="643" alt="Capture d’écran 2024-02-05 à 10 41 58" src="https://github.com/huggingface/datasets-server/assets/1676121/35a31d46-451f-476f-9691-01b7d30... | Simple jobs have difficulty 100: After a backfill on 2024/02/03, we got a lot of new jobs, and in particular, a lot of them have a high difficulty, much higher than what they should reach.
<img width="643" alt="Capture d’écran 2024-02-05 à 10 41 58" src="https://github.com/huggingface/datasets-server/assets/167612... | closed | 2024-02-05T09:42:53Z | 2024-02-06T16:09:25Z | 2024-02-06T16:09:25Z | severo |
2,115,785,736 | Add loading tags, starting with croissant and webdataset | Added two tags:
- "croissant" for datasets that have a valid info (= a valid croissant json)
- "webdataset" for datasets with `info["builder_name"] == "webdataset"`
I implemented this using a new job "dataset-loading-tags" that aims at detecting tags related to dataset loading tools, and then adding the tags to "d... | Add loading tags, starting with croissant and webdataset: Added two tags:
- "croissant" for datasets that have a valid info (= a valid croissant json)
- "webdataset" for datasets with `info["builder_name"] == "webdataset"`
I implemented this using a new job "dataset-loading-tags" that aims at detecting tags relate... | closed | 2024-02-02T20:54:16Z | 2024-02-06T15:50:54Z | 2024-02-06T15:50:53Z | lhoestq |
2,115,386,622 | Replace `DatasetModuleNotInstalledError` errors with `DatasetWithScriptNotSupportedError` | We should never have a `DatasetModuleNotInstalledError` error, because we should return a `DatasetWithScriptNotSupportedError` error before
See https://github.com/huggingface/datasets-server/issues/1067#issuecomment-1924305954 | Replace `DatasetModuleNotInstalledError` errors with `DatasetWithScriptNotSupportedError`: We should never have a `DatasetModuleNotInstalledError` error, because we should return a `DatasetWithScriptNotSupportedError` error before
See https://github.com/huggingface/datasets-server/issues/1067#issuecomment-1924305954 | closed | 2024-02-02T17:13:31Z | 2024-03-25T13:20:49Z | 2024-02-07T10:08:05Z | severo |
2,115,368,080 | Delete obsolete Parquet and DuckDB files | replaces https://github.com/huggingface/datasets-server/issues/1613 and https://github.com/huggingface/datasets-server/issues/980.
When we call `delete_dataset()`, we should remove all the parquet and duckdb files.
And maybe even the `refs/convert/parquet` branch altogether? | Delete obsolete Parquet and DuckDB files: replaces https://github.com/huggingface/datasets-server/issues/1613 and https://github.com/huggingface/datasets-server/issues/980.
When we call `delete_dataset()`, we should remove all the parquet and duckdb files.
And maybe even the `refs/convert/parquet` branch altogeth... | open | 2024-02-02T17:04:49Z | 2024-07-08T08:46:06Z | null | severo |
2,115,321,192 | Add unit tests for backfill_dataset and backfill | null | Add unit tests for backfill_dataset and backfill: | open | 2024-02-02T16:41:26Z | 2024-02-27T19:59:13Z | null | severo |
2,115,302,264 | Simplify the code of update_dataset() | When we delete and recreate a dataset (or, in the case of a moved dataset: delete the old one and create the new one): we could simplify the code a lot.
The backfill complexity could be used only for the backfill cron job.
| Simplify the code of update_dataset(): When we delete and recreate a dataset (or, in the case of a moved dataset: delete the old one and create the new one): we could simplify the code a lot.
The backfill complexity could be used only for the backfill cron job.
| open | 2024-02-02T16:31:01Z | 2024-02-02T16:31:13Z | null | severo |
2,115,282,465 | Improve backfill job logs to give more stats | number of jobs created/deleted, number of cache entries created/deleted, etc.
it would require the backfill_dataset to return values | Improve backfill job logs to give more stats: number of jobs created/deleted, number of cache entries created/deleted, etc.
it would require the backfill_dataset to return values | closed | 2024-02-02T16:20:56Z | 2024-02-09T18:00:44Z | 2024-02-09T18:00:44Z | severo |
2,115,199,691 | UI: Add difficulty and failed_runs to admin_ui | - Adding difficulty param in admin UI so we can run the job in a light, medium, or heavy worker (It will change on job_type change event).

- Adding failed_runs in cache UI table | UI: Add difficulty and failed_runs to admin_ui: - Adding difficulty param in admin UI so we can run the job in a light, medium, or heavy worker (It will change on job_type change event).

- Adding failed_runs... | closed | 2024-02-02T15:38:36Z | 2024-02-02T16:54:10Z | 2024-02-02T16:54:09Z | AndreaFrancis |
2,115,185,169 | Don't ignore webhooks when a dataset changes visibility | See:
- https://github.com/huggingface/moon-landing/issues/8779
- https://github.com/huggingface/moon-landing/pull/8825
A webhook is sent when a dataset is toggled between public and private.
Currently, we ignore them due to
https://github.com/huggingface/datasets-server/blob/66c1e089e204ab33195b957e1b99b0da6... | Don't ignore webhooks when a dataset changes visibility: See:
- https://github.com/huggingface/moon-landing/issues/8779
- https://github.com/huggingface/moon-landing/pull/8825
A webhook is sent when a dataset is toggled between public and private.
Currently, we ignore them due to
https://github.com/huggingfa... | closed | 2024-02-02T15:29:55Z | 2024-02-05T12:37:58Z | 2024-02-05T12:37:58Z | severo |
2,115,177,002 | Run backfill cron job in parallel on the datasets | See https://github.com/huggingface/datasets-server/pull/2375#issuecomment-1924086210 | Run backfill cron job in parallel on the datasets: See https://github.com/huggingface/datasets-server/pull/2375#issuecomment-1924086210 | closed | 2024-02-02T15:25:44Z | 2024-02-06T16:04:59Z | 2024-02-06T16:04:59Z | severo |
2,114,816,873 | Follow /auth-check 307 redirections | See https://github.com/huggingface/datasets-server/issues/1655#issuecomment-1923688766: as the dataset has been renamed, an internal call to /auth-check returns a 307 redirection, but we don't follow it.
More generally, we should handle redirections in all the code (I think it's always handled by huggingface_hub, so... | Follow /auth-check 307 redirections: See https://github.com/huggingface/datasets-server/issues/1655#issuecomment-1923688766: as the dataset has been renamed, an internal call to /auth-check returns a 307 redirection, but we don't follow it.
More generally, we should handle redirections in all the code (I think it's ... | open | 2024-02-02T12:25:29Z | 2024-07-30T16:09:13Z | null | severo |
2,114,788,836 | Should we increment "failed_runs" when error is "ResponseAlreadyComputedError"? | Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error? | Should we increment "failed_runs" when error is "ResponseAlreadyComputedError"?: Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error? | closed | 2024-02-02T12:08:31Z | 2024-02-22T21:16:12Z | 2024-02-22T21:16:12Z | severo |
2,114,613,227 | Fix backfill when dataset only has one cache entry | fixes #2274 | Fix backfill when dataset only has one cache entry: fixes #2274 | closed | 2024-02-02T10:33:31Z | 2024-02-02T15:26:09Z | 2024-02-02T15:26:08Z | severo |
2,113,586,252 | Upgrade gradio | fix #2306 | Upgrade gradio: fix #2306 | closed | 2024-02-01T22:06:13Z | 2024-02-01T22:25:38Z | 2024-02-01T22:25:37Z | severo |
2,113,485,595 | Use "Sign-In with HF" instead of token in admin-UI | See https://huggingface.co/docs/hub/spaces-oauth | Use "Sign-In with HF" instead of token in admin-UI: See https://huggingface.co/docs/hub/spaces-oauth | open | 2024-02-01T21:07:15Z | 2024-02-01T21:21:38Z | null | severo |
2,113,034,162 | Difficulty in admin UI | Already deployed
<img width="1106" alt="Capture d’écran 2024-02-01 à 18 25 08" src="https://github.com/huggingface/datasets-server/assets/1676121/3d33a129-c887-4220-a6ed-100ee8dc6cb6">
| Difficulty in admin UI: Already deployed
<img width="1106" alt="Capture d’écran 2024-02-01 à 18 25 08" src="https://github.com/huggingface/datasets-server/assets/1676121/3d33a129-c887-4220-a6ed-100ee8dc6cb6">
| closed | 2024-02-01T17:21:48Z | 2024-02-01T17:25:25Z | 2024-02-01T17:25:24Z | severo |