id | title | body | description | state | created_at | updated_at | closed_at | user
|---|---|---|---|---|---|---|---|---|
2,230,611,906 | Fix fsspec and s3fs dependencies | This PR:
- Adds explicit dependency on fsspec by using the tilde requirement because fsspec does not use semantic versioning.
- ~~Removes dependency on s3fs because this package is no longer used.~~
- ~~Sets s3fs as dev dependency (also using the tilde requirement) because 2 tests use it (see [comment](https://githu... | Fix fsspec and s3fs dependencies: This PR:
- Adds explicit dependency on fsspec by using the tilde requirement because fsspec does not use semantic versioning.
- ~~Removes dependency on s3fs because this package is no longer used.~~
- ~~Sets s3fs as dev dependency (also using the tilde requirement) because 2 tests u... | closed | 2024-04-08T08:42:57Z | 2024-04-09T10:25:36Z | 2024-04-09T10:25:36Z | albertvillanova |
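The tilde requirement described in this PR pins a calendar-versioned package to its patch series. In a Poetry `pyproject.toml` it might look like the following sketch (the version numbers are illustrative, not the ones actually pinned by the PR):

```toml
[tool.poetry.dependencies]
# fsspec uses calendar versioning, not semver, so `~` (>=2024.3.1, <2024.4.0)
# avoids accidentally picking up a breaking monthly release
fsspec = "~2024.3.1"

[tool.poetry.group.dev.dependencies]
# s3fs is only needed by a couple of tests, hence a dev dependency
s3fs = "~2024.3.1"
```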
2,230,454,645 | Update pyarrow to 14.0.2 | Update pyarrow from 14.0.1 to 14.0.2: https://arrow.apache.org/release/14.0.2.html | Update pyarrow to 14.0.2: Update pyarrow from 14.0.1 to 14.0.2: https://arrow.apache.org/release/14.0.2.html | closed | 2024-04-08T07:21:39Z | 2024-04-08T09:29:27Z | 2024-04-08T09:29:26Z | albertvillanova |
2,228,460,307 | Move docs from /datasets-server to /dataset-viewer | We need to change:
- here
- in moon-landing
- in datasets?
- in blog posts, observable, notion, google colabs?... | Move docs from /datasets-server to /dataset-viewer: We need to change:
- here
- in moon-landing
- in datasets?
- in blog posts, observable, notion, google colabs?... | closed | 2024-04-05T16:58:04Z | 2024-07-30T16:57:04Z | 2024-07-30T16:57:04Z | severo |
2,228,456,912 | Rework the docs and README after renaming datasets-server -> dataset-viewer | Now that we call the project "Dataset viewer", we should make it clear in the docs and READMEs that:
- that the project powers the Hub's dataset viewer
- people can file issues if something is broken in the Hub dataset viewer
- contributions here will help improve the dataset viewer
- that it does not conta... | Rework the docs and README after renaming datasets-server -> dataset-viewer: Now that we call the project "Dataset viewer", we should make it clear in the docs and READMEs that:
- that the project powers the Hub's dataset viewer
- people can file issues if something is broken in the Hub dataset viewer
- contri... | closed | 2024-04-05T16:55:48Z | 2024-08-01T15:11:01Z | 2024-08-01T15:11:01Z | severo |
2,228,454,037 | Rename datasets-server to dataset-viewer in infra internals? | Follow-up to #2650.
Is it necessary? Not urgent in any case.
Some elements to review:
- [ ] https://github.com/huggingface/infra
- [ ] https://github.com/huggingface/infra-deployments
- [ ] docker image tags (https://hub.docker.com/r/huggingface/datasets-server-services-search -> https://hub.docker.com/r/huggi... | Rename datasets-server to dataset-viewer in infra internals?: Follow-up to #2650.
Is it necessary? Not urgent in any case.
Some elements to review:
- [ ] https://github.com/huggingface/infra
- [ ] https://github.com/huggingface/infra-deployments
- [ ] docker image tags (https://hub.docker.com/r/huggingface/dat... | closed | 2024-04-05T16:53:34Z | 2024-04-08T09:26:14Z | 2024-04-08T09:26:13Z | severo |
2,228,448,367 | Change API URL to dataset-viewer.huggingface.co? | Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650
Should we do it?
- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875
- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911
If we change it, we would have to update:
- moon-landing
-... | Change API URL to dataset-viewer.huggingface.co?: Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650
Should we do it?
- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875
- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911
If we chan... | closed | 2024-04-05T16:49:13Z | 2024-04-08T09:24:43Z | 2024-04-08T09:24:43Z | severo |
2,228,404,959 | Renaming part2 | Wait for https://github.com/huggingface/moon-landing/pull/9520 to be merged and deployed to hub-ci before testing this and merging. | Renaming part2: Wait for https://github.com/huggingface/moon-landing/pull/9520 to be merged and deployed to hub-ci before testing this and merging. | closed | 2024-04-05T16:20:56Z | 2024-04-08T15:05:14Z | 2024-04-08T15:05:13Z | severo |
2,228,061,635 | Remove unnecessary types-pillow | Now that we require pillow >= 10.3.0, the package types-pillow is no longer necessary.
This PR removes it as a dev dependency. | Remove unnecessary types-pillow: Now that we require pillow >= 10.3.0, the package types-pillow is no longer necessary.
This PR removes it as a dev dependency. | closed | 2024-04-05T13:37:34Z | 2024-04-08T06:15:11Z | 2024-04-08T06:15:11Z | albertvillanova |
2,227,966,381 | Rename to dataset viewer part1 | See #2650. The first part of changes has no side-effects. See https://github.com/huggingface/datasets-server/issues/2650#issuecomment-2039609308 for the following tasks | Rename to dataset viewer part1: See #2650. The first part of changes has no side-effects. See https://github.com/huggingface/datasets-server/issues/2650#issuecomment-2039609308 for the following tasks | closed | 2024-04-05T12:51:55Z | 2024-04-05T13:49:14Z | 2024-04-05T13:49:13Z | severo |
2,227,839,042 | Fix dependency on pillow in libs | Both libs (libcommon and libapi) use the pillow package in their code, but do not explicitly depend on it.
See:
- In libcommon src code:
https://github.com/huggingface/datasets-server/blob/33f0c9c098d1fb94ad09c8b6e9f89cebf08d8f20/libs/libcommon/src/libcommon/viewer_utils/asset.py#L10
- In libapi tests code:
http... | Fix dependency on pillow in libs: Both libs (libcommon and libapi) use the pillow package in their code, but do not explicitly depend on it.
See:
- In libcommon src code:
https://github.com/huggingface/datasets-server/blob/33f0c9c098d1fb94ad09c8b6e9f89cebf08d8f20/libs/libcommon/src/libcommon/viewer_utils/asset.py#... | closed | 2024-04-05T11:46:07Z | 2024-04-05T13:32:05Z | 2024-04-05T13:32:04Z | albertvillanova |
2,227,682,468 | Increase the number of backfill workers? | Today, it's 8. Let's try increasing it and see if it speeds up the backfill job.
The current throughput is 577 datasets/minute. | Increase the number of backfill workers?: Today, it's 8. Let's try increasing it and see if it speeds up the backfill job.
The current throughput is 577 datasets/minute. | open | 2024-04-05T10:42:11Z | 2024-04-05T16:42:13Z | null | severo |
2,227,641,869 | move backfill time (we changed hour in France) | null | move backfill time (we changed hour in France): | closed | 2024-04-05T10:20:29Z | 2024-04-05T10:21:10Z | 2024-04-05T10:20:34Z | severo |
2,227,635,895 | move backfill job time again | null | move backfill job time again: | closed | 2024-04-05T10:17:28Z | 2024-04-05T10:18:13Z | 2024-04-05T10:17:34Z | severo |
2,227,573,929 | move the backfill cron job | null | move the backfill cron job: | closed | 2024-04-05T09:53:22Z | 2024-04-05T09:53:53Z | 2024-04-05T09:53:26Z | severo |
2,226,471,890 | Parallelize the backfill cron job | fix #2250 | Parallelize the backfill cron job: fix #2250 | closed | 2024-04-04T20:41:20Z | 2024-04-05T09:45:38Z | 2024-04-05T09:45:37Z | severo |
2,223,386,106 | Bump the pip group across 11 directories with 1 update | Bumps the pip group with 1 update in the /front/admin_ui directory: [pillow](https://github.com/python-pillow/Pillow).
Bumps the pip group with 1 update in the /jobs/cache_maintenance directory: [pillow](https://github.com/python-pillow/Pillow).
Bumps the pip group with 1 update in the /jobs/mongodb_migration directory... | Bump the pip group across 11 directories with 1 update: Bumps the pip group with 1 update in the /front/admin_ui directory: [pillow](https://github.com/python-pillow/Pillow).
Bumps the pip group with 1 update in the /jobs/cache_maintenance directory: [pillow](https://github.com/python-pillow/Pillow).
Bumps the pip grou... | closed | 2024-04-03T16:30:31Z | 2024-04-05T09:48:34Z | 2024-04-05T09:48:34Z | dependabot[bot] |
2,219,449,122 | try if search works in big datasets | Before https://github.com/huggingface/datasets-server/pull/2641, I wanted to try changing the memory limit.
Maybe it is a dumb question, but I don't understand why we have so many search replicas when there is a low number of requests (we have only 12 for rows, which is highly requested).
 | From https://huggingface.slack.com/archives/C04HZ32QV17/p1711698013265029
> Is there a time-out for webhooks? I take about 1 min to respond and I'm getting 500.
> yes 30 seconds
> you should respond immediately and do stuff in background if it takes that much time
| Ensure /webhook answers immediately (or quickly): From https://huggingface.slack.com/archives/C04HZ32QV17/p1711698013265029
> Is there a time-out for webhooks? I take about 1 min to respond and I'm getting 500.
> yes 30 seconds
> you should respond immediately and do stuff in background if it takes that much time
... | open | 2024-03-29T08:33:22Z | 2024-05-13T21:33:08Z | null | severo |
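The advice quoted in this issue (answer the webhook immediately, do the heavy work in the background) can be sketched with nothing but the standard library; the names below are illustrative, not the project's actual handler:

```python
import queue
import threading

# Work queue: the handler enqueues the payload and returns at once,
# while a daemon thread processes it outside the request/response cycle.
tasks: "queue.Queue" = queue.Queue()
processed: list = []

def worker() -> None:
    while True:
        payload = tasks.get()
        if payload is None:  # sentinel used to stop the worker
            break
        processed.append(payload)  # stand-in for the slow cache update
        tasks.task_done()

def handle_webhook(payload: dict) -> tuple:
    """Acknowledge immediately; the slow part runs in the worker thread."""
    tasks.put(payload)
    return 200, "accepted"

threading.Thread(target=worker, daemon=True).start()
```

In the real service a framework-level mechanism (e.g. Starlette background tasks or the existing job queue) would presumably play the role of the thread, but the shape is the same: acknowledge within the 30-second timeout, process later.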
2,213,481,141 | Rename this project `dataset-viewer` | idea from here: https://huggingface.slack.com/archives/C02V51Q3800/p1711555601697509?thread_ts=1711535232.998799&cid=C02V51Q3800 (internal)
> First is that I think nobody has any idea it’s related to the viewer (keep pointing people to it when they ask)
Making it clear that this code "powers the frontend feature"... | Rename this project `dataset-viewer`: idea from here: https://huggingface.slack.com/archives/C02V51Q3800/p1711555601697509?thread_ts=1711535232.998799&cid=C02V51Q3800 (internal)
> First is that I think nobody has any idea it’s related to the viewer (keep pointing people to it when they ask)
Making it clear that t... | closed | 2024-03-28T15:10:46Z | 2024-04-30T15:54:39Z | 2024-04-30T15:54:39Z | severo |
2,213,351,470 | Should we support /filter on columns that contain SQL commands? | See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. Clicking on any of the 'classes' leads to an error
<img width="1209" alt="Capture d’écran 2024-03-28 à 15 11 50" src="https://github.com/huggingface/datasets-server/assets/1676121/3aaf779f-0465-429a-bafb-1a16ff5f2901">
... | Should we support /filter on columns that contain SQL commands?: See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. Clicking on any of the 'classes' leads to an error
<img width="1209" alt="Capture d’écran 2024-03-28 à 15 11 50" src="https://github.com/huggingface/datasets... | open | 2024-03-28T14:14:01Z | 2024-03-28T14:24:34Z | null | severo |
2,213,183,259 | Add transformed columns (audio, image, string, lists) to duckdb file | To support filtering for columns that are stored as bytes (audio; image - when it's implemented), we need to store the transformed values that are used for stats computation. For example, for audio we should store their duration as a separate numerical column. We can also store other columns that require any other trans... | Add transformed columns (audio, image, string, lists) to duckdb file: To support filtering for columns that are stored as bytes (audio; image - when it's implemented), we need to store the transformed values that are used for stats computation. For example, for audio we should store their duration as a separate numerica... | closed | 2024-03-28T12:58:17Z | 2024-06-07T13:28:08Z | 2024-06-07T13:28:08Z | polinaeterna |
2,212,932,173 | Fix row-group size for imagefolder and audiofolder datasets | Close https://github.com/huggingface/datasets-server/issues/2646
I'll recompute the parquet-and-info jobs for the impacted datasets, otherwise their Viewer will still be slow or get TooBigContentError | Fix row-group size for imagefolder and audiofolder datasets: Close https://github.com/huggingface/datasets-server/issues/2646
I'll recompute the parquet-and-info jobs for the impacted datasets, otherwise their Viewer will still be slow or get TooBigContentError | closed | 2024-03-28T11:00:52Z | 2024-04-02T10:08:54Z | 2024-03-28T15:57:12Z | lhoestq |
2,212,917,178 | Wrong row-group size for imagefolder and audiofolder datasets | This is due to the `builder.info.features` not being available which causes `get_writer_batch_size_from_info` to return the wrong value in `parquet-and-info` | Wrong row-group size for imagefolder and audiofolder datasets: This is due to the `builder.info.features` not being available which causes `get_writer_batch_size_from_info` to return the wrong value in `parquet-and-info` | closed | 2024-03-28T10:53:28Z | 2024-03-28T15:57:13Z | 2024-03-28T15:57:13Z | lhoestq |
2,212,810,237 | Remove croissant endpoint | after we have these two PRs in prod:
- https://github.com/huggingface/datasets-server/pull/2643
- https://github.com/huggingface/moon-landing/pull/9426 | Remove croissant endpoint: after we have these two PRs in prod:
- https://github.com/huggingface/datasets-server/pull/2643
- https://github.com/huggingface/moon-landing/pull/9426 | closed | 2024-03-28T10:00:29Z | 2024-03-29T10:02:32Z | 2024-03-29T10:02:31Z | severo |
2,211,928,889 | Try to improve fts time for some datasets | null | Try to improve fts time for some datasets: | closed | 2024-03-27T21:20:53Z | 2024-03-27T21:26:40Z | 2024-03-27T21:26:39Z | AndreaFrancis |
2,211,394,582 | /croissant -> /croissant-crumbs (only specific fields) | fixes #2624 | /croissant -> /croissant-crumbs (only specific fields): fixes #2624 | closed | 2024-03-27T17:37:22Z | 2024-03-28T10:25:13Z | 2024-03-28T10:25:12Z | severo |
2,211,376,717 | Update Croissant URL to use the HF Hub `/api` | I updated the URLs in the compatible-libraries jobs and also in the docs.
cc @severo | Update Croissant URL to use the HF Hub `/api`: I updated the URLs in the compatible-libraries jobs and also in the docs.
cc @severo | closed | 2024-03-27T17:29:52Z | 2024-03-28T10:20:15Z | 2024-03-28T10:20:14Z | lhoestq |
2,211,097,843 | Apply faster fts by stage table for all datasets | Enable https://github.com/huggingface/datasets-server/pull/2638 approach for all datasets | Apply faster fts by stage table for all datasets: Enable https://github.com/huggingface/datasets-server/pull/2638 approach for all datasets | closed | 2024-03-27T15:24:00Z | 2024-04-05T11:59:37Z | 2024-04-05T11:20:20Z | AndreaFrancis |
2,210,874,875 | Replace HF app token with 1 hour JWT (for workers?) | Same logic as https://github.com/huggingface/spaces-app-manager/pull/1709 (internal)
Discussion here: https://huggingface.slack.com/archives/C02ETJ9N2LE/p1711526336460819 (internal)
| Replace HF app token with 1 hour JWT (for workers?): Same logic as https://github.com/huggingface/spaces-app-manager/pull/1709 (internal)
Discussion here: https://huggingface.slack.com/archives/C02ETJ9N2LE/p1711526336460819 (internal)
| open | 2024-03-27T13:55:00Z | 2024-03-27T13:55:11Z | null | severo |
2,210,780,561 | Make dataset viewer configurable | Some examples of potential usages:
- specify that a string column should be treated as a class
- specify columns order
- set which config to display by default if it differs from `datasets`' default one.
Maybe in the future, when (I hope) we have more functionality, we can also provide which metrics and over whic...
- specify that a string column should be treated as a class
- specify columns order
- set which config to display by default if it differs from `datasets`' default one.
Maybe in the future, when (I hope) we have more functionality, we can also p...
2,209,488,348 | Try to improve fts for big datasets by stage table join | Following PR https://github.com/huggingface/datasets-server/pull/2633 and comment https://github.com/huggingface/datasets-server/pull/2633#issuecomment-2021117342 I would like to first try with a JOIN approach:
1. Get the rows that match the query criteria (score NOT NULL)
2. JOIN with the `data` table to get the ... | Try to improve fts for big datasets by stage table join: Following PR https://github.com/huggingface/datasets-server/pull/2633 and comment https://github.com/huggingface/datasets-server/pull/2633#issuecomment-2021117342 I would like to first try with a JOIN approach:
1. Get the rows that match the query criteria (s... | closed | 2024-03-26T22:57:31Z | 2024-03-27T13:33:48Z | 2024-03-27T13:33:47Z | AndreaFrancis |
2,209,471,530 | remove unused dependencies in workers | See https://github.com/huggingface/datasets-server/issues/2636#issuecomment-2021556425
It will fix #2636, and also partly #2476
It should also reduce the CI duration and the size of the service/worker docker image (no more tensorflow and pytorch!) | remove unused dependencies in workers: See https://github.com/huggingface/datasets-server/issues/2636#issuecomment-2021556425
It will fix #2636, and also partly #2476
It should also reduce the CI duration and the size of the service/worker docker image (no more tensorflow and pytorch!) | closed | 2024-03-26T22:41:31Z | 2024-03-27T13:06:25Z | 2024-03-27T12:57:59Z | severo |
2,208,390,726 | e2e is broken due to KenLM install | We get:
```
Note: This error originates from the build backend, and is likely not a problem with poetry but with kenlm (0.2.0 https://github.com/kpu/kenlm/archive/master.zip) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "kenlm @ https://github.com/kpu/kenlm/ar... | e2e is broken due to KenLM install: We get:
```
Note: This error originates from the build backend, and is likely not a problem with poetry but with kenlm (0.2.0 https://github.com/kpu/kenlm/archive/master.zip) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "ken... | closed | 2024-03-26T14:17:23Z | 2024-03-27T12:58:00Z | 2024-03-27T12:58:00Z | severo |
2,208,129,283 | Add login comment in code snippets | Example:
```python
from datasets import load_dataset
# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("imagenet-1k")
```
The comment is only shown for datasets that require a token to be accessed
Close https://github.com/huggingface/datasets-server/issues/2619 | Add login comment in code snippets: Example:
```python
from datasets import load_dataset
# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("imagenet-1k")
```
The comment is only shown for datasets that require a token to be accessed
Close https://github.com/huggingface/da... | closed | 2024-03-26T12:36:04Z | 2024-03-27T10:43:54Z | 2024-03-27T10:43:53Z | lhoestq |
2,207,984,335 | Upgrade huggingface_hub to 0.22 | See https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/5
Maybe wait for upgrade in datasets first? | Upgrade huggingface_hub to 0.22: See https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/5
Maybe wait for upgrade in datasets first? | closed | 2024-03-26T11:30:25Z | 2024-04-16T11:44:07Z | 2024-04-16T11:44:07Z | severo |
2,206,809,390 | Try to improve fts for big datasets by batches | Should fix https://github.com/huggingface/datasets-server/issues/2628 and help with search in big datasets.
When doing the
```
FTS_BY_TABLE_COMMAND = (
"SELECT * EXCLUDE (__hf_fts_score) FROM (SELECT *, fts_main_data.match_bm25(__hf_index_id, ?) AS __hf_fts_score"
" FROM data) A WHERE __hf_fts_score IS ... | Try to improve fts for big datasets by batches: Should fix https://github.com/huggingface/datasets-server/issues/2628 and help with search in big datasets.
When doing the
```
FTS_BY_TABLE_COMMAND = (
"SELECT * EXCLUDE (__hf_fts_score) FROM (SELECT *, fts_main_data.match_bm25(__hf_index_id, ?) AS __hf_fts_sco... | closed | 2024-03-25T22:09:36Z | 2024-03-26T22:57:58Z | 2024-03-26T22:57:58Z | AndreaFrancis |
2,206,141,722 | DatasetScriptError should be retried | On dataset mozilla-foundation/common_voice_6_1, we have `DatasetScriptError`, caused by `DatasetGenerationError`, caused by a timeout.
In this case, for config `de`, step `config-parquet-and-info`:
```
{
"error": "An error occurred while generating the dataset",
"cause_exception": "DatasetGenerationError",... | DatasetScriptError should be retried: On dataset mozilla-foundation/common_voice_6_1, we have `DatasetScriptError`, caused by `DatasetGenerationError`, caused by a timeout.
In this case, for config `de`, step `config-parquet-and-info`:
```
{
"error": "An error occurred while generating the dataset",
"cause... | closed | 2024-03-25T16:18:18Z | 2024-08-22T00:29:13Z | 2024-08-22T00:29:13Z | severo |
2,206,134,460 | ComputationError (ZeroDivisionError) on split-descriptive-statistics | For mozilla-foundation/common_voice_6_1 / dv / other, we currently have error `ComputationError` due to `ZeroDivisionError`. cc @polinaeterna
| ComputationError (ZeroDivisionError) on split-descriptive-statistics: For mozilla-foundation/common_voice_6_1 / dv / other, we currently have error `ComputationError` due to `ZeroDivisionError`. cc @polinaeterna
| open | 2024-03-25T16:14:47Z | 2024-03-27T12:57:02Z | null | severo |
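A `ZeroDivisionError` in descriptive statistics typically comes from normalizing over an empty or all-null column. A minimal guard, as a hedged sketch (this is not the actual `split-descriptive-statistics` code), could look like:

```python
from typing import Optional

def safe_mean(values: list) -> Optional[float]:
    """Mean of the non-null values, or None (instead of raising
    ZeroDivisionError) when the column is empty or all-null."""
    non_null = [v for v in values if v is not None]
    if not non_null:  # empty split or all-null column: nothing to divide by
        return None
    return sum(non_null) / len(non_null)
```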
2,205,560,686 | Take spawning.io opted out URLs into account in responses? | In particular, for images (assets / cached-assets).
Raised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR | Take spawning.io opted out URLs into account in responses?: In particular, for images (assets / cached-assets).
Raised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR | open | 2024-03-25T11:49:49Z | 2024-03-25T11:49:58Z | null | severo |
2,205,467,740 | Detect when a new commit only changes the dataset card? | Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results.
asked here (private slack channel): https://huggingface.slack.com/archives/C04N96UGUFM/p1701862863691809
> Sometimes I d... | Detect when a new commit only changes the dataset card?: Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results.
asked here (private slack channel): https://huggingface.slack.com/... | closed | 2024-03-25T10:57:36Z | 2024-06-19T16:02:33Z | 2024-06-19T16:02:33Z | severo |
2,205,461,399 | Increase nginx gateway timeout limit? | https://datasets-server.huggingface.co/search?dataset=wikimedia%2Fwikipedia&config=20231101.en&split=train&offset=0&length=100&query=deep+learning
gives
```
502 Bad Gateway
nginx/1.20.2
```
| Increase nginx gateway timeout limit?: https://datasets-server.huggingface.co/search?dataset=wikimedia%2Fwikipedia&config=20231101.en&split=train&offset=0&length=100&query=deep+learning
gives
```
502 Bad Gateway
nginx/1.20.2
```
| closed | 2024-03-25T10:53:56Z | 2024-03-27T13:57:59Z | 2024-03-27T13:57:59Z | severo |
2,205,449,001 | Replace our custom "stale bot" action with the GitHub's one? | See `actions/stale@v5`
```yaml
name: Mark inactive issues as stale
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-is... | Replace our custom "stale bot" action with the GitHub's one?: See `actions/stale@v5`
```yaml
name: Mark inactive issues as stale
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- u... | open | 2024-03-25T10:48:47Z | 2024-03-25T10:49:02Z | null | severo |
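For reference, a complete `actions/stale@v5` configuration along the lines of the truncated snippet might look like this (the day counts, labels, and messages are illustrative choices, not values decided in the issue):

```yaml
name: Mark inactive issues as stale
on:
  schedule:
    - cron: "30 1 * * *"
jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          days-before-issue-stale: 30
          days-before-issue-close: 14
          stale-issue-label: "stale"
          stale-issue-message: "This issue is stale because it has been open for 30 days with no activity."
          close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
          days-before-pr-stale: -1
          days-before-pr-close: -1
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```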
2,203,065,290 | upgrade to pyarrow 15? | we use pyarrow 14 | upgrade to pyarrow 15?: we use pyarrow 14 | closed | 2024-03-22T18:22:04Z | 2024-04-30T16:19:19Z | 2024-04-30T16:19:19Z | severo |
2,203,017,088 | Better error code/message for badly formatted Parquet files | See https://huggingface.co/datasets/PleIAs/Dutch-PD/discussions/1
> Indeed it seems that the Parquet files don't share the same type for the publication_date field: either int64 or string.
Some files like dutch_pd_8.parquet even contain only empty strings as publication_date.
> This is causing an error in the Data... | Better error code/message for badly formatted Parquet files: See https://huggingface.co/datasets/PleIAs/Dutch-PD/discussions/1
> Indeed it seems that the Parquet files don't share the same type for the publication_date field: either int64 or string.
Some files like dutch_pd_8.parquet even contain only empty strings... | open | 2024-03-22T17:52:20Z | 2024-03-22T17:53:36Z | null | severo |
2,202,703,241 | remove fields from croissant once it's provided by the Hub API | Once the Hub provides an API to access the croissant metadata (https://github.com/huggingface/moon-landing/pull/9106, internal), we will be able to remove the following fields from the croissant response:
- `name`
- `description`
- `license`
- `url`
Also: we could provide `identifier` (DOI) from the Hub API, and... | remove fields from croissant once it's provided by the Hub API: Once the Hub provides an API to access the croissant metadata (https://github.com/huggingface/moon-landing/pull/9106, internal), we will be able to remove the following fields from the croissant response:
- `name`
- `description`
- `license`
- `url`
... | closed | 2024-03-22T14:59:16Z | 2024-03-28T10:25:13Z | 2024-03-28T10:25:13Z | severo |
2,202,600,045 | try to fix env variable | While deploying the search component, it throws the error:
cannot convert int64 to string | try to fix env variable: While deploying the search component, it throws the error:
cannot convert int64 to string | closed | 2024-03-22T14:14:51Z | 2024-03-22T14:26:10Z | 2024-03-22T14:26:10Z | AndreaFrancis |
2,202,281,646 | adapt JWT verification to new field | The JWT now contains `permissions: {"repo.content.read": true}` instead
of the (soon-deprecated) `read: true`.
fixes https://github.com/huggingface/datasets-server/issues/2620. | adapt JWT verification to new field: The JWT now contains `permissions: {"repo.content.read": true}` instead
of the (soon-deprecated) `read: true`.
fixes https://github.com/huggingface/datasets-server/issues/2620. | closed | 2024-03-22T11:26:42Z | 2024-03-26T13:54:25Z | 2024-03-26T13:54:24Z | severo |
2,200,407,122 | Use pandas for single files datasets | Pandas only supports paths without `*`.
I fixed an issue to get the full file name from single-file datasets YAML metadata containing the `data_files` with `*`
close https://github.com/huggingface/datasets-server/issues/2614 | Use pandas for single files datasets: Pandas only supports paths without `*`.
I fixed an issue to get the full file name from single-file datasets YAML metadata containing the `data_files` with `*`
close https://github.com/huggingface/datasets-server/issues/2614 | closed | 2024-03-21T14:51:28Z | 2024-03-21T19:26:02Z | 2024-03-21T19:26:01Z | lhoestq |
2,200,183,607 | Check `permissions` instead of `read` in JWT | Here:
https://github.com/huggingface/datasets-server/blob/dd2a81568b2334f69a005f2c1dfa224ad2024e9f/libs/libapi/src/libapi/jwt_token.py#L305-L307
Once https://github.com/huggingface/moon-landing/pull/9306 (internal) is deployed. The new field is:
```
permissions: {"repo.content.read": true}
```
Note that m... | Check `permissions` instead of `read` in JWT: Here:
https://github.com/huggingface/datasets-server/blob/dd2a81568b2334f69a005f2c1dfa224ad2024e9f/libs/libapi/src/libapi/jwt_token.py#L305-L307
Once https://github.com/huggingface/moon-landing/pull/9306 (internal) is deployed. The new field is:
```
permissions: {... | closed | 2024-03-21T13:20:50Z | 2024-03-26T13:54:26Z | 2024-03-26T13:54:26Z | severo |
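The check described in this issue could be sketched as follows; the helper name is hypothetical, and the fallback to the old claim is only a guess at how a transition period might be handled:

```python
def has_read_permission(payload: dict) -> bool:
    """True if the JWT payload grants read access to the repo contents.

    Accepts the new `permissions` claim and, as an assumed transition
    measure, falls back to the soon-deprecated top-level `read` claim.
    """
    permissions = payload.get("permissions")
    if isinstance(permissions, dict):
        return permissions.get("repo.content.read") is True
    return payload.get("read") is True
```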
2,198,764,953 | Include authentication information in /compatible-libraries code | If a dataset is private or gated, we should include authentication in the loading code snippet.
Note that we don't store in datasets-server if a dataset is gated or private. We could, see https://github.com/huggingface/datasets-server/issues/2208. Or we could check the status from the job runner | Include authentication information in /compatible-libraries code: If a dataset is private or gated, we should include authentication in the loading code snippet.
Note that we don't store in datasets-server if a dataset is gated or private. We could, see https://github.com/huggingface/datasets-server/issues/2208. Or ... | closed | 2024-03-20T23:17:21Z | 2024-03-27T10:43:54Z | 2024-03-27T10:43:54Z | severo |
2,198,557,584 | In dataset-hub-cache, only include fields that will be stored by the Hub | Possibly `compatible_libraries` will be removed or simplified
https://github.com/huggingface/datasets-server/pull/2610#discussion_r1531786545 | In dataset-hub-cache, only include fields that will be stored by the Hub: Possibly `compatible_libraries` will be removed or simplified
https://github.com/huggingface/datasets-server/pull/2610#discussion_r1531786545 | closed | 2024-03-20T20:59:41Z | 2024-03-21T10:37:43Z | 2024-03-21T10:37:43Z | severo |
2,198,231,639 | Add formats tags | Added `formats: list[str]` to "hub-cache". It is computed in "compatible-libraries" since compatibilities depend on the format.
Possible values are
- csv
- json
- parquet
- text
- imagefolder
- audiofolder
Mentioned in https://github.com/huggingface/datasets-server/issues/2455
I chose to have it as a li... | Add formats tags: Added `formats: list[str]` to "hub-cache". It is computed in "compatible-libraries" since compatibilities depend on the format.
Possible values are
- csv
- json
- parquet
- text
- imagefolder
- audiofolder
Mentioned in https://github.com/huggingface/datasets-server/issues/2455
I chose ... | closed | 2024-03-20T18:12:25Z | 2024-03-21T14:07:08Z | 2024-03-21T13:58:48Z | lhoestq |
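The `formats` tag added here has to be derived from the data files somehow; a naive sketch based on file extensions (the real compatible-libraries job is more involved) might be:

```python
import os

# Illustrative extension tables; the real job may use different lists.
FORMAT_BY_EXTENSION = {
    ".csv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".parquet": "parquet",
    ".txt": "text",
    ".png": "imagefolder",
    ".jpg": "imagefolder",
    ".jpeg": "imagefolder",
    ".wav": "audiofolder",
    ".mp3": "audiofolder",
    ".flac": "audiofolder",
}

def detect_formats(filenames: list) -> list:
    """Guess the `formats` tags of a dataset from its data file extensions."""
    formats = set()
    for name in filenames:
        _, extension = os.path.splitext(name)
        fmt = FORMAT_BY_EXTENSION.get(extension.lower())
        if fmt is not None:
            formats.add(fmt)
    return sorted(formats)
```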
2,198,014,467 | Bump black from 22.12.0 to 24.3.0 in /docs | Bumps [black](https://github.com/psf/black) from 22.12.0 to 24.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/black/releases">black's releases</a>.</em></p>
<blockquote>
<h2>24.3.0</h2>
<h3>Highlights</h3>
<p>This release is a milestone: it fixes Black's first CVE se... | Bump black from 22.12.0 to 24.3.0 in /docs: Bumps [black](https://github.com/psf/black) from 22.12.0 to 24.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/black/releases">black's releases</a>.</em></p>
<blockquote>
<h2>24.3.0</h2>
<h3>Highlights</h3>
<p>This release i... | closed | 2024-03-20T16:43:38Z | 2024-03-26T13:12:28Z | 2024-03-26T13:12:27Z | dependabot[bot] |
2,197,400,313 | Add hf transfer | Fix for https://github.com/huggingface/datasets-server/issues/2525 | Add hf transfer: Fix for https://github.com/huggingface/datasets-server/issues/2525 | closed | 2024-03-20T12:21:43Z | 2024-03-22T14:00:16Z | 2024-03-22T14:00:16Z | AndreaFrancis |
2,197,261,466 | Compatible libraries: Dask shown instead of Pandas for some single file datasets | e.g.
https://datasets-server.huggingface.co/compatible-libraries?dataset=gsm8k
https://datasets-server.huggingface.co/compatible-libraries?dataset=rajpurkar/squad
but for some reason it's fine for
https://datasets-server.huggingface.co/compatible-libraries?dataset=fka/awesome-chatgpt-prompts
I think it's ... | Compatible libraries: Dask shown instead of Pandas for some single file datasets: e.g.
https://datasets-server.huggingface.co/compatible-libraries?dataset=gsm8k
https://datasets-server.huggingface.co/compatible-libraries?dataset=rajpurkar/squad
but for some reason it's fine for
https://datasets-server.huggin... | closed | 2024-03-20T11:06:44Z | 2024-03-21T19:26:02Z | 2024-03-21T19:26:02Z | lhoestq |
2,195,913,334 | Support large_string as indexable column in FTS? | I've seen there are some datasets like [afg1/litscan-epmc-subset](https://huggingface.co/datasets/afg1/litscan-epmc-subset) and [baber/USPTO](https://huggingface.co/datasets/baber/USPTO) that have large_strings but does not support FTS because of this condition https://github.com/huggingface/datasets-server/blob/main/s... | Support large_string as indexable column in FTS?: I've seen there are some datasets like [afg1/litscan-epmc-subset](https://huggingface.co/datasets/afg1/litscan-epmc-subset) and [baber/USPTO](https://huggingface.co/datasets/baber/USPTO) that have large_strings but does not support FTS because of this condition https://... | closed | 2024-03-19T20:14:59Z | 2024-04-30T19:34:45Z | 2024-04-30T19:34:45Z | AndreaFrancis |
2,195,541,282 | Stats for audio | ~~I will maybe create a separate worker for media features (audio and image), haven't decided yet~~
In the end I didn't create a new worker / a new level (feature-type) because it's still not needed for our datasets sizes, see https://github.com/huggingface/datasets-server/pull/2612#issuecomment-2015482880
Failed... | Stats for audio: ~~I will maybe create a separate worker for media features (audio and image), haven't decided yet~~
In the end I didn't create a new worker / a new level (feature-type) because it's still not needed for our datasets sizes, see https://github.com/huggingface/datasets-server/pull/2612#issuecomment-201... | closed | 2024-03-19T17:17:41Z | 2024-03-27T12:47:14Z | 2024-03-27T12:47:13Z | polinaeterna |
2,195,535,326 | Fix ExternalServerError for None URLs in split-opt-in-out-scan | Fix for https://github.com/huggingface/datasets-server/issues/2608
Many URLs are None and the job runner failed when doing the join operation at https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/opt_in_out_urls_scan_from_streaming.py#L38 | Fix ExternalServerError for None URLs in split-opt-in-out-scan: Fix for https://github.com/huggingface/datasets-server/issues/2608
Many URLs are None and the job runner failed when doing the join operation at https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/split/opt_in_... | closed | 2024-03-19T17:15:08Z | 2024-03-20T15:03:31Z | 2024-03-20T14:26:24Z | AndreaFrancis |
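The failure described here (joining a batch of URLs where some are None) can be avoided by filtering out missing values first. A minimal sketch with a hypothetical helper name, not the actual job runner code:

```python
def join_urls(urls):
    # Skip None entries so the concatenation for the batch
    # opt-in/out request never receives a missing URL.
    return "\n".join(url for url in urls if url is not None)
```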
2,195,274,344 | Fix hub cache libraries | Follows the changes in https://github.com/huggingface/datasets-server/pull/2454
The `dataset-hub-cache` should have `libraries` as a list of strings, not a list of CompatibleLibrary objects | Fix hub cache libraries: Follows the changes in https://github.com/huggingface/datasets-server/pull/2454
The `dataset-hub-cache` should have `libraries` as a list of strings, not a list of CompatibleLibrary objects | closed | 2024-03-19T15:22:50Z | 2024-03-21T10:36:05Z | 2024-03-19T15:47:10Z | lhoestq |
2,194,896,122 | Updating Croissant field names to include RecordSet names. | null | Updating Croissant field names to include RecordSet names.: | closed | 2024-03-19T12:54:02Z | 2024-03-20T13:33:11Z | 2024-03-20T10:12:04Z | ccl-core |
2,194,581,729 | Systematic ExternalServerError on HackerNoon/tech-company-news-data-dump | On step split-opt-in-out-urls-scan, we always get:
```
{"error": "Error when trying to connect to https://opts-api.spawningaiapi.com/api/v2/query/urls"}
```
It seems to be the case only for this dataset (or some other ones).
cc @AndreaFrancis for viz | Systematic ExternalServerError on HackerNoon/tech-company-news-data-dump: On step split-opt-in-out-urls-scan, we always get:
```
{"error": "Error when trying to connect to https://opts-api.spawningaiapi.com/api/v2/query/urls"}
```
It seems to be the case only for this dataset (or some other ones).
cc @Andre... | closed | 2024-03-19T10:28:40Z | 2024-03-20T17:26:51Z | 2024-03-20T17:26:50Z | severo |
2,193,441,999 | `dataset-compatible-libraries` gives an UnexpectedError for some datasets | On https://huggingface.co/datasets/HackerNoon/tech-company-news-data-dump, the step `dataset-compatible-libraries` gives:
```json
{
"error": "Dataset at 'hf://datasets/HackerNoon/tech-company-news-data-dump' doesn't contain data files matching the patterns for config 'default', check `data_files` and `data_dir` ... | `dataset-compatible-libraries` gives an UnexpectedError for some datasets: On https://huggingface.co/datasets/HackerNoon/tech-company-news-data-dump, the step `dataset-compatible-libraries` gives:
```json
{
"error": "Dataset at 'hf://datasets/HackerNoon/tech-company-news-data-dump' doesn't contain data files mat... | closed | 2024-03-18T21:57:23Z | 2024-05-14T15:02:50Z | 2024-05-14T15:02:49Z | severo |
2,193,016,006 | Add Modalities tags | Add "image", "audio" and "text". Later we can think about adding "tabular" (?) or more exotic ones like
TODO
- [x] add to hub-cache
- [x] tests | Add Modalities tags: Add "image", "audio" and "text". Later we can think about adding "tabular" (?) or more exotic ones like
TODO
- [x] add to hub-cache
- [x] tests | closed | 2024-03-18T18:53:26Z | 2024-03-20T17:59:49Z | 2024-03-20T17:59:48Z | lhoestq |
2,192,998,034 | Support exif images | The EXIF orientation tag is currently ignored when displaying images (see https://huggingface.co/datasets/mariosasko/exif_image).
This PR should fix this.
| Support exif images: The EXIF orientation tag is currently ignored when displaying images (see https://huggingface.co/datasets/mariosasko/exif_image).
This PR should fix this.
| closed | 2024-03-18T18:45:44Z | 2024-03-19T10:30:22Z | 2024-03-18T22:01:47Z | mariosasko |
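For context, Pillow provides `ImageOps.exif_transpose`, which applies the EXIF Orientation tag before display. A minimal sketch of the kind of normalization such a fix could use (not necessarily what the PR does):

```python
from PIL import Image, ImageOps

def normalize_orientation(image: Image.Image) -> Image.Image:
    # Rotate/flip the pixels according to the EXIF Orientation tag,
    # so the rendered image matches what EXIF-aware viewers show.
    return ImageOps.exif_transpose(image)
```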
2,192,730,074 | Upgrade to duckdb 0.10.1 | https://github.com/duckdb/duckdb/releases/tag/v0.10.1 | Upgrade to duckdb 0.10.1: https://github.com/duckdb/duckdb/releases/tag/v0.10.1 | closed | 2024-03-18T16:43:05Z | 2024-04-12T14:11:09Z | 2024-04-12T14:11:09Z | severo |
2,189,021,537 | change values for autoscaling | - increase threshold for medium and light workers, because it's pretty common to have a burst of 50 jobs
- reduce the max number of medium and heavy workers because it's not possible currently, with 50 nodes, to get 100 medium workers and 50 heavy workers at the same time | change values for autoscaling: - increase threshold for medium and light workers, because it's pretty common to have a burst of 50 jobs
- reduce the max number of medium and heavy workers because it's not possible currently, with 50 nodes, to get 100 medium workers and 50 heavy workers at the same time | closed | 2024-03-15T16:22:09Z | 2024-03-15T16:26:43Z | 2024-03-15T16:26:42Z | severo |
2,188,554,099 | Compute stats for Sequence feature only if its underlying schema is List | Should fix https://huggingface.co/datasets/OpenAssistant/oasst1 (see "emojis" and "label" columns).
When sequence feature is dict of lists (not list of dicts), compute lengths over first subfield values. | Compute stats for Sequence feature only if its underlying schema is List: Should fix https://huggingface.co/datasets/OpenAssistant/oasst1 (see "emojis" and "label" columns).
When sequence feature is dict of lists (not list of dicts), compute lengths over first subfield values. | closed | 2024-03-15T13:45:48Z | 2024-03-22T11:01:05Z | 2024-03-22T11:01:04Z | polinaeterna |
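The distinction matters because a `Sequence` of a dict is stored as a dict of lists rather than a list of dicts. A pure-Python sketch (hypothetical helper name) of computing row lengths over the first subfield, as this record describes:

```python
def sequence_lengths(column):
    # For a plain list column each row is a list: take its length directly.
    # For a dict-of-lists Sequence each row is a dict whose values are lists
    # of equal length: measure the first subfield's list instead.
    lengths = []
    for row in column:
        if isinstance(row, dict):
            first_subfield = next(iter(row.values()))
            lengths.append(len(first_subfield))
        else:
            lengths.append(len(row))
    return lengths
```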
2,188,034,401 | Catch exception DatasetGenerationCastError | See https://huggingface.co/datasets/ykcajiad/CS162/discussions/1
The JSONL file is malformed, and `datasets` raises `DatasetGenerationCastError`, but we don't catch it, resulting in an `UnexpectedError`. We should catch it and pass the details to the user so they can fix the data.
```
An error occurred while gen... | Catch exception DatasetGenerationCastError: See https://huggingface.co/datasets/ykcajiad/CS162/discussions/1
The JSONL file is malformed, and `datasets` raises `DatasetGenerationCastError`, but we don't catch it, resulting in an `UnexpectedError`. We should catch it and pass the details to the user so they can fix t... | closed | 2024-03-15T09:03:47Z | 2024-05-13T18:03:44Z | 2024-05-13T18:03:44Z | severo |
2,187,371,624 | move backfill cron job time | null | move backfill cron job time: | closed | 2024-03-14T22:27:01Z | 2024-03-14T22:27:34Z | 2024-03-14T22:27:06Z | severo |
2,187,365,656 | add missing dependency between steps | follows #2577
also: add two missing tests | add missing dependency between steps: follows #2577
also: add two missing tests | closed | 2024-03-14T22:22:07Z | 2024-03-15T11:01:03Z | 2024-03-14T22:25:42Z | severo |
2,187,282,891 | set statistics as required in response to /is-valid | all the cache entries have been updated | set statistics as required in response to /is-valid: all the cache entries have been updated | closed | 2024-03-14T21:17:07Z | 2024-03-14T21:20:23Z | 2024-03-14T21:17:11Z | severo |
2,186,874,941 | Add retry when loading dataset builder | Related to https://github.com/huggingface/datasets-server/issues/1443 for `config-parquet-and-info`, we have 1065 entries with HfHubHTTPError UnexpectedError; we should retry because sometimes it is not a final error but just an issue with Hub's connectivity.
| Add retry when loading dataset builder: Related to https://github.com/huggingface/datasets-server/issues/1443 for `config-parquet-and-info`, we have 1065 entries with HfHubHTTPError UnexpectedError; we should retry because sometimes it is not a final error but just an issue with Hub's connectivity.
| closed | 2024-03-14T17:13:25Z | 2024-03-14T19:33:00Z | 2024-03-14T19:32:59Z | AndreaFrancis |
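A retry with exponential backoff for transient Hub errors could be sketched like this. This is a hedged example; the exception types, attempt counts, and delays in the real worker may differ:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0, retry_on=(ConnectionError,)):
    # Re-run fn on transient failures, sleeping base_delay * 2**attempt
    # between tries; re-raise once the attempts are exhausted.
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```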
2,186,870,374 | Delete obsolete cache entries during backfill | The backfill does not delete all the cache entries it should.
I found this looking at the `split-is-valid` entries: a lot of cache entries don't have the last version (4), eg:
```
{ _id: ObjectId("64d2bb292727fd84d43f6777"),
config: 'default',
dataset: 'chenyuxuan/wikigold',
kind: 'split-is-valid',
... | Delete obsolete cache entries during backfill: The backfill does not delete all the cache entries it should.
I found this looking at the `split-is-valid` entries: a lot of cache entries don't have the last version (4), eg:
```
{ _id: ObjectId("64d2bb292727fd84d43f6777"),
config: 'default',
dataset: 'cheny... | open | 2024-03-14T17:10:41Z | 2024-03-14T17:10:49Z | null | severo |
2,186,730,005 | Fix cell truncation | We were using the same `list` for all the cells. So if a cell has truncated columns then all the cells end up with the same value for `truncated_cells`
Close https://github.com/huggingface/datasets-server/issues/2586 | Fix cell truncation: We were using the same `list` for all the cells. So if a cell has truncated columns then all the cells end up with the same value for `truncated_cells`
Close https://github.com/huggingface/datasets-server/issues/2586 | closed | 2024-03-14T15:59:09Z | 2024-03-15T11:22:22Z | 2024-03-15T11:00:03Z | lhoestq |
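The aliasing bug described here is a classic mutable-state mistake in Python: one list object mutated across iterations ends up referenced by every cell. A minimal illustration with hypothetical names (not the actual rows code):

```python
def build_cells_buggy(rows):
    truncated_cells = []  # one shared list, mutated for every row
    cells = []
    for row in rows:
        if row["too_long"]:
            truncated_cells.append(row["name"])
        # every cell aliases the SAME list object
        cells.append({"row": row["name"], "truncated_cells": truncated_cells})
    return cells

def build_cells_fixed(rows):
    cells = []
    for row in rows:
        # fresh list per row, so cells no longer share state
        truncated_cells = [row["name"]] if row["too_long"] else []
        cells.append({"row": row["name"], "truncated_cells": truncated_cells})
    return cells
```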
2,186,574,675 | Fix openapi details | - fix assets/cached-assets urls in openapi spec
- fix vscode extension | Fix openapi details: - fix assets/cached-assets urls in openapi spec
- fix vscode extension | closed | 2024-03-14T14:50:19Z | 2024-03-14T15:12:56Z | 2024-03-14T15:12:55Z | severo |
2,186,503,761 | Improve search functionality | Given that we have already implemented the search feature on the API and hub, let's discuss here what improvements can be made.
Some suggestions:
- [ ] Investigate if it is really used (maybe a redesign of the idea is needed)
- [ ] Investigate execution time | Improve search functionality: Given that we have already implemented the search feature on the API and hub, let's discuss here what improvements can be made.
Some suggestions:
- [ ] Investigate if it is really used (maybe a redesign of the idea is needed)
- [ ] Investigate execution time | closed | 2024-03-14T14:18:58Z | 2024-07-30T20:11:27Z | 2024-07-30T20:11:27Z | AndreaFrancis |
2,186,354,662 | Force backfill | null | Force backfill: | closed | 2024-03-14T13:11:48Z | 2024-03-14T13:12:44Z | 2024-03-14T13:11:53Z | severo |
2,186,346,824 | e2e tests are failing | possibly due to the removal of cookies | e2e tests are failing: possibly due to the removal of cookies | closed | 2024-03-14T13:08:17Z | 2024-03-14T13:37:12Z | 2024-03-14T13:37:12Z | severo |
2,186,303,924 | increment job runner version | following https://github.com/huggingface/datasets-server/pull/2454 | increment job runner version: following https://github.com/huggingface/datasets-server/pull/2454 | closed | 2024-03-14T12:52:32Z | 2024-03-14T12:54:54Z | 2024-03-14T12:54:53Z | severo |
2,186,250,981 | retry all PreviousStepSOMETHING | null | retry all PreviousStepSOMETHING: | closed | 2024-03-14T12:33:54Z | 2024-03-14T12:35:56Z | 2024-03-14T12:35:55Z | severo |
2,186,095,183 | block a new dataset | null | block a new dataset: | closed | 2024-03-14T11:19:52Z | 2024-03-14T11:20:38Z | 2024-03-14T11:19:59Z | severo |
2,186,058,339 | Follow-ups to #2543 | See https://github.com/huggingface/datasets-server/pull/2543#issuecomment-1985494442
- [ ] we might factorize code between [query](https://github.com/huggingface/datasets-server/blob/87c224fa218420d69b20cd28d7befee0daf3c236/libs/libcommon/src/libcommon/parquet_utils.py#L330) and [query_truncated_binary](https://gith... | Follow-ups to #2543: See https://github.com/huggingface/datasets-server/pull/2543#issuecomment-1985494442
- [ ] we might factorize code between [query](https://github.com/huggingface/datasets-server/blob/87c224fa218420d69b20cd28d7befee0daf3c236/libs/libcommon/src/libcommon/parquet_utils.py#L330) and [query_truncated... | open | 2024-03-14T11:00:26Z | 2024-07-30T16:54:25Z | null | severo |
2,186,017,234 | Cells are marked as truncated whereas they're not | See https://datasets-server.huggingface.co/first-rows?dataset=HuggingFaceH4/deita-6k-v0-sft&config=default&split=train_sft
<details><summary>JSON response</summary>
<pre>
{
"dataset": "HuggingFaceH4/deita-6k-v0-sft",
"config": "default",
"split": "train_sft",
"features": [
{
"feature_idx": ... | Cells are marked as truncated whereas they're not: See https://datasets-server.huggingface.co/first-rows?dataset=HuggingFaceH4/deita-6k-v0-sft&config=default&split=train_sft
<details><summary>JSON response</summary>
<pre>
{
"dataset": "HuggingFaceH4/deita-6k-v0-sft",
"config": "default",
"split": "train_s... | closed | 2024-03-14T10:39:45Z | 2024-03-15T12:15:05Z | 2024-03-15T11:00:04Z | severo |
2,184,261,730 | Update polars to fix PanicException | should fix the viewer for https://huggingface.co/datasets/teknium/OpenHermes-2.5 and presumably many others
There is an issue for this in polars: https://github.com/pola-rs/polars/issues/3942.
I didn't understand though why some cases with nested structs work while some like this don't | Update polars to fix PanicException: should fix the viewer for https://huggingface.co/datasets/teknium/OpenHermes-2.5 and presumably many others
There is an issue for this in polars: https://github.com/pola-rs/polars/issues/3942.
I didn't understand though why some cases with nested structs work while some like ... | closed | 2024-03-13T15:12:55Z | 2024-03-14T17:14:28Z | 2024-03-14T17:14:27Z | polinaeterna |
2,183,840,300 | opus decoding error | see https://huggingface.co/datasets/stable-speech/mls_eng_10k/discussions/1#65ef6e9d440a5fc3d94a40ad
To fix this maybe we should pin `soundfile` library to `>=1.0.31` (first version that supported opus) like [we do in `datasets` library](https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L144).... | opus decoding error: see https://huggingface.co/datasets/stable-speech/mls_eng_10k/discussions/1#65ef6e9d440a5fc3d94a40ad
To fix this maybe we should pin `soundfile` library to `>=1.0.31` (first version that supported opus) like [we do in `datasets` library](https://github.com/huggingface/datasets/blob/main/src/data... | closed | 2024-03-13T12:07:21Z | 2024-05-16T11:28:57Z | 2024-05-16T04:38:42Z | polinaeterna |
2,183,692,842 | Update HF Croissant to Croissant 1.0 | null | Update HF Croissant to Croissant 1.0: | closed | 2024-03-13T10:53:44Z | 2024-03-19T13:22:57Z | 2024-03-19T09:54:59Z | ccl-core |
2,183,626,169 | Fix size of splits | fixes #2581
---
After merging and deploying, I'll refresh all the affected datasets.
It's not that many (<2k jobs)
```
db.cachedResponsesBlue.countDocuments({
"kind": "config-parquet-and-info"
})
140609
db.cachedResponsesBlue.countDocuments({
"kind": "config-parquet-and-info",
"updated_at"... | Fix size of splits: fixes #2581
---
After merging and deploying, I'll refresh all the affected datasets.
It's not that many (<2k jobs)
```
db.cachedResponsesBlue.countDocuments({
"kind": "config-parquet-and-info"
})
140609
db.cachedResponsesBlue.countDocuments({
"kind": "config-parquet-and-info... | closed | 2024-03-13T10:21:34Z | 2024-03-13T10:42:28Z | 2024-03-13T10:42:27Z | severo |
2,183,104,285 | The API returns the wrong row number | As stated in this post: https://discuss.huggingface.co/t/got-wrong-row-number-of-dataset-viewer/77132, I got the wrong number on the dataset repo page. The numbers are correct if I download the repo, but when I follow the instructions in https://huggingface.co/docs/datasets-server/size?code=curl, the numbers of rows in... | The API returns the wrong row number: As stated in this post: https://discuss.huggingface.co/t/got-wrong-row-number-of-dataset-viewer/77132, I got the wrong number on the dataset repo page. The numbers are correct if I download the repo, but when I follow the instructions in https://huggingface.co/docs/datasets-server/... | closed | 2024-03-13T04:42:53Z | 2024-03-13T10:58:03Z | 2024-03-13T10:42:28Z | yuanyehome |
2,182,797,886 | remove support for cookies | Fixes #1011
and then
Fixes #2572
Fixes #2052 | remove support for cookies: Fixes #1011
and then
Fixes #2572
Fixes #2052 | closed | 2024-03-12T22:33:19Z | 2024-03-13T09:48:18Z | 2024-03-13T09:48:18Z | severo |
2,182,713,300 | Replace canonical datasets with community ones in the docs/tests | fixes #2578
In the codebase, we still access the following canonical datasets, which have not been moved to an org: `cnn_dailymail`, `mnist`, `blog_authorship_corpus`, `rotten_tomatoes`, `ett`, `amazon_polarity`, `imagenet-1k`, `cifar100`, `superb`, `imdb`, `atomic`. | Replace canonical datasets with community ones in the docs/tests: fixes #2578
In the codebase, we still access the following canonical datasets, which have not been moved to an org: `cnn_dailymail`, `mnist`, `blog_authorship_corpus`, `rotten_tomatoes`, `ett`, `amazon_polarity`, `imagenet-1k`, `cifar100`, `superb`, ... | closed | 2024-03-12T21:29:49Z | 2024-03-13T09:54:53Z | 2024-03-13T09:54:52Z | severo |
2,181,722,457 | Replace canonical datasets with community ones in the docs/tests | For example, `glue` is now `nyu-mll/glue`
cc @lhoestq @albertvillanova for visibility. | Replace canonical datasets with community ones in the docs/tests: For example, `glue` is now `nyu-mll/glue`
cc @lhoestq @albertvillanova for visibility. | closed | 2024-03-12T14:10:11Z | 2024-03-13T09:54:54Z | 2024-03-13T09:54:53Z | severo |
2,181,305,830 | Add "statistics" field to is-valid workers | todo:
- [x] update openapi.json
- [x] update admin-ui | Add "statistics" field to is-valid workers: todo:
- [x] update openapi.json
- [x] update admin-ui | closed | 2024-03-12T10:50:54Z | 2024-03-12T16:42:21Z | 2024-03-12T16:42:20Z | polinaeterna |
2,180,495,761 | clearer without the intermediate type | null | clearer without the intermediate type: | closed | 2024-03-12T00:33:38Z | 2024-03-12T00:37:22Z | 2024-03-12T00:34:47Z | severo |
2,180,485,571 | remove openapi-spec-validator (use spectral) | because I like these stats:
<img width="135" alt="Capture d’écran 2024-03-12 à 01 22 19" src="https://github.com/huggingface/datasets-server/assets/1676121/81b79030-4276-458d-bf43-72cbdd61d3ef">
| remove openapi-spec-validator (use spectral): because I like these stats:
<img width="135" alt="Capture d’écran 2024-03-12 à 01 22 19" src="https://github.com/huggingface/datasets-server/assets/1676121/81b79030-4276-458d-bf43-72cbdd61d3ef">
| closed | 2024-03-12T00:21:39Z | 2024-03-12T00:23:12Z | 2024-03-12T00:23:12Z | severo |
2,180,438,756 | Fix openapi.json | null | Fix openapi.json: | closed | 2024-03-11T23:30:53Z | 2024-03-12T00:17:50Z | 2024-03-12T00:17:33Z | severo |
2,180,433,047 | Incoherencies with types for descriptive statistics? | https://datasets-server.huggingface.co/statistics?dataset=mstz/wine&config=wine&split=train
gives:
<img width="329" alt="Capture d’écran 2024-03-12 à 00 21 12" src="https://github.com/huggingface/datasets-server/assets/1676121/b196a35e-485a-4dfd-8437-b1a7006777de">
but normally, all the categorical columns s... | Incoherencies with types for descriptive statistics?: https://datasets-server.huggingface.co/statistics?dataset=mstz/wine&config=wine&split=train
gives:
<img width="329" alt="Capture d’écran 2024-03-12 à 00 21 12" src="https://github.com/huggingface/datasets-server/assets/1676121/b196a35e-485a-4dfd-8437-b1a7006... | closed | 2024-03-11T23:24:47Z | 2024-03-12T17:30:00Z | 2024-03-11T23:31:20Z | severo |
2,180,383,987 | The API is broken when passing a cookie + 2FA enabled | All the API calls from my main browser (such as https://datasets-server.huggingface.co/splits?dataset=mnist) fail with
```json
{"error":"Unexpected error."}
```
The underlying logs are:
```
INFO: 2024-03-11 22:38:50,327 - httpx - HTTP Request: GET https://huggingface.co/api/datasets/mnist/auth-check "HTTP/1... | The API is broken when passing a cookie + 2FA enabled: All the API calls from my main browser (such as https://datasets-server.huggingface.co/splits?dataset=mnist) fail with
```json
{"error":"Unexpected error."}
```
The underlying logs are:
```
INFO: 2024-03-11 22:38:50,327 - httpx - HTTP Request: GET https... | closed | 2024-03-11T22:40:19Z | 2024-03-13T09:48:19Z | 2024-03-13T09:48:19Z | severo |