| id | title | body | description | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|---|
1,732,634,565 | Add /rows docs | null | Add /rows docs: | closed | 2023-05-30T16:54:13Z | 2023-05-31T13:48:19Z | 2023-05-31T13:32:09Z | lhoestq |
1,732,199,452 | Dataset Viewer issue for dineshpatil341341/demo | ### Link
https://huggingface.co/datasets/dineshpatil341341/demo
### Description
The dataset viewer is not working for dataset dineshpatil341341/demo.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for dineshpatil341341/demo: ### Link
https://huggingface.co/datasets/dineshpatil341341/demo
### Description
The dataset viewer is not working for dataset dineshpatil341341/demo.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-30T12:51:33Z | 2023-05-31T05:35:12Z | 2023-05-31T05:35:12Z | kishorgujjar |
1,731,750,704 | Fix missing slash in admin endpoints | As discussed with @severo, this PR fixes the missing slash in some admin endpoints, e.g.:
- https://datasets-server.huggingface.co/admin/force-refreshdataset-config-names
After this PR, it will be:
- https://datasets-server.huggingface.co/admin/force-refresh/dataset-config-names
Related to:
- #1246 | Fix missing slash in admin endpoints: As discussed with @severo, this PR fixes the missing slash in some admin endpoints, e.g.:
- https://datasets-server.huggingface.co/admin/force-refreshdataset-config-names
After this PR, it will be:
- https://datasets-server.huggingface.co/admin/force-refresh/dataset-config-na... | closed | 2023-05-30T08:13:28Z | 2023-05-31T12:54:23Z | 2023-05-30T12:49:43Z | albertvillanova |
1,730,971,046 | Part #2 - Adding "partition" field on queue and cache db | Part of https://github.com/huggingface/datasets-server/issues/1087, adding partition field in queue and cache collections. | Part #2 - Adding "partition" field on queue and cache db: Part of https://github.com/huggingface/datasets-server/issues/1087, adding partition field in queue and cache collections. | closed | 2023-05-29T15:51:45Z | 2023-10-10T13:29:31Z | 2023-06-01T14:37:23Z | AndreaFrancis |
1,729,518,701 | Dataset Viewer issue for Muennighoff/xP3x | ### Link
https://huggingface.co/datasets/Muennighoff/xP3x
### Description
The dataset viewer is not working for dataset Muennighoff/xP3x.
Error details:
```
The dataset is currently empty. Upload or create new data files. Then, you will be able to explore them in the Dataset Viewer.
```
This dataset... | Dataset Viewer issue for Muennighoff/xP3x: ### Link
https://huggingface.co/datasets/Muennighoff/xP3x
### Description
The dataset viewer is not working for dataset Muennighoff/xP3x.
Error details:
```
The dataset is currently empty. Upload or create new data files. Then, you will be able to explore them ... | closed | 2023-05-28T09:53:38Z | 2023-05-30T06:58:28Z | 2023-05-30T06:58:28Z | Muennighoff |
1,728,607,825 | Dataset Viewer issue for minioh1234/martin_valen_dataset | ### Link
https://huggingface.co/datasets/minioh1234/martin_valen_dataset
### Description
The dataset viewer is not working for dataset minioh1234/martin_valen_dataset.
Error details:
```
Error code: ResponseNotReady
```
The dataset preview is not available for this dataset.
The server is busier than us... | Dataset Viewer issue for minioh1234/martin_valen_dataset: ### Link
https://huggingface.co/datasets/minioh1234/martin_valen_dataset
### Description
The dataset viewer is not working for dataset minioh1234/martin_valen_dataset.
Error details:
```
Error code: ResponseNotReady
```
The dataset preview is not... | closed | 2023-05-27T09:44:16Z | 2023-06-26T15:38:53Z | 2023-06-26T15:38:53Z | ohchangmin |
1,727,982,372 | feat: 🎸 create orchestrator | In this PR:
1. we centralize all the operations on the graph, within DatasetOrchestrator:
- `.set_revision`: used by the webhook, sets the current git revision for a dataset, which will refresh the "root" steps that have a different revision (and, by cascade, the whole graph)
- `.finish_job`: used by the workers (an... | feat: 🎸 create orchestrator: In this PR:
1. we centralize all the operations on the graph, within DatasetOrchestrator:
- `.set_revision`: used by the webhook, sets the current git revision for a dataset, which will refresh the "root" steps that have a different revision (and, by cascade, the whole graph)
- `.finish... | closed | 2023-05-26T17:19:52Z | 2023-06-01T12:47:42Z | 2023-06-01T12:44:38Z | severo |
1,727,570,771 | feat: Part #1 - New processing step to calculate/get split partitions | Part of https://github.com/huggingface/datasets-server/issues/1087
Given a chunk size and split row number, this job runner will generate partitions for a split.
Part of this code was already introduced in https://github.com/huggingface/datasets-server/pull/1213 but maybe it is better to separate the PR into one for "part... | feat: Part #1 - New processing step to calculate/get split partitions: Part of https://github.com/huggingface/datasets-server/issues/1087
Given a chunk size and split row number, this job runner will generate partitions for a split.
Part of this code was already introduced in https://github.com/huggingface/datasets-s... | closed | 2023-05-26T12:51:53Z | 2024-01-26T11:56:03Z | 2023-06-01T14:37:20Z | AndreaFrancis |
1,727,345,980 | Update doc index | null | Update doc index: | closed | 2023-05-26T10:27:55Z | 2023-05-26T18:04:37Z | 2023-05-26T18:01:20Z | lhoestq |
1,726,513,702 | Generate 5GB parquet files for big datasets | For datasets over 5GB, let's generate 5GB parquet files (with shards) instead of ignoring them. The fact that the dataset was truncated should be stored somewhere.
---
Currently, the datasets server only generates and stores parquet files if the dataset size is less than the PARQUET_AND_INFO_MAX_DATASET_SIZE config.
- `PARQUET_AND_IN... | Generate 5GB parquet files for big datasets: For datasets over 5GB, let's generate 5GB parquet files (with shards) instead of ignoring them. The fact that the dataset was truncated should be stored somewhere.
---
Currently, the datasets server only generates and stores parquet files if the dataset size is less than the PARQUET_AND_INF... | closed | 2023-05-25T21:29:19Z | 2023-07-03T15:40:33Z | 2023-07-03T15:40:33Z | AndreaFrancis |
1,726,417,949 | Separate opt in out urls scan | Part of the second approach for spawning full scan https://github.com/huggingface/datasets-server/issues/1087
"Run full scan in separated Jobs and store results in separated cache entries".
I am moving logic to inspect if a split has image URL columns to another step `"split-image-url-columns"`.
Now, `"split-opt-i... | Separate opt in out urls scan: Part of the second approach for spawning full scan https://github.com/huggingface/datasets-server/issues/1087
"Run full scan in separated Jobs and store results in separated cache entries".
I am moving logic to inspect if a split has image URL columns to another step `"split-image-url... | closed | 2023-05-25T20:02:57Z | 2023-05-26T12:32:03Z | 2023-05-26T12:28:44Z | AndreaFrancis |
1,726,061,255 | Fix audio data in pagination of audio datasets | Currently pagination is only enabled for testing purposes on [arabic_speech_corpus](https://huggingface.co/datasets/arabic_speech_corpus) but times out because the "transform to list" step that writes the audio files to disk takes too much time.
Currently it writes both MP3 and WAV - but we should find which one is ... | Fix audio data in pagination of audio datasets: Currently pagination is only enabled for testing purposes on [arabic_speech_corpus](https://huggingface.co/datasets/arabic_speech_corpus) but times out because the "transform to list" step that writes the audio files to disk takes too much time.
Currently it writes bot... | closed | 2023-05-25T15:36:49Z | 2023-09-15T07:59:54Z | 2023-09-15T07:59:53Z | lhoestq |
1,726,036,223 | Opt in/out scan only image urls | Context: https://huggingface.slack.com/archives/C0311GZ7R6K/p1684962431285069
Before, datasets-server scanned all url columns for spawning opt-in/out, now it will filter those image URLs only. | Opt in/out scan only image urls: Context: https://huggingface.slack.com/archives/C0311GZ7R6K/p1684962431285069
Before, datasets-server scanned all url columns for spawning opt-in/out, now it will filter those image URLs only. | closed | 2023-05-25T15:22:11Z | 2023-10-10T13:29:48Z | 2023-05-25T20:02:46Z | AndreaFrancis |
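The filtering described in the row above could look something like the following minimal sketch; the helper names and the extension-based heuristic are assumptions for illustration, not the actual datasets-server implementation:

```python
from urllib.parse import urlparse

# Hypothetical sketch: keep only the URL columns whose sampled values look
# like image URLs, so the opt-in/out scan skips non-image links.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp", ".bmp", ".tiff"}

def looks_like_image_url(value: str) -> bool:
    parsed = urlparse(value)
    if parsed.scheme not in ("http", "https"):
        return False
    return any(parsed.path.lower().endswith(ext) for ext in IMAGE_EXTENSIONS)

def image_url_columns(sample_rows: list[dict]) -> list[str]:
    # a column qualifies if every sampled non-empty string value is an image URL
    columns: dict[str, list[str]] = {}
    for row in sample_rows:
        for name, value in row.items():
            if isinstance(value, str) and value:
                columns.setdefault(name, []).append(value)
    return [
        name for name, values in columns.items()
        if values and all(looks_like_image_url(v) for v in values)
    ]
```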
1,725,729,653 | Revert "fix: 🐛 finish the job before backfilling, to get the status (… | …#1252)"
This reverts commit 1cbd9ede2ea7de7f93662c0e802cb77d378eac3c.
The backfill() step still lasts too long in the workers, for datasets with a lot of configs/splits, leading to concurrency issues. As we don't have prometheus metrics for the workers, we cannot benchmark on prod data.
Reverting, and I will ... | Revert "fix: 🐛 finish the job before backfilling, to get the status (…: …#1252)"
This reverts commit 1cbd9ede2ea7de7f93662c0e802cb77d378eac3c.
The backfill() step still lasts too long in the workers, for datasets with a lot of configs/splits, leading to concurrency issues. As we don't have prometheus metrics for... | closed | 2023-05-25T12:28:51Z | 2023-05-25T12:32:43Z | 2023-05-25T12:29:03Z | severo |
1,725,651,095 | fix: 🐛 finish the job before backfilling, to get the status | instead of finishing all the jobs with CANCELLED through backfill(), first finish the job with SUCCESS or ERROR, then backfill. | fix: 🐛 finish the job before backfilling, to get the status: instead of finishing all the jobs with CANCELLED through backfill(), first finish the job with SUCCESS or ERROR, then backfill. | closed | 2023-05-25T11:41:32Z | 2023-05-25T11:57:23Z | 2023-05-25T11:53:46Z | severo |
1,725,621,370 | Simplify queue (jobs are now only WAITING or STARTED) | instead of changing its status to cancelled | Simplify queue (jobs are now only WAITING or STARTED): instead of changing its status to cancelled | closed | 2023-05-25T11:20:25Z | 2023-05-25T11:30:33Z | 2023-05-25T11:27:38Z | severo |
1,725,367,672 | fix: 🐛 delete pending jobs for other revisions | when backfilling a new revision, all pending jobs for other revisions (be it started or waiting) are canceled. | fix: 🐛 delete pending jobs for other revisions: when backfilling a new revision, all pending jobs for other revisions (be it started or waiting) are canceled. | closed | 2023-05-25T08:46:09Z | 2023-05-25T09:19:39Z | 2023-05-25T09:16:42Z | severo |
1,724,801,888 | feat: 🎸 increase number of parallel jobs for the same namespace | null | feat: 🎸 increase number of parallel jobs for the same namespace: | closed | 2023-05-24T22:08:41Z | 2023-05-24T22:13:16Z | 2023-05-24T22:09:41Z | severo |
1,724,798,036 | Dataset Viewer issue for TempoFunk/big | ### Link
https://huggingface.co/datasets/TempoFunk/big
### Description
The dataset viewer is not working for dataset TempoFunk/big.
Error details:
```
Error code: JobManagerCrashedError
```
| Dataset Viewer issue for TempoFunk/big: ### Link
https://huggingface.co/datasets/TempoFunk/big
### Description
The dataset viewer is not working for dataset TempoFunk/big.
Error details:
```
Error code: JobManagerCrashedError
```
| closed | 2023-05-24T22:04:25Z | 2023-05-25T05:31:37Z | 2023-05-25T05:31:36Z | chavinlo |
1,724,745,733 | feat: 🎸 create all jobs in backfill in one operation | instead of one (or multiple) operations for every job creation | feat: 🎸 create all jobs in backfill in one operation: instead of one (or multiple) operations for every job creation | closed | 2023-05-24T21:12:04Z | 2023-05-24T21:50:15Z | 2023-05-24T21:47:17Z | severo |
1,724,437,820 | Rename `/config-names` processing step | the last change of graph step names in https://github.com/huggingface/datasets-server/issues/1086
next: endpoints and docs :)
**reminder: don't forget to stop the workers before migrations!** | Rename `/config-names` processing step: the last change of graph step names in https://github.com/huggingface/datasets-server/issues/1086
next: endpoints and docs :)
**reminder: don't forget to stop the workers before migrations!** | closed | 2023-05-24T17:15:54Z | 2023-05-26T09:22:49Z | 2023-05-26T09:19:42Z | polinaeterna |
1,724,091,985 | Reduce requests to mongo (deleteMany) | Do only one request to mongo to delete multiple jobs, instead of one per job deletion. | Reduce requests to mongo (deleteMany): Do only one request to mongo to delete multiple jobs, instead of one per job deletion. | closed | 2023-05-24T14:07:13Z | 2023-05-24T15:23:37Z | 2023-05-24T15:19:56Z | severo |
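The batching idea in the row above can be sketched with a single `$in` filter; the collection and field names below are assumptions for illustration, not the actual queue schema:

```python
# Sketch: one delete_many round-trip instead of N delete_one calls.
# The field name "job_id" is an assumption, not the actual schema.

def bulk_delete_filter(job_ids: list[str]) -> dict:
    # a single filter document matching every job to delete
    return {"job_id": {"$in": job_ids}}

# with a live pymongo collection, one network request replaces the loop:
#   collection.delete_many(bulk_delete_filter(job_ids))
#   # instead of: for job_id in job_ids: collection.delete_one({"job_id": job_id})
```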
1,723,452,447 | provide a decorator for StepProfiler (prometheus) | Currently, adding `StepProfiler` in the code, to get metrics about the duration of part of the code, implies changing the indentation, which makes it complicated to follow the commits.
It would be simpler to use a `@step_profiler()` decorator on functions.
If we do so, note that we might have to upgrade to Python... | provide a decorator for StepProfiler (prometheus): Currently, adding `StepProfiler` in the code, to get metrics about the duration of part of the code, implies changing the indentation, which makes it complicated to follow the commits.
It would be simpler to use a `@step_profiler()` decorator on functions.
If we ... | open | 2023-05-24T08:29:00Z | 2024-06-19T14:12:34Z | null | severo |
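A decorator as proposed in the issue above could be as small as the following sketch; the real `StepProfiler` presumably records to Prometheus, so a plain callback stands in for it here, and all names are hypothetical:

```python
import functools
import time
from typing import Callable

def step_profiler(method: str, step: str, record: Callable[[str, str, float], None]):
    # Hypothetical decorator: wraps a function and reports its wall-clock
    # duration, so no indentation change is needed in the function body.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                record(method, step, time.perf_counter() - start)
        return wrapper
    return decorator

# stand-in sink for the demo; the real profiler would record to Prometheus
durations: list[tuple[str, str, float]] = []

@step_profiler("backfill", "create_jobs", lambda m, s, d: durations.append((m, s, d)))
def create_jobs() -> int:
    return 42
```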
1,723,444,261 | Dataset Viewer issue for fabraz/writingPromptAug | ### Link
https://huggingface.co/datasets/fabraz/writingPromptAug
### Description
The dataset viewer is not working for dataset fabraz/writingPromptAug.
Error details:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Split train already present
Traceback: The previous step failed... | Dataset Viewer issue for fabraz/writingPromptAug: ### Link
https://huggingface.co/datasets/fabraz/writingPromptAug
### Description
The dataset viewer is not working for dataset fabraz/writingPromptAug.
Error details:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Split train alrea... | closed | 2023-05-24T08:23:48Z | 2023-05-25T05:03:45Z | 2023-05-25T05:03:45Z | Patrick-Ni |
1,723,404,679 | fix: 🐛 fix order of the migrations | null | fix: 🐛 fix order of the migrations: | closed | 2023-05-24T08:00:24Z | 2023-05-24T08:06:02Z | 2023-05-24T08:02:30Z | severo |
1,722,799,399 | Adding temporary hardcoded data for opt in/out: laion/laion2B-en and kakaobrain/coyo-700m | As per discussion in https://github.com/huggingface/moon-landing/pull/6332#discussion_r1202989143
Fake data will be hardcoded on the server side so that it is consistent with what API returns and what we show in the UI.
NOTE.- This is a temporary solution, once https://github.com/huggingface/datasets-server/issues/108... | Adding temporary hardcoded data for opt in/out: laion/laion2B-en and kakaobrain/coyo-700m: As per discussion in https://github.com/huggingface/moon-landing/pull/6332#discussion_r1202989143
Fake data will be hardcoded on the server side so that it is consistent with what API returns and what we show in the UI.
NOTE.- T... | closed | 2023-05-23T21:17:22Z | 2023-05-24T13:13:00Z | 2023-05-24T13:09:55Z | AndreaFrancis |
1,722,558,958 | feat: 🎸 add an index | recommended by mongo atlas
<img width="1060" alt="Capture d’écran 2023-05-23 à 20 09 57" src="https://github.com/huggingface/datasets-server/assets/1676121/e6817023-40c0-443e-a590-90115b9eee6c">
| feat: 🎸 add an index: recommended by mongo atlas
<img width="1060" alt="Capture d’écran 2023-05-23 à 20 09 57" src="https://github.com/huggingface/datasets-server/assets/1676121/e6817023-40c0-443e-a590-90115b9eee6c">
| closed | 2023-05-23T18:11:31Z | 2023-05-23T18:26:42Z | 2023-05-23T18:23:59Z | severo |
1,722,312,510 | Add numba cache to api | this should fix issues when importing librosa in the API (it causes issues in /rows for audio datasets) | Add numba cache to api: this should fix issues when importing librosa in the API (it causes issues in /rows for audio datasets) | closed | 2023-05-23T15:28:37Z | 2023-05-24T10:47:02Z | 2023-05-24T10:44:10Z | lhoestq |
1,722,303,198 | feat: 🎸 reduce the duration of the TTL index on finished_at | from 1 day to 10 minutes. Hopefully it will help reduce the time of the requests
Note also that we refactored the migration script a bit to factorize code | feat: 🎸 reduce the duration of the TTL index on finished_at: from 1 day to 10 minutes. Hopefully it will help reduce the time of the requests
Note also that we refactored the migration script a bit to factorize code | closed | 2023-05-23T15:23:17Z | 2023-05-23T18:08:42Z | 2023-05-23T15:28:14Z | severo |
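For reference, a TTL index like the one in the row above is declared by setting `expireAfterSeconds` on a single-field index; here is a sketch of the spec (the field name comes from the PR title, the collection name is an assumption):

```python
# Sketch of the TTL index described above: MongoDB's TTL monitor deletes
# documents once `finished_at` is older than expireAfterSeconds.
TTL_SECONDS = 10 * 60  # reduced from 1 day (86_400 s), per this PR

def ttl_index_spec(field: str = "finished_at") -> tuple[list[tuple[str, int]], dict]:
    keys = [(field, 1)]  # 1 == ascending
    options = {"expireAfterSeconds": TTL_SECONDS}
    return keys, options

# with pymongo and a live connection (collection name "jobs" is assumed):
#   keys, options = ttl_index_spec()
#   db.jobs.create_index(keys, **options)
```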
1,722,284,671 | Use parquet metadata for all datasets | I still keep "Audio" unsupported since there are some errors with librosa on API workers | Use parquet metadata for all datasets: I still keep "Audio" unsupported since there are some errors with librosa on API workers | closed | 2023-05-23T15:12:09Z | 2023-05-23T15:39:31Z | 2023-05-23T15:36:26Z | lhoestq |
1,722,077,731 | Use parquet metadata for more datasets | including text, audio and full hd image datasets | Use parquet metadata for more datasets: including text, audio and full hd image datasets | closed | 2023-05-23T13:22:11Z | 2023-05-23T13:50:19Z | 2023-05-23T13:46:49Z | lhoestq |
1,722,065,712 | Instrument backfill | - add StepProfiler to libcommon.state, to be able to profile the code duration when doing a backfill
- refactor code to manage prometheus from libcommon
- detail: don't put empty (0) values in cache and queue metrics if the metrics database is empty. it's ok not to have values until the background metrics job has run | Instrument backfill: - add StepProfiler to libcommon.state, to be able to profile the code duration when doing a backfill
- refactor code to manage prometheus from libcommon
- detail: don't put empty (0) values in cache and queue metrics if the metrics database is empty. it's ok not to have values until the backgroun... | closed | 2023-05-23T13:15:42Z | 2023-05-23T14:16:57Z | 2023-05-23T14:13:39Z | severo |
1,721,590,012 | feat: 🎸 update dependencies to fix vulnerability | null | feat: 🎸 update dependencies to fix vulnerability: | closed | 2023-05-23T09:08:40Z | 2023-05-23T09:12:32Z | 2023-05-23T09:09:16Z | severo |
1,721,569,438 | Reduce number of concurrent jobs in namespace | the idea is to reduce the number of pending jobs, currently we have > 200,000 jobs, from a lot of different datasets.
As it seems that one cause of the issues with the queue is that concurrent backfill processes run at the same time for the same dataset, we drastically reduce the concurrency to 1 job per namespace... | Reduce number of concurrent jobs in namespace: the idea is to reduce the number of pending jobs, currently we have > 200,000 jobs, from a lot of different datasets.
As it seems that one cause of the issues with the queue is that concurrent backfill processes run at the same time for the same dataset, we reduce the... | closed | 2023-05-23T08:58:21Z | 2023-05-23T09:20:25Z | 2023-05-23T09:17:35Z | severo |
1,721,251,296 | chore(deps): bump requests from 2.28.2 to 2.31.0 in /libs/libcommon | Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<l... | chore(deps): bump requests from 2.28.2 to 2.31.0 in /libs/libcommon: Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
... | closed | 2023-05-23T05:57:17Z | 2023-05-23T09:01:26Z | 2023-05-23T09:01:24Z | dependabot[bot] |
1,721,249,333 | chore(deps-dev): bump requests from 2.28.2 to 2.31.0 in /e2e | Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.31.0 (2023-05-22)</h2>
<p><strong>Security</strong></p>
<ul>
<l... | chore(deps-dev): bump requests from 2.28.2 to 2.31.0 in /e2e: Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/psf/requests/releases">requests's releases</a>.</em></p>
<blockquote>
<h2>v2.31.0</h2>
<h2>2.3... | closed | 2023-05-23T05:55:56Z | 2023-05-23T09:04:49Z | 2023-05-23T09:01:32Z | dependabot[bot] |
1,721,232,025 | Dataset Viewer issue for vishnun/NLP-KnowledgeGraph | ### Link
https://huggingface.co/datasets/vishnun/NLP-KnowledgeGraph
### Description
The dataset viewer is not working for dataset vishnun/NLP-KnowledgeGraph.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: The previous step f... | Dataset Viewer issue for vishnun/NLP-KnowledgeGraph: ### Link
https://huggingface.co/datasets/vishnun/NLP-KnowledgeGraph
### Description
The dataset viewer is not working for dataset vishnun/NLP-KnowledgeGraph.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116]... | closed | 2023-05-23T05:42:27Z | 2023-05-30T07:26:54Z | 2023-05-30T07:26:54Z | MangoFF |
1,721,112,866 | Dataset Viewer issue for shibing624/medical | ### Link
https://huggingface.co/datasets/shibing624/medical
### Description
The dataset viewer is not working for dataset shibing624/medical.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for shibing624/medical: ### Link
https://huggingface.co/datasets/shibing624/medical
### Description
The dataset viewer is not working for dataset shibing624/medical.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-23T03:54:00Z | 2023-05-23T16:50:51Z | 2023-05-23T16:50:51Z | shibing624 |
1,720,883,768 | Dataset Viewer issue for beskrovnykh/daniel-dataset-fragments | ### Link
https://huggingface.co/datasets/beskrovnykh/daniel-dataset-fragments
### Description
The dataset viewer is not working for dataset beskrovnykh/daniel-dataset-fragments.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for beskrovnykh/daniel-dataset-fragments: ### Link
https://huggingface.co/datasets/beskrovnykh/daniel-dataset-fragments
### Description
The dataset viewer is not working for dataset beskrovnykh/daniel-dataset-fragments.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-23T00:12:58Z | 2023-05-23T19:17:03Z | 2023-05-23T19:17:02Z | beskrovnykh |
1,720,558,896 | feat: 🎸 write cache + backfill only if job finished as expected | ie: if it has been cancelled, we ignore it. See previous work at https://github.com/huggingface/datasets-server/pull/1188. Note that after #1222, the number of warnings "...has a non-empty finished_at field..." has fallen to 26 logs among 20,000, while it was like 20% of the logs before!
Also:
- upgrade `requests`(... | feat: 🎸 write cache + backfill only if job finished as expected: ie: if it has been cancelled, we ignore it. See previous work at https://github.com/huggingface/datasets-server/pull/1188. Note that after #1222, the number of warnings "...has a non-empty finished_at field..." has fallen to 26 logs among 20,000, while i... | closed | 2023-05-22T21:10:51Z | 2023-05-23T09:03:33Z | 2023-05-23T09:00:35Z | severo |
1,720,137,557 | Rename /split-names-from-dataset-info | part of https://github.com/huggingface/datasets-server/issues/1086 | Rename /split-names-from-dataset-info: part of https://github.com/huggingface/datasets-server/issues/1086 | closed | 2023-05-22T17:37:26Z | 2023-05-23T18:39:59Z | 2023-05-23T18:36:48Z | polinaeterna |
1,720,041,617 | Add parquet metadata to api chart | forgot it in #1214 | Add parquet metadata to api chart: forgot it in #1214 | closed | 2023-05-22T16:38:13Z | 2023-05-22T16:42:02Z | 2023-05-22T16:38:56Z | lhoestq |
1,719,828,595 | Dataset Viewer issue for kietzmannlab/ecoset | ### Link
https://huggingface.co/datasets/kietzmannlab/ecoset
### Description
The dataset viewer is not working for dataset kietzmannlab/ecoset.
Error details:
```
Error code: ConfigNamesError
Exception: ImportError
Message: To be able to use kietzmannlab/ecoset, you need to install the following d... | Dataset Viewer issue for kietzmannlab/ecoset: ### Link
https://huggingface.co/datasets/kietzmannlab/ecoset
### Description
The dataset viewer is not working for dataset kietzmannlab/ecoset.
Error details:
```
Error code: ConfigNamesError
Exception: ImportError
Message: To be able to use kietzmannl... | closed | 2023-05-22T14:34:12Z | 2024-02-02T17:09:59Z | 2024-02-02T17:09:58Z | v-bosch |
1,719,689,350 | refactor: 💡 do only one request to get jobs in DatasetState | null | refactor: 💡 do only one request to get jobs in DatasetState: | closed | 2023-05-22T13:23:02Z | 2023-05-22T18:38:45Z | 2023-05-22T18:36:09Z | severo |
1,719,461,882 | fix: 🐛 backfill the dataset after finishing the job | null | fix: 🐛 backfill the dataset after finishing the job: | closed | 2023-05-22T11:10:41Z | 2023-05-22T11:23:31Z | 2023-05-22T11:20:36Z | severo |
1,719,317,766 | fix: 🐛 if a step depends on parallel steps, both must be used | otherwise, the "error" "Response has already been computed and stored in cache kind: split-first-rows-from-parquet. Compute will be skipped" is propagated, instead of using the other cache entry as it was meant to.
Unfortunately, we will have to relaunch a lot of jobs | fix: 🐛 if a step depends on parallel steps, both must be used: otherwise, the "error" "Response has already been computed and stored in cache kind: split-first-rows-from-parquet. Compute will be skipped" is propagated, instead of using the other cache entry as it was meant to.
Unfortunately, we will have to relaunch... | closed | 2023-05-22T09:45:57Z | 2023-05-22T10:25:15Z | 2023-05-22T10:22:31Z | severo |
1,719,201,163 | feat: 🎸 tweak queue parameters to flush quick jobs | null | feat: 🎸 tweak queue parameters to flush quick jobs: | closed | 2023-05-22T08:44:44Z | 2023-05-22T08:49:19Z | 2023-05-22T08:46:15Z | severo |
1,719,167,405 | Update old cache entries automatically | Some cache entries are very old.
For example, the following entry was computed more than 3 months ago and contains an error_code that is no longer present in the codebase. It should have been recomputed at some point, but it wasn't for some reason:
<img width="1652" alt="Capture d’écran 2023-05-22 à 10 22 29... | Update old cache entries automatically: Some cache entries are very old.
For example, the following entry was computed more than 3 months ago and contains an error_code that is no longer present in the codebase. It should have been recomputed at some point, but it wasn't for some reason:
<img width="1652" alt=... | closed | 2023-05-22T08:23:51Z | 2024-01-09T15:45:16Z | 2024-01-09T15:45:16Z | severo |
1,719,147,715 | Dataset Viewer issue for nyuuzyou/AnimeHeadsv3 | **Link**
https://huggingface.co/datasets/nyuuzyou/AnimeHeadsv3
**Description**
Currently, when attempting to view the dataset using the provided viewer, I am encountering the following error:
```
ERROR: type should be image, got {"src": "https://datasets-server.huggingface.co/assets/nyuuzyou/AnimeHeadsv3/--/With a... | Dataset Viewer issue for nyuuzyou/AnimeHeadsv3: **Link**
https://huggingface.co/datasets/nyuuzyou/AnimeHeadsv3
**Description**
Currently, when attempting to view the dataset using the provided viewer, I am encountering the following error:
```
ERROR: type should be image, got {"src": "https://datasets-server.huggi... | closed | 2023-05-22T08:13:12Z | 2023-06-26T18:58:47Z | 2023-06-26T18:58:46Z | nyuuzyou |
1,719,098,572 | feat: 🎸 delete metrics for /split-names-from-streaming | we missed this migration, which leads to still having 5,000 pending jobs for this step in the Grafana charts, while these jobs no longer exist. | feat: 🎸 delete metrics for /split-names-from-streaming: we missed this migration, which leads to still having 5,000 pending jobs for this step in the Grafana charts, while these jobs no longer exist. | closed | 2023-05-22T07:46:35Z | 2023-05-22T07:51:37Z | 2023-05-22T07:48:52Z | severo |
1,718,973,689 | Dataset Viewer issue for ceval/ceval-exam | ### Link
https://huggingface.co/datasets/ceval/ceval-exam
### Description
The dataset viewer is not working for dataset ceval/ceval-exam.
Error details:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/services/worker/ceval/ceval-exam/ce... | Dataset Viewer issue for ceval/ceval-exam: ### Link
https://huggingface.co/datasets/ceval/ceval-exam
### Description
The dataset viewer is not working for dataset ceval/ceval-exam.
Error details:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script ... | closed | 2023-05-22T06:17:48Z | 2023-05-22T07:39:49Z | 2023-05-22T07:39:49Z | jxhe |
1,718,577,854 | Dataset Viewer issue for AntonioRenatoMontefusco/kddChallenge2023 | ### Link
https://huggingface.co/datasets/AntonioRenatoMontefusco/kddChallenge2023
### Description
The dataset viewer is not working for dataset AntonioRenatoMontefusco/kddChallenge2023.
Error details:
```
Error code: JobManagerCrashedError
```
| Dataset Viewer issue for AntonioRenatoMontefusco/kddChallenge2023: ### Link
https://huggingface.co/datasets/AntonioRenatoMontefusco/kddChallenge2023
### Description
The dataset viewer is not working for dataset AntonioRenatoMontefusco/kddChallenge2023.
Error details:
```
Error code: JobManagerCrashedError
... | closed | 2023-05-21T17:24:16Z | 2023-05-23T08:29:44Z | 2023-05-23T08:29:43Z | AntonioRenatoMontefusco |
1,718,189,053 | Use parquet metadata in /rows | Step 2 of https://github.com/huggingface/datasets-server/issues/1186
## Implementation details
I implemented ParquetIndexWithMetadata (new) and ParquetIndexWithoutMetadata (from the existing code):
- ParquetIndexWithMetadata is used when `config-parquet-metadata` is cached and is fast
- ParquetIndexWithoutMe... | Use parquet metadata in /rows: Step 2 of https://github.com/huggingface/datasets-server/issues/1186
## Implementation details
I implemented ParquetIndexWithMetadata (new) and ParquetIndexWithoutMetadata (from the existing code):
- ParquetIndexWithMetadata is used when `config-parquet-metadata` is cached and i... | closed | 2023-05-20T14:30:24Z | 2023-05-22T16:27:28Z | 2023-05-22T16:24:39Z | lhoestq |
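The speed-up from a cached parquet metadata index comes from resolving a requested row offset to a row group without opening the data pages; the lookup itself amounts to a binary search over cumulative row counts, sketched here with hypothetical names (the row-group sizes would come from the cached `config-parquet-metadata` entry):

```python
import bisect
from itertools import accumulate

def row_group_for_row(row_group_num_rows: list[int], row: int) -> tuple[int, int]:
    # returns (row_group_index, offset_within_group) for an absolute row index;
    # row_group_num_rows is the per-row-group row count from parquet metadata
    ends = list(accumulate(row_group_num_rows))  # cumulative row counts
    if row < 0 or row >= ends[-1]:
        raise IndexError(f"row {row} out of range")
    group = bisect.bisect_right(ends, row)
    start = ends[group - 1] if group > 0 else 0
    return group, row - start
```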
1,717,744,192 | feat: Part #3: Adding "partition" granularity level logic | First part of approach # 2 of https://github.com/huggingface/datasets-server/issues/1087
Adding a new granularity level, "Partition", also implies adding a new field in Job and Cache.
Depends on https://github.com/huggingface/datasets-server/pull/1263, https://github.com/huggingface/datasets-server/pull/1259
an... | feat: Part #3: Adding "partition" granularity level logic: First part of approach # 2 of https://github.com/huggingface/datasets-server/issues/1087
Adding a new granularity level, "Partition", also implies adding a new field in Job and Cache.
Depends on https://github.com/huggingface/datasets-server/pull/1263, h... | closed | 2023-05-19T19:49:58Z | 2023-10-10T13:29:38Z | 2023-06-01T14:37:28Z | AndreaFrancis |
1,717,723,316 | Dataset Viewer issue for tarteel-ai/everyayah | ### Link
https://huggingface.co/datasets/tarteel-ai/everyayah
### Description
The dataset viewer is not working for dataset tarteel-ai/everyayah.
Error details:
```
Error code: JobRunnerCrashedError
```
| Dataset Viewer issue for tarteel-ai/everyayah: ### Link
https://huggingface.co/datasets/tarteel-ai/everyayah
### Description
The dataset viewer is not working for dataset tarteel-ai/everyayah.
Error details:
```
Error code: JobRunnerCrashedError
```
| closed | 2023-05-19T19:27:22Z | 2023-05-23T08:42:51Z | 2023-05-23T08:42:51Z | manna1lix |
1,717,480,516 | feat: 🎸 add logs to the migrations | null | feat: 🎸 add logs to the migrations: | closed | 2023-05-19T16:03:14Z | 2023-05-19T16:08:54Z | 2023-05-19T16:06:16Z | severo |
1,717,450,689 | fix: 🐛 missing refactoring in the last merge | null | fix: 🐛 missing refactoring in the last merge: | closed | 2023-05-19T15:40:58Z | 2023-05-19T15:58:43Z | 2023-05-19T15:56:00Z | severo |
1,717,434,436 | A lot of jobs finish with Warning: ... has a non-empty finished_at field. Force finishing anyway | See https://github.com/huggingface/datasets-server/pull/1203#issuecomment-1554544553
Started jobs should have an empty finished_at field. | A lot of jobs finish with Warning: ... has a non-empty finished_at field. Force finishing anyway: See https://github.com/huggingface/datasets-server/pull/1203#issuecomment-1554544553
Started jobs should have an empty finished_at field. | closed | 2023-05-19T15:27:33Z | 2023-08-11T15:27:18Z | 2023-08-11T15:27:17Z | severo |
1,717,389,689 | chore: 🤖 ignore a vulnerability for now | null | chore: 🤖 ignore a vulnerability for now: | closed | 2023-05-19T14:57:19Z | 2023-05-19T15:13:41Z | 2023-05-19T15:10:42Z | severo |
1,717,374,182 | refactor: 💡 only pass is_success to finish_job | so that the caller does not have to know the queue job statuses.
Also: finish_job returns a boolean to say if it was in an expected state. | refactor: 💡 only pass is_success to finish_job: so that the caller does not have to know the queue job statuses.
Also: finish_job returns a boolean to say if it was in an expected state. | closed | 2023-05-19T14:45:48Z | 2023-05-19T15:31:24Z | 2023-05-19T15:28:34Z | severo |
1,717,357,015 | refactor: 💡 remove two methods | null | refactor: 💡 remove two methods: | closed | 2023-05-19T14:34:11Z | 2023-05-19T15:33:58Z | 2023-05-19T15:31:10Z | severo |
1,717,320,407 | fix: 🐛 the started jobinfo always contained priority=NORMAL | Now we get the value as expected. This means that the backfill function will create jobs at the same level of priority, instead of moving everything to the NORMAL priority queue | fix: 🐛 the started jobinfo always contained priority=NORMAL: Now we get the value as expected. This means that the backfill function will create jobs at the same level of priority, instead of moving everything to the NORMAL priority queue | closed | 2023-05-19T14:09:55Z | 2023-05-19T14:36:04Z | 2023-05-19T14:32:35Z | severo |
1,717,308,966 | Update transformers for pip audit | null | Update transformers for pip audit: | closed | 2023-05-19T14:04:58Z | 2023-05-19T15:00:55Z | 2023-05-19T14:58:06Z | lhoestq |
1,717,139,265 | Again: ignore result of job runner if job has been canceled | First PR: #1188
Reverted by #1196
New try. First, I get the code again, and then will commit the fix, once I find the issue.
| Again: ignore result of job runner if job has been canceled: First PR: #1188
Reverted by #1196
New try. First, I get the code again, and then will commit the fix, once I find the issue.
| closed | 2023-05-19T12:03:53Z | 2024-01-26T09:01:34Z | 2023-05-19T13:52:37Z | severo |
1,716,948,398 | Dataset Viewer issue for phamson02/vietnamese-poetry-corpus | ### Link
https://huggingface.co/datasets/phamson02/vietnamese-poetry-corpus
### Description
The dataset viewer is not working for dataset phamson02/vietnamese-poetry-corpus.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Tra... | Dataset Viewer issue for phamson02/vietnamese-poetry-corpus: ### Link
https://huggingface.co/datasets/phamson02/vietnamese-poetry-corpus
### Description
The dataset viewer is not working for dataset phamson02/vietnamese-poetry-corpus.
Error details:
```
Error code: ConfigNamesError
Exception: OSError
M... | closed | 2023-05-19T09:38:54Z | 2023-05-22T06:52:50Z | 2023-05-22T06:52:50Z | phamson02 |
1,716,133,975 | Dedicated worker for split-opt-in-out-urls-scan | null | Dedicated worker for split-opt-in-out-urls-scan: | closed | 2023-05-18T19:12:23Z | 2023-05-18T19:19:22Z | 2023-05-18T19:16:44Z | AndreaFrancis |
1,716,105,921 | Temporarily adding a dedicated worker for config/dataset-opt-in-out-urls-count | null | Temporarily adding a dedicated worker for config/dataset-opt-in-out-urls-count: | closed | 2023-05-18T18:49:38Z | 2023-05-18T19:00:33Z | 2023-05-18T18:57:11Z | AndreaFrancis |
1,716,091,421 | Descriptive statistics | This PR introduces the following measurements/statistics:
### numerical columns (float and int):
- nan values count
- nan values percentage
- min
- max
- mean
- median
- std
- histogram:
- for float: fixed number of bins (which is a global config parameter - tell me if it's an overkill :D)
- for integer... | Descriptive statistics: This PR introduces the following measurements/statistics:
### numerical columns (float and int):
- nan values count
- nan values percentage
- min
- max
- mean
- median
- std
- histogram:
- for float: fixed number of bins (which is a global config parameter - tell me if it's an overk... | closed | 2023-05-18T18:36:25Z | 2023-07-27T15:56:48Z | 2023-07-27T15:51:05Z | polinaeterna |
1,716,035,065 | disable prod backfill for now | the opt-in-out-urls jobs are filling up the job queue faster that it's being emptied, leading to 300k+ waiting jobs | disable prod backfill for now: the opt-in-out-urls jobs are filling up the job queue faster that it's being emptied, leading to 300k+ waiting jobs | closed | 2023-05-18T17:56:39Z | 2023-05-19T08:36:08Z | 2023-05-18T18:50:07Z | lhoestq |
1,716,005,973 | Set datetime types in admin ui | to fix errors when duckdb tries to cast the columns like "started_at"
(already deployed on HF - I ran my tests there ^^') | Set datetime types in admin ui: to fix errors when duckdb tries to cast the columns like "started_at"
(already deployed on HF - I ran my tests there ^^') | closed | 2023-05-18T17:34:26Z | 2023-05-19T11:46:59Z | 2023-05-19T11:43:30Z | lhoestq |
1,715,648,909 | Revert "feat: 🎸 ignore result of job runner if job has been canceled … | …(#1188)"
This reverts commit a85b08697399a06dc2a98539dd4b9679cf6da8be.
For some reason the queue stopped picking jobs after the deploy that included this change in queue.py | Revert "feat: 🎸 ignore result of job runner if job has been canceled …: …(#1188)"
This reverts commit a85b08697399a06dc2a98539dd4b9679cf6da8be.
For some reason the queue stopped picking jobs after the deploy that included this change in queue.py | closed | 2023-05-18T13:32:38Z | 2023-05-18T13:46:39Z | 2023-05-18T13:43:50Z | lhoestq |
1,715,554,211 | Dataset Viewer issue for under-tree/prepared-yagpt | ### Link
https://huggingface.co/datasets/under-tree/prepared-yagpt
### Description
The dataset viewer is not working for dataset under-tree/prepared-yagpt.
Error details:
Dataset was pushed in next way
```python
final_dataset.push_to_hub(checkpoint)
```
where final_dataset is DatasetDict object
```
Error... | Dataset Viewer issue for under-tree/prepared-yagpt: ### Link
https://huggingface.co/datasets/under-tree/prepared-yagpt
### Description
The dataset viewer is not working for dataset under-tree/prepared-yagpt.
Error details:
Dataset was pushed in next way
```python
final_dataset.push_to_hub(checkpoint)
```
whe... | closed | 2023-05-18T12:29:09Z | 2023-05-19T08:34:58Z | 2023-05-19T08:34:58Z | RodionfromHSE |
1,715,476,279 | Dataset Viewer issue for Fredithefish/GPTeacher-for-RedPajama-Chat | ### Link
https://huggingface.co/datasets/Fredithefish/GPTeacher-for-RedPajama-Chat
### Description
The dataset viewer is not working for dataset Fredithefish/GPTeacher-for-RedPajama-Chat.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for Fredithefish/GPTeacher-for-RedPajama-Chat: ### Link
https://huggingface.co/datasets/Fredithefish/GPTeacher-for-RedPajama-Chat
### Description
The dataset viewer is not working for dataset Fredithefish/GPTeacher-for-RedPajama-Chat.
Error details:
```
Error code: ResponseNotReady
`... | closed | 2023-05-18T11:29:37Z | 2023-05-19T08:35:31Z | 2023-05-19T08:35:31Z | fredi-python |
1,714,920,021 | Dataset Viewer issue for RengJEY/Fast_Food_classification | ### Link
https://huggingface.co/datasets/RengJEY/Fast_Food_classification
### Description
The dataset viewer is not working for dataset RengJEY/Fast_Food_classification.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for RengJEY/Fast_Food_classification: ### Link
https://huggingface.co/datasets/RengJEY/Fast_Food_classification
### Description
The dataset viewer is not working for dataset RengJEY/Fast_Food_classification.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-18T03:20:35Z | 2023-05-18T05:25:04Z | 2023-05-18T05:24:48Z | RENGJEY |
1,713,811,938 | Don't return an error on /first-rows (or later: /rows) if one image is failing | See https://huggingface.co/datasets/datadrivenscience/ship-detection
<img width="1034" alt="Capture d’écran 2023-05-17 à 14 33 42" src="https://github.com/huggingface/datasets-server/assets/1676121/9e12612f-bb42-4460-8c7d-91fc75534518">
```
Error code: StreamingRowsError
Exception: DecompressionBombError... | Don't return an error on /first-rows (or later: /rows) if one image is failing: See https://huggingface.co/datasets/datadrivenscience/ship-detection
<img width="1034" alt="Capture d’écran 2023-05-17 à 14 33 42" src="https://github.com/huggingface/datasets-server/assets/1676121/9e12612f-bb42-4460-8c7d-91fc75534518"... | closed | 2023-05-17T12:35:45Z | 2023-06-14T09:44:17Z | 2023-06-14T09:44:16Z | severo |
1,713,772,619 | Update starlette to 0.27.0 | Fix pip-audit for admin and api
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
--------- ------- ------------------- ------------
starlette 0.25.0 GHSA-v5gw-mw7f-84px 0.27.0
``` | Update starlette to 0.27.0: Fix pip-audit for admin and api
```
Found 1 known vulnerability in 1 package
Name Version ID Fix Versions
--------- ------- ------------------- ------------
starlette 0.25.0 GHSA-v5gw-mw7f-84px 0.27.0
``` | closed | 2023-05-17T12:13:48Z | 2023-05-17T12:40:37Z | 2023-05-17T12:37:43Z | lhoestq |
1,712,421,891 | Cache parquet metadata to optimize /rows | Step 1 of https://github.com/huggingface/datasets-server/issues/1186
I added a new job that gets the parquet metadata of each parquet file and write them on disk in the `assets_directory`.
These metadata will be used to optimize random access to rows, that I will implement in a subsequent PR.
The parquet metad... | Cache parquet metadata to optimize /rows: Step 1 of https://github.com/huggingface/datasets-server/issues/1186
I added a new job that gets the parquet metadata of each parquet file and write them on disk in the `assets_directory`.
These metadata will be used to optimize random access to rows, that I will implemen... | closed | 2023-05-16T17:17:52Z | 2023-05-19T15:52:21Z | 2023-05-19T14:00:21Z | lhoestq |
1,712,001,601 | feat: 🎸 return X-Revision header when possible on endpoints | it will help show the status of the cache entry on the Hub. | feat: 🎸 return X-Revision header when possible on endpoints: it will help show the status of the cache entry on the Hub. | closed | 2023-05-16T13:10:27Z | 2023-05-17T16:00:42Z | 2023-05-17T15:57:56Z | severo |
1,711,924,002 | feat: 🎸 ignore result of job runner if job has been canceled | also: refactor to remove two queue methods (kill_zombies, kill_long_jobs): job_manager is now in charge of finishing the jobs, and updating the cache (if needed). | feat: 🎸 ignore result of job runner if job has been canceled: also: refactor to remove two queue methods (kill_zombies, kill_long_jobs): job_manager is now in charge of finishing the jobs, and updating the cache (if needed). | closed | 2023-05-16T12:27:16Z | 2023-05-22T21:11:39Z | 2023-05-17T15:27:02Z | severo |
1,710,554,790 | Set git revision at job creation | The proposal in the PR is to add a field `revision` to the jobs, at creation, and it must be non-null (it should be the commit hash).
This way, the job runners don't have to reach the hub to check for the current revision, and we're preparing for (one day) handle multiple revisions in the cache for the same dataset.... | Set git revision at job creation: The proposal in the PR is to add a field `revision` to the jobs, at creation, and it must be non-null (it should be the commit hash).
This way, the job runners don't have to reach the hub to check for the current revision, and we're preparing for (one day) handle multiple revisions ... | closed | 2023-05-15T17:57:06Z | 2023-05-17T14:53:58Z | 2023-05-17T14:51:07Z | severo |
1,710,306,468 | Re-enable image and audio in the viewer | The current caching mechanism from #1026 is almost never used:
a. the parquet index is stored in memory per worker and there are too many of them
b. the image/audio files are always recreated
Because of that the viewer was too slow for image and audio datasets and we disabled it in #1144
To fix this issue we ... | Re-enable image and audio in the viewer: The current caching mechanism from #1026 is almost never used:
a. the parquet index is stored in memory per worker and there are too many of them
b. the image/audio files are always recreated
Because of that the viewer was too slow for image and audio datasets and we disab... | closed | 2023-05-15T15:11:50Z | 2023-05-25T15:33:34Z | 2023-05-25T15:33:34Z | lhoestq |
1,709,919,677 | fix: 🐛 don't fill truncated_cells w/ unsupported cols on /rows | See
https://github.com/huggingface/moon-landing/pull/6300#issuecomment-1547590109 for reference (internal link). | fix: 🐛 don't fill truncated_cells w/ unsupported cols on /rows: See
https://github.com/huggingface/moon-landing/pull/6300#issuecomment-1547590109 for reference (internal link). | closed | 2023-05-15T11:43:24Z | 2023-05-15T12:39:27Z | 2023-05-15T12:36:13Z | severo |
1,708,787,938 | Dataset Viewer issue for annabely/ukiyoe_10_30_control_net | ### Link
https://huggingface.co/datasets/annabely/ukiyoe_10_30_control_net
### Description
The dataset viewer is not working for dataset annabely/ukiyoe_10_30_control_net.
Error details:
```
Error code: UnexpectedError
```
| Dataset Viewer issue for annabely/ukiyoe_10_30_control_net: ### Link
https://huggingface.co/datasets/annabely/ukiyoe_10_30_control_net
### Description
The dataset viewer is not working for dataset annabely/ukiyoe_10_30_control_net.
Error details:
```
Error code: UnexpectedError
```
| closed | 2023-05-14T01:43:25Z | 2023-05-15T08:19:10Z | 2023-05-15T08:19:10Z | annabelyim |
1,708,565,593 | fix: 🐛 hot fix - catch exception on git revision | try to fix #1182 | fix: 🐛 hot fix - catch exception on git revision: try to fix #1182 | closed | 2023-05-13T11:25:28Z | 2023-05-15T06:43:27Z | 2023-05-13T11:27:57Z | severo |
1,708,557,188 | The workers fail with `mongoengine.errors.FieldDoesNotExist: The fields "{'force'}" do not exist on the document "Job"` | ```
INFO: 2023-05-13 10:50:18,007 - root - Worker loop started
INFO: 2023-05-13 10:50:18,023 - root - Starting heartbeat.
ERROR: 2023-05-13 10:50:18,115 - asyncio - Task exception was never retrieved
future: <Task finished name='Task-2' coro=<every() done, defined at /src/services/worker/src/worker/executor.py:26> ... | The workers fail with `mongoengine.errors.FieldDoesNotExist: The fields "{'force'}" do not exist on the document "Job"`: ```
INFO: 2023-05-13 10:50:18,007 - root - Worker loop started
INFO: 2023-05-13 10:50:18,023 - root - Starting heartbeat.
ERROR: 2023-05-13 10:50:18,115 - asyncio - Task exception was never retrie... | closed | 2023-05-13T10:52:20Z | 2023-06-12T15:05:35Z | 2023-06-12T15:05:34Z | severo |
1,708,553,564 | Dataset Viewer issue for kingjambal/jambal_common_voice | ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for kingjambal/jambal_common_voice: ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T10:37:57Z | 2023-05-15T09:11:55Z | 2023-05-15T09:11:55Z | kingjambal |
1,708,552,218 | Dataset Viewer issue for 0x22almostEvil/reasoning_bg_oa | ### Link
https://huggingface.co/datasets/0x22almostEvil/reasoning_bg_oa
### Description
The dataset viewer is not working for dataset 0x22almostEvil/reasoning_bg_oa.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for 0x22almostEvil/reasoning_bg_oa: ### Link
https://huggingface.co/datasets/0x22almostEvil/reasoning_bg_oa
### Description
The dataset viewer is not working for dataset 0x22almostEvil/reasoning_bg_oa.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T10:33:27Z | 2023-05-13T14:55:08Z | 2023-05-13T14:55:08Z | echo0x22 |
1,708,545,981 | Dataset Viewer issue for Abrumu/Fashion_controlnet_dataset_V2 | ### Link
https://huggingface.co/datasets/Abrumu/Fashion_controlnet_dataset_V2
### Description
The dataset viewer is not working for dataset Abrumu/Fashion_controlnet_dataset_V2.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for Abrumu/Fashion_controlnet_dataset_V2: ### Link
https://huggingface.co/datasets/Abrumu/Fashion_controlnet_dataset_V2
### Description
The dataset viewer is not working for dataset Abrumu/Fashion_controlnet_dataset_V2.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T10:09:09Z | 2023-05-15T09:22:31Z | 2023-05-15T09:22:31Z | abdelrahmanabdelghany |
1,708,417,580 | Dataset Viewer issue for lliillyy/controlnet_ap10k_val | ### Link
https://huggingface.co/datasets/lliillyy/controlnet_ap10k_val
### Description
The dataset viewer is not working for dataset lliillyy/controlnet_ap10k_val.
Error details:
```
Error code: ResponseNotReady
```
| Dataset Viewer issue for lliillyy/controlnet_ap10k_val: ### Link
https://huggingface.co/datasets/lliillyy/controlnet_ap10k_val
### Description
The dataset viewer is not working for dataset lliillyy/controlnet_ap10k_val.
Error details:
```
Error code: ResponseNotReady
```
| closed | 2023-05-13T03:50:41Z | 2023-05-15T09:27:04Z | 2023-05-15T09:27:03Z | Lliillyy |
1,708,342,990 | Adding full_scan field in opt-in-out cache | According to PR https://github.com/huggingface/moon-landing/pull/6289/files, we will need full_scan flag for UI
| Adding full_scan field in opt-in-out cache: According to PR https://github.com/huggingface/moon-landing/pull/6289/files, we will need full_scan flag for UI
| closed | 2023-05-12T23:43:13Z | 2023-05-15T12:45:48Z | 2023-05-15T12:42:39Z | AndreaFrancis |
1,708,132,717 | Dataset Viewer issue for claritylab/UTCD | ### Link
https://huggingface.co/datasets/claritylab/UTCD
### Description
The dataset viewer is not working for dataset claritylab/UTCD.
Error details:
```
Error code: TooManyColumnsError
```
I'm having trouble to get dataset viewer to work.
I did a bit of research:
- https://discuss.huggi... | Dataset Viewer issue for claritylab/UTCD: ### Link
https://huggingface.co/datasets/claritylab/UTCD
### Description
The dataset viewer is not working for dataset claritylab/UTCD.
Error details:
```
Error code: TooManyColumnsError
```
I'm having trouble to get dataset viewer to work.
I did a... | closed | 2023-05-12T19:53:16Z | 2023-05-16T08:53:22Z | 2023-05-16T08:53:22Z | StefanHeng |
1,707,648,352 | Dataset Viewer issue for kingjambal/jambal_common_voice | ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: JobManagerCrashedError
Traceback: The previous step failed, the error is copied to this step: kind='/confi... | Dataset Viewer issue for kingjambal/jambal_common_voice: ### Link
https://huggingface.co/datasets/kingjambal/jambal_common_voice
### Description
The dataset viewer is not working for dataset kingjambal/jambal_common_voice.
Error details:
```
Error code: JobManagerCrashedError
Traceback: The previous ste... | closed | 2023-05-12T13:33:14Z | 2023-05-15T09:07:39Z | 2023-05-15T09:07:39Z | kingjambal |
1,707,621,414 | Add a field, and rename another one, in /opt-in-out-urls | The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- rename `num_urls` into `num_scanned_urls`
- add `num_rows` wit... | Add a field, and rename another one, in /opt-in-out-urls: The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- renam... | closed | 2023-05-12T13:15:40Z | 2023-05-12T13:54:14Z | 2023-05-12T13:23:57Z | severo |
1,707,525,466 | change color and size of nodes | sorry i forgot to push one commit to the previous plot PR, this is to make test on nodes visible. | change color and size of nodes: sorry i forgot to push one commit to the previous plot PR, this is to make test on nodes visible. | closed | 2023-05-12T12:11:04Z | 2023-05-12T12:14:18Z | 2023-05-12T12:11:32Z | polinaeterna |
1,707,453,117 | Process part of the columns, instead of giving an error? | When the number of columns is above 1000, we don't process the split. See https://github.com/huggingface/datasets-server/issues/1143.
Should we instead "truncate", and only process the first 1000 columns, and give a hint to the user that only the first 1000 columns were used? | Process part of the columns, instead of giving an error?: When the number of columns is above 1000, we don't process the split. See https://github.com/huggingface/datasets-server/issues/1143.
Should we instead "truncate", and only process the first 1000 columns, and give a hint to the user that only the first 1000 c... | open | 2023-05-12T11:26:37Z | 2024-06-19T14:11:48Z | null | severo |
1,707,180,264 | Dataset Viewer issue for cbt and "raw" configuration: Cannot GET | ### Link
https://huggingface.co/datasets/cbt
### Description
There is an issue with the URL to show a specific split for the "raw" configuration:
```
Cannot GET /datasets/cbt/viewer/raw/train
```
- See: https://huggingface.co/datasets/cbt/viewer/raw/train
However, it works when no split name is provided in t... | Dataset Viewer issue for cbt and "raw" configuration: Cannot GET: ### Link
https://huggingface.co/datasets/cbt
### Description
There is an issue with the URL to show a specific split for the "raw" configuration:
```
Cannot GET /datasets/cbt/viewer/raw/train
```
- See: https://huggingface.co/datasets/cbt/viewer/... | closed | 2023-05-12T08:18:41Z | 2023-06-19T15:11:49Z | 2023-06-19T15:04:23Z | albertvillanova |
1,706,479,162 | Removing non necessary attributes in job runner init | Small fix for https://github.com/huggingface/datasets-server/pull/1146#discussion_r1191621514
We don't need to initialize job_manager attributes on job_runner | Removing non necessary attributes in job runner init: Small fix for https://github.com/huggingface/datasets-server/pull/1146#discussion_r1191621514
We don't need to initialize job_manager attributes on job_runner | closed | 2023-05-11T20:09:19Z | 2023-05-12T14:58:13Z | 2023-05-12T14:55:23Z | AndreaFrancis |
1,705,780,784 | Refactor errors | <img width="154" alt="Capture d’écran 2023-05-11 à 14 59 20" src="https://github.com/huggingface/datasets-server/assets/1676121/d9282ccb-07a5-483c-8db4-a629c8b188bb">
^ yes!
### update
<img width="147" alt="Capture d’écran 2023-05-15 à 16 43 54" src="https://github.com/huggingface/datasets-server/assets/16... | Refactor errors: <img width="154" alt="Capture d’écran 2023-05-11 à 14 59 20" src="https://github.com/huggingface/datasets-server/assets/1676121/d9282ccb-07a5-483c-8db4-a629c8b188bb">
^ yes!
### update
<img width="147" alt="Capture d’écran 2023-05-15 à 16 43 54" src="https://github.com/huggingface/datasets... | closed | 2023-05-11T12:58:44Z | 2023-05-17T13:28:26Z | 2023-05-17T13:25:41Z | severo |
1,705,677,480 | Rename `/split-names-from-streaming` job runner | Part of https://github.com/huggingface/datasets-server/issues/1086 and https://github.com/huggingface/datasets-server/issues/867 | Rename `/split-names-from-streaming` job runner: Part of https://github.com/huggingface/datasets-server/issues/1086 and https://github.com/huggingface/datasets-server/issues/867 | closed | 2023-05-11T11:55:43Z | 2023-05-19T16:23:07Z | 2023-05-19T12:35:31Z | polinaeterna |
1,705,431,098 | Remove should_skip_job | null | Remove should_skip_job: | closed | 2023-05-11T09:29:33Z | 2023-05-12T15:31:24Z | 2023-05-12T15:28:40Z | severo |