id: int64 (959M to 2.55B)
title: string (length 3 to 133)
body: string (length 1 to 65.5k)
description: string (length 5 to 65.6k)
state: string (2 distinct values)
created_at: string (length 20)
updated_at: string (length 20)
closed_at: string (length 20)
user: string (174 distinct values)
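The three timestamp columns are fixed-width (20-character) ISO 8601 UTC strings, which is why their length statistics are a constant 20. A minimal parsing sketch, using a sample value taken from the first record below:

```python
from datetime import datetime, timezone

# created_at / updated_at / closed_at are 20-char ISO 8601 UTC strings
ts = "2024-03-11T22:15:53Z"
dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
```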
2,180,356,543
check openapi spec against spectral
also: fix action name also: update openapi.json detail just to trigger the action
check openapi spec against spectral: also: fix action name also: update openapi.json detail just to trigger the action
closed
2024-03-11T22:15:53Z
2024-03-11T22:29:05Z
2024-03-11T22:29:05Z
severo
2,179,805,298
remove call to the Hub when processing a webhook
Since https://github.com/huggingface/moon-landing/pull/9061 (internal), the webhooks now contain the list of updated refs, which should make the following call redundant: https://github.com/huggingface/datasets-server/blob/60bcc28d3bcbc27ced08c29966aa332af666d02c/services/api/src/api/routes/webhook.py#L84-L88 We ...
remove call to the Hub when processing a webhook: Since https://github.com/huggingface/moon-landing/pull/9061 (internal), the webhooks now contain the list of updated refs, which should make the following call redundant: https://github.com/huggingface/datasets-server/blob/60bcc28d3bcbc27ced08c29966aa332af666d02c/ser...
open
2024-03-11T17:51:17Z
2024-03-28T13:37:58Z
null
severo
2,176,481,726
remove temporary EFS
fixes #2568
- [x] descriptive statistics
- [x] HF datasets
- [x] duckdb index (only for workers)
remove temporary EFS: fixes #2568 - [x] descriptive statistics - [x] HF datasets - [x] duckdb index (only for workers)
closed
2024-03-08T17:34:18Z
2024-03-11T14:04:14Z
2024-03-11T14:04:04Z
severo
2,176,445,140
Use local storage (NVMe) instead of EFS volumes
The worker machines have NVMe, and we should use this local storage instead of EFS for:
- stats cache
- HF datasets cache
- duckdb index cache

We still need to use EFS for parquet metadata since it's shared data, and for duckdb local cache in `services/search`. Note that `services/admin` and `clean-...` cron ...
Use local storage (NVMe) instead of EFS volumes: The worker machines have NVMe, and we should use this local storage instead of EFS for: - stats cache - HF datasets cache - duckdb index cache We still need to use EFS for parquet metadata since it's shared data, and for duckdb local cache in `services/search`. ...
closed
2024-03-08T17:11:32Z
2024-03-11T14:04:05Z
2024-03-11T14:04:05Z
severo
2,176,368,381
tweak autoscale parameters (again)
I increased the minimum number of workers to what we had nominally before autoscaling.
tweak autoscale parameters (again): I increased the minimum number of workers to what we had nominally before autoscaling.
closed
2024-03-08T16:27:57Z
2024-03-08T16:28:32Z
2024-03-08T16:28:13Z
severo
2,176,313,155
increase threshold for heavy workers to 50
temporary fix for https://github.com/huggingface/datasets-server/issues/889#issuecomment-1985943755
increase threshold for heavy workers to 50: temporary fix for https://github.com/huggingface/datasets-server/issues/889#issuecomment-1985943755
closed
2024-03-08T15:57:42Z
2024-03-08T15:58:14Z
2024-03-08T15:57:49Z
severo
2,176,258,357
Regularly review logs starting with `ERROR: ` and open issues for them
See https://kibana.elastic.huggingface.tech/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-25h,to:now))&_a=(columns:!(message,kubernetes.namespace),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:de38ff80-ac19-11ec-bb45-ad141ad1c5f8,key:kubernetes.namespace,negate...
Regularly review logs starting with `ERROR: ` and open issues for them: See https://kibana.elastic.huggingface.tech/app/discover#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-25h,to:now))&_a=(columns:!(message,kubernetes.namespace),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,in...
open
2024-03-08T15:27:26Z
2024-08-22T14:28:07Z
null
severo
2,176,106,959
Fix memory leak in `config-parquet-and-info`: do not store pq.ParquetFile objects in memory
should fix [this](https://github.com/huggingface/datasets-server/issues/2552) + I added a test for `fill_builder_info` with multiple parquet files (it seems all other tests for this worker use only one)
Fix memory leak in `config-parquet-and-info`: do not store pq.ParquetFile objects in memory: should fix [this](https://github.com/huggingface/datasets-server/issues/2552) + I added a test for `fill_builder_info` with multiple parquet files (it seems all other tests for this worker use only one)
closed
2024-03-08T14:03:39Z
2024-03-12T09:55:20Z
2024-03-12T09:55:19Z
polinaeterna
2,176,098,851
-- incorrect pull request ignore this
lol incorrect branch
-- incorrect pull request ignore this: lol incorrect branch
closed
2024-03-08T13:59:55Z
2024-03-08T14:04:47Z
2024-03-08T14:00:30Z
polinaeterna
2,175,870,914
Use `revision_exists` (hfh)
Here (and maybe elsewhere): https://github.com/huggingface/datasets-server/blob/b40606d399731137846f3d67ec3bda964cd25946/services/worker/src/worker/utils.py#L202 Use the new `revision_exists` method (see https://github.com/huggingface/huggingface_hub/releases/tag/v0.21.0)
Use `revision_exists` (hfh): Here (and maybe elsewhere): https://github.com/huggingface/datasets-server/blob/b40606d399731137846f3d67ec3bda964cd25946/services/worker/src/worker/utils.py#L202 Use the new `revision_exists` method (see https://github.com/huggingface/huggingface_hub/releases/tag/v0.21.0)
open
2024-03-08T11:38:42Z
2024-03-08T11:39:03Z
null
severo
2,175,808,766
Make CI faster?
See https://github.com/huggingface/huggingface_hub/issues/2062 for reference
Make CI faster?: See https://github.com/huggingface/huggingface_hub/issues/2062 for reference
open
2024-03-08T11:04:30Z
2024-03-08T11:04:48Z
null
severo
2,175,633,391
tweaking the threshold for autoscaling
30 jobs is too low for the medium jobs (and for the light ones too). On the other side, I think we can lower the limit for the heavy ones to 20. <img width="407" alt="Capture d’écran 2024-03-08 à 10 26 57" src="https://github.com/huggingface/datasets-server/assets/1676121/213e1943-3df5-473a-a481-da46396cf3ce">
tweaking the threshold for autoscaling: 30 jobs is too low for the medium jobs (and for the light ones too). On the other side, I think we can lower the limit for the heavy ones to 20. <img width="407" alt="Capture d’écran 2024-03-08 à 10 26 57" src="https://github.com/huggingface/datasets-server/assets/1676121/21...
closed
2024-03-08T09:27:41Z
2024-03-08T14:53:57Z
2024-03-08T14:53:56Z
severo
2,175,168,890
Rollback delete duckdb indexes job
Once datasets-server-job-delete-duckdb-indexes job finishes in prod, we can merge this PR in order to avoid analyzing again the datasets to delete old duckdb indexes from refs/convert/parquet
Rollback delete duckdb indexes job: Once datasets-server-job-delete-duckdb-indexes job finishes in prod, we can merge this PR in order to avoid analyzing again the datasets to delete old duckdb indexes from refs/convert/parquet
closed
2024-03-08T02:42:07Z
2024-03-08T12:00:55Z
2024-03-08T12:00:54Z
AndreaFrancis
2,174,627,617
Fix: delete duckdb indexes failed for datasets without refs/convert/parquet revision
null
Fix: delete duckdb indexes failed for datasets without refs/convert/parquet revision:
closed
2024-03-07T19:34:07Z
2024-03-07T19:36:23Z
2024-03-07T19:36:22Z
AndreaFrancis
2,174,454,038
Add `/compatible-libraries` route
- Renamed `dataset-loading-tags` -> `dataset-compatible-libraries` in code
- Added migration for queue and cache
- Added route
Add `/compatible-libraries` route: - Renamed `dataset-loading-tags` -> `dataset-compatible-libraries` in code - Added migration for queue and cache - Added route
closed
2024-03-07T18:08:10Z
2024-03-14T16:22:01Z
2024-03-14T16:22:01Z
lhoestq
2,174,354,508
let's use more workers when needed
null
let's use more workers when needed:
closed
2024-03-07T17:15:01Z
2024-03-07T18:04:32Z
2024-03-07T18:04:32Z
severo
2,174,324,200
Sort the rows along a column
See https://github.com/huggingface/moon-landing/issues/9201 (internal)
Sort the rows along a column: See https://github.com/huggingface/moon-landing/issues/9201 (internal)
closed
2024-03-07T16:57:43Z
2024-05-03T11:14:07Z
2024-05-03T11:14:07Z
severo
2,174,065,256
Default config first
Show the default config first in the Viewer. I did this by ordering the configs in `dataset-config-names` to have the default config in first position fix https://github.com/huggingface/datasets-server/issues/2541 No need to recompute all the entries, it's fine to apply this to new / updated datasets imo cc...
Default config first: Show the default config first in the Viewer. I did this by ordering the configs in `dataset-config-names` to have the default config in first position fix https://github.com/huggingface/datasets-server/issues/2541 No need to recompute all the entries, it's fine to apply this to new / upda...
closed
2024-03-07T14:59:53Z
2024-03-07T17:09:54Z
2024-03-07T17:09:53Z
lhoestq
2,173,843,189
Determine if a string column is string or category by proportion of unique values
...instead of by an absolute value. Will solve https://github.com/huggingface/datasets-server/issues/1953 Set to 0.2. UPD: I also added a maximum number of categories of 1000 (for example, there are ~700 languages in the-stack-v2). Yes, it makes no sense for the viewer to display 1k labels but I think it should be...
Determine if a string column is string or category by proportion of unique values: ...instead of by an absolute value. Will solve https://github.com/huggingface/datasets-server/issues/1953 Set to 0.2. UPD: I also added a maximum number of categories of 1000 (for example, there are ~700 languages in the-stack-v2). ...
closed
2024-03-07T13:16:45Z
2024-03-12T17:17:42Z
2024-03-12T17:17:41Z
polinaeterna
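The heuristic described in the record above can be sketched as a small helper. `infer_column_type` is a hypothetical name; the 0.2 unique-value proportion and the 1000-category cap are the values quoted in the PR text:

```python
def infer_column_type(values, max_proportion_unique=0.2, max_categories=1000):
    """Classify a string column as 'category' when the share of unique values
    is low enough, instead of comparing against an absolute count.
    Hypothetical sketch of the heuristic, not the actual implementation."""
    n_unique = len(set(values))
    if n_unique / len(values) <= max_proportion_unique and n_unique <= max_categories:
        return "category"
    return "string"
```

For example, a column of 90 `"en"` and 10 `"fr"` values has a 0.02 unique proportion and is classified as a category, while a column of 100 distinct IDs stays a plain string.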
2,173,527,327
`config-parquet-and-info` crashes with OOM at pq files validation, presumably when there are a lot of files
[bigcode/the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2) keeps failing at config-parquet-and-info step and seems that the problem is that there are too many parquet files to validate (there are about 1k of them). It [starts](https://github.com/huggingface/datasets-server/blob/main/services/worke...
`config-parquet-and-info` crashes with OOM at pq files validation, presumably when there are a lot of files: [bigcode/the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2) keeps failing at config-parquet-and-info step and seems that the problem is that there are too many parquet files to validate (there ...
closed
2024-03-07T10:30:16Z
2024-03-12T18:53:10Z
2024-03-12T09:55:20Z
polinaeterna
2,171,418,131
Update openapi specs according to changes in stats #2453
Some values of numerical results might be null now (rare cases when all values in column are null) See https://github.com/huggingface/datasets-server/pull/2453
Update openapi specs according to changes in stats #2453: Some values of numerical results might be null now (rare cases when all values in column are null) See https://github.com/huggingface/datasets-server/pull/2453
closed
2024-03-06T12:41:53Z
2024-03-08T14:09:29Z
2024-03-06T15:55:52Z
polinaeterna
2,170,240,553
More precise dataset size computation
Currently, the Hub uses the `/size` endpoint's `num_bytes_original_files` value to display the `Size of downloaded dataset files` on a dataset's card page. However, this value does not consider a possible overlap between the configs' data files (and simply [sums](https://github.com/huggingface/datasets-server/blob/e4aa...
More precise dataset size computation: Currently, the Hub uses the `/size` endpoint's `num_bytes_original_files` value to display the `Size of downloaded dataset files` on a dataset's card page. However, this value does not consider a possible overlap between the configs' data files (and simply [sums](https://github.co...
open
2024-03-05T22:22:24Z
2024-05-24T20:59:36Z
null
mariosasko
2,168,915,818
Fix: Add datasets cache to loading tags job runner
`dataset-hub-cache` is failing because of `dataset-loading-tags`: PermissionError: [Errno 13] Permission denied: '/.cache'. Currently 754 datasets are affected.
Fix: Add datasets cache to loading tags job runner: `dataset-hub-cache` is failing because of `dataset-loading-tags`: PermissionError: [Errno 13] Permission denied: '/.cache'. Currently 754 datasets are affected.
closed
2024-03-05T11:04:08Z
2024-03-05T11:17:16Z
2024-03-05T11:17:15Z
AndreaFrancis
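An `[Errno 13] Permission denied: '/.cache'` error is typical of a container where `HOME` is unset or unwritable, so cache paths resolve to `/.cache`. A hedged sketch of the general remedy (the exact variable and directory the PR uses are assumptions): point `HF_HOME` at a writable directory before anything from `huggingface_hub` or `datasets` is imported.

```python
import os
import tempfile

# Point the Hugging Face cache root at a writable directory *before*
# importing huggingface_hub / datasets, so nothing falls back to '/.cache'.
# tempfile.mkdtemp() stands in for the job runner's configured cache path.
os.environ["HF_HOME"] = tempfile.mkdtemp()
```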
2,167,946,331
Don't truncate image bytes
to fix https://huggingface.co/datasets/Major-TOM/Core-S2L2A RowsPostProcessingError
Don't truncate image bytes: to fix https://huggingface.co/datasets/Major-TOM/Core-S2L2A RowsPostProcessingError
closed
2024-03-04T23:05:06Z
2024-03-04T23:05:48Z
2024-03-04T23:05:48Z
lhoestq
2,167,925,930
hot fix for images
fix https://github.com/huggingface/datasets-server/pull/2545
hot fix for images: fix https://github.com/huggingface/datasets-server/pull/2545
closed
2024-03-04T22:47:44Z
2024-03-04T22:47:55Z
2024-03-04T22:47:55Z
lhoestq
2,167,880,172
Revert max_workers for pq validation
revert https://github.com/huggingface/datasets-server/pull/2544 since it made it too slow to process https://huggingface.co/datasets/Major-TOM/Core-S2L2A
Revert max_workers for pq validation: revert https://github.com/huggingface/datasets-server/pull/2544 since it made it too slow to process https://huggingface.co/datasets/Major-TOM/Core-S2L2A
closed
2024-03-04T22:14:17Z
2024-03-05T08:58:11Z
2024-03-04T22:15:35Z
lhoestq
2,167,762,617
Support binary columns for images
... as `datasets` does. Fix for https://huggingface.co/datasets/Major-TOM/Core-S2L1C
Support binary columns for images: ... as `datasets` does. Fix for https://huggingface.co/datasets/Major-TOM/Core-S2L1C
closed
2024-03-04T21:11:02Z
2024-03-04T21:17:11Z
2024-03-04T21:16:57Z
lhoestq
2,167,315,911
Set max workers for `retry_and_validate_get_parquet_file_and_size` in `config-parquet-and-info` to 4
I assume the fact that we don't restrict the max number of workers (which results in [32 workers](https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L44)* from `tqdm.contrib.concurrent.thread_map` which we use at this step) might result in (likely) OOM for `bigcode/the-stack-v2`. Since there are about ...
Set max workers for `retry_and_validate_get_parquet_file_and_size` in `config-parquet-and-info` to 4: I assume the fact that we don't restrict the max number of workers (which results in [32 workers](https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L44)* from `tqdm.contrib.concurrent.thread_map` whic...
closed
2024-03-04T17:00:30Z
2024-03-04T19:01:57Z
2024-03-04T18:09:55Z
polinaeterna
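`tqdm.contrib.concurrent.thread_map` is a thin wrapper over the standard-library executor, so the effect of the change above can be sketched with `ThreadPoolExecutor` directly. `validate` is a hypothetical stand-in for the real per-file validation function:

```python
from concurrent.futures import ThreadPoolExecutor

def validate(path: str) -> int:
    # stand-in for the real validation of one parquet file
    return len(path)

paths = ["a.parquet", "bb.parquet", "ccc.parquet"]

# Without max_workers, ThreadPoolExecutor defaults to
# min(32, os.cpu_count() + 4), so up to 32 parquet files could be opened
# and buffered concurrently; capping it at 4 bounds peak memory.
with ThreadPoolExecutor(max_workers=4) as pool:
    sizes = list(pool.map(validate, paths))
```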
2,167,252,990
Truncate binary for major tom
Fix viewer for https://huggingface.co/datasets/Major-TOM/Core-S2L2A The dataset has one row per row group because each row contains multiple binary columns with a lot of data. I fixed this by truncating binary data iteratively on row groups to not fill up the RAM For now I only enable this logic on Major-TOM d...
Truncate binary for major tom: Fix viewer for https://huggingface.co/datasets/Major-TOM/Core-S2L2A The dataset has one row per row group because each row contains multiple binary columns with a lot of data. I fixed this by truncating binary data iteratively on row groups to not fill up the RAM For now I only e...
closed
2024-03-04T16:29:30Z
2024-03-11T21:10:09Z
2024-03-04T20:26:03Z
lhoestq
2,166,988,697
Update datasets and hfh
This should make the viewer faster to get for datasets with images/audio in parquet file. I also updated hfh - close https://github.com/huggingface/datasets-server/issues/2317
Update datasets and hfh: This should make the viewer faster to get for datasets with images/audio in parquet file. I also updated hfh - close https://github.com/huggingface/datasets-server/issues/2317
closed
2024-03-04T14:30:53Z
2024-03-04T20:15:09Z
2024-03-04T20:15:08Z
lhoestq
2,166,935,086
Display default config first in the viewer
Would be nice to have an option to choose which data to show first because sometimes dataset starts with ugly data lol. example: [bigcode/the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2). (yes, there is only one config with all the data for now but in theory it should be each folder inside `data/` ...
Display default config first in the viewer: Would be nice to have an option to choose which data to show first because sometimes dataset starts with ugly data lol. example: [bigcode/the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2). (yes, there is only one config with all the data for now but in the...
closed
2024-03-04T14:06:25Z
2024-03-07T17:09:54Z
2024-03-07T17:09:54Z
polinaeterna
2,166,835,238
Post-deploy job to delete old duckdb index files
Fix https://github.com/huggingface/datasets-server/issues/2520 and last part of https://github.com/huggingface/datasets-server/issues/1914 I propose a post-deploy execution job to remove old duckdb index files from **refs/convert/parquet branch**. This script could be removed once all old index files have been remove...
Post-deploy job to delete old duckdb index files: Fix https://github.com/huggingface/datasets-server/issues/2520 and last part of https://github.com/huggingface/datasets-server/issues/1914 I propose a post-deploy execution job to remove old duckdb index files from **refs/convert/parquet branch**. This script could be...
closed
2024-03-04T13:19:51Z
2024-04-11T05:26:24Z
2024-03-07T19:03:53Z
AndreaFrancis
2,163,888,456
Better statistics for URL columns
URLs are stored as strings, so their stat info is their (string) lengths. We could make this info more informative by replacing the lengths with URL domain distribution (or some other stat/metric?).
Better statistics for URL columns: URLs are stored as strings, so their stat info is their (string) lengths. We could make this info more informative by replacing the lengths with URL domain distribution (or some other stat/metric?).
open
2024-03-01T17:43:58Z
2024-04-02T22:53:38Z
null
mariosasko
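The proposed replacement statistic can be sketched in a few lines with the standard library; the column values here are made up for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

urls = [
    "https://huggingface.co/datasets/x",
    "https://github.com/org/repo",
    "https://huggingface.co/models/y",
]

# Domain distribution instead of string-length stats for a URL column
domains = Counter(urlparse(u).netloc for u in urls)
```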
2,163,406,719
Enable workers HPA based on queue size (next)
null
Enable workers HPA based on queue size (next):
closed
2024-03-01T13:22:58Z
2024-03-01T13:40:40Z
2024-03-01T13:40:39Z
rtrompier
2,163,043,503
Enable workers HPA based on queue size
null
Enable workers HPA based on queue size:
closed
2024-03-01T10:02:57Z
2024-03-01T12:21:41Z
2024-03-01T10:35:00Z
rtrompier
2,161,865,783
Support yodas
[espnet/yodas](https://huggingface.co/datasets/espnet/yodas)
Support yodas: [espnet/yodas](https://huggingface.co/datasets/espnet/yodas)
closed
2024-02-29T18:35:12Z
2024-02-29T18:37:39Z
2024-02-29T18:37:38Z
lhoestq
2,161,540,523
[Do not merge] Follow croissant 1.0 specs
null
[Do not merge] Follow croissant 1.0 specs:
closed
2024-02-29T15:36:14Z
2024-02-29T16:12:58Z
2024-02-29T16:12:39Z
ccl-core
2,161,371,736
use a generic LongStepProfiler
follow-up to #2469 and #2533
use a generic LongStepProfiler: follow-up to #2469 and #2533
closed
2024-02-29T14:16:39Z
2024-02-29T14:29:02Z
2024-02-29T14:29:01Z
severo
2,161,132,089
Adjust Prometheus hist buckets for `split-descriptive-statistics` job runner
to be more aligned with the default ones. See https://github.com/huggingface/datasets-server/pull/2469/files#r1507469902
Adjust Prometheus hist buckets for `split-descriptive-statistics` job runner: to be more aligned with the default ones. See https://github.com/huggingface/datasets-server/pull/2469/files#r1507469902
closed
2024-02-29T12:21:40Z
2024-02-29T12:36:17Z
2024-02-29T12:36:17Z
polinaeterna
2,160,985,334
Fix worker crash when first heartbeat conflicts with job start
```
INFO: 2024-02-27 21:49:47,093 - root - [config-split-names] compute JobManager(job_id=65de58fa62c68c8c749fbce3 dataset=ChristophSchuhmann/queries job_info={'job_id': '65de58fa62c68c8c749fbce3', 'type': 'config-split-names', 'params': {'dataset': 'ChristophSchuhmann/queries', 'revision': '641561f9b27193174a37e5a57f...
```
Fix worker crash when first heartbeat conflicts with job start: ``` INFO: 2024-02-27 21:49:47,093 - root - [config-split-names] compute JobManager(job_id=65de58fa62c68c8c749fbce3 dataset=ChristophSchuhmann/queries job_info={'job_id': '65de58fa62c68c8c749fbce3', 'type': 'config-split-names', 'params': {'dataset': 'Chri...
open
2024-02-29T11:00:17Z
2024-06-19T14:27:26Z
null
lhoestq
2,160,973,054
Restore missing triggers between steps
fixes #2530

it:
- reverts #2489
- adds the missing triggering step
- adds a check when creating the processing graph, to ensure all the config- and dataset-level steps have at least one triggering step of the same level, to handle the case where we couldn't get the configs or the splits.
Restore missing triggers between steps: fixes #2530 it: - reverts #2489 - adds the missing triggering step - adds a check when creating the processing graph, to ensure all the config- and dataset-level steps have at least one triggering step of the same level, to handle the case where we couldn't get the configs ...
closed
2024-02-29T10:54:13Z
2024-02-29T11:56:15Z
2024-02-29T11:56:14Z
severo
2,160,858,215
Missing triggers between steps
possibly due to https://github.com/huggingface/datasets-server/pull/2489 We get errors like: - https://huggingface.co/datasets/jmc255/aphantasia_drawing_dataset/discussions/1 - https://huggingface.co/datasets/yzeng58/CoBSAT/discussions/1 where the first step `dataset-config-names` gives an error, and no other e...
Missing triggers between steps: possibly due to https://github.com/huggingface/datasets-server/pull/2489 We get errors like: - https://huggingface.co/datasets/jmc255/aphantasia_drawing_dataset/discussions/1 - https://huggingface.co/datasets/yzeng58/CoBSAT/discussions/1 where the first step `dataset-config-names...
closed
2024-02-29T09:54:07Z
2024-02-29T11:56:15Z
2024-02-29T11:56:15Z
severo
2,160,824,291
statistics fail on 2A2I/Arabic-OpenHermes-2.5
https://huggingface.co/datasets/2A2I/Arabic-OpenHermes-2.5

the error:

```
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
```

Full trace:

```
{
  "error": "unsupported operand type(s) for *: 'NoneType' and 'float'",
  "cause_exception": "TypeError",
  "cause_message": "unsuppo...
```
statistics fail on 2A2I/Arabic-OpenHermes-2.5: https://huggingface.co/datasets/2A2I/Arabic-OpenHermes-2.5 the error: ``` TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' ``` Full trace: ``` { "error": "unsupported operand type(s) for *: 'NoneType' and 'float'", "cause_exceptio...
closed
2024-02-29T09:39:58Z
2024-02-29T11:13:27Z
2024-02-29T11:13:20Z
severo
2,160,804,825
run the backfill on retryable errors every 30 min (not every 10 min)
null
run the backfill on retryable errors every 30 min (not every 10 min):
closed
2024-02-29T09:31:19Z
2024-02-29T10:12:44Z
2024-02-29T09:36:49Z
severo
2,159,385,240
Don't unquote path_in_repo
Support paths containing `%`
Don't unquote path_in_repo: Support paths containing `%`
closed
2024-02-28T16:43:35Z
2024-04-08T15:04:13Z
2024-04-08T15:04:13Z
lhoestq
2,158,955,584
add metrics on jobs count per worker type
See #2095. Add metrics `worker_size_jobs_count{pid="10",worker_size="heavy"}: 12` to prometheus (value for worker_size: `heavy`, `medium` and `light`). It will be used to autoscale the workers.

Also:
- difficulty_min is now excluded (and 0 is not a valid difficulty anymore)
- in prod, medium and heavy worker...
add metrics on jobs count per worker type: See #2095. Add metrics `worker_size_jobs_count{pid="10",worker_size="heavy"}: 12` to prometheus (value for worker_size: `heavy`, `medium` and `light`). It will be used to autoscale the workers Also: - difficulty_min is now excluded (and 0 is not a valid difficulty a...
closed
2024-02-28T13:18:54Z
2024-02-28T16:52:45Z
2024-02-28T16:52:45Z
severo
2,158,648,183
Use hf_transfer to speed up downloads from the Hub
In /search, in particular https://github.com/huggingface/hf_transfer/ or `HF_HUB_ENABLE_HF_TRANSFER` in huggingface_hub
Use hf_transfer to speed up downloads from the Hub: In /search, in particular https://github.com/huggingface/hf_transfer/ or `HF_HUB_ENABLE_HF_TRANSFER` in huggingface_hub
closed
2024-02-28T10:40:31Z
2024-04-15T11:37:07Z
2024-04-15T11:37:07Z
severo
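`HF_HUB_ENABLE_HF_TRANSFER` is read by `huggingface_hub` when it is imported, so it has to be set in the environment (or before the first import), and the `hf_transfer` package must be installed; a minimal sketch:

```python
import os

# Must be set before huggingface_hub is imported; the hf_transfer package
# must also be installed for downloads to use the Rust-based backend.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
```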
2,157,709,268
Compute croissant in workers
The croissant metadata is computed by workers and stored in the cache When served by the API service, and if `?full=false` is passed in the query, the columns are limited to 1,000 in the response. fix #2521
Compute croissant in workers: The croissant metadata is computed by workers and stored in the cache When served by the API service, and if `?full=false` is passed in the query, the columns are limited to 1,000 in the response. fix #2521
closed
2024-02-27T22:18:45Z
2024-03-29T10:03:48Z
2024-03-29T10:03:48Z
severo
2,157,406,547
Don't augment difficulty of children jobs
Some easy jobs have unexpected high difficulties because of that ![image](https://github.com/huggingface/datasets-server/assets/42851186/4c905bc0-f671-4c11-97b3-3f0c1ec10cb2)
Don't augment difficulty of children jobs: Some easy jobs have unexpected high difficulties because of that ![image](https://github.com/huggingface/datasets-server/assets/42851186/4c905bc0-f671-4c11-97b3-3f0c1ec10cb2)
closed
2024-02-27T19:07:44Z
2024-02-27T21:07:42Z
2024-02-27T21:07:42Z
lhoestq
2,157,306,761
Fix backfill wrong difficulties, causing OOMs
The backfill was creating duckdb indexing jobs for big datasets without adjusting the difficulty. Because of that, the medium workers were processing them and OOMing at every retry.
Fix backfill wrong difficulties, causing OOMs: The backfill was creating duckdb indexing jobs for big datasets without adjusting the difficulty. Because of that, the medium workers were processing them and OOMing at every retry.
closed
2024-02-27T18:09:42Z
2024-02-27T20:54:33Z
2024-02-27T20:54:32Z
lhoestq
2,157,258,083
Move /croissant to the cache
As it's called on every dataset page, we should cache it
Move /croissant to the cache: As it's called on every dataset page, we should cache it
closed
2024-02-27T17:40:10Z
2024-03-29T10:03:49Z
2024-03-29T10:03:49Z
severo
2,156,847,921
Delete all .duckdb files from refs/convert/parquet
see https://github.com/huggingface/datasets-server/issues/1914
Delete all .duckdb files from refs/convert/parquet: see https://github.com/huggingface/datasets-server/issues/1914
closed
2024-02-27T15:10:53Z
2024-03-08T11:58:17Z
2024-03-08T11:58:17Z
severo
2,156,796,234
backfill retryable errors every 10 minutes
fixes #2439. For example, if a Parquet file commit to the Hub fails, we want to retry quickly (but not immediately, in case the Hub has an issue). So... checking and retrying every 10 minutes seems a good compromise.
backfill retryable errors every 10 minutes: fixes #2439. For example, if a Parquet file commit to the Hub fails, we want to retry quickly (but not immediately, in case the Hub has an issue). So... checking and retrying every 10 minutes seems a good compromise.
closed
2024-02-27T14:52:38Z
2024-02-28T18:08:13Z
2024-02-28T18:08:13Z
severo
2,156,610,693
Tweak the production resources
See https://www.notion.so/huggingface2/Infrastructure-b4fd07f015e04a84a41ec6472c8a0ff5 and https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702553107305559 (internal links)
Tweak the production resources: See https://www.notion.so/huggingface2/Infrastructure-b4fd07f015e04a84a41ec6472c8a0ff5 and https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702553107305559 (internal links)
closed
2024-02-27T13:33:50Z
2024-02-28T17:55:50Z
2024-02-28T17:55:49Z
severo
2,156,042,660
Understand why simple steps get high difficulty
It's weird that jobs like `split-is-valid` and `config-duckdb-index-size`, which are very simple, get a high difficulty value. <img width="476" alt="Capture d’écran 2024-02-27 à 10 01 10" src="https://github.com/huggingface/datasets-server/assets/1676121/0c69cadd-78a0-4478-a37f-5abff642627d">
Understand why simple steps get high difficulty: It's weird that jobs like `split-is-valid` and `config-duckdb-index-size`, which are very simple, get a high difficulty value. <img width="476" alt="Capture d’écran 2024-02-27 à 10 01 10" src="https://github.com/huggingface/datasets-server/assets/1676121/0c69cadd-78...
closed
2024-02-27T09:02:28Z
2024-02-29T10:58:38Z
2024-02-29T10:58:37Z
severo
2,155,884,924
Update orjson to 3.9.15 to fix vulnerability
Update `orjson` to 3.9.15 to fix vulnerability: https://github.com/ijl/orjson/releases/tag/3.9.15 - Affected versions: < 3.9.15 This will fix 11 Dependabot alerts.
Update orjson to 3.9.15 to fix vulnerability: Update `orjson` to 3.9.15 to fix vulnerability: https://github.com/ijl/orjson/releases/tag/3.9.15 - Affected versions: < 3.9.15 This will fix 11 Dependabot alerts.
closed
2024-02-27T07:41:45Z
2024-02-27T09:22:05Z
2024-02-27T09:22:04Z
albertvillanova
2,155,254,955
Bump the pip group group in /libs/libapi with 1 update
Bumps the pip group group in /libs/libapi with 1 update: [orjson](https://github.com/ijl/orjson). Updates `orjson` from 3.9.7 to 3.9.15 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed<...
Bump the pip group group in /libs/libapi with 1 update: Bumps the pip group group in /libs/libapi with 1 update: [orjson](https://github.com/ijl/orjson). Updates `orjson` from 3.9.7 to 3.9.15 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releas...
closed
2024-02-26T22:09:26Z
2024-02-27T09:23:21Z
2024-02-27T09:23:20Z
dependabot[bot]
2,155,254,659
Bump orjson from 3.9.7 to 3.9.15 in /services/sse-api
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.7 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.7 to 3.9.15 in /services/sse-api: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.7 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>...
closed
2024-02-26T22:09:14Z
2024-02-27T09:23:13Z
2024-02-27T09:23:12Z
dependabot[bot]
2,155,253,810
Bump orjson from 3.9.2 to 3.9.15 in /services/search
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.2 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.2 to 3.9.15 in /services/search: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.2 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>I...
closed
2024-02-26T22:08:40Z
2024-02-27T09:22:57Z
2024-02-27T09:22:55Z
dependabot[bot]
2,155,225,470
Bump orjson from 3.9.1 to 3.9.15 in /services/rows
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.1 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.1 to 3.9.15 in /services/rows: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.1 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Imp...
closed
2024-02-26T21:51:53Z
2024-02-27T09:22:52Z
2024-02-27T09:22:50Z
dependabot[bot]
2,155,222,681
Bump orjson from 3.9.0 to 3.9.15 in /services/worker
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /services/worker: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>I...
closed
2024-02-26T21:50:41Z
2024-02-27T09:23:06Z
2024-02-27T09:23:05Z
dependabot[bot]
2,155,219,047
Bump orjson from 3.9.0 to 3.9.15 in /front/admin_ui
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /front/admin_ui: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Im...
closed
2024-02-26T21:47:21Z
2024-02-27T09:22:59Z
2024-02-27T09:22:58Z
dependabot[bot]
2,155,218,297
Bump orjson from 3.9.0 to 3.9.15 in /libs/libcommon
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /libs/libcommon: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Im...
closed
2024-02-26T21:46:51Z
2024-02-27T09:22:51Z
2024-02-27T09:22:50Z
dependabot[bot]
2,155,218,017
Bump orjson from 3.9.0 to 3.9.15 in /jobs/cache_maintenance
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /jobs/cache_maintenance: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul...
closed
2024-02-26T21:46:41Z
2024-02-27T09:23:35Z
2024-02-27T09:22:48Z
dependabot[bot]
2,155,218,013
Bump orjson from 3.9.0 to 3.9.15 in /jobs/cache_maintenance
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /jobs/cache_maintenance: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul...
closed
2024-02-26T21:46:40Z
2024-02-27T09:23:35Z
2024-02-27T09:23:33Z
dependabot[bot]
2,155,217,989
Bump orjson from 3.9.0 to 3.9.15 in /services/admin
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /services/admin: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Im...
closed
2024-02-26T21:46:39Z
2024-02-27T09:22:50Z
2024-02-27T09:22:49Z
dependabot[bot]
2,155,217,978
Bump orjson from 3.9.0 to 3.9.15 in /services/api
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /services/api: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Impl...
closed
2024-02-26T21:46:39Z
2024-02-27T09:23:34Z
2024-02-27T09:23:33Z
dependabot[bot]
2,155,217,892
Bump orjson from 3.9.0 to 3.9.15 in /jobs/mongodb_migration
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul> <li>Implement recursion limit of 1024 on <code>orjson.loads...
Bump orjson from 3.9.0 to 3.9.15 in /jobs/mongodb_migration: Bumps [orjson](https://github.com/ijl/orjson) from 3.9.0 to 3.9.15. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/ijl/orjson/releases">orjson's releases</a>.</em></p> <blockquote> <h2>3.9.15</h2> <h3>Fixed</h3> <ul...
closed
2024-02-26T21:46:36Z
2024-02-27T09:22:49Z
2024-02-27T09:22:48Z
dependabot[bot]
2,154,767,928
Support parquet datasets with >10k files
By respecting the 10k-files limit per folder when copying them in `refs/convert/parquet`. To do so I'm using a directory scheme like `config/train-part0/0000.parquet` up to `config/train-part9/9999.parquet` when there are >10k files. Close https://github.com/huggingface/datasets-server/issues/2498 cc @guipenedo
Support parquet datasets with >10k files: By respecting the 10k files limit per folder when copying them in `refs/convert/parquet` . To do so I'm using a directory scheme like config/train-part0/0000.parquet up to config/train-part9/9999.parquet when there are >10k files Close https://github.com/huggingface/datas...
closed
2024-02-26T17:51:02Z
2024-02-27T16:52:34Z
2024-02-27T16:52:33Z
lhoestq
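The directory scheme described in the PR above (at most 10k files per folder, `config/train-part0/0000.parquet` up to `config/train-part9/9999.parquet`) can be sketched as a small helper. This is a hypothetical illustration of the scheme, not the actual code from the PR:

```python
def shard_path(config: str, split: str, shard_index: int, files_per_dir: int = 10_000) -> str:
    """Map a shard index to a repo path that keeps at most `files_per_dir` files per folder."""
    part = shard_index // files_per_dir          # which "-partN" folder the shard lands in
    local = shard_index % files_per_dir          # position of the shard inside that folder
    return f"{config}/{split}-part{part}/{local:04d}.parquet"
```

For example, `shard_path("config", "train", 0)` gives `config/train-part0/0000.parquet` and `shard_path("config", "train", 99_999)` gives `config/train-part9/9999.parquet`, matching the bounds quoted in the PR description.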
2,151,869,571
Number of cache entries in metrics is incorrect
Currently 4.22M, while it should be 3.7M see #2496.
Number of cache entries in metrics is incorrect: Currently 4.22M, while it should be 3.7M see #2496.
closed
2024-02-23T21:59:05Z
2024-02-23T22:00:54Z
2024-02-23T22:00:53Z
severo
2,151,858,600
reduce resources to 4/10/4
null
reduce resources to 4/10/4:
closed
2024-02-23T21:49:40Z
2024-02-23T21:50:19Z
2024-02-23T21:49:48Z
severo
2,151,844,666
move the backfill hour + refresh cache metrics every 3 hours
null
move the backfill hour + refresh cache metrics every 3 hours:
closed
2024-02-23T21:37:25Z
2024-02-23T21:38:04Z
2024-02-23T21:37:31Z
severo
2,151,525,679
Fix extensions directory search
While deploying https://github.com/huggingface/datasets-server/pull/2482 in staging, this error appeared: ``` INFO: 2024-02-23 17:03:36,387 - root - /search dataset='asoria/bolivian-recipes' config='default' split='same' query='beef' offset=0 length=100 DEBUG: 2024-02-23 17:03:36,413 - root - connect to index file /...
Fix extensions directory search: While deploying https://github.com/huggingface/datasets-server/pull/2482 in staging, this error appeared: ``` INFO: 2024-02-23 17:03:36,387 - root - /search dataset='asoria/bolivian-recipes' config='default' split='same' query='beef' offset=0 length=100 DEBUG: 2024-02-23 17:03:36,413...
closed
2024-02-23T17:39:22Z
2024-02-23T17:50:18Z
2024-02-23T17:50:18Z
AndreaFrancis
2,151,427,623
Support datasets with >10k Parquet files
Fix this ([internal](https://huggingface.slack.com/archives/C02V51Q3800/p1708702334005179?thread_ts=1708606229.359269&cid=C02V51Q3800)) ![image](https://github.com/huggingface/datasets-server/assets/42851186/d8e546c1-7b5f-4e00-ac0a-05e955568e83) I'll use a directory scheme like `config/train-part0/0000.parquet` up...
Support datasets with >10k Parquet files: Fix this ([internal](https://huggingface.slack.com/archives/C02V51Q3800/p1708702334005179?thread_ts=1708606229.359269&cid=C02V51Q3800)) ![image](https://github.com/huggingface/datasets-server/assets/42851186/d8e546c1-7b5f-4e00-ac0a-05e955568e83) I'll use a directory scheme...
closed
2024-02-23T16:34:19Z
2024-02-27T16:52:34Z
2024-02-27T16:52:34Z
lhoestq
2,151,199,156
Fix metrics
fixes #2496
Fix metrics: fixes #2496
closed
2024-02-23T14:24:27Z
2024-02-23T14:37:04Z
2024-02-23T14:37:04Z
severo
2,150,985,120
Cache and Queue metrics should not include obsolete values
For example, after deleting the step "config-split-names-from-streaming", we still had a lot of entries for it in the metrics. We should check and delete them automatically, as they are not present anymore in the cache collection. ``` db.cacheTotalMetric.deleteMany({kind: "config-split-names-from-streaming"}) { ac...
Cache and Queue metrics should not include obsolete values: For example, after deleting the step "config-split-names-from-streaming", we still had a lot of entries for it in the metrics. We should check and delete them automatically, as they are not present anymore in the cache collection. ``` db.cacheTotalMetric.d...
closed
2024-02-23T12:27:03Z
2024-02-23T15:06:25Z
2024-02-23T14:37:05Z
severo
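The cleanup described in the issue above boils down to a set difference: metric entries whose `kind` is no longer a step in the processing graph are obsolete and can be deleted (as done manually with `deleteMany` in the issue). A minimal sketch of that selection, with hypothetical names rather than the actual code:

```python
def kinds_to_delete(metric_kinds: set[str], processing_steps: set[str]) -> set[str]:
    # metric entries whose kind no longer exists in the processing graph are obsolete
    return metric_kinds - processing_steps
```

For instance, with `metric_kinds = {"config-split-names-from-streaming", "dataset-config-names"}` and a graph that only contains `"dataset-config-names"`, the function returns the obsolete `"config-split-names-from-streaming"` kind, which would then be removed from the metrics collection.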
2,150,980,402
Cache/Queue metrics should not be negative
In the cache metrics collection, we have entries like: ``` { _id: ObjectId("65d60409acf7b3719825797e"), error_code: 'PreviousStepStillProcessingError', http_status: 500, kind: 'split-duckdb-index-010', total: -67 } ``` The total should never be lower than 0
Cache/Queue metrics should not be negative: In the cache metrics collection, we have entries like: ``` { _id: ObjectId("65d60409acf7b3719825797e"), error_code: 'PreviousStepStillProcessingError', http_status: 500, kind: 'split-duckdb-index-010', total: -67 } ``` The total should never be lower than ...
open
2024-02-23T12:23:58Z
2024-03-08T15:42:26Z
null
severo
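One way to enforce the non-negative invariant described above is to clamp decrements at zero. The real fix would more likely be a guarded MongoDB update (e.g. filtering on `total > 0` before applying `$inc: {total: -1}`), but the invariant itself can be sketched in memory:

```python
def decrement_total(doc: dict) -> dict:
    # never let the counter drop below zero, even if decrements outnumber increments
    doc["total"] = max(doc["total"] - 1, 0)
    return doc
```

With this guard, a document already at `total: 0` stays at zero instead of going negative like the `total: -67` entry shown in the issue.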
2,150,839,878
remove MetricsMongoResource and METRICS_MONGOENGINE_ALIAS
fixes https://github.com/huggingface/datasets-server/issues/2210
remove MetricsMongoResource and METRICS_MONGOENGINE_ALIAS: fixes https://github.com/huggingface/datasets-server/issues/2210
closed
2024-02-23T10:53:03Z
2024-02-23T11:24:01Z
2024-02-23T11:24:00Z
severo
2,150,067,161
set updated_at field to the same value for all steps of a dataset
After the migrations: - `_20240221103200_cache_merge_config_split_names` - `_20240221160700_cache_merge_split_first_rows` the updated_at field is not accurate anymore, which makes the daily cronjob backfill a lot of entries and creates unnecessary jobs. To fix that, we take the updated_at value for the root...
set updated_at field to the same value for all steps of a dataset: After the migrations: - `_20240221103200_cache_merge_config_split_names` - `_20240221160700_cache_merge_split_first_rows` the updated_at field is not accurate anymore, leading to backfilling a lot of them (in the daily cronjob), leading to unnecess...
closed
2024-02-22T22:43:05Z
2024-02-23T12:57:07Z
2024-02-23T12:57:06Z
severo
2,150,032,390
Bump the pip group across 2 directories with 1 update
Bumps the pip group with 1 update in the /front/admin_ui directory: [gradio](https://github.com/gradio-app/gradio). Updates `gradio` from 4.18.0 to 4.19.2 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gradio-app/gradio/releases">gradio's releases</a>.</em></p> <blockquote> ...
Bump the pip group across 2 directories with 1 update: Bumps the pip group with 1 update in the /front/admin_ui directory: [gradio](https://github.com/gradio-app/gradio). Updates `gradio` from 4.18.0 to 4.19.2 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gradio-app/gradio/...
closed
2024-02-22T22:15:37Z
2024-02-23T07:21:37Z
2024-02-22T22:53:16Z
dependabot[bot]
2,149,961,591
retry error PreviousStepStillProcessingError - at least every day
null
retry error PreviousStepStillProcessingError - at least every day:
closed
2024-02-22T21:22:26Z
2024-02-22T21:22:36Z
2024-02-22T21:22:35Z
severo
2,149,587,793
upgrade ruff to >=2.1.0
fixes #2479
upgrade ruff to >=2.1.0: fixes #2479
closed
2024-02-22T17:42:40Z
2024-02-22T18:00:18Z
2024-02-22T18:00:17Z
severo
2,149,560,027
remove unnecessary triggers from processing graph
null
remove unnecessary triggers from processing graph:
closed
2024-02-22T17:26:30Z
2024-02-22T17:59:25Z
2024-02-22T17:59:24Z
severo
2,149,253,370
count is deprecated
null
count is deprecated:
closed
2024-02-22T14:51:00Z
2024-02-22T14:52:55Z
2024-02-22T14:52:54Z
severo
2,149,206,724
use the new job runner version when migrating
null
use the new job runner version when migrating:
closed
2024-02-22T14:30:45Z
2024-02-22T14:32:10Z
2024-02-22T14:32:09Z
severo
2,148,702,361
Add schema.org/license and schema.org/identifier to Croissant.
null
Add schema.org/license and schema.org/identifier to Croissant.:
closed
2024-02-22T10:15:01Z
2024-02-22T13:57:29Z
2024-02-22T13:57:29Z
marcenacp
2,148,397,951
Update cryptography to 42.0.4 to fix vulnerability
Update `cryptography` to 42.0.4 to fix vulnerability: - Affected versions: >= 38.0.0, < 42.0.4 This will fix 12 Dependabot alerts.
Update cryptography to 42.0.4 to fix vulnerability: Update `cryptography` to 42.0.4 to fix vulnerability: - Affected versions: >= 38.0.0, < 42.0.4 This will fix 12 Dependabot alerts.
closed
2024-02-22T07:26:13Z
2024-02-22T09:14:57Z
2024-02-22T09:14:56Z
albertvillanova
2,147,809,473
Increase resources
Same as https://github.com/huggingface/datasets-server/pull/2468, increasing resources to 10/80/10 since we have like 28K pending jobs for split-duckdb-index-010
Increase resources: Same as https://github.com/huggingface/datasets-server/pull/2468, increasing resources to 10/80/10 since we have like 28K pending jobs for split-duckdb-index-010
closed
2024-02-21T22:18:48Z
2024-02-21T22:29:03Z
2024-02-21T22:29:02Z
AndreaFrancis
2,147,421,947
Update pandas version to 2.2
Part of https://github.com/huggingface/datasets-server/issues/1914 Depends on https://github.com/huggingface/datasets-server/pull/2482
Update pandas version to 2.2: Part of https://github.com/huggingface/datasets-server/issues/1914 Depends on https://github.com/huggingface/datasets-server/pull/2482
closed
2024-02-21T18:23:51Z
2024-02-23T16:47:44Z
2024-02-23T16:47:43Z
AndreaFrancis
2,147,388,275
Switch to duckdb 010 in search/filter service
Part of https://github.com/huggingface/datasets-server/issues/1914 Following https://github.com/huggingface/datasets-server/pull/2463 **WARNING:** This PR should be merged only once all "split-duckdb-index-010" entries have been computed. And workers should be scaled down to 0 before deploying. Changes done: - Mi...
Switch to duckdb 010 in search/filter service: Part of https://github.com/huggingface/datasets-server/issues/1914 Following https://github.com/huggingface/datasets-server/pull/2463 **WARNING:** This PR should be merged only once all "split-duckdb-index-010" entries have been computed. And workers should be scaled d...
closed
2024-02-21T18:04:22Z
2024-02-23T16:27:30Z
2024-02-23T16:27:29Z
AndreaFrancis
2,146,938,324
temporary hardcode filter target revision
null
temporary hardcode filter target revision:
closed
2024-02-21T14:47:22Z
2024-02-21T14:47:40Z
2024-02-21T14:47:40Z
AndreaFrancis
2,146,890,292
fix search/filter temporary hardcode target revision
null
fix search/filter temporary hardcode target revision:
closed
2024-02-21T14:25:54Z
2024-02-21T14:37:33Z
2024-02-21T14:37:32Z
AndreaFrancis
2,146,883,011
Upgrade ruff?
My VSCode plugin says: ``` Ruff >=0.2.1 required, but found 0.1.3 at /home/slesage/hf/datasets-server/services/worker/.venv/bin/ruff ```
Upgrade ruff?: My VSCode plugin says: ``` Ruff >=0.2.1 required, but found 0.1.3 at /home/slesage/hf/datasets-server/services/worker/.venv/bin/ruff ```
closed
2024-02-21T14:23:16Z
2024-02-22T18:00:19Z
2024-02-22T18:00:19Z
severo
2,146,813,357
Remove logic for "manual download" here?
Restriction to manual downloads is a feature for script-based datasets. Should we stop handling this in the code?
Remove logic for "manual download" here?: Restriction to manual downloads is a feature for script-based datasets. Should we stop handling this in the code?
closed
2024-02-21T13:51:55Z
2024-07-31T08:23:00Z
2024-07-31T08:23:00Z
severo
2,144,730,928
remove parallel steps
With this PR, we remove the weird concept of parallel steps (which do not exist in DAG software/theory, I think). Benefits: - multiple parts of the code are now simpler - fixes #2460 Drawbacks: - we might end up computing from "streaming" twice when the Parquet has any error. Anyway, it should be 1. exception...
remove parallel steps: With this PR, we remove the weird concept of parallel steps (which do not exist in DAG software/theory, I think). Benefits: - multiple parts of the code are now simpler - fixes #2460 Drawbacks: - we might end up computing from "streaming" twice when the Parquet has any error. Anyway, it...
closed
2024-02-20T16:07:15Z
2024-02-22T14:03:59Z
2024-02-22T14:03:58Z
severo
2,144,471,994
Remove unnecessary dependencies once we remove support for all script based datasets
Normally: services/worker should only depend on `datasets[audio,image]` to be able to load all the datasets. No need for specific ones like `pdf2image`, `kenlm` etc.
Remove unnecessary dependencies once we remove support for all script based datasets: Normally: services/worker should only depend on `datasets[audio,image]` to be able to load all the datasets. No need for specific ones like `pdf2image`, `kenlm` etc.
closed
2024-02-20T14:11:17Z
2024-05-29T09:18:37Z
2024-05-29T09:18:37Z
severo
2,144,467,936
remove dependency 'trec-car-tools' from worker
fixes #2474
remove dependency 'trec-car-tools' from worker: fixes #2474
closed
2024-02-20T14:09:21Z
2024-02-20T14:57:32Z
2024-02-20T14:57:31Z
severo
2,144,460,242
Remove vendored dependency `trec-car-tools`
See https://github.com/huggingface/datasets-server/tree/main/services/worker/vendors/trec-car-tools It's not required anymore since we don't allow datasets with script.
Remove vendored dependency `trec-car-tools`: See https://github.com/huggingface/datasets-server/tree/main/services/worker/vendors/trec-car-tools It's not required anymore since we don't allow datasets with script.
closed
2024-02-20T14:05:35Z
2024-02-20T14:57:32Z
2024-02-20T14:57:32Z
severo
2,144,064,689
Remove the parallel steps?
I think it would be simpler to delete the concept of "parallel" steps. Currently, we have two pairs of parallel steps: 1. `config-split-names-from-streaming` and `config-split-names-from-info` 2. `split-first-rows-from-streaming` and `split-first-rows-from-parquet` It generates some complications: - we don't...
Remove the parallel steps?: I think it would be simpler to delete the concept of "parallel" steps. Currently, we have two pairs of parallel steps: 1. `config-split-names-from-streaming` and `config-split-names-from-info` 2. `split-first-rows-from-streaming` and `split-first-rows-from-parquet` It generates som...
closed
2024-02-20T10:37:24Z
2024-02-22T14:04:00Z
2024-02-22T14:04:00Z
severo
2,143,176,730
Revert "increase resources to 10/80/10 (#2468)"
This reverts commit f495e2c437689519f6bed507fb43b5a6f2d9b39c.
Revert "increase resources to 10/80/10 (#2468)": This reverts commit f495e2c437689519f6bed507fb43b5a6f2d9b39c.
closed
2024-02-19T21:34:36Z
2024-02-19T21:35:09Z
2024-02-19T21:34:55Z
severo