Columns: id (int64, 959M–2.55B) · title (string, 3–133 chars) · body (string, 1–65.5k chars) · description (string, 5–65.6k chars) · state (string, 2 values) · created_at (string, 20 chars) · updated_at (string, 20 chars) · closed_at (string, 20 chars) · user (string, 174 values)
1,601,597,185
Turn /parquet-and-dataset-info into a config-level job
We should: - compute the values for every config independently: `config--parquet-and-dataset-info` - no need to compute things at the dataset level See #735
closed
2023-02-27T17:12:18Z
2023-05-15T08:18:19Z
2023-05-15T08:18:19Z
severo
1,601,595,175
Turn /parquet into a config-level job
We should: - compute the values for every config independently: `config--parquet` - compute the dataset-level response `dataset--parquet` each time a `config--parquet` is computed (allowing partial responses: some config responses can be missing or erroneous) - return the appropriate response on /parquet, depending ...
closed
2023-02-27T17:11:15Z
2023-03-15T12:07:25Z
2023-03-15T11:03:49Z
severo
1,601,593,151
Turn /dataset-info into a config-level job
We should: - compute the values for every config independently: `config--dataset-info` - compute the dataset-level response `dataset--dataset-info` each time a `config--dataset-info` is computed (allowing partial responses: some config responses can be missing or erroneous) - return the appropriate response on /data...
closed
2023-02-27T17:10:08Z
2023-03-24T10:46:57Z
2023-03-24T10:46:57Z
severo
1,601,470,902
Turn /sizes into a config-level job
We should: - compute the values for every config independently: `config--sizes` - compute the dataset-level response `dataset--sizes` each time a `config--sizes` is computed (allowing partial responses: some config responses can be missing or erroneous) - return the appropriate response on /sizes, depending on the i...
closed
2023-02-27T16:01:43Z
2023-03-14T15:36:13Z
2023-03-14T15:36:13Z
severo
1,601,357,867
Ensure the dates stored in mongo (jobs, cache) are localized
See https://github.com/huggingface/datasets-server/pull/850#discussion_r1118518118.
closed
2023-02-27T14:59:58Z
2024-08-22T09:49:11Z
2024-08-22T09:49:11Z
severo
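The issue above is about storing timezone-aware ("localized") dates in MongoDB. A general sketch of the usual pattern, not the project's actual fix: store UTC-aware datetimes and re-attach `tzinfo` on read, since MongoDB drivers typically hand back naive datetimes.

```python
from datetime import datetime, timezone

def to_utc_aware(dt: datetime) -> datetime:
    """Attach UTC tzinfo to a naive datetime (as typically returned by a
    MongoDB driver), or convert an already-aware one to UTC."""
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

naive = datetime(2023, 2, 27, 14, 59, 58)
aware = to_utc_aware(naive)
print(aware.isoformat())  # 2023-02-27T14:59:58+00:00
```

Ordering comparisons between naive and aware datetimes raise `TypeError` in Python, which is why normalizing at the storage boundary matters.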
1,601,220,984
Avoid collisions in unicity_id field in the jobs collection
See https://github.com/huggingface/datasets-server/pull/837#pullrequestreview-1315579718
closed
2023-02-27T13:47:38Z
2023-04-27T07:22:18Z
2023-04-27T07:22:18Z
severo
1,601,219,803
Unit test the database migration scripts
See https://github.com/huggingface/datasets-server/pull/837#pullrequestreview-1315579718
closed
2023-02-27T13:46:52Z
2023-03-29T18:06:09Z
2023-03-29T18:06:09Z
severo
1,601,216,873
Fix the id field in the migration scripts
See https://github.com/huggingface/datasets-server/pull/825#discussion_r1115923999
closed
2023-02-27T13:44:58Z
2023-05-01T15:04:04Z
2023-05-01T15:04:04Z
severo
1,601,119,285
Dataset Viewer issue for UrukHan/t5-russian-summarization
### Link https://huggingface.co/datasets/UrukHan/t5-russian-summarization ### Description The dataset viewer is not working for dataset UrukHan/t5-russian-summarization. Error details: ``` Error code: ClientConnectionError ```
closed
2023-02-27T12:46:32Z
2023-02-27T12:53:00Z
2023-02-27T12:53:00Z
islombek751
1,601,092,930
Contribute to https://github.com/huggingface/huggingface.js?
https://github.com/huggingface/huggingface.js is a JS client for the Hub and inference. We could propose to add a client for the datasets-server.
closed
2023-02-27T12:27:43Z
2023-04-08T15:04:09Z
2023-04-08T15:04:09Z
severo
1,601,000,591
Dataset Viewer issue for bigscience/P3
### Link https://huggingface.co/datasets/bigscience/P3 ### Description The dataset viewer is not working for dataset bigscience/P3. Error details: ``` Error code: ClientConnectionError ```
closed
2023-02-27T11:28:14Z
2023-03-01T12:30:34Z
2023-03-01T12:30:34Z
FangxuLiu
1,600,892,793
Dataset Viewer issue for openai/summarize_from_feedback
### Link https://huggingface.co/datasets/openai/summarize_from_feedback ### Description The dataset viewer is not working for dataset openai/summarize_from_feedback. Error details: ``` Error code: ClientConnectionError ```
closed
2023-02-27T10:23:18Z
2023-02-27T10:29:46Z
2023-02-27T10:29:45Z
lewtun
1,600,782,797
Turn `get_new_splits` into an abstract method
See https://github.com/huggingface/datasets-server/pull/839#issuecomment-1443264927. cc @AndreaFrancis
closed
2023-02-27T09:16:40Z
2023-05-10T16:05:12Z
2023-05-10T16:05:12Z
severo
1,600,717,827
Support all the characters in dataset, config and split
For example, a space is an allowed character in a config, while it's not supported in datasets-server. https://discuss.huggingface.co/t/problem-with-dataset-preview-with-audio-files/31475/3?u=severo cc @polinaeterna
closed
2023-02-27T08:32:34Z
2023-06-26T07:34:40Z
2023-06-26T07:34:40Z
severo
1,600,713,870
Store the parquet metadata in their own file?
See https://github.com/huggingface/datasets/issues/5380#issuecomment-1444281177 > From looking at Arrow's source, it seems Parquet stores metadata at the end, which means one needs to iterate over a Parquet file's data before accessing its metadata. We could mimic Dask to address this "limitation" and write metadata...
closed
2023-02-27T08:29:12Z
2023-05-01T15:04:07Z
2023-05-01T15:04:07Z
severo
1,599,353,609
POC: Adding mongo TTL index to Jobs
Will fix https://github.com/huggingface/datasets-server/issues/818 From mongo doc [TTL index](https://www.mongodb.com/docs/manual/core/index-ttl/) : > TTL indexes are special single-field indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time or at a specific...
closed
2023-02-24T22:33:59Z
2023-02-27T18:54:09Z
2023-02-27T18:51:08Z
AndreaFrancis
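A dependency-free sketch of the TTL index the PoC above describes; the field name `finished_at` and the one-week expiry are assumptions, not the actual values from the PR:

```python
# MongoDB's TTL monitor deletes a document once
# finished_at + expireAfterSeconds is in the past.
TTL_SECONDS = 7 * 24 * 3600  # keep finished jobs for one week (assumed)

# mongo shell equivalent:
#   db.jobs.createIndex({finished_at: 1}, {expireAfterSeconds: 604800})
ttl_index = {
    "keys": [("finished_at", 1)],
    "options": {"expireAfterSeconds": TTL_SECONDS, "name": "jobs_ttl"},
}
print(ttl_index["options"]["expireAfterSeconds"])  # 604800
```

Note that the TTL monitor runs periodically (about once a minute by default), so expiry is not instantaneous.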
1,599,176,975
Reorganize executor
Took @severo 's comments from https://github.com/huggingface/datasets-server/pull/827 and https://github.com/huggingface/datasets-server/pull/824 In particular: - I moved the queue related stuff to queue.py - I moved all the job runner related stuff to job_runner.py - I shortened the env variable names - I added...
closed
2023-02-24T19:29:09Z
2023-02-28T09:51:44Z
2023-02-28T09:49:13Z
lhoestq
1,598,844,039
Serve openapi.json from the docs
Currently https://github.com/huggingface/datasets-server/blob/main/chart/static-files/openapi.json is a file in chart; every change in it should trigger a new version of the Chart. But, semantically, it does not belong to the chart and should be part of the <strike>API service</strike> docs.
closed
2023-02-24T15:24:29Z
2024-06-19T14:03:22Z
2024-06-19T14:03:22Z
severo
1,598,805,446
Setup argocd action
null
closed
2023-02-24T15:06:05Z
2023-02-24T15:35:24Z
2023-02-24T15:32:32Z
severo
1,598,639,547
CI is failing due to vulnerability in markdown-it-py 2.1.0
Vulnerabilities: https://github.com/huggingface/datasets-server/actions/runs/4262829184/jobs/7418801386 ``` Found 2 known vulnerabilities in 1 package Name Version ID Fix Versions -------------- ------- ------------------- ------------ markdown-it-py 2.1.0 GHSA-jrwr-5x3p-hvc3 2.2.0 ma...
closed
2023-02-24T13:32:13Z
2023-02-24T13:55:21Z
2023-02-24T13:55:21Z
albertvillanova
1,598,584,025
[once in hfh] improve /parquet-and-dataset-info job runner
Once https://github.com/huggingface/huggingface_hub/pull/1331 is released, upgrade huggingface_hub and rework how we create the refs/convert/parquet branch in https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/job_runners/parquet_and_dataset_info.py#L805 to have an empty history.
closed
2023-02-24T12:58:52Z
2023-04-11T14:14:41Z
2023-04-11T14:14:41Z
severo
1,598,448,053
chore: 🤖 upgrade dependencies to fix vulnerability
replaces #838
closed
2023-02-24T11:23:22Z
2023-02-24T13:57:53Z
2023-02-24T13:55:20Z
severo
1,598,324,097
Some datasets on the hub have a broken refs/convert/parquet
See https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/tree/refs%2Fconvert%2Fparquet <img width="1074" alt="Capture d’écran 2023-02-24 à 11 11 35" src="https://user-images.githubusercontent.com/1676121/221152375-15b2a4a6-0017-4086-8824-88906a161478.png"> It seems like the branch has been created f...
closed
2023-02-24T10:12:09Z
2023-04-13T15:04:18Z
2023-04-13T15:04:18Z
severo
1,598,233,383
Add pdb to avoid disruption when we update kubernetes
null
closed
2023-02-24T09:13:09Z
2023-02-24T10:23:54Z
2023-02-24T10:21:21Z
XciD
1,598,173,782
Dataset Viewer issue for bridgeconn/snow-mountain
### Link https://huggingface.co/datasets/bridgeconn/snow-mountain ### Description The dataset viewer is not working for dataset bridgeconn/snow-mountain. Error details: ``` Error code: ResponseNotReady ```
closed
2023-02-24T08:34:01Z
2023-02-28T08:28:54Z
2023-02-28T08:28:54Z
anjalyjayakrishnan
1,597,557,654
Dataset Viewer issue for artem9k/ai-text-detection-pile
### Link https://huggingface.co/datasets/artem9k/ai-text-detection-pile ### Description The dataset viewer is not working for dataset artem9k/ai-text-detection-pile. I am trying to load a jsonl file Error details: ``` Error code: ResponseNotReady ```
closed
2023-02-23T21:33:10Z
2023-02-28T08:29:32Z
2023-02-28T08:29:31Z
sumo43
1,597,547,115
Dataset Viewer issue for birgermoell/synthetic_compassion
### Link https://huggingface.co/datasets/birgermoell/synthetic_compassion ### Description The dataset viewer is not working for dataset birgermoell/synthetic_compassion. Error details: ``` Error code: ClientConnectionError ``` I'm getting this error when creating my new dataset. I have a metadata.csv fi...
closed
2023-02-23T21:22:32Z
2023-03-01T12:30:49Z
2023-03-01T12:30:48Z
BirgerMoell
1,597,542,036
Adding missing function on split-names-from-dataset-info worker
The `get_new_splits` function is used when creating child jobs; it was missing in the new worker `split-names-from-dataset-info`.
closed
2023-02-23T21:17:40Z
2023-02-27T09:17:08Z
2023-02-24T20:30:39Z
AndreaFrancis
1,597,447,850
chore(deps): bump markdown-it-py from 2.1.0 to 2.2.0 in /front/admin_ui
Bumps [markdown-it-py](https://github.com/executablebooks/markdown-it-py) from 2.1.0 to 2.2.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/executablebooks/markdown-it-py/releases">markdown-it-py's releases</a>.</em></p> <blockquote> <h2>v2.2.0</h2> <h2>What's Changed</h2> ...
closed
2023-02-23T19:55:42Z
2023-02-24T12:53:30Z
2023-02-24T12:53:19Z
dependabot[bot]
1,597,359,687
Fix typos in split-names-from-streaming
null
closed
2023-02-23T18:45:40Z
2023-02-27T16:22:09Z
2023-02-27T16:19:24Z
AndreaFrancis
1,597,200,392
Should we run the complete CI on push to main?
Currently we only build the docker images when merging a PR to `main`, and we have to rely on the PR CI to be all green. Maybe we could launch everything when we merge into `main`, just to be sure, and to keep a trace of possible issues.
closed
2023-02-23T16:51:11Z
2023-08-07T15:58:48Z
2023-08-05T15:03:58Z
severo
1,596,911,335
Move `required_by_dataset_viewer` to services/api
See https://github.com/huggingface/datasets-server/pull/817#issuecomment-1431477692
closed
2023-02-23T13:51:55Z
2023-06-14T12:15:06Z
2023-06-14T12:15:05Z
severo
1,595,827,930
Endpoint respond by input type
Part of https://github.com/huggingface/datasets-server/issues/755 This change will allow endpoints to respond based on the input type, using a list of processing steps, e.g.: - /splits with a dataset param will reach out to the /splits cache kind - /splits with a config param will reach out to /splits-from-streaming first and then...
closed
2023-02-22T21:04:03Z
2023-03-01T14:40:18Z
2023-03-01T14:37:44Z
AndreaFrancis
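A minimal sketch of the routing idea in the PR above: an endpoint tries an ordered list of cache kinds depending on the input type. The kind names come from the PR description; the data structure itself is hypothetical.

```python
# Hypothetical mapping: endpoint -> input type -> ordered cache kinds to try.
STEPS_BY_INPUT_TYPE = {
    "/splits": {
        "dataset": ["/splits"],
        "config": ["/split-names-from-streaming", "/split-names-from-dataset-info"],
    },
}

def cache_kinds_to_try(endpoint: str, input_type: str) -> list:
    """Return the cache kinds to query, in order, for this request shape."""
    return STEPS_BY_INPUT_TYPE[endpoint][input_type]

print(cache_kinds_to_try("/splits", "dataset"))  # ['/splits']
```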
1,595,323,349
Lower parquet row group size for image datasets
REQUIRES test_get_writer_batch_size to be merged, and to update the `datasets` version to use this feature. This should help optimize random access to parquet files for https://github.com/huggingface/datasets-server/pull/687/files
closed
2023-02-22T15:34:07Z
2023-04-21T14:12:40Z
2023-04-21T14:09:52Z
lhoestq
1,595,294,878
Fix CI mypy after datasets 2.10.0 release
Fix #831.
closed
2023-02-22T15:16:56Z
2023-02-22T15:36:03Z
2023-02-22T15:33:00Z
albertvillanova
1,595,289,463
CI is broken after datasets 2.10.0 release
After updating `datasets` dependency to 2.10.0, the CI is broken: ``` error: Skipping analyzing "datasets": module is installed, but missing library stubs or py.typed marker ``` See: https://github.com/huggingface/datasets-server/actions/runs/4243203372/jobs/7375645159
closed
2023-02-22T15:13:44Z
2023-02-22T15:33:15Z
2023-02-22T15:33:15Z
albertvillanova
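The mypy error above is the standard "missing library stubs or py.typed marker" failure. The usual workaround is a per-module override; whether the project used this exact fix (or this config location) is an assumption:

```toml
# pyproject.toml (sketch)
[[tool.mypy.overrides]]
module = "datasets.*"
ignore_missing_imports = true
```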
1,595,123,903
Add `worker_version` to queued `Job`
This can be useful when e.g. killing a zombie job because right now we assume that the current worker version is the one the worker was using when the job started. Then we can properly kill zombie jobs that come from an older worker version. See https://github.com/huggingface/datasets-server/pull/827#discussion_r...
closed
2023-02-22T13:39:54Z
2023-04-03T09:19:19Z
2023-04-02T15:03:47Z
lhoestq
1,595,108,195
Update datasets dependency to 2.10.0 version
Close #828.
closed
2023-02-22T13:28:59Z
2023-02-24T14:43:44Z
2023-02-24T14:40:51Z
albertvillanova
1,595,098,486
Update datasets to 2.10.0
After 2.10.0 `datasets` release, update dependencies on it.
closed
2023-02-22T13:22:19Z
2023-02-24T14:40:53Z
2023-02-24T14:40:53Z
albertvillanova
1,592,084,545
Set error response when zombie job is killed
Last step for https://github.com/huggingface/datasets-server/issues/741 regarding zombies
closed
2023-02-20T15:40:57Z
2023-02-24T17:38:06Z
2023-02-22T13:59:50Z
lhoestq
1,589,688,709
Kill zombies
I defined zombies as started jobs with a `last_heartbeat` that is older than `max_missing_heartbeats * heartbeat_time_interval_seconds`. Then I added `kill_zombies` to the worker executor. It runs every `kill_zombies_time_interval_seconds` seconds and sets the zombie jobs' status to ERROR. Given that heartbeats happ...
closed
2023-02-17T17:03:15Z
2023-02-23T18:21:59Z
2023-02-17T18:15:05Z
lhoestq
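The zombie definition quoted above can be expressed directly; the parameter values here are assumptions:

```python
from datetime import datetime, timedelta, timezone

# Assumed values; the names mirror the PR description above.
MAX_MISSING_HEARTBEATS = 5
HEARTBEAT_TIME_INTERVAL_SECONDS = 60

def is_zombie(last_heartbeat: datetime, now: datetime) -> bool:
    """A started job is a zombie if its last heartbeat is older than
    max_missing_heartbeats * heartbeat_time_interval_seconds."""
    threshold = timedelta(
        seconds=MAX_MISSING_HEARTBEATS * HEARTBEAT_TIME_INTERVAL_SECONDS
    )
    return now - last_heartbeat > threshold

now = datetime(2023, 2, 17, 18, 0, tzinfo=timezone.utc)
print(is_zombie(now - timedelta(minutes=10), now))  # True
print(is_zombie(now - timedelta(minutes=2), now))   # False
```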
1,588,325,083
Renaming split-names to split-names-from-streaming
Part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to handle two different sources for split names, the previous worker/processing step/cache kind has to be changed to /split-names-from-streaming (there already exists /split-names-from-dataset-info). TODO: Once this PR is mer...
closed
2023-02-16T20:33:29Z
2023-02-27T13:45:21Z
2023-02-17T14:33:40Z
AndreaFrancis
1,586,393,904
Add heartbeat
Add heartbeat to workers. It adds a `last_heartbeat` field to documents in the queue. The field is not mandatory - it only appears for jobs that are or were running when a heartbeat happens (once per minute by default). ## Implementation details I added a `WorkerExecutor` that runs the worker loop in a **s...
closed
2023-02-15T19:07:08Z
2023-02-27T10:50:48Z
2023-02-16T22:54:15Z
lhoestq
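A sketch of the heartbeat mechanism described above, using a daemon thread; the actual implementation may differ (the description mentions running the worker loop separately):

```python
import threading
import time

def start_heartbeat(update_last_heartbeat, interval_seconds=60.0):
    """Call `update_last_heartbeat` every `interval_seconds` in a daemon
    thread until the returned event is set. A sketch of the mechanism,
    not the actual WorkerExecutor."""
    stop = threading.Event()

    def loop():
        # Event.wait doubles as an interruptible sleep: it returns True
        # as soon as stop.set() is called, ending the loop promptly.
        while not stop.wait(interval_seconds):
            update_last_heartbeat()

    threading.Thread(target=loop, daemon=True).start()
    return stop

# Demo with a short interval: record a few heartbeats, then stop.
beats = []
stop = start_heartbeat(lambda: beats.append(time.monotonic()), interval_seconds=0.01)
time.sleep(0.1)
stop.set()
print(len(beats) > 0)  # True
```

In production, `update_last_heartbeat` would write the current time to the job document in the queue, which is what the zombie killer later checks.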
1,585,902,949
Update dependencies
This PR upgrades Starlette (vulnerability) (already done by #821 for front/ - this PR also fixes services/admin and services/api) It also upgrades all the dependencies to the next minor version. I checked the important ones: starlette, uvicorn. And nearly all the changes come from the upgrade of mypy to v1 -> I ...
closed
2023-02-15T13:55:35Z
2023-02-15T15:35:42Z
2023-02-15T15:32:56Z
severo
1,585,442,523
Add /dataset-status to the admin panel
See https://github.com/huggingface/datasets-server/pull/815
closed
2023-02-15T08:41:15Z
2023-04-11T11:47:25Z
2023-04-11T11:47:25Z
severo
1,584,978,575
chore(deps): bump starlette from 0.23.1 to 0.25.0 in /front/admin_ui
Bumps [starlette](https://github.com/encode/starlette) from 0.23.1 to 0.25.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/encode/starlette/releases">starlette's releases</a>.</em></p> <blockquote> <h2>Version 0.25.0</h2> <h3>Fixed</h3> <ul> <li>Limit the number of fields a...
closed
2023-02-14T23:30:07Z
2023-02-15T14:00:53Z
2023-02-15T14:00:52Z
dependabot[bot]
1,584,733,391
Adding new job runner for split names based on dataset info cached response
As part of https://github.com/huggingface/datasets-server/issues/755, we will need a new job runner to compute split names from the dataset-info response.
closed
2023-02-14T19:48:50Z
2023-02-15T17:06:17Z
2023-02-15T17:03:33Z
AndreaFrancis
1,584,661,389
Change db migrations from jobs to init containers
From comment https://github.com/huggingface/datasets-server/pull/810#discussion_r1105871837 > there are so many past jobs that we don't need anymore that we could delete them periodically. Manually or with a cron job. Otherwise, the migration jobs take forever, and the helm update timeouts and fails @severo suges...
closed
2023-02-14T18:47:53Z
2023-04-20T15:04:08Z
2023-04-20T15:04:08Z
AndreaFrancis
1,584,659,576
Periodically clean the queue database deleting the old, finished jobs
From comment on https://github.com/huggingface/datasets-server/pull/810#discussion_r1105871837 > there are so many past jobs that we don't need anymore that we could delete them periodically. Manually or with a cron job. Otherwise, the migration jobs take forever, and the helm update timeouts and fails. @severo s...
closed
2023-02-14T18:46:10Z
2023-02-27T18:51:10Z
2023-02-27T18:51:10Z
AndreaFrancis
1,584,622,342
Separate endpoint from processing step logic
First part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for the same endpoint: - Changed the dependency between endpoint-cache_kind/job_type/name in ProcessingStep - Moved configuration for endpoints to the api service, it should n...
closed
2023-02-14T18:14:49Z
2023-02-23T13:52:29Z
2023-02-17T15:38:41Z
AndreaFrancis
1,584,452,027
Create an orchestrator
## Proposal We should add a new service: orchestrator. On one side, it would receive events: - webhooks when a dataset has changed (added, updated, deleted, changed to gated, etc.) - manual trigger: refresh a dataset, update all the datasets for a specific step, etc. On the other side, it would command the j...
closed
2023-02-14T16:17:44Z
2024-02-02T16:59:23Z
2024-02-02T16:59:23Z
severo
1,584,432,443
feat: 🎸 add a new admin endpoint: /dataset-status
While looking at https://github.com/huggingface/datasets-server/issues/764 and https://github.com/huggingface/datasets-server/issues/736#issuecomment-1412242342, I added a new admin endpoint that gives the current status of a dataset. I think it can help to get insights about a dataset when doing support manually. I...
closed
2023-02-14T16:06:22Z
2023-02-15T08:43:33Z
2023-02-15T08:40:47Z
severo
1,583,861,483
refactor: 💡 factorize the workers templates
null
closed
2023-02-14T10:03:59Z
2023-02-14T16:15:51Z
2023-02-14T16:12:23Z
severo
1,583,840,800
fix: 🐛 add missing config
null
closed
2023-02-14T09:51:14Z
2023-02-14T09:54:59Z
2023-02-14T09:52:10Z
severo
1,583,772,932
fix: 🐛 add missing volumes
null
closed
2023-02-14T09:07:25Z
2023-02-14T09:25:05Z
2023-02-14T09:22:04Z
severo
1,583,671,084
fix: 🐛 ensure all the workers have the same access to the disk
null
closed
2023-02-14T07:52:39Z
2023-02-14T07:57:38Z
2023-02-14T07:54:56Z
severo
1,583,124,191
WIP - Separate endpoint from processing step
First part of the code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for the same endpoint: - Changed the dependency between endpoint-cache_kind in ProcessingStep - Moved configuration for endpoints to the api service; it should not exist in other laye...
WIP- Separate endoint from processing step: Firs part of code for https://github.com/huggingface/datasets-server/issues/755 Since we are going to have two different processing steps for one same endpoint: - Changed the dependency between endpoint-cache_kind in ProcessingStep - Moved configuration for endpoints to ap...
closed
2023-02-13T21:54:54Z
2023-02-14T18:19:48Z
2023-02-14T17:45:54Z
AndreaFrancis
1,582,859,313
Ignore big datasets from external files
Raise an error for big datasets that use a loading script to download data files external to HF. It was the only case left for a big dataset to not be ignored. Hopefully it makes the `/parquet-and-dataset-info` job stop wasting too many resources on big datasets. Close https://github.com/huggingface/datasets-serv...
Ignore big datasets from external files: Raise an error for big datasets that use a loading script to download data files external to HF. It was the only case left for a big dataset to not be ignored. Hopefully it makes the `/parquet-and-dataset-info` job stop wasting too many resources on big datasets. Close htt...
closed
2023-02-13T18:34:04Z
2023-02-15T14:13:25Z
2023-02-15T14:10:18Z
lhoestq
1,582,609,623
Update chart
depends on #807
Update chart: depends on #807
closed
2023-02-13T15:53:14Z
2023-02-13T18:19:19Z
2023-02-13T18:16:08Z
severo
1,582,587,195
chore: 🤖 add VERSION file
null
chore: 🤖 add VERSION file:
closed
2023-02-13T15:40:21Z
2023-02-13T18:18:16Z
2023-02-13T18:15:35Z
severo
1,582,406,784
Parquet export: ignore ALL datasets bigger than a certain size
Cases to ignore: - [x] the dataset repository is >max_size - [ ] the dataset uses a script that downloads more than max_size of data To fix the second case, I think we can pass a custom download manager to the dataset builder `_split_generators` to record the size of the files to download. It can also be implemented by...
Parquet export: ignore ALL datasets bigger than a certain size: Cases to ignore: - [x] the dataset repository is >max_size - [ ] the dataset uses a script that downloads more than max_size of data To fix the second case, I think we can pass a custom download manager to the dataset builder `_split_generators` to record ...
closed
2023-02-13T14:01:44Z
2023-02-15T14:10:20Z
2023-02-15T14:10:20Z
lhoestq
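The unchecked case above — a loading script that downloads more than max_size of data — could be detected with a download manager that records sizes as it goes. A minimal sketch of that idea, with hypothetical names (this is not the real `datasets` DownloadManager API):

```python
class SizeCheckingDownloadManager:
    """Hypothetical download manager wrapper: records the size of every
    file it is asked to download and fails once the total exceeds a limit."""

    def __init__(self, get_size, max_size_bytes):
        self.get_size = get_size  # callable: url -> size in bytes
        self.max_size_bytes = max_size_bytes
        self.total_bytes = 0

    def download(self, url):
        self.total_bytes += self.get_size(url)
        if self.total_bytes > self.max_size_bytes:
            raise ValueError(
                f"would download {self.total_bytes} bytes, "
                f"over the {self.max_size_bytes}-byte limit"
            )
        return url  # a real manager would return a local path


# Illustrative usage: the second file pushes the total over the limit.
sizes = {"train.csv": 100, "test.csv": 200}
dm = SizeCheckingDownloadManager(sizes.get, max_size_bytes=250)
dm.download("train.csv")  # fine: 100 bytes so far
```

Passing such a manager to `_split_generators` would let the job abort before any large external data is actually fetched.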
1,582,380,227
Rename obsolete mentions to datasets_based
- rename prefix WORKER_LOOP_ to WORKER_ - rename DATASETS_BASED_ENDPOINT to WORKER_ENDPOINT - rename DATASETS_BASED_CONTENT_MAX_BYTES to WORKER_CONTENT_MAX_BYTES - ensure WORKER_STORAGE_PATHS is always used in Helm
Rename obsolete mentions to datasets_based: - rename prefix WORKER_LOOP_ to WORKER_ - rename DATASETS_BASED_ENDPOINT to WORKER_ENDPOINT - rename DATASETS_BASED_CONTENT_MAX_BYTES to WORKER_CONTENT_MAX_BYTES - ensure WORKER_STORAGE_PATHS is always used in Helm
closed
2023-02-13T13:44:22Z
2023-02-13T14:43:27Z
2023-02-13T14:40:10Z
severo
1,580,402,058
Rollback mechanism for /parquet-and-dataset-info
This job creates parquet files in Hub repos. When the admin cancel endpoint is called, we need to revert the commits that were done in the repo. One way to do it would be to add the job ID in the commit description; this way, we can know which commits to revert. Though it may require patching `dataset...
Rollback mechanism for /parquet-and-dataset-info: This job creates parquet files in Hub repos. When the admin cancel endpoint is called, we need to revert the commits that were done in the repo. One way to do it would be to add the job ID in the commit description; this way, we can know which commits to ...
closed
2023-02-10T22:11:42Z
2023-03-21T15:04:11Z
2023-03-21T15:04:11Z
lhoestq
1,579,363,570
Upgrade dependencies, fix kenlm
null
Upgrade dependencies, fix kenlm:
closed
2023-02-10T09:47:36Z
2023-02-10T16:33:47Z
2023-02-10T16:31:04Z
severo
1,578,218,535
Generic worker
This PR allows a worker to process different job types. We can still dedicate workers to a sublist of jobs using a comma-separated list of the jobs in `WORKER_LOOP_ONLY_JOB_TYPES`. This way, we will be able to reduce the allocated but unused resources. As you can see in chart/env/prod.yaml, I let all the previous...
Generic worker: This PR allows a worker to process different job types. We can still dedicate workers to a sublist of jobs using a comma-separated list of the jobs in `WORKER_LOOP_ONLY_JOB_TYPES`. This way, we will be able to reduce the allocated but unused resources. As you can see in chart/env/prod.yaml, I let ...
closed
2023-02-09T16:31:11Z
2023-02-13T15:32:11Z
2023-02-13T15:29:23Z
severo
1,578,178,282
Add admin ui url
null
Add admin ui url:
closed
2023-02-09T16:05:45Z
2023-02-10T22:15:20Z
2023-02-10T22:12:28Z
lhoestq
1,577,781,329
Move workers/datasets_based to services/worker
Based on #792.
Move workers/datasets_based to services/worker: Based on #792.
closed
2023-02-09T12:13:56Z
2023-02-13T08:33:21Z
2023-02-13T08:30:35Z
severo
1,577,702,774
Use shared action to publish helm chart
null
Use shared action to publish helm chart:
closed
2023-02-09T11:17:50Z
2023-02-09T12:50:28Z
2023-02-09T12:47:36Z
rtrompier
1,577,381,904
Allow to use http instead of https
https://github.com/huggingface/private-hub-package/issues/15
Allow to use http instead of https: https://github.com/huggingface/private-hub-package/issues/15
closed
2023-02-09T07:27:13Z
2023-02-09T08:35:59Z
2023-02-09T08:33:14Z
rtrompier
1,576,899,889
Change split names upstream and source
Partial fix for https://github.com/huggingface/datasets-server/issues/755 - Changing the predecessor of split-names to dataset-info - Changing source of split-names (db instead of dataset lib in streaming mode)
Change split names upstream and source: Partial fix for https://github.com/huggingface/datasets-server/issues/755 - Changing the predecessor of split-names to dataset-info - Changing source of split-names (db instead of dataset lib in streaming mode)
closed
2023-02-08T22:43:45Z
2023-02-10T08:59:23Z
2023-02-09T17:15:27Z
AndreaFrancis
1,576,658,518
Paginate responses
### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should paginate the split names, config names, and parquet files responses, with a max size configuration
Paginate responses: ### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should paginate the split names, config names, and parquet files responses, with a max size configuration
closed
2023-02-08T19:13:26Z
2023-04-13T15:04:21Z
2023-04-13T15:04:20Z
AndreaFrancis
1,576,655,790
Handle the 16MB limit in MongoDB with a dedicated error
### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should handle the db limitation per document with a dedicated error
Handle the 16MB limit in MongoDB with a dedicated error: ### Link _No response_ ### Description As per comment on https://github.com/huggingface/datasets-server/pull/780#issuecomment-1422238212 We should handle the db limitation per document with a dedicated error
closed
2023-02-08T19:11:14Z
2023-08-07T15:56:31Z
2023-08-05T15:04:00Z
AndreaFrancis
1,576,598,432
Delete extracted downloaded files of a dataset
Will close https://github.com/huggingface/datasets-server/issues/753
Delete extracted downloaded files of a dataset: Will close https://github.com/huggingface/datasets-server/issues/753
closed
2023-02-08T18:24:01Z
2023-02-23T14:15:07Z
2023-02-22T18:49:25Z
polinaeterna
1,576,516,385
Basic stats
Started a `/basic-stats` endpoint that computes histograms for numerical data using dask. Not sure if we want to merge this feature right away; I implemented this mostly to trigger some discussions on how to add new data aggregates: does this way sound correct to you? Feel free to also comment on how we could ex...
Basic stats: Started a `/basic-stats` endpoint that computes histograms for numerical data using dask. Not sure if we want to merge this feature right away; I implemented this mostly to trigger some discussions on how to add new data aggregates: does this way sound correct to you? Feel free to also comment on ho...
closed
2023-02-08T17:21:27Z
2023-02-17T14:08:21Z
2023-02-17T14:04:05Z
lhoestq
1,576,505,891
Check dataset connection before migration job (and other apps)
Before starting the services (api, admin) and workers, we ensure the database is accessible and the assets directory (if needed) exists. In the case of the migration job: if the database cannot be accessed, we skip the migration, to avoid blocking Helm Fixes #763. Replaces #767. Depends on #791.
Check dataset connection before migration job (and other apps): Before starting the services (api, admin) and workers, we ensure the database is accessible and the assets directory (if needed) exists. In the case of the migration job: if the database cannot be accessed, we skip the migration, to avoid blocking Helm ...
closed
2023-02-08T17:14:07Z
2023-02-10T17:23:58Z
2023-02-10T17:20:41Z
severo
1,576,432,647
use classmethod for factories instead of staticmethod
See https://stackoverflow.com/questions/12179271/meaning-of-classmethod-and-staticmethod-for-beginner for example Depends on: #790
use classmethod for factories instead of staticmethod: See https://stackoverflow.com/questions/12179271/meaning-of-classmethod-and-staticmethod-for-beginner for example Depends on: #790
closed
2023-02-08T16:27:04Z
2023-02-10T16:32:18Z
2023-02-10T16:29:43Z
severo
1,576,413,974
feat: 🎸 ensure immutability of the configs
if we decide to allow changing config parameters later, it will need to be explicit. Depends on: #784
feat: 🎸 ensure immutability of the configs: if we decide to allow changing config parameters later, it will need to be explicit. Depends on: #784
closed
2023-02-08T16:14:32Z
2023-02-10T16:10:19Z
2023-02-10T16:07:28Z
severo
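Config immutability as in the record above is commonly enforced in Python with frozen dataclasses. A sketch with illustrative field names (the actual config classes in the PR may differ):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class QueueConfig:
    # illustrative fields, not necessarily the real ones
    max_jobs_per_namespace: int = 1
    sleep_seconds: float = 15.0


config = QueueConfig()
# Any attempt to mutate raises dataclasses.FrozenInstanceError:
# config.sleep_seconds = 5.0  -> FrozenInstanceError
```

With `frozen=True`, changing a parameter later has to be explicit, e.g. via `dataclasses.replace(config, sleep_seconds=5.0)`, which returns a new instance.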
1,576,145,015
feat: 🎸 add logs when an unexpected error occurs
null
feat: 🎸 add logs when an unexpected error occurs:
closed
2023-02-08T13:40:45Z
2023-02-08T14:52:32Z
2023-02-08T14:49:31Z
severo
1,575,998,706
feat: remove job after 5 minutes
To avoid blocking the uninstall process: the PVC waits for job deletion before being removed.
feat: remove job after 5 minutes: To avoid blocking the uninstall process: the PVC waits for job deletion before being removed.
closed
2023-02-08T12:05:15Z
2023-02-08T15:49:33Z
2023-02-08T15:46:44Z
rtrompier
1,575,849,150
Fix dockerfiles
Running `make e2e` locally, I had the following error: ``` => ERROR [e2e-worker-dataset-info 11/13] RUN poetry install --no-cache ...
Fix dockerfiles: Running `make e2e` locally, I had the following error: ``` => ERROR [e2e-worker-dataset-info 11/13] RUN poetry install --no-cache ...
closed
2023-02-08T10:16:56Z
2023-02-08T12:00:35Z
2023-02-08T11:57:24Z
severo
1,575,741,404
ci: 🎡 run e2e tests only once for a push or pull-request
see https://github.com/huggingface/datasets-server/pull/775#issuecomment-1422255785.
ci: 🎡 run e2e tests only once for a push or pull-request: see https://github.com/huggingface/datasets-server/pull/775#issuecomment-1422255785.
closed
2023-02-08T09:04:35Z
2023-02-08T11:58:51Z
2023-02-08T11:56:15Z
severo
1,575,223,675
Dataset Viewer issue for allenai/scirepeval
### Link https://huggingface.co/datasets/allenai/scirepeval ### Description The dataset viewer is not working for dataset allenai/scirepeval. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for allenai/scirepeval: ### Link https://huggingface.co/datasets/allenai/scirepeval ### Description The dataset viewer is not working for dataset allenai/scirepeval. Error details: ``` Error code: ResponseNotReady ```
closed
2023-02-08T00:04:58Z
2023-02-08T15:59:04Z
2023-02-08T15:59:03Z
amanpreet692
1,575,000,363
feat: 🎸 add concept of Resource
This long PR creates a new concept, Resource. The resource is aimed at being allocated and then released after use: connection to a database, modification of the datasets library config, or creation of storage directories... Before this PR, it was done in the Config step. Now the Config step should never raise and sho...
feat: 🎸 add concept of Resource: This long PR creates a new concept, Resource. The resource is aimed at being allocated and then released after use: connection to a database, modification of the datasets library config, or creation of storage directories... Before this PR, it was done in the Config step. Now the Conf...
closed
2023-02-07T20:47:27Z
2023-02-10T15:11:17Z
2023-02-10T15:08:39Z
severo
1,574,932,026
Updating docker image hash
Updating docker image hash values
Updating docker image hash: Updating docker image hash values
closed
2023-02-07T19:51:32Z
2023-02-07T20:02:07Z
2023-02-07T19:56:06Z
AndreaFrancis
1,574,829,491
Catch KILL signal from the worker to exit cleanly
in the worker_loop
Catch KILL signal from the worker to exit cleanly: in the worker_loop
closed
2023-02-07T18:25:10Z
2023-03-28T15:04:19Z
2023-03-28T15:04:19Z
severo
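As context for the record above: SIGKILL itself cannot be caught, so a clean exit usually means handling SIGTERM (and SIGINT) with a stop flag that the loop checks between jobs. A minimal sketch, with illustrative names rather than the actual worker code:

```python
import signal


class GracefulStopper:
    """Flip a flag on SIGTERM/SIGINT so the worker loop can finish the
    current job and exit cleanly instead of dying mid-job."""

    def __init__(self):
        self.should_stop = False
        signal.signal(signal.SIGTERM, self._handle)
        signal.signal(signal.SIGINT, self._handle)

    def _handle(self, signum, frame):
        self.should_stop = True


def worker_loop(jobs, stopper):
    processed = []
    for job in jobs:
        if stopper.should_stop:
            break  # stop between jobs, never in the middle of one
        processed.append(job)
    return processed
```

In Kubernetes this pairs with `terminationGracePeriodSeconds`: the pod first receives SIGTERM, and only gets SIGKILL after the grace period expires.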
1,574,575,096
Update the repo and remove remaining variables that are not in use.
Update the repo and remove remaining variables that are not in use.
Update the repo and remove remaining variables that are not in use.: Update the repo and remove remaining variables that are not in use.
closed
2023-02-07T15:50:09Z
2023-02-07T16:00:41Z
2023-02-07T15:59:01Z
JatinKumar001
1,574,536,454
Dataset info big content error
Adding content size validation before trying to insert/update in db. Fix for https://github.com/huggingface/datasets-server/issues/762 and https://github.com/huggingface/datasets-server/issues/770 New `DATASETS_BASED_CONTENT_MAX_BYTES` configuration is added to limit the size of the content result of a worker compu...
Dataset info big content error: Adding content size validation before trying to insert/update in db. Fix for https://github.com/huggingface/datasets-server/issues/762 and https://github.com/huggingface/datasets-server/issues/770 New `DATASETS_BASED_CONTENT_MAX_BYTES` configuration is added to limit the size of the ...
closed
2023-02-07T15:26:20Z
2023-02-09T17:58:01Z
2023-02-09T17:55:34Z
AndreaFrancis
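The size validation this PR describes can be sketched as follows. The 16 MB figure is MongoDB's per-document BSON limit; the real check behind `DATASETS_BASED_CONTENT_MAX_BYTES` may measure the stored document rather than a JSON dump, so treat this as an assumption:

```python
import json

MONGO_MAX_DOC_BYTES = 16 * 1024 * 1024  # MongoDB's per-document BSON limit


def check_content_size(content, max_bytes=MONGO_MAX_DOC_BYTES):
    """Measure the serialized worker result and refuse to cache it
    when it would exceed the configured limit."""
    size = len(json.dumps(content).encode("utf-8"))
    if size > max_bytes:
        raise ValueError(
            f"content is {size} bytes, over the {max_bytes}-byte limit"
        )
    return size
```

Raising a dedicated error here lets the cache store a typed failure instead of crashing on the database insert itself.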
1,573,994,373
Pass processing step to worker
null
Pass processing step to worker:
closed
2023-02-07T09:38:20Z
2023-02-07T13:15:11Z
2023-02-07T12:35:51Z
severo
1,572,954,432
Fix CI mypy error: "WorkerFactory" has no attribute "app_config"
Fix type checking on the WorkerLoop.loop method, when trying to access the worker_factory's app_config attribute. This PR fixes an issue introduced by: - #774 ``` src/datasets_based/worker_loop.py:100: error: "WorkerFactory" has no attribute "app_config" ```
Fix CI mypy error: "WorkerFactory" has no attribute "app_config": Fix type checking on the WorkerLoop.loop method, when trying to access the worker_factory's app_config attribute. This PR fixes an issue introduced by: - #774 ``` src/datasets_based/worker_loop.py:100: error: "WorkerFactory" has no attribute "app_conf...
closed
2023-02-06T17:11:28Z
2023-02-07T12:28:29Z
2023-02-07T12:25:40Z
albertvillanova
1,572,543,532
Remove variable
null
Remove variable:
closed
2023-02-06T13:11:08Z
2023-02-07T15:41:42Z
2023-02-06T14:19:17Z
JatinKumar001
1,572,509,231
Dataset Viewer issue for mozilla-foundation/common_voice_11_0
### Link https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0 ### Description The dataset viewer for Portuguese Language is not working for dataset mozilla-foundation/common_voice_11_0. Error details: ``` Error code: ResponseNotReady ```
Dataset Viewer issue for mozilla-foundation/common_voice_11_0: ### Link https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0 ### Description The dataset viewer for Portuguese Language is not working for dataset mozilla-foundation/common_voice_11_0. Error details: ``` Error code: Respon...
closed
2023-02-06T12:47:00Z
2023-02-12T15:33:59Z
2023-02-08T15:58:26Z
gassis
1,572,423,152
ci: 🎡 the e2e tests must now be run on any code change
null
ci: 🎡 the e2e tests must now be run on any code change:
closed
2023-02-06T11:46:55Z
2023-02-08T09:02:07Z
2023-02-07T15:45:40Z
severo
1,572,400,886
Use hub-ci locally
I switched the HF endpoint to the hub-ci one and added the corresponding tokens, and fixed the docker-compose to use the right base. Now the local dev environment from `make dev-start` works correctly. Close https://github.com/huggingface/datasets-server/issues/765
Use hub-ci locally: I switched the HF endpoint to the hub-ci one and added the corresponding tokens, and fixed the docker-compose to use the right base. Now the local dev environment from `make dev-start` works correctly. Close https://github.com/huggingface/datasets-server/issues/765
closed
2023-02-06T11:29:07Z
2023-02-06T13:59:55Z
2023-02-06T13:56:38Z
lhoestq
1,572,337,574
refactor: 💡 hard-code the value of the fallback
The fallback will be removed once https://github.com/huggingface/datasets-server/issues/755 is implemented. Meanwhile, we hide the parameter to prepare deprecation.
refactor: 💡 hard-code the value of the fallback: The fallback will be removed once https://github.com/huggingface/datasets-server/issues/755 is implemented. Meanwhile, we hide the parameter to prepare deprecation.
closed
2023-02-06T10:48:28Z
2023-02-06T11:27:34Z
2023-02-06T11:24:58Z
severo
1,572,242,085
Make workers' errors derive from WorkerError
Make workers' errors derive from `WorkerError`, instead of parent `CustomError`.
Make workers' errors derive from WorkerError: Make workers' errors derive from `WorkerError`, instead of parent `CustomError`.
closed
2023-02-06T09:44:47Z
2023-02-07T14:07:19Z
2023-02-07T14:04:03Z
albertvillanova
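The refactor described above is a standard exception-hierarchy move: introducing an intermediate base class so that all worker failures can be caught with a single except clause. A simplified sketch (class and field names are illustrative, not the exact project code):

```python
class CustomError(Exception):
    """Simplified stand-in for the project's base error class."""

    def __init__(self, message, code):
        super().__init__(message)
        self.code = code


class WorkerError(CustomError):
    """Intermediate base: every worker error derives from this, so
    callers can catch all worker failures in one place."""


class ConfigNamesError(WorkerError):
    """Illustrative leaf error; the real workers define their own set."""

    def __init__(self, message):
        super().__init__(message, code="ConfigNamesError")
```

A caller can then write `except WorkerError:` to handle any worker-side failure while letting unrelated `CustomError` subclasses propagate.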
1,571,008,308
remove first rows fallback variable
Remove the FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE from the code.
remove first rows fallback variable: Remove the FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE from the code.
closed
2023-02-04T16:06:47Z
2023-02-08T10:44:28Z
2023-02-08T10:41:40Z
JatinKumar001
1,569,553,489
Add a check in /first-rows worker if truncating didn't succeed
See https://github.com/huggingface/datasets-server/pull/749#discussion_r1095587218
Add a check in /first-rows worker if truncating didn't succeed: See https://github.com/huggingface/datasets-server/pull/749#discussion_r1095587218
closed
2023-02-03T09:48:55Z
2023-02-13T14:44:56Z
2023-02-13T14:44:55Z
severo
1,569,542,931
Remove `FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE` from code
It's not used anymore.
Remove `FIRST_ROWS_FALLBACK_MAX_DATASET_SIZE` from code: It's not used anymore.
closed
2023-02-03T09:44:14Z
2023-02-13T11:17:46Z
2023-02-13T11:17:45Z
severo
1,568,648,796
Create doc for every PR
This is what is done in the other HF repos on GitHub. This way, the Delete doc GitHub action has something to delete. If the doc doesn't exist, the job fails
Create doc for every PR: This is what is done in the other HF repos on GitHub. This way, the Delete doc GitHub action has something to delete. If the doc doesn't exist, the job fails
closed
2023-02-02T19:23:42Z
2023-02-03T11:10:16Z
2023-02-03T11:04:53Z
lhoestq
1,568,245,507
Check dataset before migration job
null
Check dataset before migration job:
closed
2023-02-02T15:17:58Z
2023-02-08T17:15:09Z
2023-02-08T17:15:05Z
severo