Schema:
- id: int64 (values from 959M to 2.55B)
- title: string (3 to 133 characters)
- body: string (1 to 65.5k characters; nullable)
- description: string (5 to 65.6k characters)
- state: string (2 distinct values)
- created_at: string (20 characters)
- updated_at: string (20 characters)
- closed_at: string (20 characters; nullable)
- user: string (174 distinct values)
1,235,528,600
Prod env
(no body)
state: closed | created: 2022-05-13T18:00:44Z | updated: 2022-05-13T18:01:32Z | closed: 2022-05-13T18:01:31Z | user: severo
1,235,503,749
Adapt the sleep time of the workers
When a worker just finished a job, it should ask for another job right away. But if it has already polled the dataset multiple times and got no job, it should increase the sleep time between two polls, in order to avoid hammering the database. By the way, it might just be: - finish a job: directly ask for a new ...
state: closed | created: 2022-05-13T17:29:55Z | updated: 2022-05-13T18:01:32Z | closed: 2022-05-13T18:01:32Z | user: severo
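The polling policy described in this issue amounts to exponential backoff with a reset on success. A minimal sketch, assuming hypothetical names (`next_sleep`, the parameter defaults, and the jitter range are all illustrative, not the project's actual implementation):

```python
import random


def next_sleep(consecutive_empty_polls: int,
               base: float = 1.0,
               factor: float = 2.0,
               max_sleep: float = 60.0) -> float:
    """Return how long a worker should sleep before polling again.

    After finishing a job (zero empty polls) the worker asks for a new
    job right away; each empty poll then doubles the wait, capped at
    max_sleep, with jitter so workers do not hammer the database in
    lockstep.
    """
    if consecutive_empty_polls <= 0:
        return 0.0
    sleep = min(base * factor ** (consecutive_empty_polls - 1), max_sleep)
    return sleep * random.uniform(0.9, 1.1)
```

The jitter matters when many worker replicas share one queue: without it, identically configured workers tend to synchronize their polls.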
1,235,488,464
Add a way to gracefully stop the workers
Currently, if we stop the workers: ``` kubectl scale --replicas=0 deploy/datasets-server-prod-datasets-worker kubectl scale --replicas=0 deploy/datasets-server-prod-splits-worker ``` the started jobs will remain forever and potentially will block other jobs from the same dataset (because of MAX_JOBS_PER_DATASE...
state: closed | created: 2022-05-13T17:11:09Z | updated: 2022-09-19T09:56:17Z | closed: 2022-09-19T09:56:16Z | user: severo
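Kubernetes sends SIGTERM to a pod being scaled down, then SIGKILL after the grace period. A graceful stop therefore means catching SIGTERM and exiting between jobs, never mid-job. A minimal sketch with hypothetical names (`GracefulStopper` and `worker_loop` are illustrative, not the project's code):

```python
import signal


class GracefulStopper:
    """Record SIGTERM/SIGINT so the worker can exit between two jobs."""

    def __init__(self) -> None:
        self.should_stop = False
        signal.signal(signal.SIGTERM, self._handle)
        signal.signal(signal.SIGINT, self._handle)

    def _handle(self, signum, frame) -> None:
        self.should_stop = True


def worker_loop(get_job, process, stopper) -> None:
    # Check the flag only between jobs: a started job is always finished,
    # so scaling the deployment down no longer leaves jobs stuck in the
    # "started" state blocking other jobs from the same dataset.
    while not stopper.should_stop:
        job = get_job()
        if job is None:
            break
        process(job)
```

For this to work in practice, the pod's `terminationGracePeriodSeconds` has to exceed the longest expected job duration, otherwise the worker is killed mid-job anyway.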
1,235,232,724
fix: ๐Ÿ› add support for mongodb+srv:// URLs using dnspython
See https://stackoverflow.com/a/71749071/7351594.
fix: ๐Ÿ› add support for mongodb+srv:// URLs using dnspython: See https://stackoverflow.com/a/71749071/7351594.
closed
2022-05-13T13:17:30Z
2022-05-13T13:18:01Z
2022-05-13T13:18:00Z
severo
1,235,088,427
feat: 🎸 upgrade images to get /prometheus endpoint
(no body)
state: closed | created: 2022-05-13T10:57:31Z | updated: 2022-05-13T10:57:38Z | closed: 2022-05-13T10:57:37Z | user: severo
1,235,017,107
[infra] Add monitoring to the hub-ephemeral namespace
It does not belong to this project, but it's needed to test the ServiceMonitor (#260) cc @XciD
state: closed | created: 2022-05-13T09:56:37Z | updated: 2022-09-16T17:42:48Z | closed: 2022-09-16T17:42:48Z | user: severo
1,235,014,687
Add service monitor
(no body)
state: closed | created: 2022-05-13T09:54:14Z | updated: 2022-05-16T13:51:38Z | closed: 2022-05-16T13:51:37Z | user: severo
1,234,180,599
Support big-bench
see the thread by @lhoestq on Slack: https://huggingface.slack.com/archives/C034N0A7H09/p1652370311934619?thread_ts=1651846540.985739&cid=C034N0A7H09 ``` pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" ``` > it has some dependencies though - just make ...
state: closed | created: 2022-05-12T15:47:45Z | updated: 2024-02-02T17:15:44Z | closed: 2024-02-02T17:15:44Z | user: severo
1,234,162,186
Add metrics
(no body)
state: closed | created: 2022-05-12T15:31:37Z | updated: 2022-05-13T09:30:04Z | closed: 2022-05-13T09:30:03Z | user: severo
1,233,720,391
Setup alerts
Once #250 is done, we will be able to setup alerts when something goes wrong. See https://prometheus.io/docs/practices/alerting/ for best practices.
state: closed | created: 2022-05-12T09:35:37Z | updated: 2022-09-16T17:42:57Z | closed: 2022-09-16T17:42:57Z | user: severo
1,233,708,296
Implement a heartbeat client for the jobs
from https://prometheus.io/docs/practices/instrumentation/#offline-processing > Knowing the last time that a system processed something is useful for detecting if it has stalled, but it is very localised information. A better approach is to send a heartbeat through the system: some dummy item that gets passed all th...
state: closed | created: 2022-05-12T09:25:09Z | updated: 2022-09-16T17:43:25Z | closed: 2022-09-16T17:43:25Z | user: severo
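The Prometheus advice quoted in this issue is to push a dummy item through the whole pipeline and export the timestamp at which it last came out the other end. A minimal sketch of that idea, with hypothetical names (`HeartbeatMonitor`, the `"heartbeat"` job type, and the staleness threshold are all illustrative):

```python
import time
from typing import Optional


class HeartbeatMonitor:
    """Track when the last heartbeat item came out of the pipeline.

    A scheduler would enqueue a dummy "heartbeat" job at a fixed
    interval; exporting the time it was last processed gives a single
    end-to-end staleness signal instead of per-stage timestamps.
    """

    def __init__(self) -> None:
        self.last_heartbeat: Optional[float] = None

    def process(self, job: dict) -> None:
        if job.get("type") == "heartbeat":
            # The dummy item made it through the whole pipeline.
            self.last_heartbeat = time.time()
            return
        # ... process a regular job here ...

    def is_stalled(self, max_age_seconds: float) -> bool:
        # No heartbeat yet, or the last one is too old: the pipeline
        # has probably stalled somewhere.
        if self.last_heartbeat is None:
            return True
        return time.time() - self.last_heartbeat > max_age_seconds
```

In a real deployment, `last_heartbeat` would be exported as a Prometheus gauge so that an alert can fire on `is_stalled`-style conditions server-side.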
1,233,655,686
Create a custom nginx image?
I think it would be clearer to create a custom nginx image, in /services/reverse-proxy, than the current "hack" with a template and env vars on the official nginx image. This way, all the services (API, worker, reverse-proxy) would follow the same flow.
state: closed | created: 2022-05-12T08:48:12Z | updated: 2022-09-16T17:43:30Z | closed: 2022-09-16T17:43:30Z | user: severo
1,233,639,231
feat: 🎸 use images with datasets 2.2.1
(no body)
state: closed | created: 2022-05-12T08:35:47Z | updated: 2022-05-12T08:48:35Z | closed: 2022-05-12T08:48:34Z | user: severo
1,233,635,900
feat: 🎸 upgrade datasets to 2.2.1
(no body)
state: closed | created: 2022-05-12T08:32:51Z | updated: 2022-05-12T08:33:26Z | closed: 2022-05-12T08:33:25Z | user: severo
1,233,630,927
Upgrade datasets to 2.2.1
https://github.com/huggingface/datasets/releases/tag/2.2.1
state: closed | created: 2022-05-12T08:28:27Z | updated: 2022-05-12T08:52:02Z | closed: 2022-05-12T08:52:02Z | user: severo
1,233,604,146
Autoscale the worker pods
Once the prod is done (#223), we might want to autoscale the number of worker pods.
state: closed | created: 2022-05-12T08:06:54Z | updated: 2022-06-08T08:36:20Z | closed: 2022-06-08T08:36:20Z | user: severo
1,233,602,101
Setup prometheus + grafana
Related to #2 - [x] expose a `/metrics` endpoint using the Prometheus spec in the API, eg using https://github.com/prometheus/client_python - see #258. Beware: cache and queue metrics removed after https://github.com/huggingface/datasets-server/issues/279 - [x] Use a ServiceMonitor in the Helm chart: https://github...
state: closed | created: 2022-05-12T08:05:04Z | updated: 2022-08-03T18:23:02Z | closed: 2022-06-03T13:53:57Z | user: severo
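The `/metrics` endpoint mentioned in this issue serves the Prometheus text exposition format; the issue points at prometheus/client_python, whose `generate_latest` would produce this payload in practice. As an illustration of what the endpoint returns, here is a hand-rolled sketch (the function name and the gauge names are hypothetical):

```python
def render_metrics(gauges: dict) -> str:
    """Render a dict of gauge values in the Prometheus text exposition
    format, i.e. the payload a /metrics endpoint returns. A real
    endpoint would use prometheus_client.generate_latest instead of
    formatting by hand.
    """
    lines = []
    for name, value in sorted(gauges.items()):
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

A ServiceMonitor (as used in the Helm chart above) then only needs the HTTP path and port of this endpoint to have Prometheus scrape it.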
1,233,593,210
Create the prod infrastructure on Kubernetes
- [x] 4 nodes (4 machines t2 2xlarge, 8 vcpu, 32 gb ram) - [x] NFS -> 4TB - To increase the size later: directly on AWS, eg https://us-east-1.console.aws.amazon.com/fsx/home?region=us-east-1#file-system-details/fs-097afa9688029b62a (terraform does not allow to change the size of a storage once created, to avoid deleti...
state: closed | created: 2022-05-12T07:57:07Z | updated: 2022-05-13T18:02:51Z | closed: 2022-05-13T18:02:51Z | user: severo
1,232,830,863
Check if shared /cache is an issue for the workers
All the workers (in the kubernetes infra) share the same `datasets` library directory, both for the data and the modules. We must check if this shared access in read/write mode can lead to inconsistencies. The alternative would be to create an empty cache for every new worker.
state: closed | created: 2022-05-11T15:23:39Z | updated: 2022-06-23T12:18:01Z | closed: 2022-06-23T10:27:58Z | user: severo
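The alternative the issue mentions (an empty cache per worker) can be done through the `datasets` library's `HF_DATASETS_CACHE` environment variable, which must be set before the library is imported. A minimal sketch (the `worker-cache-` prefix is illustrative):

```python
import os
import tempfile

# Give each worker its own datasets cache directory instead of the
# shared NFS path, ruling out concurrent read/write inconsistencies.
# The datasets library reads HF_DATASETS_CACHE at import time.
os.environ["HF_DATASETS_CACHE"] = tempfile.mkdtemp(prefix="worker-cache-")
```

The trade-off is disk usage and repeated downloads: every fresh worker starts cold, which is exactly what sharing the cache was avoiding.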
1,232,792,549
feat: 🎸 upgrade the docker images to use datasets 2.2.0
(no body)
state: closed | created: 2022-05-11T14:58:39Z | updated: 2022-05-11T14:58:53Z | closed: 2022-05-11T14:58:51Z | user: severo
1,232,763,262
feat: 🎸 upgrade datasets to 2.2.0
Fixes https://github.com/huggingface/datasets-server/issues/243
state: closed | created: 2022-05-11T14:40:15Z | updated: 2022-05-11T14:54:57Z | closed: 2022-05-11T14:54:56Z | user: severo
1,232,577,566
Nginx proxy
(no body)
state: closed | created: 2022-05-11T12:41:20Z | updated: 2022-05-11T14:13:47Z | closed: 2022-05-11T14:13:06Z | user: severo
1,232,227,132
Create the HF_TOKEN secret in infra
See https://github.com/huggingface/datasets-server/pull/236#discussion_r870009514 As for now, the secret containing the HF_TOKEN is created manually
state: closed | created: 2022-05-11T08:44:48Z | updated: 2022-09-19T08:58:12Z | closed: 2022-09-19T08:58:12Z | user: severo
1,232,132,676
Upgrade datasets to 2.2.0
https://github.com/huggingface/datasets/releases/tag/2.2.0
state: closed | created: 2022-05-11T07:24:32Z | updated: 2022-05-11T14:54:56Z | closed: 2022-05-11T14:54:56Z | user: severo
1,231,336,429
Nfs
(no body)
state: closed | created: 2022-05-10T15:25:53Z | updated: 2022-05-10T15:52:03Z | closed: 2022-05-10T15:52:02Z | user: severo
1,231,323,674
Setup the users directly in the images, not in Kubernetes?
See the second point in https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/: using `runAsUser` / `runAsGroup` is a (relative) security risk.
state: closed | created: 2022-05-10T15:15:49Z | updated: 2022-09-19T08:57:20Z | closed: 2022-09-19T08:57:20Z | user: severo
1,230,849,874
Kube: restrict the rights
In the deployments, run with a non root user: ``` securityContext: runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 ``` Beware: just adding this generates errors (Permission denied) when trying to write to the mounted volumes. We have to mount with writing rights for the user
state: closed | created: 2022-05-10T09:06:59Z | updated: 2022-05-11T15:20:39Z | closed: 2022-05-11T15:20:38Z | user: severo
1,230,717,705
Monitor the RAM used by the workers
Since https://github.com/huggingface/datasets-server/pull/236/commits/12646ee6b7bd72e7563aeb0c16dfcf08eace9fb8, the workers loop infinitely (instead of being stopped after the first job, then restarted by pm2 or k8s). This might lead to increased RAM use. We should: - [ ] monitor the usage - [ ] check the code for p...
state: closed | created: 2022-05-10T07:11:38Z | updated: 2022-06-08T08:39:31Z | closed: 2022-06-08T08:39:30Z | user: severo
1,229,647,375
In the CI, test if two instances can be deployed in the same Kube namespace
https://github.com/huggingface/datasets-server/pull/227#issuecomment-1120715631 > Did you try to install it twice? with another domain? it's a good test to see if your helm chart works with multiple instances we would need to: - have two files in env/: test1.yaml, test2.yaml, each one with its own domain (datase...
state: closed | created: 2022-05-09T12:41:19Z | updated: 2022-09-16T17:44:27Z | closed: 2022-09-16T17:44:27Z | user: severo
1,229,363,293
Move the infra docs/ to Notion
See https://github.com/huggingface/datasets-server/pull/227#issuecomment-1120712847
state: closed | created: 2022-05-09T08:28:11Z | updated: 2022-07-29T15:47:09Z | closed: 2022-07-29T15:47:09Z | user: severo
1,227,954,103
Add datasets-server-worker to the Kube cluster
see #223
state: closed | created: 2022-05-06T14:39:14Z | updated: 2022-05-11T08:55:39Z | closed: 2022-05-11T08:55:38Z | user: severo
1,227,889,662
Define the number of replicas and uvicorn workers of the API
The API runs on uvicorn, with a number of workers (see `webConcurrency`). And Kubernetes allows running various pods (`replicas`) for the same app. How to set these two numbers? I imagine that `replicas` is easier to change dynamically (we scale up or down) Do you have any advice @XciD?
state: closed | created: 2022-05-06T13:44:11Z | updated: 2022-09-16T17:44:18Z | closed: 2022-09-16T17:44:18Z | user: severo
1,227,876,889
Add a nginx reverse proxy in front of the API
It will allow to: 1. serve the assets directly 2. cache the responses The `image:` should point to the official nginx docker image The nginx config can be generated as a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/): - see https://github.com/huggingface/gitaly-internals/blob/main/kube...
state: closed | created: 2022-05-06T13:32:23Z | updated: 2022-05-11T14:14:01Z | closed: 2022-05-11T14:14:00Z | user: severo
1,227,871,322
Add an authentication layer to access the dev environment
See https://github.com/huggingface/moon-landing/blob/2d7150500997f57eba0f137a8cb46ab5678d082d/infra/hub/env/ephemeral.yaml#L33-L34
state: closed | created: 2022-05-06T13:27:49Z | updated: 2022-09-16T17:44:35Z | closed: 2022-09-16T17:44:35Z | user: severo
1,227,835,539
Apply the migrations to the mongodb databases
We can do it using a container launched as a [initContainer](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) This container should be idempotent and apply all the remaining migrations (which means that the migrations should be registered in the database) See https://github.com/huggingface/git...
state: closed | created: 2022-05-06T12:58:07Z | updated: 2022-09-16T17:44:54Z | closed: 2022-09-16T17:44:54Z | user: severo
1,227,766,866
Launch a test suite on every Helm upgrade
See https://helm.sh/docs/topics/chart_tests/
state: closed | created: 2022-05-06T11:52:02Z | updated: 2022-09-16T17:45:00Z | closed: 2022-09-16T17:45:00Z | user: severo
1,227,766,283
Use an ephemeral env on Kubernetes to run the e2e tests
Instead of relying on docker-compose, which might differ too much from the production environment Requires #229
state: closed | created: 2022-05-06T11:51:21Z | updated: 2022-09-16T17:45:07Z | closed: 2022-09-16T17:45:07Z | user: severo
1,227,765,303
Create one ephemeral environment per Pull Request
It will allow to test every branch. See how it's done in moonlanding: - https://github.com/huggingface/moon-landing/blob/main/.github/workflows/docker.yml#L68: one domain per branch, calling Helm from GitHub action - https://github.com/huggingface/moon-landing/blob/main/.github/workflows/ephemeral-clean.yml: remov...
state: closed | created: 2022-05-06T11:50:18Z | updated: 2022-09-16T17:45:18Z | closed: 2022-09-16T17:45:18Z | user: severo
1,226,313,664
No data for facebook/winoground
https://huggingface.co/datasets/facebook/winoground <img width="801" alt="Capture d'écran 2022-05-05 à 09 51 02" src="https://user-images.githubusercontent.com/1676121/166882123-e81c580a-d990-48e6-a522-8a852c274660.png"> See also https://github.com/huggingface/datasets/issues/4149
state: closed | created: 2022-05-05T07:51:25Z | updated: 2022-06-17T15:01:07Z | closed: 2022-06-17T15:01:07Z | user: severo
1,224,256,017
Use kubernetes
See #223 This first PR only installs the API in the Kubernetes cluster. Other PRs will install 1. the workers and 2. the nginx reverse proxy
state: closed | created: 2022-05-03T15:31:36Z | updated: 2022-05-09T12:17:28Z | closed: 2022-05-09T07:30:55Z | user: severo
1,224,225,249
Generate and publish an OpenAPI (swagger) doc for the API
- [x] create the OpenAPI spec: https://github.com/huggingface/datasets-server/pull/424 - [x] publish the OpenAPI spec as a static file: https://github.com/huggingface/datasets-server/pull/426 - it's here: https://datasets-server.huggingface.co/openapi.json - [x] render the OpenAPI spec as an HTML page: no, better jus...
state: closed | created: 2022-05-03T15:05:12Z | updated: 2023-08-10T23:39:36Z | closed: 2023-08-10T23:39:36Z | user: severo
1,224,224,672
Send metrics to Prometheus and show in Grafana
(no body)
state: closed | created: 2022-05-03T15:04:43Z | updated: 2022-05-16T15:29:25Z | closed: 2022-05-16T15:29:24Z | user: severo
1,224,224,316
Create a web administration console?
The alternative is to use: - metrics / grafana to know the current state of the server - scripts, or an authenticated API, to launch commands (warm the cache, stop the started jobs, etc)
state: closed | created: 2022-05-03T15:04:24Z | updated: 2022-09-19T08:56:45Z | closed: 2022-09-19T08:56:45Z | user: severo
1,224,220,280
Use a kubernetes infrastructure
As done for https://github.com/huggingface/tensorboard-launcher/ and https://github.com/huggingface/moon-landing gitaly before. - [x] API: see https://github.com/huggingface/datasets-server/pull/227 - [x] workers: see #236 - [x] nginx reverse proxy: see #234 - [x] deploy to the prod cluster: see #249 - [x] respo...
state: closed | created: 2022-05-03T15:01:08Z | updated: 2022-05-31T14:12:29Z | closed: 2022-05-31T14:12:06Z | user: severo
1,224,217,160
Change domain name to datasets-server.huggingface.tech
The current domain is `datasets-preview.huggingface.tech`.
state: closed | created: 2022-05-03T14:58:27Z | updated: 2022-05-16T15:08:01Z | closed: 2022-05-16T15:08:01Z | user: severo
1,224,215,179
Rename to datasets server
(no body)
state: closed | created: 2022-05-03T14:56:49Z | updated: 2022-05-03T15:13:08Z | closed: 2022-05-03T15:13:07Z | user: severo
1,223,980,706
Send docker images to ecr
(no body)
state: closed | created: 2022-05-03T11:37:55Z | updated: 2022-05-03T14:04:46Z | closed: 2022-05-03T14:04:20Z | user: severo
1,223,866,573
Save the docker images to Amazon ECR
ECR = Amazon Elastic Container Registry See https://github.com/huggingface/datasets-preview-backend/blob/dockerize/.github/workflows/docker.yml by @XciD ```yml name: '[ALL] Build docker images' on: workflow_dispatch: push: env: REGISTRY: 707930574880.dkr.ecr.us-east-1.amazonaws.com REGION: us-...
state: closed | created: 2022-05-03T09:25:35Z | updated: 2022-05-03T14:04:57Z | closed: 2022-05-03T14:04:57Z | user: severo
1,223,864,930
Optimize the size of the docker images, and reduce the time of the build
Also: check security aspects Maybe see https://github.com/python-poetry/poetry/discussions/1879 Also, the code by @XciD in https://github.com/huggingface/datasets-preview-backend/tree/dockerize ```dockerfile FROM python:3.9.6-bullseye ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get...
state: closed | created: 2022-05-03T09:23:52Z | updated: 2022-09-19T08:52:52Z | closed: 2022-09-19T08:52:52Z | user: severo
1,223,153,197
Obscure error "config is not in the list" when the dataset is empty or stalled?
> Hey ! Not sure what the error means for [imagenet-1k](https://huggingface.co/datasets/imagenet-1k) on the dataset preview <img width="244" alt="image" src="https://user-images.githubusercontent.com/1676121/166296588-cad3f0b6-b424-498c-a8da-a23fb969380b.png"> See (private group) thread on slack: https://hugging...
state: closed | created: 2022-05-02T17:37:45Z | updated: 2022-05-16T15:29:59Z | closed: 2022-05-16T15:29:58Z | user: severo
1,223,075,040
Docker
(no body)
state: closed | created: 2022-05-02T16:16:20Z | updated: 2022-05-03T09:21:25Z | closed: 2022-05-03T09:21:24Z | user: severo
1,222,840,693
Standalone viewer
Idea by @thomasw21. > Quick question, is there a way to use the dataset viewer locally without having to push to the hub? Seems like a pretty handy thing to use to look at a local dataset in `datasets` format, typically images/videos/audios See chat on Slack: https://huggingface.slack.com/archives/C02V51Q3800/p16...
state: closed | created: 2022-05-02T12:42:28Z | updated: 2022-09-16T19:59:03Z | closed: 2022-09-16T19:59:03Z | user: severo
1,222,831,268
Dockerize
(no body)
state: closed | created: 2022-05-02T12:32:11Z | updated: 2022-05-03T09:21:56Z | closed: 2022-05-03T09:21:56Z | user: severo
1,216,213,431
Cache the result even if the request to the API has been canceled
https://huggingface.co/datasets/MLCommons/ml_spoken_words <img width="1145" alt="Capture d'écran 2022-04-26 à 18 38 37" src="https://user-images.githubusercontent.com/1676121/165349982-0697ec8a-78df-4955-9a9a-2f9dd984dbe4.png"> See https://huggingface.slack.com/archives/C01GSG1QFPF/p1650989872783289, reported b...
state: closed | created: 2022-04-26T16:38:32Z | updated: 2022-09-16T19:59:51Z | closed: 2022-09-16T19:59:50Z | user: severo
1,215,830,749
split the code and move to a monorepo
Fixes #203
state: closed | created: 2022-04-26T11:41:58Z | updated: 2022-04-29T18:34:22Z | closed: 2022-04-29T18:34:21Z | user: severo
1,204,505,255
188 upgrade datasets
(no body)
state: closed | created: 2022-04-14T13:05:11Z | updated: 2022-04-14T13:21:17Z | closed: 2022-04-14T13:21:16Z | user: severo
1,204,441,597
Skip local configs in mixed local+downloadable datasets
Some benchmark-type datasets can have non-downloadable configs, e.g. when one of its subsets is a closed-license dataset. One of such datasets is BABEL, as part of a larger benchmark collection XTREME-S: https://huggingface.co/datasets/google/xtreme_s Here the preview backend is trying to load the `babel.<lang>` con...
state: closed | created: 2022-04-14T12:02:20Z | updated: 2023-03-28T09:19:28Z | closed: 2023-03-28T09:19:27Z | user: anton-l
1,201,624,182
fix: ๐Ÿ› allow streaming=False in get_rows
it fixes #206.
fix: ๐Ÿ› allow streaming=False in get_rows: it fixes #206.
closed
2022-04-12T10:26:21Z
2022-04-12T11:44:24Z
2022-04-12T11:44:23Z
severo
1,201,622,590
regression: fallback if streaming fails is disabled
Causes https://github.com/huggingface/datasets/issues/3185 for example: the fallback should have loaded the dataset in normal mode.
state: closed | created: 2022-04-12T10:25:21Z | updated: 2022-04-12T11:44:23Z | closed: 2022-04-12T11:44:23Z | user: severo
1,200,462,131
Add indexes to the mongo databases?
The database is somewhat big, with 1664598 elements in the `rows` collection. Note: the `rows` collection will disappear in https://github.com/huggingface/datasets-preview-backend/pull/202, but still: we should keep an eye on the sizes and the performance.
state: closed | created: 2022-04-11T19:53:46Z | updated: 2022-05-23T12:57:09Z | closed: 2022-05-23T12:57:09Z | user: severo
1,197,473,090
Reduce the size of the endpoint responses?
Currently, the data contains a lot of redundancy, for example every row of the `/rows` response contains three fields for the dataset, config and split, and their value is the same for all the rows. It comes from a previous version in which we were able to request rows for several configs or splits at the same time. C...
state: closed | created: 2022-04-08T15:31:35Z | updated: 2022-08-24T18:03:38Z | closed: 2022-08-24T18:03:38Z | user: severo
1,197,469,898
Refactor the code
- `models/` use classes instead of functions - split the code more clearly between the app and the worker (in particular: two config files). Possibly, three directories: common or database, worker (or dataset_worker, split_worker), app. It will make it easier to start https://github.com/huggingface/datasets-server/
state: closed | created: 2022-04-08T15:28:36Z | updated: 2022-04-29T18:34:21Z | closed: 2022-04-29T18:34:21Z | user: severo
1,197,461,728
Simplify cache by dropping two collections
instead of keeping a large collection of rows and columns, then compute the response on every endpoint call, possibly truncating the response, we now pre-compute the response and store it in the cache. We lose the ability to get the original data, but we don't need it. It fixes https://github.com/huggingface/dataset...
state: closed | created: 2022-04-08T15:21:23Z | updated: 2022-04-12T08:15:55Z | closed: 2022-04-12T08:15:54Z | user: severo
1,194,824,190
[BREAKING] fix: ๐Ÿ› quick fix to avoid mongodb errors with big rows
if a row is too big to be inserted in the cache database, we just store the empty string for each of its cells, and mark it as erroneous. All the cells are marked as truncated in the /rows endpoints. See https://github.com/huggingface/datasets-preview-backend/issues/197. This commit also contains the first migrat...
[BREAKING] fix: ๐Ÿ› quick fix to avoid mongodb errors with big rows: if a row is too big to be inserted in the cache database, we just store the empty string for each of its cells, and mark it as erroneous. All the cells are marked as truncated in the /rows endpoints. See https://github.com/huggingface/datasets-previ...
closed
2022-04-06T16:09:56Z
2022-04-07T08:11:14Z
2022-04-07T08:11:14Z
severo
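The quick fix this PR describes (replace every cell of an oversized row with the empty string and mark the row as erroneous/truncated) can be sketched as below. The function name is hypothetical, and `json.dumps` is only a stand-in size estimate; the real check would measure the encoded BSON document, e.g. with pymongo's `bson` module:

```python
import json

# Limit reported by the MongoDB server in the DocumentTooLarge error.
MAX_BSON_SIZE = 16_793_598


def truncate_row(row: dict, max_size: int = MAX_BSON_SIZE):
    """Return (row, truncated).

    If the row is too large to be stored in the cache database, every
    cell is replaced with an empty string; the caller then marks all
    the cells as truncated in the /rows response.
    """
    if len(json.dumps(row, default=str)) <= max_size:
        return row, False
    return {key: "" for key in row}, True
```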
1,194,268,131
Setup a "migrations" mechanism to upgrade/downgrade the databases
It would allow migrating the data when the structure of the database (queue or cache) changes. Until now, we just emptied the data and recomputed it every time.
state: closed | created: 2022-04-06T08:35:54Z | updated: 2022-04-07T08:11:51Z | closed: 2022-04-07T08:11:50Z | user: severo
1,193,524,491
Replace real world datasets with fake in the tests
It would be better for unit tests to remove the dependency on external datasets, and use fake datasets instead. For e2e tests, we could have both: - fake datasets, but hosted on the hub - real datasets
Replace real world datasets with fake in the tests: It would be better for unit tests to remove the dependency on external datasets, and use fake datasets instead. For e2e tests, we could have both: - fake datasets, but hosted on the hub - real datasets
closed
2022-04-05T17:47:15Z
2022-08-24T16:26:28Z
2022-08-24T16:26:28Z
severo
1,192,897,333
Fix detection of pending jobs
It allows showing a message like `The dataset is being processed. Retry later.` or `The split is being processed. Retry later.`
Fix detection of pending jobs: It allows showing a message like `The dataset is being processed. Retry later.` or `The split is being processed. Retry later.`
closed
2022-04-05T09:49:47Z
2022-04-05T17:17:28Z
2022-04-05T17:17:27Z
severo
1,192,862,488
Row too big to be stored in cache
Seen with https://huggingface.co/datasets/elena-soare/crawled-ecommerce: at least one row is too big (~90MB) to be stored in the MongoDB cache: ``` pymongo.errors.DocumentTooLarge: BSON document too large (90954494 bytes) - the connected server supports BSON document sizes up to 16793598 bytes. ``` We should ...
Row too big to be stored in cache: Seen with https://huggingface.co/datasets/elena-soare/crawled-ecommerce: at least one row is too big (~90MB) to be stored in the MongoDB cache: ``` pymongo.errors.DocumentTooLarge: BSON document too large (90954494 bytes) - the connected server supports BSON document sizes up to...
closed
2022-04-05T09:19:36Z
2022-04-12T09:24:07Z
2022-04-12T08:15:54Z
severo
1,192,726,002
Error: "DbSplit matching query does not exist"
https://github.com/huggingface/datasets/issues/4093 https://huggingface.co/datasets/elena-soare/crawled-ecommerce ``` Server error Status code: 400 Exception: DoesNotExist Message: DbSplit matching query does not exist. ``` Introduced by #193? _edit:_ no
Error: "DbSplit matching query does not exist": https://github.com/huggingface/datasets/issues/4093 https://huggingface.co/datasets/elena-soare/crawled-ecommerce ``` Server error Status code: 400 Exception: DoesNotExist Message: DbSplit matching query does not exist. ``` Introduced by #193? ...
closed
2022-04-05T07:13:32Z
2022-04-07T08:16:07Z
2022-04-07T08:15:43Z
severo
1,192,089,374
feat: 🎸 install libsndfile 1.0.30 and support opus files
fixes https://github.com/huggingface/datasets-preview-backend/issues/194
feat: 🎸 install libsndfile 1.0.30 and support opus files: fixes https://github.com/huggingface/datasets-preview-backend/issues/194
closed
2022-04-04T17:23:26Z
2022-04-07T10:29:10Z
2022-04-04T17:41:37Z
severo
1,192,056,493
support opus files
https://huggingface.co/datasets/polinaeterna/ml_spoken_words ``` Message: Decoding .opus files requires 'libsndfile'>=1.0.30, it can be installed via conda: `conda install -c conda-forge libsndfile>=1.0.30` ``` The current version of libsndfile on Ubuntu stable (20.04) is 1.0.28, and we need version 1.0.3...
support opus files: https://huggingface.co/datasets/polinaeterna/ml_spoken_words ``` Message: Decoding .opus files requires 'libsndfile'>=1.0.30, it can be installed via conda: `conda install -c conda-forge libsndfile>=1.0.30` ``` The current version of libsndfile on Ubuntu stable (20.04) is 1.0.28, and w...
closed
2022-04-04T16:51:06Z
2022-04-04T17:41:37Z
2022-04-04T17:41:37Z
severo
1,191,999,920
give reason in error if dataset/split cache is refreshing
fixes #186
give reason in error if dataset/split cache is refreshing: fixes #186
closed
2022-04-04T15:59:48Z
2022-04-04T16:17:17Z
2022-04-04T16:17:16Z
severo
1,191,841,017
test: ๐Ÿ’ re-enable tests for temporarily disabled datasets
And: disable tests on common_voice
test: ๐Ÿ’ re-enable tests for temporarily disabled datasets: And: disable tests on common_voice
closed
2022-04-04T14:01:31Z
2022-04-04T14:24:40Z
2022-04-04T14:24:39Z
severo
1,191,519,057
Error with RGBA images
https://huggingface.co/datasets/huggan/few-shot-skulls ``` Status code: 500 Exception: Status500Error Message: cannot write mode RGBA as JPEG ``` reported by @NielsRogge
Error with RGBA images: https://huggingface.co/datasets/huggan/few-shot-skulls ``` Status code: 500 Exception: Status500Error Message: cannot write mode RGBA as JPEG ``` reported by @NielsRogge
closed
2022-04-04T09:38:52Z
2022-06-21T16:46:11Z
2022-06-21T16:24:53Z
severo
1,187,825,497
Cache is not refreshed when a dataset is moved (renamed)?
See https://huggingface.slack.com/archives/C01BWJU0YKW/p1648720173531679?thread_ts=1648719150.059249&cid=C01BWJU0YKW The dataset https://huggingface.co/datasets/huggan/horse2zebra-aligned has been renamed https://huggingface.co/datasets/huggan/horse2zebra (on 2022/03/31). The [preview](https://huggingface.co/dataset...
Cache is not refreshed when a dataset is moved (renamed)?: See https://huggingface.slack.com/archives/C01BWJU0YKW/p1648720173531679?thread_ts=1648719150.059249&cid=C01BWJU0YKW The dataset https://huggingface.co/datasets/huggan/horse2zebra-aligned has been renamed https://huggingface.co/datasets/huggan/horse2zebra (o...
closed
2022-03-31T09:56:01Z
2023-09-25T12:11:35Z
2022-09-19T09:38:40Z
severo
1,186,577,966
remove "gated datasets unlock" logic
the new tokens are not related to a user and have read access to the gated datasets (right, @SBrandeis?). also: add two tests to ensure the gated datasets can be accessed
remove "gated datasets unlock" logic: the new tokens are not related to a user and have read access to the gated datasets (right, @SBrandeis?). also: add two tests to ensure the gated datasets can be accessed
closed
2022-03-30T14:48:39Z
2022-04-01T16:39:24Z
2022-04-01T16:39:23Z
severo
1,224,197,146
Access images through an IIIF Image API
> The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: > > ``` > {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format} > ``` > > A conc...
Access images through an IIIF Image API: > The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: > > ``` > {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation...
closed
2022-03-30T12:59:02Z
2022-09-16T20:03:54Z
2022-09-16T20:03:54Z
severo
1,180,823,959
Upgrade datasets to fix the issue with common_voice
See https://github.com/huggingface/datasets-preview-backend/runs/5690421290?check_suite_focus=true Upgrading to the next release (next week) should fix the CI and the common_voice dataset viewer
Upgrade datasets to fix the issue with common_voice: See https://github.com/huggingface/datasets-preview-backend/runs/5690421290?check_suite_focus=true Upgrading to the next release (next week) should fix the CI and the common_voice dataset viewer
closed
2022-03-25T13:54:24Z
2022-04-14T13:21:56Z
2022-04-14T13:21:55Z
severo
1,180,613,607
Update blocked datasets
null
Update blocked datasets:
closed
2022-03-25T10:28:42Z
2022-03-25T13:55:00Z
2022-03-25T13:54:59Z
severo
1,224,197,612
Avoid being blocked by Google (and other providers)
See https://github.com/huggingface/datasets/issues/4005#issuecomment-1077897284 or https://github.com/huggingface/datasets/pull/3979#issuecomment-1077838956
Avoid being blocked by Google (and other providers): See https://github.com/huggingface/datasets/issues/4005#issuecomment-1077897284 or https://github.com/huggingface/datasets/pull/3979#issuecomment-1077838956
closed
2022-03-25T10:17:30Z
2022-06-17T12:55:22Z
2022-06-17T12:55:22Z
severo
1,179,775,656
Show a better error message when the cache is refreshing
For example, in https://github.com/huggingface/datasets-preview-backend/issues/185, the message was "No data" which is misleading
Show a better error message when the cache is refreshing: For example, in https://github.com/huggingface/datasets-preview-backend/issues/185, the message was "No data" which is misleading
closed
2022-03-24T16:49:38Z
2022-04-12T09:43:28Z
2022-04-12T09:43:28Z
severo
1,175,617,103
No data on nielsr/CelebA-faces
https://huggingface.co/datasets/nielsr/CelebA-faces <img width="1056" alt="Capture d'écran 2022-03-21 à 17 12 55" src="https://user-images.githubusercontent.com/1676121/159303539-b3495de4-4308-477a-a1bd-fb65ee598933.png"> reported by @NielsRogge Possibly because the dataset is still in the job queue (https:...
No data on nielsr/CelebA-faces: https://huggingface.co/datasets/nielsr/CelebA-faces <img width="1056" alt="Capture d'écran 2022-03-21 à 17 12 55" src="https://user-images.githubusercontent.com/1676121/159303539-b3495de4-4308-477a-a1bd-fb65ee598933.png"> reported by @NielsRogge Possibly because the dataset i...
closed
2022-03-21T16:14:16Z
2022-03-24T16:48:38Z
2022-03-24T16:48:37Z
severo
1,175,518,304
Show the images in cgarciae/cartoonset
https://huggingface.co/datasets/cgarciae/cartoonset <img width="759" alt="Capture d'écran 2022-03-21 à 16 03 40" src="https://user-images.githubusercontent.com/1676121/159289733-5c8d0bd9-15a5-472e-8446-1f16f7d6eaf0.png"> reported by @thomwolf at https://huggingface.slack.com/archives/C02V51Q3800/p16478749581929...
Show the images in cgarciae/cartoonset: https://huggingface.co/datasets/cgarciae/cartoonset <img width="759" alt="Capture d'écran 2022-03-21 à 16 03 40" src="https://user-images.githubusercontent.com/1676121/159289733-5c8d0bd9-15a5-472e-8446-1f16f7d6eaf0.png"> reported by @thomwolf at https://huggingface.slack....
closed
2022-03-21T15:04:08Z
2022-03-24T16:47:21Z
2022-03-24T16:47:20Z
severo
1,172,248,584
Issue with access to API for gated datasets?
https://github.com/huggingface/datasets/issues/3954 https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1 ``` Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1?full=true ```
Issue with access to API for gated datasets?: https://github.com/huggingface/datasets/issues/3954 https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1 ``` Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1...
closed
2022-03-17T11:18:52Z
2022-04-04T16:24:29Z
2022-04-04T16:24:29Z
severo
1,170,866,049
feat: 🎸 upgrade to datasets 2.0.0
fixes #181
feat: 🎸 upgrade to datasets 2.0.0: fixes #181
closed
2022-03-16T11:03:04Z
2022-03-16T11:10:24Z
2022-03-16T11:10:23Z
severo
1,170,814,904
Upgrade datasets to 2.0.0
https://github.com/huggingface/datasets/releases/tag/2.0.0 In particular, it should fix up to 50 datasets hosted on Google Drive thanks to https://github.com/huggingface/datasets/pull/3843
Upgrade datasets to 2.0.0: https://github.com/huggingface/datasets/releases/tag/2.0.0 In particular, it should fix up to 50 datasets hosted on Google Drive thanks to https://github.com/huggingface/datasets/pull/3843
closed
2022-03-16T10:20:49Z
2022-03-16T11:10:23Z
2022-03-16T11:10:23Z
severo
1,170,804,202
Issue when transferring a dataset from a user to an org?
https://huggingface.slack.com/archives/C01229B19EX/p1647372809022069 > When transferring a dataset from a user to an org, the datasets viewer no longer worked. What's the issue here? reported by @ktangri
Issue when transferring a dataset from a user to an org?: https://huggingface.slack.com/archives/C01229B19EX/p1647372809022069 > When transferring a dataset from a user to an org, the datasets viewer no longer worked. What's the issue here? reported by @ktangri
closed
2022-03-16T10:11:28Z
2022-03-21T08:41:34Z
2022-03-21T08:41:34Z
severo
1,168,468,181
feat: 🎸 revert double limit on the rows size (reverts #162)
null
feat: 🎸 revert double limit on the rows size (reverts #162):
closed
2022-03-14T14:32:50Z
2022-03-14T14:32:56Z
2022-03-14T14:32:55Z
severo
1,168,433,474
feat: 🎸 truncate cell contents instead of removing rows
Add a ROWS_MIN_NUMBER environment variable, which defines how many rows should be returned as a minimum. If the size of these rows is greater than the ROWS_MAX_BYTES limit, then the cells themselves are truncated (transformed to strings, then truncated to 100 bytes, which is a hardcoded limit). In that case, the ne...
feat: 🎸 truncate cell contents instead of removing rows: Add a ROWS_MIN_NUMBER environment variable, which defines how many rows should be returned as a minimum. If the size of these rows is greater than the ROWS_MAX_BYTES limit, then the cells themselves are truncated (transformed to strings, then truncated to 100...
closed
2022-03-14T14:09:33Z
2022-03-14T14:13:38Z
2022-03-14T14:13:37Z
severo
1,166,626,189
No data for cnn_dailymail
https://huggingface.co/datasets/cnn_dailymail returns "No data"
No data for cnn_dailymail: https://huggingface.co/datasets/cnn_dailymail returns "No data"
closed
2022-03-11T16:36:16Z
2022-04-25T11:41:49Z
2022-04-25T11:41:49Z
severo
1,165,265,890
Truncate the row cells to a maximum size
As the purpose of this backend is to serve 100 rows of every dataset split to moonlanding, to show them inside a table, we can optimize a bit for that purpose. In particular, when the 100 rows are really big, the browsers have a hard time rendering the table, in particular Safari. Also: this can generate a lot of tr...
Truncate the row cells to a maximum size: As the purpose of this backend is to serve 100 rows of every dataset split to moonlanding, to show them inside a table, we can optimize a bit for that purpose. In particular, when the 100 rows are really big, the browsers have a hard time rendering the table, in particular S...
closed
2022-03-10T14:01:53Z
2022-03-14T15:33:42Z
2022-03-14T15:33:41Z
severo
1,161,755,421
Fix ci
null
Fix ci:
closed
2022-03-07T18:08:48Z
2022-03-07T20:15:48Z
2022-03-07T20:15:47Z
severo
1,161,698,359
feat: 🎸 upgrade datasets to 1.18.4
see https://github.com/huggingface/datasets/releases/tag/1.18.4
feat: 🎸 upgrade datasets to 1.18.4: see https://github.com/huggingface/datasets/releases/tag/1.18.4
closed
2022-03-07T17:16:26Z
2022-03-07T17:26:20Z
2022-03-07T17:16:30Z
severo
1,159,490,859
Issue with mongo
https://huggingface.co/datasets/circa ``` Message: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_preview_cache.rows index: dataset_name_1_config_name_1_split_name_1_row_idx_1 dup key: { dataset_name: "circa", config_name: "default", split_name: "train", row_idx: 0 }, ful...
Issue with mongo: https://huggingface.co/datasets/circa ``` Message: Tried to save duplicate unique keys (E11000 duplicate key error collection: datasets_preview_cache.rows index: dataset_name_1_config_name_1_split_name_1_row_idx_1 dup key: { dataset_name: "circa", config_name: "default", split_name: "train",...
closed
2022-03-04T10:29:44Z
2022-03-07T09:23:55Z
2022-03-07T09:23:55Z
severo
1,159,457,524
Support segmented images
Issue proposed by @NielsRogge eg https://huggingface.co/datasets/scene_parse_150 <img width="785" alt="Capture d'écran 2022-03-04 à 10 56 05" src="https://user-images.githubusercontent.com/1676121/156741519-fbae6844-2606-4c28-837e-279d83d00865.png"> Every pixel in the images of the `annotation` column has a...
Support segmented images: Issue proposed by @NielsRogge eg https://huggingface.co/datasets/scene_parse_150 <img width="785" alt="Capture d'écran 2022-03-04 à 10 56 05" src="https://user-images.githubusercontent.com/1676121/156741519-fbae6844-2606-4c28-837e-279d83d00865.png"> Every pixel in the images of the...
open
2022-03-04T09:55:50Z
2024-06-19T14:00:42Z
null
severo
1,154,130,333
Warm the nginx cache after every dataset update?
Currently, when the list of splits, or the list of rows of a split, are updated, only the cache (MongoDB) is updated. But the cache in nginx still a) has the old value until it expires, or b) is still empty until the endpoint is requested for that dataset. Case a) is not very important since the cache expiration is ...
Warm the nginx cache after every dataset update?: Currently, when the list of splits, or the list of rows of a split, are updated, only the cache (MongoDB) is updated. But the cache in nginx still a) has the old value until it expires, or b) is still empty until the endpoint is requested for that dataset. Case a) is...
closed
2022-02-28T14:03:08Z
2022-06-17T12:52:15Z
2022-06-17T12:52:15Z
severo
1,150,603,977
feat: 🎸 hide expected errors from the worker logs
null
feat: 🎸 hide expected errors from the worker logs:
closed
2022-02-25T15:52:25Z
2022-02-25T15:52:31Z
2022-02-25T15:52:31Z
severo
1,150,593,087
fix: ๐Ÿ› force job finishing in any case
null
fix: ๐Ÿ› force job finishing in any case:
closed
2022-02-25T15:40:31Z
2022-02-25T15:41:51Z
2022-02-25T15:41:49Z
severo
1,150,293,567
STARTED jobs with a non empty "finished_at" field
Some jobs cannot be finished correctly and get stuck in the STARTED status (see https://github.com/huggingface/datasets-preview-backend/blob/9e360f9de0df91be28001587964c1af25f82d051/src/datasets_preview_backend/io/queue.py#L253) ``` 36|worker-splits-A | DEBUG: 2022-02-25 10:32:44,579 - datasets_preview_backend.io....
STARTED jobs with a non empty "finished_at" field: Some jobs cannot be finished correctly and get stuck in the STARTED status (see https://github.com/huggingface/datasets-preview-backend/blob/9e360f9de0df91be28001587964c1af25f82d051/src/datasets_preview_backend/io/queue.py#L253) ``` 36|worker-splits-A | DEBUG: 202...
closed
2022-02-25T10:35:42Z
2022-02-25T15:56:28Z
2022-02-25T15:56:28Z
severo
1,150,281,750
fix: ๐Ÿ› fix CI
null
fix: ๐Ÿ› fix CI:
closed
2022-02-25T10:23:27Z
2022-02-25T10:23:34Z
2022-02-25T10:23:33Z
severo