| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 228 | stable-diffusion-v1-4 link in release v0.2.3 is broken | ### Describe the bug
@anton-l the link (https://huggingface.co/CompVis/stable-diffusion-v1-4) in the [release v0.2.3](https://github.com/huggingface/diffusers/releases/tag/v0.2.3) returns a 404.
### Reproduction
_No response_
### Logs
_No response_
### System Info
```shell
N/A
```
| https://github.com/huggingface/diffusers/issues/228 | closed | [
"question"
] | 2022-08-22T09:07:27Z | 2022-08-22T20:53:00Z | null | leszekhanusz |
huggingface/pytorch-image-models | 1,424 | [FEATURE] What hyperparameters are used to get the results stated in the paper with the ViT-B pretrained miil weights on ImageNet-1k? | **Is your feature request related to a problem? Please describe.**
What hyperparameters are used to get the results stated in this paper (https://arxiv.org/pdf/2104.10972.pdf) on ImageNet-1k with the ViT-B pretrained miil weights from vision_transformer.py in line 164-167? I tried the hyperparameters as stated in the p... | https://github.com/huggingface/pytorch-image-models/issues/1424 | closed | [
"enhancement"
] | 2022-08-21T22:26:48Z | 2022-08-22T04:17:43Z | null | Phuoc-Hoan-Le |
huggingface/optimum | 351 | Add all available ONNX models to ORTConfigManager | This issue is linked to the [ONNXConfig for all](https://huggingface.co/OWG) working group created for implementing an ONNXConfig for all available models. Let's extend our work and try to add all models with a fully functional ONNXConfig implemented to ORTConfigManager.
Adding models to ORTConfigManager will allow ... | https://github.com/huggingface/optimum/issues/351 | open | [
"good first issue"
] | 2022-08-16T08:18:50Z | 2025-11-19T13:24:40Z | 3 | chainyo |
huggingface/optimum | 350 | Migrate metrics used in all examples from Datasets to Evaluate | ### Feature request
Copied from https://github.com/huggingface/transformers/issues/18306
The metrics are slowly leaving [Datasets](https://github.com/huggingface/datasets) (they are being deprecated as we speak) to move to the [Evaluate](https://github.com/huggingface/evaluate) library. We are looking for contribut... | https://github.com/huggingface/optimum/issues/350 | closed | [] | 2022-08-16T08:04:07Z | 2022-10-27T10:07:58Z | 0 | fxmarty |
huggingface/datasets | 4,839 | ImageFolder dataset builder does not read the validation data set if it is named as "val" | **Is your feature request related to a problem? Please describe.**
Currently, the `'imagefolder'` data set builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca9940... | https://github.com/huggingface/datasets/issues/4839 | closed | [
"enhancement"
] | 2022-08-12T13:26:00Z | 2022-08-30T10:14:55Z | 1 | akt42 |
huggingface/datasets | 4,836 | Is it possible to pass multiple links to a split in load script? | **Is your feature request related to a problem? Please describe.**
I wanted to use a python loading script in hugging face datasets that use different sources of text (it's somehow a compilation of multiple datasets + my own dataset) based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading) I a... | https://github.com/huggingface/datasets/issues/4836 | open | [
"enhancement"
] | 2022-08-12T11:06:11Z | 2022-08-12T11:06:11Z | 0 | sadrasabouri |
huggingface/datasets | 4,820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | Hi, when I try to run the prepare_dataset function in [fine-tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get this error:
Terminating: fork() called from a process already using GNU OpenMP, this is un... | https://github.com/huggingface/datasets/issues/4820 | closed | [
"bug"
] | 2022-08-10T19:42:33Z | 2022-08-10T19:53:10Z | 1 | talhaanwarch |
huggingface/dataset-viewer | 502 | Improve the docs: what is needed to make the dataset viewer work? | See https://discuss.huggingface.co/t/the-dataset-preview-has-been-disabled-on-this-dataset/21339 | https://github.com/huggingface/dataset-viewer/issues/502 | closed | [
"documentation"
] | 2022-08-08T13:27:21Z | 2022-09-19T09:12:00Z | null | severo |
huggingface/dataset-viewer | 498 | Test cookie authentication | Testing token authentication is easy, see https://github.com/huggingface/datasets-server/issues/199#issuecomment-1205528302, but testing session cookie authentication might be a bit more complex since we need to log in to get the cookie. I prefer to have a dedicated issue for it. | https://github.com/huggingface/dataset-viewer/issues/498 | closed | [
"question",
"tests"
] | 2022-08-04T17:06:31Z | 2022-08-22T18:34:29Z | null | severo |
huggingface/datasets | 4,791 | Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english | ### Link
https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train
### Description
The dataset can be loaded fine but the viewer shows this error:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
... | https://github.com/huggingface/datasets/issues/4791 | closed | [
"dataset-viewer"
] | 2022-08-04T12:49:16Z | 2022-08-04T13:43:16Z | 1 | xplip |
huggingface/datasets | 4,776 | RuntimeError when using torchaudio 0.12.0 to load MP3 audio file | Current version of `torchaudio` (0.12.0) raises a RuntimeError when trying to use `sox_io` backend but non-Python dependency `sox` is not installed:
https://github.com/pytorch/audio/blob/2e1388401c434011e9f044b40bc8374f2ddfc414/torchaudio/backend/sox_io_backend.py#L21-L29
```python
def _fail_load(
filepath: str... | https://github.com/huggingface/datasets/issues/4776 | closed | [] | 2022-08-01T14:11:23Z | 2023-03-02T15:58:16Z | 3 | albertvillanova |
huggingface/optimum | 327 | Any workable example of exporting and inferencing with GPU? | ### System Info
```shell
I have tried many methods, but never successfully done it. Thanks.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My ... | https://github.com/huggingface/optimum/issues/327 | closed | [
"bug"
] | 2022-08-01T05:12:15Z | 2022-08-01T06:19:26Z | 1 | lkluo |
huggingface/datasets | 4,757 | Document better when relative paths are transformed to URLs | As discussed with @ydshieh, when passing a relative path as `data_dir` to `load_dataset` of a dataset hosted on the Hub, the relative path is transformed to the corresponding URL of the Hub dataset.
Currently, we mention this in our docs here: [Create a dataset loading script > Download data files and organize split... | https://github.com/huggingface/datasets/issues/4757 | closed | [
"documentation"
] | 2022-07-28T08:46:27Z | 2022-08-25T18:34:24Z | 0 | albertvillanova |
huggingface/diffusers | 143 | Running diffusers with GPU | Running the example code, I see that the CPU and not the GPU is used. Is there a way to use the GPU instead? | https://github.com/huggingface/diffusers/issues/143 | closed | [
"question"
] | 2022-07-28T08:34:12Z | 2022-08-15T17:27:31Z | null | jfdelgad |
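For the GPU question above, diffusers pipelines follow the usual PyTorch device convention. A minimal sketch; the checkpoint load requires a download and is therefore left commented, and the checkpoint name mirrors the one mentioned earlier in this table:

```python
import torch

# Pick the GPU if one is visible, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical usage (requires downloading the checkpoint):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# pipe = pipe.to(device)  # moves every sub-model of the pipeline to that device
print(device)
```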
huggingface/optimum | 320 | Feature request: allow user to provide tokenizer when loading transformer model | ### Feature request
When I try to load a locally saved transformers model with `ORTModelForSequenceClassification.from_pretrained(<path>, from_transformers=True)` an error occurs ("unable to generate dummy inputs for model") unless I also save the tokenizer in the checkpoint. A reproducible example of this is below.... | https://github.com/huggingface/optimum/issues/320 | closed | [
"Stale"
] | 2022-07-27T20:01:32Z | 2025-07-27T02:17:59Z | 3 | jessecambon |
huggingface/datasets | 4,744 | Remove instructions to generate dummy data from our docs | In our docs, we indicate to generate the dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI t... | https://github.com/huggingface/datasets/issues/4744 | closed | [
"documentation"
] | 2022-07-26T07:32:58Z | 2022-08-02T23:50:30Z | 2 | albertvillanova |
huggingface/datasets | 4,742 | Dummy data nowhere to be found | ## Describe the bug
To finalize my dataset, I wanted to create dummy data as per the guide and I ran
```shell
datasets-cli dummy_data datasets/hebban-reviews --auto_generate
```
where hebban-reviews is [this repo](https://huggingface.co/datasets/BramVanroy/hebban-reviews). And even though the scripts runs an... | https://github.com/huggingface/datasets/issues/4742 | closed | [
"bug"
] | 2022-07-25T19:18:42Z | 2022-11-04T14:04:24Z | 3 | BramVanroy |
huggingface/dataset-viewer | 466 | Take decisions before launching in public | ## Version
Should we integrate a version in the path or domain, to help with future breaking changes?
Three options:
1. domain based: https://v1.datasets-server.huggingface.co
2. path based: https://datasets-server.huggingface.co/v1/
3. no version (current): https://datasets-server.huggingface.co
I think 3 ... | https://github.com/huggingface/dataset-viewer/issues/466 | closed | [
"question"
] | 2022-07-25T18:04:59Z | 2022-07-26T14:39:46Z | null | severo |
huggingface/dataset-viewer | 458 | Move /webhook to admin instead of api? | As we've done with the technical endpoints in https://github.com/huggingface/datasets-server/pull/457?
It might help to protect the endpoint (#95), even if it's not really dangerous to let people add jobs to refresh datasets IMHO for now. | https://github.com/huggingface/dataset-viewer/issues/458 | closed | [
"question"
] | 2022-07-22T20:21:39Z | 2022-09-16T17:24:05Z | null | severo |
huggingface/dataset-viewer | 455 | what to do with /is-valid? | Currently, the endpoint /is-valid is not documented in https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json (but it is in https://github.com/huggingface/datasets-server/blob/main/services/api/README.md).
It's not used in the dataset viewer in moonlanding, but https://github.com/hu... | https://github.com/huggingface/dataset-viewer/issues/455 | closed | [
"question"
] | 2022-07-22T19:29:08Z | 2022-08-02T14:16:24Z | null | severo |
huggingface/datasets | 4,736 | Dataset Viewer issue for deepklarity/huggingface-spaces-dataset | ### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on a uploaded dataset. I'm getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is cs... | https://github.com/huggingface/datasets/issues/4736 | closed | [
"dataset-viewer"
] | 2022-07-22T12:14:18Z | 2022-07-22T13:46:38Z | 1 | dk-crazydiv |
huggingface/datasets | 4,732 | Document better that loading a dataset passing its name does not use the local script | As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be more clear that loading a dataset by passing its name does not use the (modified) local script of it.
What he did:
- he installed `datasets` from source
- he modified locally `datasets/the_pile/... | https://github.com/huggingface/datasets/issues/4732 | closed | [
"documentation"
] | 2022-07-22T06:07:31Z | 2022-08-23T16:32:23Z | 3 | albertvillanova |
huggingface/datasets | 4,719 | Issue loading TheNoob3131/mosquito-data dataset | 
So my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to ... | https://github.com/huggingface/datasets/issues/4719 | closed | [] | 2022-07-19T17:47:37Z | 2022-07-20T06:46:57Z | 2 | thenerd31 |
huggingface/datasets | 4,711 | Document how to create a dataset loading script for audio/vision | Currently, in our docs for Audio/Vision/Text, we explain how to:
- Load data
- Process data
However we only explain how to *Create a dataset loading script* for text data.
I think it would be useful that we add the same for Audio/Vision as these have some specificities different from Text.
See, for example:
... | https://github.com/huggingface/datasets/issues/4711 | closed | [
"documentation"
] | 2022-07-19T08:03:40Z | 2023-07-25T16:07:52Z | 1 | albertvillanova |
huggingface/optimum | 306 | `ORTModelForConditionalGeneration` did not have `generate()` module after converting from `T5ForConditionalGeneration` | ### System Info
```shell
Machine: Apple M1 Pro
Optimum version: 1.3.0
Transformers version: 4.20.1
Onnxruntime version: 1.11.1
# Question
How do I run inference with a quantized ONNX model from the class ORTModelForConditionalGeneration (previously using T5ForConditionalGeneration)? I've successfully converted T5ForConditiona... | https://github.com/huggingface/optimum/issues/306 | closed | [
"bug"
] | 2022-07-19T07:14:48Z | 2022-07-19T09:29:09Z | 2 | tiketdatailham |
huggingface/datasets | 4,694 | Distributed data parallel training for streaming datasets | ### Feature request
Any documentations for the the `load_dataset(streaming=True)` for (multi-node multi-GPU) DDP training?
### Motivation
Given a bunch of data files, it is expected to split them onto different GPUs. Is there a guide or documentation?
### Your contribution
Does it requires manually spli... | https://github.com/huggingface/datasets/issues/4694 | open | [
"enhancement"
] | 2022-07-17T01:29:43Z | 2023-04-26T18:21:09Z | 6 | cyk1337 |
huggingface/datasets | 4,684 | How to assign new values to Dataset? | 
Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import l... | https://github.com/huggingface/datasets/issues/4684 | closed | [
"enhancement"
] | 2022-07-15T04:17:57Z | 2023-03-20T15:50:41Z | 2 | beyondguo |
huggingface/datasets | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are JSONL-formatted. I was trying to iterate through it in streaming mode and grab all "score_title_description" values, but I kept getting key... | https://github.com/huggingface/datasets/issues/4682 | open | [] | 2022-07-14T13:26:47Z | 2022-07-14T13:26:47Z | 0 | eunseojo |
huggingface/optimum | 290 | Quantized Model size difference when using Optimum vs. Onnxruntime | Package versions


`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
... | https://github.com/huggingface/datasets/issues/4675 | open | [
"bug"
] | 2022-07-12T15:04:04Z | 2022-07-14T14:17:46Z | 1 | BlueskyFR |
huggingface/datasets | 4,671 | Dataset Viewer issue for wmt16 | ### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status cod... | https://github.com/huggingface/datasets/issues/4671 | closed | [
"dataset-viewer"
] | 2022-07-11T08:34:11Z | 2022-09-13T13:27:02Z | 6 | lewtun |
huggingface/optimum | 276 | Force write of vanilla onnx model with `ORTQuantizer.export()` | ### Feature request
Force write of the non-quantized onnx model with `ORTQuantizer.export()`, or add an option to force write.
### Motivation
Currently, if the `onnx_model_path` already exists, we don't write the non-quantized model in to the indicated path.
https://github.com/huggingface/optimum/blob/04a2a6d290c... | https://github.com/huggingface/optimum/issues/276 | closed | [] | 2022-07-09T08:44:27Z | 2022-07-11T10:38:48Z | 2 | fxmarty |
huggingface/optimum | 262 | How can I set the number of threads for an Optimum-exported model? | ### System Info
```shell
optimum==1.2.3
onnxruntime==1.11.1
onnx==1.12.0
transformers==4.20.1
python version 3.7.13
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task i... | https://github.com/huggingface/optimum/issues/262 | closed | [
"bug"
] | 2022-07-06T06:53:30Z | 2022-09-19T11:25:23Z | 1 | MiladMolazadeh |
huggingface/optimum | 257 | Optimum Inference next steps | # What is this issue for?
This issue is a list of potential next steps for improving inference experience using `optimum`. The current list applies to the main namespace of optimum but should be soon extended to other namespaces including `intel`, `habana`, `graphcore`.
## Next Steps/Features
- [x] #199
- [... | https://github.com/huggingface/optimum/issues/257 | closed | [
"inference",
"Stale"
] | 2022-07-06T05:02:12Z | 2025-09-13T02:01:29Z | 1 | philschmid |
huggingface/datasets | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | ## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or pass fe...
"bug"
] | 2022-07-04T11:21:44Z | 2022-07-15T14:24:24Z | 0 | polinaeterna |
huggingface/datasets | 4,619 | np arrays get turned into native lists | ## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datas... | https://github.com/huggingface/datasets/issues/4619 | open | [
"bug"
] | 2022-07-02T17:54:57Z | 2022-07-03T20:27:07Z | 3 | ZhaofengWu |
huggingface/datasets | 4,603 | CI fails recurrently and randomly on Windows | As reported by @lhoestq,
The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\to... | https://github.com/huggingface/datasets/issues/4603 | closed | [
"bug"
] | 2022-06-30T10:59:58Z | 2022-06-30T13:22:25Z | 0 | albertvillanova |
huggingface/dataset-viewer | 430 | Shuffle the rows? | see https://github.com/huggingface/moon-landing/issues/3375 | https://github.com/huggingface/dataset-viewer/issues/430 | closed | [
"question",
"feature request",
"P2"
] | 2022-06-30T08:31:20Z | 2023-09-08T13:41:42Z | null | severo |
huggingface/datasets | 4,591 | Can't push Images to hub with manual Dataset | ## Describe the bug
If I create a dataset that includes an 'Image' feature manually, then when pushing to the Hub the decoded images are not pushed;
instead it looks for each image where its local path is (or used to be).
This doesn't happen (or at least didn't use to) with imagefolder. I want to build the dataset manually because it is compli... | https://github.com/huggingface/datasets/issues/4591 | closed | [
"bug"
] | 2022-06-29T00:01:23Z | 2022-07-08T12:01:36Z | 1 | cceyda |
huggingface/dataset-viewer | 423 | Add terms of service to the API? | See https://swagger.io/specification/#info-object
Maybe to mention a rate-limiter, if we implement one | https://github.com/huggingface/dataset-viewer/issues/423 | closed | [
"question"
] | 2022-06-28T11:27:16Z | 2022-09-16T17:30:38Z | null | severo |
huggingface/datasets | 4,571 | move under the facebook org? | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset... | https://github.com/huggingface/datasets/issues/4571 | open | [] | 2022-06-26T11:19:09Z | 2023-09-25T12:05:18Z | 3 | lewtun |
huggingface/datasets | 4,570 | Dataset sharding non-contiguous? | ## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double-check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggi... | https://github.com/huggingface/datasets/issues/4570 | closed | [
"bug"
] | 2022-06-26T08:34:05Z | 2022-06-30T11:00:47Z | 5 | cakiki |
huggingface/datasets | 4,569 | Dataset Viewer issue for sst2 | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with Connectio... | https://github.com/huggingface/datasets/issues/4569 | closed | [
"dataset-viewer"
] | 2022-06-26T07:32:54Z | 2022-06-27T06:37:48Z | 2 | lewtun |
huggingface/dataset-viewer | 416 | Remove the Kubernetes CPU "limits"? | https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-%28Prometheus-Alert%29#why-you-dont-need-cpu-limits
> ## Why you don't need CPU limits
>
> As long as your pod has a CPU request, [Kubernetes maintainers like Tim Hockin recommend not using limits at all](https://twitter.com/thockin/status/1... | https://github.com/huggingface/dataset-viewer/issues/416 | closed | [
"question"
] | 2022-06-23T12:26:39Z | 2022-07-22T13:15:41Z | null | severo |
huggingface/dataset-viewer | 415 | Expose an endpoint with the column types/modalities of each dataset? | It could be used on the Hub to find all the "images" or "audio" datasets.
By the way, the info is normally already in the datasets-info.json (.features) | https://github.com/huggingface/dataset-viewer/issues/415 | closed | [
"question"
] | 2022-06-23T10:36:01Z | 2022-09-16T17:32:45Z | null | severo |
huggingface/datasets | 4,542 | [to_tf_dataset] Use Feather for better compatibility with TensorFlow ? | To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_... | https://github.com/huggingface/datasets/issues/4542 | open | [
"generic discussion"
] | 2022-06-22T14:42:00Z | 2022-10-11T08:45:45Z | 48 | lhoestq |
huggingface/dataset-viewer | 413 | URL design | Currently, the API is available at the root, ie: https://datasets-server.huggingface.co/rows?...
This can lead to some issues:
- if we add other services, such as /doc or /search, the API will share the namespace with these other services. This means that we must take care of avoiding collisions between services an... | https://github.com/huggingface/dataset-viewer/issues/413 | closed | [
"question"
] | 2022-06-22T07:13:24Z | 2022-06-28T08:48:02Z | null | severo |
huggingface/datasets | 4,538 | Dataset Viewer issue for Pile of Law | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines... | https://github.com/huggingface/datasets/issues/4538 | closed | [
"dataset-viewer"
] | 2022-06-22T02:48:40Z | 2022-06-27T07:30:23Z | 5 | Breakend |
huggingface/datasets | 4,522 | Try to reduce the number of datasets that require manual download | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, w... | https://github.com/huggingface/datasets/issues/4522 | open | [] | 2022-06-17T11:42:03Z | 2022-06-17T11:52:48Z | 0 | severo |
huggingface/dataset-viewer | 394 | Implement API pagination? | Should we add API pagination right now? Maybe useful for the "technical" endpoints like https://datasets-server.huggingface.co/queue-dump-waiting-started or https://datasets-server.huggingface.co/cache-reports
https://simonwillison.net/2021/Jul/1/pagnis/
| https://github.com/huggingface/dataset-viewer/issues/394 | closed | [
"question"
] | 2022-06-17T08:54:41Z | 2022-08-01T19:02:00Z | null | severo |
huggingface/dataset-viewer | 390 | How to best manage the datasets that we cannot process due to RAM? | The dataset worker pod is killed (OOMKilled) for:
```
bigscience/P3
Graphcore/gqa-lxmert
echarlaix/gqa-lxmert
```
and the split worker pod is killed (OOMKilled) for:
```
imthanhlv/binhvq_news21_raw / started / train
openclimatefix/nimrod-uk-1km / sample / train/test/validation
PolyAI/minds14 / zh-CN / t... | https://github.com/huggingface/dataset-viewer/issues/390 | closed | [
"bug",
"question"
] | 2022-06-17T08:04:45Z | 2022-09-19T09:42:36Z | null | severo |
huggingface/dataset-viewer | 388 | what happened to the pods? | ```
$ k get pods -w
...
datasets-server-prod-datasets-worker-776b774978-g7mpk 1/1 Evicted 0 73m │DEBUG: 2022-06-16 18:42:46,966 - datasets_server.worker - try to process a split job
datasets-server-prod-datasets-worker-776b774978-cdb4b 0/1 Pendin... | https://github.com/huggingface/dataset-viewer/issues/388 | closed | [
"question"
] | 2022-06-16T19:46:00Z | 2022-06-17T07:48:20Z | null | severo |
huggingface/pytorch_block_sparse | 17 | What is "custom" "custom-back" in dispatch_policies.h? | Hi! I am learning SGEMM and find in dispatch_policies.h has a "Custom", "CustomBack". Not sure what does this mean? Thank you!!! | https://github.com/huggingface/pytorch_block_sparse/issues/17 | open | [] | 2022-06-16T05:46:42Z | 2022-06-16T05:46:42Z | null | ziyuhuang123 |
huggingface/datasets | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | If the dataset does not need splits (i.e., no training and validation split; it is more like a table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?
Or I can paraphrase the question in the following way: how to skip `_spl... | https://github.com/huggingface/datasets/issues/4507 | closed | [
"enhancement"
] | 2022-06-15T18:56:34Z | 2022-06-16T10:40:08Z | 2 | liyucheng09 |
huggingface/datasets | 4,504 | Can you please add the Stanford dog dataset? | ## Adding a Dataset
- **Name:** *Stanford dog dataset*
- **Description:** *The dataset has about 120 classes for a total of 20,580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github... | https://github.com/huggingface/datasets/issues/4504 | closed | [
"good first issue",
"dataset request"
] | 2022-06-15T15:39:35Z | 2024-12-09T15:44:11Z | 16 | dgrnd4 |
huggingface/datasets | 4,502 | Logic bug in arrow_writer? | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values())... | https://github.com/huggingface/datasets/issues/4502 | closed | [] | 2022-06-15T14:50:00Z | 2022-06-18T15:15:51Z | 10 | changjonathanc |
huggingface/optimum | 219 | Support to wav2vec2 | ### Feature request
Is there any plan to include wav2vec2 class to optimum?
```python
from optimum.onnxruntime.configuration import AutoQuantizationConfig
from optimum.onnxruntime import ORTQuantizer
# The model we wish to quantize
model_checkpoint = "facebook/wav2vec2-base-960h"
# The type of quantization t... | https://github.com/huggingface/optimum/issues/219 | closed | [] | 2022-06-15T12:47:42Z | 2022-07-08T10:34:33Z | 4 | asr-lord |
huggingface/dataset-viewer | 373 | Add support for building GitHub Codespace dev environment | Add support for building a GitHub Codespace dev environment (as it was done for the [moon landing](https://github.com/huggingface/moon-landing/pull/3188) project) to make it easier to contribute to the project. | https://github.com/huggingface/dataset-viewer/issues/373 | closed | [
"question"
] | 2022-06-14T14:37:58Z | 2022-09-19T09:05:26Z | null | mariosasko |
huggingface/datasets | 4,491 | Dataset Viewer issue for Pavithree/test | ### Link
https://huggingface.co/datasets/Pavithree/test
### Description
I have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missi... | https://github.com/huggingface/datasets/issues/4491 | closed | [
"dataset-viewer"
] | 2022-06-14T13:23:10Z | 2022-06-14T14:37:21Z | 1 | Pavithree |
huggingface/datasets | 4,478 | Dataset slow during model training | ## Describe the bug
While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training.
First, I have optimized my dataset following https://discuss.huggingface.co/... | https://github.com/huggingface/datasets/issues/4478 | open | [
"bug"
] | 2022-06-11T19:40:19Z | 2022-06-14T12:04:31Z | 5 | lehrig |
huggingface/datasets | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | ## Describe the bug
I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both c... | https://github.com/huggingface/datasets/issues/4439 | closed | [
"bug"
] | 2022-06-02T16:35:56Z | 2022-06-03T08:44:17Z | 3 | drscotthawley |
huggingface/dataset-viewer | 332 | Change moonlanding app token? | Should we replace `dataset-preview-backend`with `datasets-server`:
- here: https://github.com/huggingface/moon-landing/blob/f2ee3896cff3aa97aafb3476e190ef6641576b6f/server/models/App.ts#L16
- and here: https://github.com/huggingface/moon-landing/blob/82e71c10ed0b385e55a29f43622874acfc35a9e3/server/test/end_to_end_app... | https://github.com/huggingface/dataset-viewer/issues/332 | closed | [
"question"
] | 2022-06-01T09:29:12Z | 2022-09-19T09:33:33Z | null | severo |
huggingface/dataset-viewer | 325 | Test if /valid is a blocking request | https://github.com/huggingface/datasets-server/issues/250#issuecomment-1142013300
> > the requests to /valid are very long: do they block the incoming requests?)
> Depends on if your long running query is blocking the GIL or not. If you have async calls, it should be able to switch and take care of other requests, ... | https://github.com/huggingface/dataset-viewer/issues/325 | closed | [
"bug",
"question"
] | 2022-05-31T13:43:20Z | 2022-09-16T17:39:20Z | null | severo |
huggingface/datasets | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | **Is your feature request related to a problem? Please describe.**
So this is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` on tuples rather than `assertEqual`? `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library...
"enhancement"
] | 2022-05-30T12:13:18Z | 2022-09-30T16:01:37Z | 3 | alvarobartt |
huggingface/datasets | 4,417 | how to convert a dict generator into a huggingface dataset. | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well-distributed mixture of samples from multiple datasets. However, the resultant output from seqio is a Python generator of dicts, which I cannot convert back into a Hugging Face dataset.
The generator contains all the samples needed for ... | https://github.com/huggingface/datasets/issues/4417 | closed | [
"question"
] | 2022-05-29T16:28:27Z | 2022-09-16T14:44:19Z | null | StephennFernandes |
huggingface/dataset-viewer | 309 | Scale the worker pods depending on prometheus metrics? | We could scale the number of worker pods depending on:
- the size of the job queue
- the available resources
These data are available in prometheus, and we could use them to autoscale the pods. | https://github.com/huggingface/dataset-viewer/issues/309 | closed | [
"question"
] | 2022-05-25T09:56:05Z | 2022-09-19T09:30:49Z | null | severo |
huggingface/dataset-viewer | 307 | Add a /metrics endpoint on every worker? | https://github.com/huggingface/dataset-viewer/issues/307 | closed | [
"question"
] | 2022-05-25T09:52:28Z | 2022-09-16T17:40:55Z | null | severo | |
huggingface/sentence-transformers | 1,562 | Why is "max_position_embeddings" 514 in SBERT whereas 512 in BERT | Why is "max_position_embeddings" different in SBERT than in BERT? | https://github.com/huggingface/sentence-transformers/issues/1562 | open | [] | 2022-05-22T17:27:01Z | 2022-05-22T20:52:40Z | null | omerarshad |
huggingface/datasets | 4,374 | extremely slow processing when using a custom dataset | ## Processing a custom dataset loaded as a .txt file is extremely slow compared to a dataset of similar volume from the Hub
I have a large .txt file of 22 GB which I load into an HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
further I use a pre-processing function to clean the d... | https://github.com/huggingface/datasets/issues/4374 | closed | [
"bug",
"question"
] | 2022-05-19T14:18:05Z | 2023-07-25T15:07:17Z | null | StephennFernandes |
huggingface/optimum | 198 | Possibility to load an ORTQuantizer or ORTOptimizer from ONNX | First, thanks a lot for this library, it makes work so much easier.
I was wondering if it's possible to quantize and then optimize a model (or the reverse), but looking at the docs, it seems possible to do so only by passing a vanilla Hugging Face model.
Is it possible to do so with already compiled models?
Lik... | https://github.com/huggingface/optimum/issues/198 | closed | [] | 2022-05-18T20:19:23Z | 2022-06-30T08:33:58Z | 1 | ierezell |
huggingface/datasets | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | ## Describe the bug
Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not... | https://github.com/huggingface/datasets/issues/4352 | open | [
"bug"
] | 2022-05-14T17:55:15Z | 2022-05-16T15:09:17Z | null | plamb-viso |
huggingface/optimum | 191 | Not possible to configure GPU in pipelines nor leveraging batch_size parallelisation | When setting the `device` variable in the `pipeline` function/class to `>= 0`, an error appears `AttributeError: 'ORTModelForCausalLM' object has no attribute 'to' - when running in GPU`. This was initially reported in #161 so opening this issue to encompass supporting the `device` parameter in the ORT classes. This is... | https://github.com/huggingface/optimum/issues/191 | closed | [
"inference"
] | 2022-05-14T05:05:51Z | 2022-09-05T08:37:46Z | 4 | axsaucedo |
huggingface/datasets | 4,343 | Metrics documentation is not accessible in the datasets doc UI | **Is your feature request related to a problem? Please describe.**
Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the met... | https://github.com/huggingface/datasets/issues/4343 | closed | [
"enhancement",
"Metric discussion"
] | 2022-05-13T07:46:30Z | 2022-06-03T08:50:25Z | 1 | fxmarty |
huggingface/optimum | 183 | about run_glue.py | How to enable the GPU when running run_glue.py? | https://github.com/huggingface/optimum/issues/183 | closed | [] | 2022-05-12T12:13:16Z | 2022-06-23T13:35:25Z | 1 | yichuan-w |
huggingface/dataset-viewer | 255 | Create a custom nginx image? | I think it would be clearer to create a custom nginx image, in /services/reverse-proxy, than the current "hack" with a template and env vars on the official nginx image.
This way, all the services (API, worker, reverse-proxy) would follow the same flow. | https://github.com/huggingface/dataset-viewer/issues/255 | closed | [
"question"
] | 2022-05-12T08:48:12Z | 2022-09-16T17:43:30Z | null | severo |
huggingface/datasets | 4,323 | Audio can not find value["bytes"] | ## Describe the bug
I wrote down _generate_examples like:

But where are the bytes?

## ... | https://github.com/huggingface/datasets/issues/4323 | closed | [
"bug"
] | 2022-05-12T08:31:58Z | 2022-07-07T13:16:08Z | 9 | YooSungHyun |
huggingface/dataset-viewer | 241 | Setup the users directly in the images, not in Kubernetes? | See the second point in https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/: using `runAsUser` / `runAsGroup` is a (relative) security risk.
| https://github.com/huggingface/dataset-viewer/issues/241 | closed | [
"question"
] | 2022-05-10T15:15:49Z | 2022-09-19T08:57:20Z | null | severo |
huggingface/datasets | 4,304 | Language code search does direct matches | ## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-taggin... | https://github.com/huggingface/datasets/issues/4304 | open | [
"bug"
] | 2022-05-10T11:59:16Z | 2022-05-10T12:38:42Z | 1 | leondz |
huggingface/datasets | 4,238 | Dataset caching policy | ## Describe the bug
I cannot clear the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error:
```
[/usr/local/lib/python3.7/dist-packages/d... | https://github.com/huggingface/datasets/issues/4238 | closed | [
"bug"
] | 2022-04-27T10:42:11Z | 2022-04-27T16:29:25Z | 3 | loretoparisi |
huggingface/datasets | 4,235 | How to load VERY LARGE dataset? | ### System Info
```shell
I am using the Transformers Trainer when encountering this issue.
The Trainer expects a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use an IterableDataset, which loads samples of da... | https://github.com/huggingface/datasets/issues/4235 | closed | [
"bug"
] | 2022-04-27T07:50:13Z | 2023-07-25T15:07:57Z | 1 | CaoYiqingT |
huggingface/datasets | 4,230 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data? | 
But on huggingface datasets:

Where is the German data? | https://github.com/huggingface/datasets/issues/4230 | closed | [
"enhancement"
] | 2022-04-27T00:53:52Z | 2023-07-25T15:10:15Z | null | beyondguo |
huggingface/datasets | 4,221 | Dictionary Feature | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | https://github.com/huggingface/datasets/issues/4221 | closed | [
"question"
] | 2022-04-26T12:50:18Z | 2022-04-29T14:52:19Z | null | jordiae |
huggingface/datasets | 4,181 | Support streaming FLEURS dataset | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | https://github.com/huggingface/datasets/issues/4181 | closed | [
"dataset bug"
] | 2022-04-19T11:09:56Z | 2022-07-25T11:44:02Z | 9 | patrickvonplaten |
huggingface/optimum | 147 | Support for electra model | I came across this tool and it looks very interesting, but I am trying to use an ELECTRA model and I can see it is not supported, per this message:
`"electra is not supported yet. Only ['albert', 'bart', 'mbart', 'bert', 'ibert', 'camembert', 'distilbert', 'longformer', 'marian', 'roberta', 't5', 'xlm-roberta', 'gpt2', 'gpt-neo'... | https://github.com/huggingface/optimum/issues/147 | closed | [] | 2022-04-15T11:03:21Z | 2022-04-21T07:24:48Z | 1 | OriAlpha |
huggingface/tokenizers | 979 | What is the correct format for file for tokenizer.train_from_files? | I am trying to use this library and train a new model with my own data. But before I start building my corpora, I want to understand what file format I should be looking for if I am feeding it to [`train_from_files`](https://docs.rs/tokenizers/0.11.3/tokenizers/tokenizer/struct.TokenizerImpl.html#method.train_from_fil... | https://github.com/huggingface/tokenizers/issues/979 | closed | [] | 2022-04-12T22:54:39Z | 2022-04-14T07:05:58Z | null | winston0410 |
huggingface/datasets | 4,141 | Why is the dataset not visible under the dataset preview section? | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| https://github.com/huggingface/datasets/issues/4141 | closed | [
"dataset-viewer"
] | 2022-04-11T08:36:42Z | 2022-04-11T18:55:32Z | 0 | Nid989 |
huggingface/datasets | 4,139 | Dataset viewer issue for Winoground | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | https://github.com/huggingface/datasets/issues/4139 | closed | [
"dataset-viewer",
"dataset-viewer-gated"
] | 2022-04-11T06:11:41Z | 2022-06-21T16:43:58Z | 11 | alcinos |
huggingface/datasets | 4,138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | https://github.com/huggingface/datasets/issues/4138 | closed | [] | 2022-04-11T02:07:13Z | 2022-04-19T03:15:46Z | 5 | iluvvatar |
huggingface/datasets | 4,134 | ELI5 supporting documents | If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours. | https://github.com/huggingface/datasets/issues/4134 | open | [
"question"
] | 2022-04-08T23:36:27Z | 2022-04-13T13:52:46Z | null | saurabh-0077 |
huggingface/dataset-viewer | 204 | Reduce the size of the endpoint responses? | Currently, the data contains a lot of redundancy, for example every row of the `/rows` response contains three fields for the dataset, config and split, and their value is the same for all the rows. It comes from a previous version in which we were able to request rows for several configs or splits at the same time.
C... | https://github.com/huggingface/dataset-viewer/issues/204 | closed | [
"question"
] | 2022-04-08T15:31:35Z | 2022-08-24T18:03:38Z | null | severo |
huggingface/datasets | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split, and it will take 40 minutes just to download in Colab. I have very little time in hand. Please help. | https://github.com/huggingface/datasets/issues/4101 | open | [
"enhancement"
] | 2022-04-05T16:00:15Z | 2022-04-06T13:09:01Z | 1 | Nakkhatra |
huggingface/datasets | 4,074 | Error in google/xtreme_s dataset card | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
| https://github.com/huggingface/datasets/issues/4074 | closed | [
"documentation",
"dataset bug"
] | 2022-03-31T18:07:45Z | 2022-04-01T08:12:56Z | 1 | wranai |
huggingface/datasets | 4,041 | Add support for IIIF in datasets | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Inte... | https://github.com/huggingface/datasets/issues/4041 | open | [
"enhancement"
] | 2022-03-28T15:19:25Z | 2022-04-05T18:20:53Z | 1 | davanstrien |
huggingface/datasets | 4,027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | ## Describe the bug
I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`sq... | https://github.com/huggingface/datasets/issues/4027 | closed | [
"bug",
"duplicate"
] | 2022-03-25T16:22:28Z | 2022-04-07T10:29:52Z | 2 | MoritzLaurer |
huggingface/datasets | 3,881 | How to use Image folder | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Trace... | https://github.com/huggingface/datasets/issues/3881 | closed | [
"question"
] | 2022-03-09T21:18:52Z | 2022-03-11T08:45:52Z | null | rozeappletree |
huggingface/datasets | 3,854 | load only England English dataset from common voice english dataset | training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Prop... | https://github.com/huggingface/datasets/issues/3854 | closed | [
"question"
] | 2022-03-08T09:40:52Z | 2024-03-23T12:40:58Z | null | amanjaiswal777 |
huggingface/nn_pruning | 33 | What is the difference between "finetune" and "final-finetune" in `/example`. | Hello,
Thanks for the amazing repo!
I'm wondering what is the difference between "finetune" and "final-finetune" in `/example`.
Do we train the model and the mask score in the finetune stage, and only train the optimized model in the final-finetune stage?
Is there a way to directly save the optimized model an... | https://github.com/huggingface/nn_pruning/issues/33 | open | [] | 2022-02-11T03:25:13Z | 2023-01-08T14:27:37Z | null | eric8607242 |
huggingface/transformers | 15,404 | what is the equivalent manner for those lines? | https://github.com/huggingface/transformers/issues/15404 | closed | [] | 2022-01-29T16:03:12Z | 2022-02-18T21:37:08Z | null | mathshangw | |
huggingface/dataset-viewer | 124 | Cache /valid? | <strike>It is called multiple times per second by moon landing, and it impacts a lot the loading time of the /datasets page (https://github.com/huggingface/moon-landing/issues/1871#issuecomment-1024414854).</strike>
Currently, several queries are done to check all the valid datasets on every request | https://github.com/huggingface/dataset-viewer/issues/124 | closed | [
"question"
] | 2022-01-28T17:37:47Z | 2022-01-31T20:31:41Z | null | severo |
huggingface/transformers | 15,223 | where is the 4.16.0dev?? | I'm running the run_mlm.py script.
There is such a line,
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.16.0.dev0")
But where is it?
I can't find it via pip, nor on GitHub. | https://github.com/huggingface/transformers/issues/15223 | closed | [] | 2022-01-19T11:41:04Z | 2022-02-27T15:02:00Z | null | sipie800 |