repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 ⌀ | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 1,921 | How to finetune inpainting model for object removal? What is the input prompt for object removal for both training and inference? |
Hi team,
Thanks for your great work!
I am trying to get object-removal functionality out of Stable Diffusion inpainting.
How can I fine-tune the inpainting model for object removal?
What should the input prompt be for object removal, for both training and inference?
Thanks | https://github.com/huggingface/diffusers/issues/1921 | closed | [
"stale"
] | 2023-01-05T01:23:20Z | 2023-04-03T14:50:38Z | null | hdjsjyl |
huggingface/setfit | 254 | Why are the models fine-tuned with CosineSimilarity between 0 and 1? | Hi everyone,
This is a small question related to how models are fine-tuned during the first step of training. I see that the default loss function is `losses.CosineSimilarityLoss`. But when generating sentence pairs [here](https://github.com/huggingface/setfit/blob/35c0511fa9917e653df50cb95a22105b397e14c0/src/setfit/modeling.py#L546), negative ones are assigned a 0 label.
I understand that having scores between 0 and 1 is ideal, because they can be interpreted as probabilities. But cosine similarity ranges from -1 to 1, so shouldn't we expect the full range to be used? The model head can then make predictions on a more isotropic embedding space.
Is this related to how Sentence Transformers are pre-trained?
Thanks for your clarifications! | https://github.com/huggingface/setfit/issues/254 | open | [
"question"
] | 2023-01-03T09:47:11Z | 2023-03-14T10:24:17Z | null | EdouardVilain-Git |
huggingface/setfit | 251 | Using setfit with the Hugging Face API | Hi, thank you so much for this amazing library!
I have trained my model and pushed it to the Hugging Face hub.
Since the output is a text-classification task, and the model card uploaded is for the sentence transformers, how should I use the model to run the classification model through the Hugging Face API?
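For reference, this is the general shape of a call to the hosted Inference API (the endpoint pattern and payload format are the standard ones; whether a SetFit repo is actually served as `text-classification` depends on the model card's `pipeline_tag`, so treat this as a sketch — the repo id below is a placeholder):

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/your-username/your-setfit-model"  # placeholder repo id

def build_request(texts, token):
    """Build an authenticated POST request with the standard {"inputs": ...} payload."""
    payload = json.dumps({"inputs": texts}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )

# To actually call the API (requires a valid token and a served model):
# resp = urllib.request.urlopen(build_request(["I loved this product"], "hf_xxx"))
# print(json.load(resp))
```

If the hosted widget does not pick up the task, the same payload can be sent to a self-hosted endpoint that loads the SetFit model directly.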
Thank you! | https://github.com/huggingface/setfit/issues/251 | open | [
"question"
] | 2022-12-29T01:46:37Z | 2023-01-01T07:53:43Z | null | kwen1510 |
huggingface/setfit | 249 | Sentence Pairs generation: is it possible to parallelize it? | My dataset has 20k samples, 200 labels, and 32 iterations, so that means around 128 million pairs, right?
Is there some way to parallelize the sentence-pair creation?
Or, at least, a way to save these pairs so they are created once and reused multiple times (e.g. to train with different numbers of epochs)?
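This is not SetFit's internal API — just a rough stdlib sketch of the reuse idea: generate the labeled sentence pairs once, dump them to disk as JSONL, and reload them for later runs. The names (`make_pairs`, the file layout) are illustrative:

```python
import json
import random

def make_pairs(texts, labels, num_iterations, seed=0):
    """Roughly mimic contrastive pair generation: for each sample, draw one
    positive (same label) and one negative (different label) partner per iteration."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_iterations):
        for text, label in zip(texts, labels):
            positives = [t for t, l in zip(texts, labels) if l == label and t != text]
            negatives = [t for t, l in zip(texts, labels) if l != label]
            if positives:
                pairs.append({"a": text, "b": rng.choice(positives), "label": 1.0})
            if negatives:
                pairs.append({"a": text, "b": rng.choice(negatives), "label": 0.0})
    return pairs

def save_pairs(pairs, path):
    with open(path, "w") as f:
        for p in pairs:
            f.write(json.dumps(p) + "\n")

def load_pairs(path):
    with open(path) as f:
        return [json.loads(line) for line in f]
```

The reloaded pairs can then be wrapped (e.g. as sentence-transformers `InputExample`s) and fed to the trainer, so the expensive pair generation happens only once across runs.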
Thanks | https://github.com/huggingface/setfit/issues/249 | open | [
"question"
] | 2022-12-28T17:50:02Z | 2023-02-14T20:04:29Z | null | info2000 |
huggingface/setfit | 245 | extracting embeddings from a trained SetFit model. | Hey First of All, Thank You For This Great Package!
My task relates to semantic similarity, in which I find the 'closeness' of a query sentence to a list of candidate sentences, something like [shown here](https://www.sbert.net/docs/usage/semantic_textual_similarity.html).
I wanted to know if there is a way to extract embeddings from a trained SetFit model and then, instead of using the classification head, compute the similarity of a given query sentence to those embeddings.
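A sketch of the idea — hedged, since attribute names can change between setfit versions: a trained `SetFitModel` keeps its fine-tuned sentence-transformers backbone in `model_body`, so one can call `model.model_body.encode(...)` and rank candidates directly. The helpers below are plain Python so the ranking logic itself is clear:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_candidates(query_vec, candidate_vecs):
    """Return candidate indices sorted by descending cosine similarity to the query."""
    scores = [cosine(query_vec, c) for c in candidate_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# With a trained SetFit model (assumed attribute, verify for your setfit version):
#   embeddings = model.model_body.encode(candidate_sentences)
#   query_vec = model.model_body.encode([query])[0]
#   best_first = rank_candidates(query_vec, embeddings)
```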
Awaiting your answer,
Thanks again | https://github.com/huggingface/setfit/issues/245 | closed | [
"question"
] | 2022-12-26T12:27:50Z | 2023-12-06T13:21:04Z | null | moonisali |
huggingface/optimum | 640 | Improve documentations around ONNX export | ### Feature request
* Document `-with-past`, `--for-ort`, why use it
* Add more details in `optimum-cli export onnx --help` directly
### Motivation
/
### Your contribution
/ | https://github.com/huggingface/optimum/issues/640 | closed | [
"documentation",
"onnx",
"exporters"
] | 2022-12-23T15:54:32Z | 2023-01-03T16:34:56Z | 0 | fxmarty |
huggingface/datasets | 5,385 | Is `fs=` deprecated in `load_from_disk()` as well? | ### Describe the bug
The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec:
https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340
Is there a reason the same deprecation shouldn't also apply to `datasets.load.load_from_disk()`?
https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/load.py#L1779
### Steps to reproduce the bug
n/a
### Expected behavior
n/a
### Environment info
n/a | https://github.com/huggingface/datasets/issues/5385 | closed | [] | 2022-12-22T21:00:45Z | 2023-01-23T10:50:05Z | 3 | dconathan |
huggingface/optimum | 625 | Add support for Speech Encoder Decoder models in `optimum.exporters.onnx` | ### Feature request
Add support for [Speech Encoder Decoder Models](https://huggingface.co/docs/transformers/v4.25.1/en/model_doc/speech-encoder-decoder#speech-encoder-decoder-models)
### Your contribution
Me or other members can implement it (cc @mht-sharma @fxmarty ) | https://github.com/huggingface/optimum/issues/625 | open | [
"feature-request",
"onnx"
] | 2022-12-20T16:48:49Z | 2023-11-15T10:02:54Z | 4 | michaelbenayoun |
huggingface/optimum | 615 | Shall we set diffusers as soft dependency for onnxruntime module? | It seems a little bit strange for me that we need to have diffusers for doing sequence classification.
### System Info
```shell
Dev branch of Optimum
```
### Who can help?
@echarlaix @JingyaHuang
### Reproduction
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
```
### Error message
```
RuntimeError: Failed to import optimum.onnxruntime.modeling_ort because of the following error (look up to see its traceback):
No module named 'diffusers'
```
### Expected behavior
Be able to do sequence classification without diffusers.
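For context, the usual way libraries make a dependency "soft" is to probe for it before importing and only raise when the feature that needs it is used. A rough stdlib sketch (not optimum's actual internals):

```python
import importlib.util

def is_package_available(name):
    """True if the package can be imported, without actually importing it."""
    return importlib.util.find_spec(name) is not None

def require_package(name, feature):
    """Raise a helpful error only when a feature that needs the package is used."""
    if not is_package_available(name):
        raise ImportError(f"{feature} requires the '{name}' package; please install it.")

# Only the diffusers-specific code paths would then call, e.g.:
# require_package("diffusers", "the diffusion pipeline classes")
```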
### Contribution
I can open a PR to make diffusers a soft dependency | https://github.com/huggingface/optimum/issues/615 | closed | [
"bug"
] | 2022-12-19T11:23:34Z | 2022-12-21T14:02:45Z | 1 | JingyaHuang |
huggingface/transformers | 20,794 | When I use the following code on tpuvm and use model.generate() to infer, the speed is very slow. It seems that the tpu is not used. What is the problem? | ### System Info
When I use the following code on tpuvm and use model.generate() to infer, the speed is very slow. It seems that the tpu is not used. What is the problem?
The JAX TPU device does exist:
```python
import jax
num_devices = jax.device_count()
device_type = jax.devices()[0].device_kind
assert "TPU" in device_type
from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration
model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
input_context = "The dog"
# encode input context
input_ids = tokenizer(input_context, return_tensors="np").input_ids
# generate candidates using sampling
outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
print(outputs)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import jax
num_devices = jax.device_count()
device_type = jax.devices()[0].device_kind
assert "TPU" in device_type
from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration
model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
input_context = "The dog"
# encode input context
input_ids = tokenizer(input_context, return_tensors="np").input_ids
# generate candidates using sampling
outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
print(outputs)
```
### Expected behavior
Expect it to be fast | https://github.com/huggingface/transformers/issues/20794 | closed | [] | 2022-12-16T09:15:32Z | 2023-05-21T15:03:06Z | null | joytianya |
huggingface/optimum | 595 | Document and (possibly) improve the `use_past`, `use_past_in_inputs`, `use_present_in_outputs` API | ### Feature request
As the title says.
Basically, for `OnnxConfigWithPast` there are three attributes:
- `use_past_in_inputs`: to specify that the exported model should have `past_key_values` as inputs
- `use_present_in_outputs`: to specify that the exported model should have `past_key_values` as outputs
- `use_past`: a fallback used for either of the previous attributes when they are left unspecified
These attributes are not currently documented, and their meaning might be unclear to users.
Also, maybe it is possible to find a better way of handling those.
cc @mht-sharma @fxmarty
### Motivation
The current way is working, but might not be the best way of solving the problem, and might cause some misunderstanding for potential contributors.
### Your contribution
I can work on this. | https://github.com/huggingface/optimum/issues/595 | closed | [
"documentation",
"Stale"
] | 2022-12-15T13:57:43Z | 2025-07-03T02:16:51Z | 2 | michaelbenayoun |
huggingface/datasets | 5,362 | Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' ) | ### Describe the bug
Run model "GPT-J" with dataset "the_pile" fail.
The fail out is as below:

Looks like which is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" unreachable .
### Steps to reproduce the bug
Steps to reproduce this issue:
git clone https://github.com/huggingface/transformers
cd transformers
python examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --dataset_name the_pile --dataset_config_name enron_emails --do_eval --output_dir /tmp/output --overwrite_output_dir
### Expected behavior
This issue looks to be due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" being unreachable.
Is there another way to download the dataset "the_pile"?
Is there another way to cache the dataset "the_pile" locally, without having Hugging Face download it at runtime?
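On the caching question: once a dataset has downloaded successfully, later runs can be forced to use the local cache only by setting `HF_DATASETS_OFFLINE` (a documented `datasets` environment variable) before the library is imported — the snippet just shows the ordering that matters:

```python
import os

# Must be set before `import datasets`, since the flag is read when the library loads.
os.environ["HF_DATASETS_OFFLINE"] = "1"

# from datasets import load_dataset
# ds = load_dataset("the_pile", "enron_emails")  # now served purely from the local cache
```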
### Environment info
huggingface_hub version: 0.11.1
Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Python version: 3.9.12
Running in iPython ?: No
Running in notebook ?: No
Running in Google Colab ?: No
Token path ?: /home/taosy/.huggingface/token
Has saved token ?: False
Configured git credential helpers:
FastAI: N/A
Tensorflow: N/A
Torch: N/A
Jinja2: N/A
Graphviz: N/A
Pydot: N/A | https://github.com/huggingface/datasets/issues/5362 | closed | [] | 2022-12-15T01:23:03Z | 2022-12-15T07:45:54Z | 2 | shaoyuta |
huggingface/datasets | 5,354 | Consider using "Sequence" instead of "List" | ### Feature request
Hi, please consider using `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type checking errors, see below.
**How to reproduce**
```py
list_of_filenames = ["foo.parquet", "bar.parquet"]
ds = Dataset.from_parquet(list_of_filenames)
```
**Expected mypy output:**
```
Success: no issues found
```
**Actual mypy output:**
```py
test.py:19: error: Argument 1 to "from_parquet" of "Dataset" has incompatible type "List[str]"; expected "Union[Union[str, bytes, PathLike[Any]], List[Union[str, bytes, PathLike[Any]]]]" [arg-type]
test.py:19: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance
test.py:19: note: Consider using "Sequence" instead, which is covariant
```
**Env:** mypy 0.991, Python 3.10.0, datasets 2.7.1 | https://github.com/huggingface/datasets/issues/5354 | open | [
"enhancement",
"good first issue"
] | 2022-12-12T15:39:45Z | 2025-11-21T22:35:10Z | 13 | tranhd95 |
huggingface/transformers | 20,733 | Verify that a test in `LayoutLMv3` 's tokenizer is checking what we want | I'm taking the liberty of opening an issue to share a question I've been keeping in the corner of my head, but now that I'll have less time to devote to `transformers` I prefer to share it before it's forgotten.
In the PR where the `LayoutLMv3` model was added, I was not very sure about the target value used for one of the tests that had to be overridden (the value was 1 in one of the previous commits and then changed to 0). The comment I am referring to is this one: https://github.com/huggingface/transformers/pull/17060#discussion_r872265358 .
Might be of interest to @ArthurZucker | https://github.com/huggingface/transformers/issues/20733 | closed | [] | 2022-12-12T15:17:36Z | 2023-05-26T10:14:14Z | null | SaulLu |
huggingface/setfit | 227 | Compare with other approaches | Dumb question:
How does setfit compare with other approaches for sentence classification in low data settings? Two that may be worth comparing to:
- Various techniques for [augmented SBERT](https://www.sbert.net/examples/training/data_augmentation/README.html)
- Simple Contrastive Learning [SimCSE](https://github.com/princeton-nlp/SimCSE)
Pros and cons of these approaches? Thoughts? | https://github.com/huggingface/setfit/issues/227 | open | [
"question"
] | 2022-12-12T14:32:51Z | 2022-12-20T08:49:52Z | null | creatorrr |
huggingface/datasets | 5,351 | Do we need to implement `_prepare_split`? | ### Describe the bug
I'm not sure if this is a bug, if it's just missing from the documentation, or if I'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because the `_prepare_split` method on the `DatasetBuilder` class is abstract (as are the others we are required to implement, hence the genesis of my question):
```
Traceback (most recent call last):
File "/home/jason/source/python/prism_machine_learning/examples/create_hf_datasets.py", line 28, in <module>
dataset_builder.download_and_prepare()
File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jason/.virtualenvs/pml/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
raise NotImplementedError()
NotImplementedError
```
### Steps to reproduce the bug
I will share implementation if it turns out that everything should be working (i.e. we only need to implement those 3 methods the docs mention), but I don't want to distract from the original question.
### Expected behavior
I just need to know if there are additional methods we need to implement when subclassing `DatasetBuilder` besides what the documentation specifies -> `_info`, `_split_generators` and `_generate_examples`
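For what it's worth, the `NotImplementedError` comes from subclassing the abstract base directly: `datasets` ships intermediate builders (e.g. `GeneratorBasedBuilder`) that implement `_prepare_split` for you, so user subclasses only supply `_info`, `_split_generators` and `_generate_examples`. A stdlib analogy of that layering (class names below are made up, not the real `datasets` classes):

```python
class BaseBuilder:
    """Abstract base: defines the pipeline but leaves the hooks unimplemented."""
    def download_and_prepare(self):
        for split in self._split_generators():
            self._prepare_split(split)

    def _split_generators(self):
        raise NotImplementedError

    def _prepare_split(self, split):
        raise NotImplementedError  # subclassing BaseBuilder directly hits this

class GeneratorStyleBuilder(BaseBuilder):
    """Intermediate layer: implements _prepare_split in terms of _generate_examples."""
    def _prepare_split(self, split):
        self.rows = list(self._generate_examples(split))

    def _generate_examples(self, split):
        raise NotImplementedError

class MyDataset(GeneratorStyleBuilder):
    def _split_generators(self):
        return ["train"]

    def _generate_examples(self, split):
        yield {"split": split, "text": "hello"}
```

Subclassing the intermediate class is what the three-method contract in the docs assumes.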
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| https://github.com/huggingface/datasets/issues/5351 | closed | [] | 2022-12-12T01:38:54Z | 2022-12-20T18:20:57Z | 11 | jmwoloso |
huggingface/datasets | 5,343 | T5 for Q&A produces truncated sentence | Dear all, I am fine-tuning T5 for a Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions.
For example, I set both the max_length, max_input_length, max_output_length to 128.
How should I deal with those long answers? I just left them as is, assuming the T5Tokenizer handles them automatically — I would assume the tokenizer simply truncates an answer at the position of the 128th word (or 127th). Would it be possible to manually split an answer into different parts, each of 128 words, so that all these sub-answers serve as separate answers to the same question?
Another question is that I get incomplete (truncated) answers when using the fine-tuned model for inference, even though the predicted answer is shorter than 128 words. I found a message posted two years ago saying that one should add `</s>` at the end of texts when fine-tuning T5. I followed that, but then got a warning that duplicated `</s>` tokens were found. I am assuming this is because the tokenizer truncates an answer text, so `</s>` is missing from the truncated answer and the end token is then not produced in the predicted answer. However, I am not sure. Can anybody point out how to address this issue?
Any suggestions are highly appreciated.
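On the first question, a plain-Python sketch of manually splitting a long answer into chunks of at most 128 words (each chunk could then be paired with the same question as a separate training example; whether that actually helps is an empirical question). The small overlap between chunks is an assumption, there to avoid cutting a sentence's context cleanly at a boundary:

```python
def split_answer(answer, max_words=128, overlap=16):
    """Split a long answer into overlapping chunks of at most max_words words."""
    words = answer.split()
    if len(words) <= max_words:
        return [answer]
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```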
Below is some code snippet.
```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader
import torch
import numpy as np
import time
from pathlib import Path
from transformers import (
    Adafactor,
    T5ForConditionalGeneration,
    T5Tokenizer,
    get_linear_schedule_with_warmup
)
from torch.utils.data import RandomSampler
from question_answering.utils import *

class T5FineTuner(pl.LightningModule):
    def __init__(self, hyparams):
        super(T5FineTuner, self).__init__()
        self.hyparams = hyparams
        self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path)
        self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path)
        if self.hyparams.freeze_embeds:
            self.freeze_embeds()
        if self.hyparams.freeze_encoder:
            self.freeze_params(self.model.get_encoder())
            # assert_all_frozen()
        self.step_count = 0
        self.output_dir = Path(self.hyparams.output_dir)
        n_observations_per_split = {
            'train': self.hyparams.n_train,
            'validation': self.hyparams.n_val,
            'test': self.hyparams.n_test
        }
        self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()}
        self.em_score_list = []
        self.subset_score_list = []
        data_folder = r'C:\Datasets\MedQuAD-master'
        self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder)

    def freeze_params(self, model):
        for param in model.parameters():
            param.requires_grad = False

    def freeze_embeds(self):
        try:
            self.freeze_params(self.model.model.shared)
            for d in [self.model.model.encoder, self.model.model.decoder]:
                self.freeze_params(d.embed_positions)
                self.freeze_params(d.embed_tokens)
        except AttributeError:
            self.freeze_params(self.model.shared)
            for d in [self.model.encoder, self.model.decoder]:
                self.freeze_params(d.embed_tokens)

    def lmap(self, f, x):
        return list(map(f, x))

    def is_logger(self):
        return self.trainer.proc_rank <= 0

    def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None):
        return self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            labels=labels
        )

    def _step(self, batch):
        labels = batch['target_ids']
        labels[labels[:, :] == self.tokenizer.pad_token_id] = -100
        outputs = self(
            input_ids=batch['source_ids'],
            attention_mask=batch['source_mask'],
            labels=labels,
            decoder_attention_mask=batch['target_mask']
        )
        loss = outputs[0]
        return loss

    def ids_to_clean_text(self, generated_ids):
        gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
        return self.lmap(str.strip, gen_text)

    def _generative_step(self, batch):
        t0 = time.time()
        generated_ids = self.model.generate(
            batch["source_ids"],
            attention_mask=batch["source_mask"],
            use_cache=True,
            decode
```
| https://github.com/huggingface/datasets/issues/5343 | closed | [] | 2022-12-08T19:48:46Z | 2022-12-08T19:57:17Z | 0 | junyongyou |
huggingface/optimum | 566 | Add optimization and quantization options to `optimum.exporters.onnx` | ### Feature request
It would be nice to have two more arguments in `optimum.exporters.onnx` in order to get optimized and quantized versions of the exported models alongside the "normal" ones. I can imagine something like:
```
python -m optimum.exporters.onnx --model <model-name> -OX -quantized-arch <arch> output
```
Where:
* `-OX` corresponds to the already available `O1`, `O2`, `O3` and `O4` optimization possibilities.
* `-quantized-arch` can take values such as `arm64`, `avx2`, `avx512`, `avx512_vnni` and `tensorrt`
### Motivation
This will allow to very easily create optimized/quantized version of the models we need.
### Your contribution
I might help by submitting a PR for it, but I'm not able to give a "when" for now. | https://github.com/huggingface/optimum/issues/566 | closed | [] | 2022-12-08T18:49:04Z | 2023-04-11T12:26:54Z | 17 | jplu |
huggingface/transformers | 20,638 | ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected). | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.10.133+-x86_64-with-glibc2.27
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (Tesla T4)
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger maybe you could help?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Information
I am using the implementation of text classification given in the official [documentation](https://huggingface.co/docs/transformers/tasks/sequence_classification) from Hugging Face and the one given by @lewtun in his book.
I retrained an instance of sentence-transformers using contrastive loss on an unsupervised data dump and now want to finetune the above model on a labeled, binary dataset.
[This](https://github.com/huggingface/transformers/issues/15505) issue is similar, and I followed the fix, but to no avail.
# To reproduce
1. Run [this notebook](https://colab.research.google.com/drive/1VMl5l1O4lrgSMiGTh4yKIWEY2XGUgSIm?usp=sharing)
2. Trainer.train() should produce the following error:
```
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in convert_to_tensors(self, tensor_type, prepend_batch_axis)
716 if not is_tensor(value):
--> 717 tensor = as_tensor(value)
718
ValueError: too many dimensions 'str'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
9 frames
[<ipython-input-75-ce45916ac715>](https://localhost:8080/#) in <module>
7 )
8
----> 9 trainer.train()
[/usr/local/lib/python3.8/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1525 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1526 )
-> 1527 return inner_training_loop(
1528 args=args,
1529 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.8/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1747
1748 step = -1
-> 1749 for step, inputs in enumerate(epoch_iterator):
1750
1751 # Skip past any already trained steps if resuming training
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in __next__(self)
679 # TODO(https://github.com/pytorch/pytorch/issues/76750)
680 self._reset() # type: ignore[call-arg]
--> 681 data = self._next_data()
682 self._num_yielded += 1
683 if self._dataset_kind == _DatasetKind.Iterable and \
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self)
719 def _next_data(self):
720 index = self._next_index() # may raise StopIteration
--> 721 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
722 if self._pin_memory:
723 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
[/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index)
50 else:
51 data = self.dataset[possibly_batched_index]
---> 52 return self.collate_fn(data)
[/usr/local/lib/python3.8/dist-packages/transformers/data/data_collator.py](https://localhost:8080/#) in __call__(self, features)
247
248 def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
--> 249 batch = self.tokenizer.pad(
250 features,
251 padding=self.padding,
[/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
3015 batch_outputs[key].append(value)
3016
-> 3017 return BatchEncoding(batch_outputs, tensor_type=return_tensors)
3018
3019 def create_token_type_ids_from_sequences(
[/usr/local/lib/python3.8/dist-packages/transf | https://github.com/huggingface/transformers/issues/20638 | closed | [] | 2022-12-07T02:10:35Z | 2023-01-31T21:23:46Z | null | vitthal-bhandari |
huggingface/setfit | 222 | Pre-training a generic SentenceTransformer for domain adaptation | When using `SetFit` for classification in a more technical domain, I could imagine the generically-trained `SBERT` models may produce poor sentence embeddings if the domain is not represented well enough in the diverse training corpus. In this case, would it be advantageous to first apply domain adaptation techniques (as discussed [here](https://sbert.net/examples/domain_adaptation/README.html)) to an `SBERT` model before using the model as a base in `SetFit`? Have you considered and/or tested such an approach?
Thanks for the help! | https://github.com/huggingface/setfit/issues/222 | open | [
"question"
] | 2022-12-05T15:22:57Z | 2023-04-30T06:45:47Z | null | zachschillaci27 |
huggingface/setfit | 219 | efficient way of saving finetuned zero-shot models? | Hi guys, pretty interesting project.
I was wondering if there is any way to save models efficiently after a zero-shot model is fine-tuned into a few-shot model.
So, for example, if I fine-tuned a couple of, say, `sentence-transformers/paraphrase-mpnet-base-v2` models, the major difference between them is just the weights of the final few layers; the weights for the rest of the model mostly remain the same. So is there a way to save only the necessary final few layers, thus reducing the size of the models being repeatedly saved?
This way one could save a lot of disk space.
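A rough sketch of the "shared body, many heads" idea — attribute names are assumptions (in current setfit the default head is a scikit-learn `LogisticRegression` and the sentence-transformers backbone lives on `model_body`; check your version): persist each task's small head separately and keep one shared encoder loaded.

```python
import pickle

def save_head(head, path):
    """Persist only the small task-specific head (e.g. a fitted sklearn classifier)."""
    with open(path, "wb") as f:
        pickle.dump(head, f)

def load_head(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Serving sketch with a single shared encoder (assumed attributes):
#   features = shared_body.encode(sentences)          # one shared, frozen backbone
#   preds = load_head("task_a_head.pkl").predict(features)
```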
And apart from that, while inferencing, I wouldn't have to load multiple huge models; instead, I could have just one model containing the common frozen layers that produces the shared features, and only host the final few layers with custom classes that take in those common features. | https://github.com/huggingface/setfit/issues/219 | open | [
"question"
] | 2022-12-05T07:36:40Z | 2022-12-20T08:49:32Z | null | RaiAmanRai |
huggingface/datasets | 5,326 | No documentation for main branch is built | Since:
- #5250
- Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6
the docs for main branch are no longer built.
The change introduced only triggers the docs building for releases. | https://github.com/huggingface/datasets/issues/5326 | closed | [
"bug"
] | 2022-12-01T16:50:58Z | 2022-12-02T16:26:01Z | 0 | albertvillanova |
huggingface/datasets | 5,325 | map(...batch_size=None) for IterableDataset | ### Feature request
Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too.
### Motivation
Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger than memory datasets, but there are a couple of reasons why this might be nice.
One is that load_dataset(...) can return either IterableDataset or Dataset. mypy will then complain if batch_size=None even if we know it is Dataset. Of course we can do:
`assert isinstance(d, datasets.DatasetDict)`
But it is a mild inconvenience. What's more annoying is that whenever we use something like e.g. `combine_datasets(...)`, we end up with the union again, and so have to do the assert again.
Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset.
For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.
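On the conversion point: with recent `datasets` versions, one practical route is to materialize the (known-small) iterable into rows and rebuild a map-style dataset, e.g. via `Dataset.from_list`, guarded so a genuinely huge stream isn't accidentally pulled into memory. A hedged sketch; the helper itself is plain Python:

```python
def materialize(iterable_rows, max_rows=100_000):
    """Collect rows from an iterable dataset, refusing to exceed max_rows."""
    rows = []
    for row in iterable_rows:
        rows.append(row)
        if len(rows) > max_rows:
            raise ValueError(f"more than {max_rows} rows; refusing to materialize")
    return rows

# rows = materialize(filtered_iterable_ds)       # e.g. after .filter() on an IterableDataset
# small_ds = datasets.Dataset.from_list(rows)    # map-style; .map(batch_size=None) now works
```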
### Your contribution
Not this time. | https://github.com/huggingface/datasets/issues/5325 | closed | [
"enhancement",
"good first issue"
] | 2022-12-01T15:43:42Z | 2022-12-07T15:54:43Z | 5 | frankier |
huggingface/datasets | 5,324 | Fix docstrings and types in documentation that appears on the website | While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website.
It would be nice someday, maybe before releasing datasets 3.0.0, to unify it...
"documentation"
] | 2022-12-01T15:34:53Z | 2024-01-23T16:21:54Z | 5 | polinaeterna |
huggingface/datasets | 5,317 | `ImageFolder` performs poorly with large datasets | ### Describe the bug
While testing image dataset creation, I'm seeing significant performance bottlenecks with `ImageFolder` when scanning a directory structure with a large number of images.
## Setup
* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file
## Performance Degradation Point 1
Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85).
One scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance.
As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal.
## Performance Degradation Point 2
The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`.
It runs for a long time (60min+), consuming significant amounts of RAM – even more than the point 1 above. Based on `iostat -d 2`, it performs **zero** disk operations, which to me suggests that there is a code based bottleneck there that could be sorted out.
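One workaround on the user side is to skip pattern inference entirely: walk the tree once yourself and pass the explicit file list via `data_files` (a supported `load_dataset` argument), so none of the default glob patterns are tried. The walker below is plain Python:

```python
import os

def list_images(root, exts=(".jpg", ".jpeg", ".png")):
    """Single-pass recursive scan; avoids re-walking the tree once per glob pattern."""
    files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(exts):
                files.append(os.path.join(dirpath, name))
    files.sort()
    return files

# data_files = list_images("/some/path")
# dataset = load_dataset("imagefolder", data_files={"train": data_files}, drop_labels=True)
```

This sidesteps Degradation Point 1 but not the library-internal cost in Point 2, so it's a partial mitigation rather than a fix.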
### Steps to reproduce the bug
```python
from datasets import load_dataset
import os
import huggingface_hub
dataset = load_dataset(
'imagefolder',
data_dir='/some/path',
# just to spell it out:
split=None,
drop_labels=True,
keep_in_memory=False
)
dataset.push_to_hub('account/dataset', private=True)
```
### Expected behavior
While it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` would by default have a setup that can scale to large datasets.
Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does?
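For what it's worth, a workaround that I'm assuming should help (not a documented fix): passing explicit `data_files` glob patterns makes the loader resolve only those patterns instead of running one full scan per default pattern. A sketch:

```python
def imagefolder_data_files(data_dir, exts=("jpg", "jpeg", "png")):
    """Build explicit glob patterns so the loader does not have to guess the
    layout by running one full directory scan per default pattern."""
    return {"train": [f"{data_dir}/**/*.{ext}" for ext in exts]}


data_files = imagefolder_data_files("/some/path")
print(data_files["train"][0])  # /some/path/**/*.jpg

# Hypothetical usage (not run here; needs the actual directory):
# from datasets import load_dataset
# dataset = load_dataset("imagefolder", data_files=data_files, drop_labels=True)
```

This keeps the flexible `ImageFolder` features while skipping the pattern-guessing step entirely.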
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 10.0.1
- Pandas version: 1.3.5
| https://github.com/huggingface/datasets/issues/5317 | open | [] | 2022-12-01T00:04:21Z | 2022-12-01T21:49:26Z | 3 | salieri |
huggingface/setfit | 209 | Limitations of Setfit Model | Hi, I was wondering about your thoughts on some of the limitations of the SetFit model. Can it support any sort of few-shot text classification, or what are some areas where this model falls short? Are there any research papers / ideas to address some of these limitations?
Also, is the model available to call via Hugging Face's inference API for enterprise? We saw the AG News endpoint, but are there any other endpoints that are more generalizable, or how would you recommend distilling a derivative of this model into production? | https://github.com/huggingface/setfit/issues/209 | open | [
"question"
] | 2022-11-28T22:58:35Z | 2023-02-24T20:11:00Z | null | nv78 |
huggingface/Mongoku | 92 | Switch to Svelte(Kit?) | https://github.com/huggingface/Mongoku/issues/92 | closed | [
"enhancement",
"help wanted",
"question"
] | 2022-11-23T21:28:39Z | 2025-10-25T16:03:14Z | null | julien-c | |
huggingface/datasets | 5,286 | FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json | ### Describe the bug
I followed the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia):
$ pip install apache_beam mwparserfromhell
>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
however this results in the following error:
raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
If I then prompt the system with:
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
the following error occurs:
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
Here is the exact code:
Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset('wikipedia', '20220301.en')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 22.2MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1879, in _download_and_prepare
raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 18.8MB/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1909, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 891, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rorytol/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 945, in _split_generators
downloaded_files = dl_manager.download_and_extract({"info": info_url})
File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 447, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 311, in download
downloaded_path_or_paths = map_nested(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line | https://github.com/huggingface/datasets/issues/5286 | closed | [] | 2022-11-23T14:54:15Z | 2024-11-23T01:16:41Z | 3 | roritol |
huggingface/setfit | 198 | text similarity | Hi, can I use this system to obtain similarity scores between my dataset and a given set of prompts?
If not, what solution would best address this problem?
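For context, one common route is to embed both texts and prompts with a Sentence Transformer and rank by cosine similarity; the scoring step is just (pure-Python sketch, with toy vectors standing in for `model.encode` outputs):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for SentenceTransformer embeddings:
print(cosine_similarity([2.0, 0.0], [1.0, 0.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

In practice, `sentence_transformers.util.cos_sim` computes this in batch over the encoded dataset and prompts.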
thank you | https://github.com/huggingface/setfit/issues/198 | open | [
"question"
] | 2022-11-22T06:59:21Z | 2022-12-20T09:04:53Z | null | aivyon |
huggingface/datasets | 5,274 | load_dataset possibly broken for gated datasets? | ### Describe the bug
When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub:
```
/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py in validate_repo_id(repo_id)
165 if repo_id.count("/") > 1:
166 raise HFValidationError(
--> 167 "Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
168 f" '{repo_id}'. Use `repo_type` argument if needed."
169 )
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/facebook/winoground'. Use `repo_type` argument if needed
```
### Steps to reproduce the bug
Install requirements:
```
pip install transformers
pip install datasets
# It works if you uncomment the following line, rolling back huggingface hub:
# pip install huggingface-hub==0.10.1
```
Then:
```
from datasets import load_dataset
auth_token = "" # Replace with an auth token, which you can get from your huggingface account: Profile -> Settings -> Access Tokens -> New Token
winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"]
```
### Expected behavior
Downloading of the datset
### Environment info
Just a google colab; see here: https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing | https://github.com/huggingface/datasets/issues/5274 | closed | [] | 2022-11-21T21:59:53Z | 2023-05-27T00:06:14Z | 9 | TristanThrush |
huggingface/datasets | 5,272 | Use pyarrow Tensor dtype | ### Feature request
I was going the discussion of converting tensors to lists.
Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings?
For example:
```python
import pyarrow as pa
import numpy as np
x = np.array([[2, 2, 4], [4, 5, 100]], np.int32)
pa.Tensor.from_numpy(x, dim_names=["dim1","dim2"])
```
[Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html)
Maybe this belongs into the pyarrow features / repo.
### Motivation
Working with big data, we need to make sure to use the best data structures and IO out there
### Your contribution
Can try to a PR if code changes necessary | https://github.com/huggingface/datasets/issues/5272 | open | [
"enhancement"
] | 2022-11-20T15:18:41Z | 2024-11-11T03:03:17Z | 17 | franz101 |
huggingface/optimum | 488 | Community contribution - `BetterTransformer` integration for more models! | ## `BetterTransformer` integration for more models!
`BetterTransformer` API provides faster inference on CPU & GPU through a simple interface!
Models can benefit from very interesting speedups using a one-liner and by making sure to install the latest version of PyTorch. A complete guideline on how to convert a new model has been created on the [BetterTransformer documentation](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute)!
Here is a list of models that could potentially be supported; pick one of the architectures below and let's discuss the conversion!
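For reference, the one-liner mentioned above looks roughly like this (wrapped in a function here so the sketch stays self-contained; the heavy imports are deferred):

```python
def to_better_transformer(model):
    """Swap supported encoder layers for their fused BetterTransformer versions.

    Requires `optimum` and a recent PyTorch; the import is deferred so this
    sketch itself runs without them installed.
    """
    from optimum.bettertransformer import BetterTransformer
    return BetterTransformer.transform(model, keep_original_model=False)


# Hypothetical usage (model download not performed here):
# from transformers import AutoModel
# fast_model = to_better_transformer(AutoModel.from_pretrained("bert-base-uncased"))
print(callable(to_better_transformer))  # True
```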
Text models 🖊️ :
- [x] FSMT - [FSMTEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/fsmt/modeling_fsmt.py#L397) / @Sumanth077 https://github.com/huggingface/optimum/pull/494
- [ ] MobileBERT - [MobileBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/mobilebert/modeling_mobilebert.py#L498) / @raghavanone https://github.com/huggingface/optimum/pull/506
- [x] MBart - [MBartEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/mbart/modeling_mbart.py#L296) + [M2M100EncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/m2m_100/modeling_m2m_100.py#L345) / https://github.com/huggingface/optimum/pull/516 @ravenouse
- [x] ProphetNet - [ProphetNetEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/prophetnet/modeling_prophetnet.py#L1130)
- [x] RemBert - [RemBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/rembert/modeling_rembert.py#L415)
- [x] RocBert - [RocBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/roc_bert/modeling_roc_bert.py#LL519C7-L519C19)
- [x] RoFormer - [RoFormerLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/roformer/modeling_roformer.py#L448)
- [x] Tapas - [TapasLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/tapas/modeling_tapas.py#L524) / https://github.com/huggingface/optimum/pull/520
Vision models 📷 :
- [x] Blip - [BlipLayer](https://github.com/huggingface/transformers/blob/fcf813417aa34f3a0ea7d283f7d4f6b0834cf098/src/transformers/models/blip/modeling_blip.py#L372)
- [ ] Detr - [DetrLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/detr/modeling_detr.py#L610)
- [ ] Flava - [FlavaLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/flava/modeling_flava.py#L597)
- [ ] GLPN - [GLPNLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/glpn/modeling_glpn.py#L292) | Cannot be supported
- [x] ViLT - [ViLTLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/vilt/modeling_vilt.py#L472) / https://github.com/huggingface/optimum/pull/508
Audio models 🔉 :
- [ ] Speech2Text - [Speech2TextLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)
- [ ] NEW: Audio Speech Transformer - [ASTLayer](https://github.com/huggingface/transformers/blob/f2e7d270ec795be09e6187dd2459edb43bd861c1/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L274)
Let us also know if you think that some architectures can be supported that we missed. Note that for encoder-decoder based models below, we expect to convert the encoder only.
**Support for decoder-based models coming soon!**
cc @michaelbenayoun @fxmarty
https://github.com/huggingface/transformers/issues/20372 | https://github.com/huggingface/optimum/issues/488 | open | [
"good first issue"
] | 2022-11-18T10:45:39Z | 2025-05-20T20:35:02Z | 26 | younesbelkada |
huggingface/setfit | 192 | How to use a custom Sentence Transformer pretrained model | Hello team,
Presently we are using models that are available on Hugging Face. I have a custom-trained Sentence Transformer.
How can I use a custom-trained Hugging Face model in the current pipeline? | https://github.com/huggingface/setfit/issues/192 | open | [
"question"
] | 2022-11-17T09:13:59Z | 2022-12-20T09:05:06Z | null | theainerd |
huggingface/setfit | 191 | How to build multilabel text classfication dataset | From the sample below, param **column_mapping** is used to set up the dataset. What is the format of label column in multilabel?Is it the one-hot label?
```python
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss_class=CosineSimilarityLoss,
    metric="accuracy",
    batch_size=16,
    num_iterations=20,  # The number of text pairs to generate for contrastive learning
    num_epochs=1,  # The number of epochs to use for contrastive learning
    column_mapping={"sentence": "text", "label": "label"},  # Map dataset columns to text/label expected by trainer
)
```
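For what it's worth, my understanding (an assumption, not official guidance) is that each example carries a multi-hot list over the label inventory, and the model is built with a `multi_target_strategy` such as `"one-vs-rest"`. A sketch with made-up label names:

```python
LABELS = ["sports", "politics", "tech"]  # hypothetical label inventory

def to_multi_hot(active_labels, labels=LABELS):
    """Encode a set of active label names as a multi-hot list."""
    return [1 if name in active_labels else 0 for name in labels]

example = {"text": "New GPU sets a benchmark record", "label": to_multi_hot({"tech"})}
print(example["label"])  # [0, 0, 1]

# Hypothetical model construction (not run here):
# model = SetFitModel.from_pretrained(
#     "sentence-transformers/paraphrase-mpnet-base-v2",
#     multi_target_strategy="one-vs-rest",
# )
```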
| https://github.com/huggingface/setfit/issues/191 | closed | [
"question"
] | 2022-11-17T05:50:56Z | 2022-12-13T22:32:16Z | null | HenryYuen128 |
huggingface/datasets | 5,249 | Protect the main branch from inadvertent direct pushes | We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push accidentally directly to the main branch.
See context here:
- d7c942228b8dcf4de64b00a3053dce59b335f618
To do:
- [x] Protect main branch
- Settings > Branches > Branch protection rules > main > Edit
- [x] Check: Do not allow bypassing the above settings
- The above settings will apply to administrators and custom roles with the "bypass branch protections" permission.
- [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked]
- Before, we could exceptionally merge a non-approved PR, using Administrator bypass
- Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge; and pull request authors cannot approve their own pull requests. This could be an inconvenient in some exceptional circumstances when an urgent fix is needed
- Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval
- [x] #5250
- So that direct pushes to main branch are no longer necessary | https://github.com/huggingface/datasets/issues/5249 | closed | [
"maintenance"
] | 2022-11-16T14:19:03Z | 2023-12-21T10:28:27Z | 1 | albertvillanova |
huggingface/datasets | 5,243 | Download only split data | ### Feature request
Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space, as it seems to download the entire dataset instead of only the part needed.
```python
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test",
                                    cache_dir="cache/path...",
                                    use_auth_token=True,
                                    download_config=DownloadConfig(delete_extracted='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')
                                    )
```
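A possible workaround, assuming streaming fits your use case: with `streaming=True` and an explicit `split`, examples are read over the network on demand instead of the full archive being downloaded first. A sketch (the import is deferred so the snippet stays self-contained):

```python
def stream_test_split(name, config, token):
    """Return an iterable over just the requested split, without a full download.

    `name`, `config` and `token` are placeholders for your dataset and auth token.
    """
    from datasets import load_dataset  # deferred so the sketch runs standalone
    return load_dataset(name, config, split="test", streaming=True, use_auth_token=token)


# Hypothetical usage (needs network + auth, so not executed here):
# ds = stream_test_split("mozilla-foundation/common_voice_11_0", "en", token="hf_...")
# import itertools
# first_five = list(itertools.islice(ds, 5))
print(callable(stream_test_split))  # True
```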
### Motivation
efficiency improvement
### Your contribution
n/a | https://github.com/huggingface/datasets/issues/5243 | open | [
"enhancement"
] | 2022-11-15T10:15:54Z | 2025-02-25T14:47:03Z | 7 | capsabogdan |
huggingface/diffusers | 1,281 | what is the meaning of parameter: "num_class_images" | What is the `num_class_images `parameter used for? I see that in some examples it is 50, sometimes it is 200. In the source code it is said that: "Minimal class images for prior preservation loss. If not have enough images, additional images will be sampled with class_prompt."
I still do not fully grasp it. For example if I have 20 images to train, what should I select this "`num_class_images`"? | https://github.com/huggingface/diffusers/issues/1281 | closed | [] | 2022-11-14T18:32:22Z | 2022-12-06T01:47:42Z | null | himmetozcan |
huggingface/setfit | 178 | Question : evaluation after every training epoch | # Thank you
Hello!
I am Yongtae, a senior ML engineer in Japan.
Thank you for publishing a genuinely excellent paper and code.
Few-shot learning and multilingual support are appreciated by engineers like me who work abroad!
# Question
I found this model easily overfits to the training data if the number of epochs is over 2 or the training data contains similar examples.
Therefore I would like to evaluate the model after every training epoch to find out the best epoch number.
But as shown [here](https://github.com/huggingface/setfit/blob/99c30746799a09e0267427b8a7b8650568222b48/src/setfit/trainer.py#L363), it seems difficult to evaluate the model at every epoch, because the body is trained for the full number of epochs at the beginning of training.
So I would like to change it like below:
```python
for epoch in range(num_epochs):
    self.model.model_body.fit(
        train_objectives=[(train_dataloader, train_loss)],
        epochs=1,
        steps_per_epoch=train_steps,
        optimizer_params={"lr": learning_rate},
        warmup_steps=warmup_steps,
        show_progress_bar=True,
        use_amp=self.use_amp,
    )
    if not is_differentiable_head or not self._freeze:
        # Train the final classifier
        self.model.fit(
            x_train,
            y_train,
            num_epochs=1,
            batch_size=batch_size,
            learning_rate=learning_rate,
            body_learning_rate=body_learning_rate,
            l2_weight=l2_weight,
            show_progress_bar=True,
        )
    somehow_evaluate()
```
Does it make sense to you?
Or if I fork and make that change, are there any problems?
I am looking forward to your reply.
Best and thank you in advance!
| https://github.com/huggingface/setfit/issues/178 | closed | [
"question"
] | 2022-11-13T09:36:47Z | 2022-12-26T03:12:16Z | null | Yongtae723 |
huggingface/setfit | 173 | How to setup gradient_accumulation? | Hi,
In order to train a SetFit model, I would like to simulate a `batch_size` of 16 with an actual `batch_size` of 8. To do that, I need to set `gradient_accumulation` to 2.
How to do that?
Thanks. | https://github.com/huggingface/setfit/issues/173 | closed | [
"question"
] | 2022-11-10T21:19:52Z | 2022-12-20T08:49:13Z | null | piegu |
huggingface/datasets | 5,226 | Q: Memory release when removing the column? | ### Describe the bug
How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks?
```python
from datasets import load_dataset
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
# check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670
common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train'])
common_voice.clear()
# check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670
```
I tried `gc.collect()` but it did not help.
### Steps to reproduce the bug
1. load dataset
2. remove all the columns
3. check memory is reduced or not
[link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567)
### Expected behavior
Memory released when I remove the column
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | https://github.com/huggingface/datasets/issues/5226 | closed | [] | 2022-11-10T18:35:27Z | 2022-11-29T15:10:10Z | 3 | bayartsogt-ya |
huggingface/datasets | 5,225 | Add video feature | ### Feature request
Add a `Video` feature to the library so folks can include videos in their datasets.
### Motivation
Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos:
1. Videos, unlike images, can end up being extremely large files
2. Oftentimes when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference
3. Videos have an additional audio stream, which must be accounted for
4. The feature needs to be able to encode/decode videos (with the right video settings) from bytes.
### Your contribution
I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dep. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though.
Would love to use this issue as a place to:
- brainstorm ideas on how to do this right
- list ways/examples to work around it for now
CC @sayakpaul @mariosasko @fcakyon | https://github.com/huggingface/datasets/issues/5225 | open | [
"enhancement",
"help wanted",
"vision"
] | 2022-11-10T17:36:11Z | 2022-12-02T15:13:15Z | 7 | nateraw |
huggingface/optimum | 462 | Add support for EncoderDecoderModel | ### Feature request
There's already support for `marian` and various LLMs. But sometimes users create their own generic `EncoderDecoderModel`, e.g.
```
from transformers import EncoderDecoderModel
from optimum.onnxruntime import ORTModelForSeq2SeqLM
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-multilingual-cased", "bert-base-multilingual-cased")
model.save_pretrained("model_dir")
# Should be able to load this, but isn't supported yet.
ort_model = ORTModelForSeq2SeqLM.from_pretrained("model_dir", from_transformers=True)
```
### Motivation
The `EncoderDecoderModel` is generic enough to cover quite a lot of use-cases but this is might be hard too since it can most probably only cover EncoderDecoder of ORT supported LLMs
### Your contribution
Maybe, if there's some guidance on how to do so. | https://github.com/huggingface/optimum/issues/462 | closed | [] | 2022-11-10T13:54:48Z | 2023-09-01T11:11:43Z | 1 | alvations |
huggingface/evaluate | 353 | What is the MAE range in evaluate? | In the MAE demo space, it is indicated that "Each MAE float value ranges from 0.0 to 1.0, with the best value being 0.0."
Doesn't it range from 0 to +inf in general ?
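A quick sanity check in plain Python (no `evaluate` involved) shows the textbook definition easily exceeding 1.0:

```python
def mean_absolute_error(predictions, references):
    """Plain-definition MAE: mean of absolute errors, bounded only below by 0."""
    assert len(predictions) == len(references) and predictions
    return sum(abs(p - r) for p, r in zip(predictions, references)) / len(predictions)

print(mean_absolute_error([10.0, 20.0], [0.0, 0.0]))  # 15.0, already above 1.0
```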
Is it a programmatic constraint added on the evaluate MAE score? | https://github.com/huggingface/evaluate/issues/353 | closed | [] | 2022-11-10T13:29:30Z | 2022-11-16T09:45:15Z | null | clefourrier |
huggingface/diffusers | 1,204 | [Community] Can we composite Dreambooth network training? | Very impressed with Dreambooth's capabilities. I have what I think is a feature request - or perhaps a clarification on what is and is not possible in training networks with Dreambooth. In particular, I was wondering if there was a way to composite two networks to enable embedding of two instances (e.g. an sks dog >and< an sqs cat). I tried the plain vanilla approach of training one network with an instance prompt using stable v1-5 as the base and then feeding this network into another Dreambooth training on a second instance prompt - and my result could only represent the first instance prompt. I note I can train a network on a textual inversion token and feed this network into Dreambooth - and the resulting network is able to combine the two concepts - the token from textual inversion and the sks instance token from Dreambooth. Just wondering if there was a way to layer multiple tokens with multiple Dreambooth trainings. Again, super powerful - I'm very impressed by how you can embed a variety of different classes of entities in Dreambooth, with each responding very realistically to prompts.
| https://github.com/huggingface/diffusers/issues/1204 | closed | [
"question",
"stale"
] | 2022-11-09T01:59:05Z | 2022-12-21T15:03:19Z | null | felgryn |
huggingface/datasets | 5,216 | save_elasticsearch_index | Hi,
I am new to Datasets and Elasticsearch. I was wondering whether there is an equivalent of `save_faiss_index` to save an Elasticsearch index locally for later use, removing the need to re-index a dataset? | https://github.com/huggingface/datasets/issues/5216 | open | [] | 2022-11-08T23:06:52Z | 2022-11-09T13:16:45Z | 1 | amobash2 |
huggingface/diffusers | 1,168 | What is "class images" mean for dreambooth training? | What does "class images" mean for Dreambooth training?
If instance images are the subject I want to train on, what does "class images" mean? | https://github.com/huggingface/diffusers/issues/1168 | closed | [] | 2022-11-07T03:41:07Z | 2022-11-08T06:07:10Z | null | universewill |
huggingface/transformers | 20,083 | Where is the Translation template ? | I want to translate the docs in my leisure time. I followed the guide, but could not find the Translation template... | https://github.com/huggingface/transformers/issues/20083 | closed | [] | 2022-11-06T06:44:12Z | 2022-11-14T08:40:44Z | null | bfss |
huggingface/datasets | 5,200 | Some links to canonical datasets in the docs are outdated | As we don't have canonical datasets in the github repo anymore, some old links to them doesn't work. I don't know how many of them are there, I found link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, probably there are more of them. These links should be replaced by links to the corresponding datasets on the Hub. | https://github.com/huggingface/datasets/issues/5200 | closed | [
"documentation"
] | 2022-11-04T10:06:21Z | 2022-11-07T18:40:20Z | 1 | polinaeterna |
huggingface/setfit | 147 | Reproducing RAFT experiments (Table 3) | Hi, I wasn't able to locate the code to reproduce Table 3. I looked in the `scripts` folder but didn't have success.
Any help with this is greatly appreciated!
A side question on the RAFT results: did you use 10 random seeds for this experiment? | https://github.com/huggingface/setfit/issues/147 | closed | [
"question"
] | 2022-11-02T18:34:55Z | 2022-12-13T22:50:48Z | null | dgiova |
huggingface/setfit | 145 | SetFit for a large number of classes | Hi there, thanks for releasing such an interesting library.
I am curious if any experiments have been run using SetFit in the extreme multiclass setting, say with `n_classes >= 100`? | https://github.com/huggingface/setfit/issues/145 | closed | [
"question"
] | 2022-11-02T16:34:51Z | 2024-05-14T10:46:30Z | null | steve-marmalade |
huggingface/datasets | 5,189 | Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded | ### Feature request
Sorry for the cryptic name, but I'd like to explain using the code itself.
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```
The `datasets` library is essentially designed for people who'd like to use benchmark datasets across various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, having fixed train and test splits usually ends up with the model overfitting to the validation split, so users would like to use validation techniques like `StratifiedKFoldCrossValidation`, or `GridSearchCrossValidation` when they tune hyperparameters; the common behavior is therefore to create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the splitting is done by the authors.
It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.
```diff
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```
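For context, the home-made splits described above usually boil down to k-fold index partitions; `Dataset.train_test_split` or scikit-learn's `KFold` are the usual tools, and the underlying logic is just (pure-Python sketch):

```python
import random

def kfold_indices(n_rows, k, seed=0):
    """Yield (train_idx, test_idx) pairs that partition range(n_rows) into k folds."""
    indices = list(range(n_rows))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(kfold_indices(10, 5))
print(len(splits))  # 5
```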
### Motivation
I explained it above 😅
### Your contribution
I think this is quite a big change that seems small (e.g. how do we determine which datasets will not be loaded into a train split?), so it's best if we discuss first!
"enhancement"
] | 2022-11-02T09:15:02Z | 2022-12-06T12:13:17Z | 33 | merveenoyan |
huggingface/datasets | 5,183 | Loading an external dataset in a format similar to conll2003 | I'm trying to load a custom dataset into a `Dataset` object. It's similar to conll2003 but with only 2 columns (word, entity). I used the following script:
```python
features = datasets.Features(
    {"tokens": datasets.Sequence(datasets.Value("string")),
     "ner_tags": datasets.Sequence(
         datasets.features.ClassLabel(
             names=["B-PER", .... etc.]))}
)

from datasets import Dataset

INPUT_COLUMNS = "tokens ner_tags".split(" ")

def read_conll(file):
    #all_labels = []
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line:
                if line.startswith("-DOCSTART-") and example["tokens"] != []:
                    print(idx, example)
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
                elif line == "\n" or (line.startswith("-DOCSTART-") and example["tokens"] == []):
                    continue
                else:
                    row_cols = line.split(" ")
                    for i, col in enumerate(example):
                        example[col] = row_cols[i].rstrip()

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)
```
The following error happened:
```
/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in <genexpr>(.0)
    285         for key in unique_values(itertools.chain(*dicts)):  # set merge all keys
    286             # Will raise KeyError if the dict don't have the same keys
--> 287             yield key, tuple(d[key] for d in dicts)
    288

TypeError: tuple indices must be integers or slices, not str
```
What does this mean and what should I modify? | https://github.com/huggingface/datasets/issues/5183 | closed | [] | 2022-11-01T13:18:29Z | 2022-11-02T11:57:50Z | 0 | Taghreed7878 |
huggingface/datasets | 5,182 | Add notebook / other resource links to the task-specific data loading guides | Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model?
For example, here in [https://huggingface.co/docs/datasets/image_classification] we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb.
Applies to https://huggingface.co/docs/datasets/object_detection as well.
Cc: @osanseviero @nateraw | https://github.com/huggingface/datasets/issues/5182 | closed | [
"enhancement"
] | 2022-11-01T07:57:26Z | 2022-11-03T01:49:57Z | 2 | sayakpaul |
huggingface/datasets | 5,181 | Add a guide for semantic segmentation | Currently, we have these guides for object detection and image classification:
* https://huggingface.co/docs/datasets/object_detection
* https://huggingface.co/docs/datasets/image_classification
I am proposing adding a similar guide for semantic segmentation.
I am happy to contribute a PR for it.
Cc: @osanseviero @nateraw | https://github.com/huggingface/datasets/issues/5181 | closed | [
"documentation"
] | 2022-11-01T07:54:50Z | 2022-11-04T18:23:36Z | 2 | sayakpaul |
huggingface/datasets | 5,180 | An example or recommendations for creating large image datasets? | I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do?
As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset).
Cc @lhoestq | https://github.com/huggingface/datasets/issues/5180 | open | [] | 2022-11-01T07:38:38Z | 2022-11-02T10:17:11Z | 2 | sayakpaul |
huggingface/optimum | 442 | Add support for ORTModelForObjectDetection | ### Feature request
Hi, I went through optimum's code base and could not find support for object detection models. Is there a plan to add `ORTModelForObjectDetection`, just like `ORTModelForImageClassification` exists? It would be great to have this feature.
The object detection task is also supported as part of the transformers `pipeline` feature, so I guess it should be possible to support it in optimum as well?
### Motivation
I want to leverage onnx support for YOLOS model
### Your contribution
I would be happy to help in adding support for this feature if someone can guide me. | https://github.com/huggingface/optimum/issues/442 | open | [
"onnxruntime",
"onnx"
] | 2022-10-31T19:59:21Z | 2025-12-05T10:42:26Z | 9 | shivalikasingh95 |
huggingface/setfit | 126 | Does num_iterations create duplicate data? | I am trying to get a better understanding of this hyperparameter. As far as I understand, you iterate over the data `num_iterations` times and create a positive and a negative pair by sampling. Could this result in duplicate data?
Also, it sometimes results in more sampled pairs than there are possible pairs. For example, in `imdb` for 3-shot there are 6 examples, 2 per class. Setting `num_iterations` to 5 creates 6 (examples) * 2 (1 positive + 1 negative) * 5 (num_iterations) = 60 examples. The possible combinations, though, are (6*6 - 6)/2 = 15, essentially half of the matrix of all pairs without the diagonal.
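To make the counting above concrete, here is a quick arithmetic sketch (it reproduces the numbers, not SetFit's actual sampling code):

```python
import itertools

# the imdb 3-shot case above: 6 labeled examples, 2 per class
examples = ["e1", "e2", "e3", "e4", "e5", "e6"]
num_iterations = 5

# sampling scheme: each iteration draws 1 positive + 1 negative pair per example
sampled_pairs = len(examples) * 2 * num_iterations

# distinct unordered pairs, i.e. the full matrix without the diagonal, halved
distinct_pairs = len(list(itertools.combinations(examples, 2)))

print(sampled_pairs, distinct_pairs)
```

Since the sampled count exceeds the number of distinct pairs, some pairs must repeat, which is exactly the duplication being asked about.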
If the above is correct, it seems that it's like running training for multiple epochs. Is that right? If so, why not create all pairs instead and keep the `epochs` hyperparam as is, which might be more intuitive? If you want a way to sample less data, why not introduce a `sample_size` to cap those combinations to a smaller number for experimentation? | https://github.com/huggingface/setfit/issues/126 | open | [
"question"
] | 2022-10-26T13:09:52Z | 2022-12-20T09:10:53Z | null | nsorros |
huggingface/datasets | 5,157 | Consistent caching between python and jupyter | ### Feature request
I hope this is not my mistake: currently, if I use `load_dataset` from a Python session on a custom dataset to do the preprocessing, it will be saved in the cache, and in other Python sessions it will be loaded from the cache. However, calling the same from a Jupyter notebook does not work, meaning the preprocessing starts from scratch.
If adjusting the hashes is impossible, is there a way to manually set dataset fingerprint to "force" this behaviour?
### Motivation
If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent so one can create the dataset once and then try small things on jupyter without preprocessing everything again.
### Your contribution
I am happy to try a PR if you give me some pointers where the changes should happen | https://github.com/huggingface/datasets/issues/5157 | closed | [
"enhancement"
] | 2022-10-25T01:34:33Z | 2022-11-02T15:43:22Z | 2 | gpucce |
huggingface/setfit | 120 | Using SetFit Embeddings for Semantic Search? | Hi,
I was wondering if semantic search would improve if one trained a multi-label classification model and used those embeddings.
After training a binary classification model, I have seen that the embeddings of similar topics on `all-MiniLM-L12-v2` vs `all-MiniLM-L12-v2-setfit` (the fitted model) are very close in the fitted model, which makes sense to me.
```python
from scipy import spatial

# Cosine similarity
def get_cosine_similarity(vector1, vector2):
sim = 1 - spatial.distance.cosine(vector1, vector2)
return sim
word_1 = "acne"
word_2 = "red skin"
emb_fit_1 = model.model_body.encode([word_1])
emb_fit_2 = model.model_body.encode([word_2])
emb_base_1 = model_sbert.encode([word_1])
emb_base_2 = model_sbert.encode([word_2])
print(f"{word_1} vs {word_2} (base)", get_cosine_similarity(emb_base_1, emb_base_2))
print(f"{word_1} vs {word_2} (fit)", get_cosine_similarity(emb_fit_1, emb_fit_2))
```
```
acne vs pimple (base) 0.5959747433662415
acne vs pimple (fit) 0.9996786117553711
acne vs red skin (base) 0.36421263217926025
acne vs red skin (fit) 0.9994498491287231
acne vs red car (base) 0.17558744549751282
acne vs red car (fit) 0.0051751588471233845
```
I would assume that if the model is trained on a multi-label classification task, the embeddings would somehow be clustered based on the labels provided during training. Would that improve semantic search if enough labels are provided during training?
Of course I could train a model and test it but maybe you have done similar tests and already know if it's working or not :-)
Thanks! | https://github.com/huggingface/setfit/issues/120 | open | [
"question"
] | 2022-10-25T00:00:03Z | 2024-07-12T02:02:04Z | null | Raidus |
huggingface/setfit | 119 | Using SetFit for regression tasks? | I was curious about using SetFit for ordinal Likert scale outcomes (ie IMDB movie reviews). It doesn't seem like an obvious option in the SetFit API. Has anyone tried using SetFit for regression tasks? | https://github.com/huggingface/setfit/issues/119 | open | [
"question"
] | 2022-10-21T19:15:29Z | 2023-02-01T16:48:33Z | null | ericlinML |
huggingface/dataset-viewer | 614 | [feat req] Alphabetical ordering for splits in dataset viewer | ### Link
https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0
### Description
Currently, the datasets splits for the viewer are displayed in a seemingly random order, see example for [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0):
<img width="1505" alt="Screenshot 2022-10-21 at 14 04 39" src="https://user-images.githubusercontent.com/93869735/197192381-46ca4041-db69-423e-be55-abf96e70167a.png">
It would be easier to traverse the list of possible splits if they were arranged alphabetically!
| https://github.com/huggingface/dataset-viewer/issues/614 | closed | [
"question",
"feature request"
] | 2022-10-21T12:11:00Z | 2022-10-26T09:48:29Z | null | sanchit-gandhi |
huggingface/datasets | 5,144 | Inconsistent documentation on map remove_columns | ### Describe the bug
The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`:
> When you remove a column, it is only removed after the example has been provided to the mapped function.
So it seems that the `remove_columns` parameter removes columns after the mapped function runs.
However, another page, [the documentation of the function map](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns) says:
> Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in `remove_columns`, these columns will be kept.
So one page says "after the mapped function" and another says "before the mapped function."
Is there something wrong?
### Steps to reproduce the bug
Not about code.
### Expected behavior
Consistent descriptions of the behavior of the `remove_columns` parameter in the `map` function.
### Environment info
datasets V2.6.0 | https://github.com/huggingface/datasets/issues/5144 | closed | [
"documentation",
"duplicate",
"good first issue",
"hacktoberfest"
] | 2022-10-21T08:37:53Z | 2022-11-15T14:15:10Z | 3 | zhaowei-wang-nlp |
huggingface/setfit | 117 | Using this for code gen? | Can we use this for code generation? | https://github.com/huggingface/setfit/issues/117 | closed | [
"question"
] | 2022-10-20T16:53:59Z | 2022-12-20T09:32:50Z | null | krrishdholakia |
huggingface/datasets | 5,143 | DownloadManager Git LFS support | ### Feature request
Maybe I'm mistaken, but the `DownloadManager` does not support extracting Git LFS files out of the box, right?
Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns LFS files, as far as I can tell.
Is there a good way to write a dataset loading script for a repo with lfs files?
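One workaround sketch, assuming `huggingface_hub` is available: the Hub's `resolve` endpoint serves the actual content of LFS files (not the pointer), so resolved URLs can be handed to `dl_manager.download()` inside a loading script:

```python
from huggingface_hub import hf_hub_url

# the resolve endpoint redirects LFS pointers to the real file content
url = hf_hub_url(repo_id="user/repo", filename="data.json", repo_type="dataset")
print(url)  # https://huggingface.co/datasets/user/repo/resolve/main/data.json

# inside a dataset script (hypothetical usage):
# local_path = dl_manager.download(url)
```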
### Motivation
/
### Your contribution
/ | https://github.com/huggingface/datasets/issues/5143 | closed | [
"enhancement"
] | 2022-10-20T15:29:29Z | 2022-10-20T17:17:10Z | 2 | Muennighoff |
huggingface/setfit | 116 | How to take advantage of Mac M1 GPUs? | More than an issue, this is a request for help.
Do you have advice on how to take advantage of the Mac M1 Pro GPU for training a model, assuming the underlying Torch implementation provides support?
There are some tutorials on how to use Torch with the MPS driver, but I'm not sure how to signal SetFit to use a specific GPU.
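A minimal sketch of the usual PyTorch-side device check; whether SetFit's trainer then keeps everything on that device is an assumption I have not verified (`model.model_body` is the underlying SentenceTransformer):

```python
import torch

# guard for torch builds that predate the MPS backend
mps_backend = getattr(torch.backends, "mps", None)
use_mps = mps_backend is not None and torch.backends.mps.is_available()
device = "mps" if use_mps else "cpu"

# untested assumption: moving the SentenceTransformer body is the place to try
# model.model_body.to(device)
print(device)
```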
| https://github.com/huggingface/setfit/issues/116 | closed | [
"question"
] | 2022-10-20T08:43:24Z | 2024-01-29T16:58:04Z | null | secastro |
huggingface/setfit | 115 | How many samples for setfit? | I understood that setfit is a light weight solution for few shot learning. Two questions came up:
.) At what number of samples per class would you switch to standard supervised learning and fine-tuning? E.g., 100 samples?
.) Is there any disadvantage to generating too many pairs (`num_iterations`)? If I have 30 classes, wouldn't the default of 20 be too small to learn meaningful embeddings?
"question"
] | 2022-10-20T06:13:41Z | 2023-02-27T10:52:50Z | null | hanshupe |
huggingface/optimum | 424 | Convert Seq2Seq model to ONNX while splitting encoder-decoder. | Hi guys, I've recently been trying to convert my trained BART model to onnx. I've found that when using `transformers.onnx` from transformers, the resulting onnx file is a singular `.onnx` file. However, when using `ORTModelForSequenceClassification.from_pretrained()` and then saving the result I have three files, encoder, decoder and decoder-with-past. I want to use the pipeline provided by optimum for inference, but I am unable to convert my PyTorch trained BART model directly into the three different models.
Is there any way I could do this? Thanks. | https://github.com/huggingface/optimum/issues/424 | closed | [
"question",
"onnxruntime"
] | 2022-10-19T09:17:50Z | 2022-10-20T01:29:30Z | null | ZiyueWangUoB |
huggingface/datasets | 5,135 | Update docs once dataset scripts transferred to the Hub | ## Describe the bug
As discussed in:
- https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701
we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub):
- #4974
Concretely:
- [x] Datasets on GitHub (legacy): https://huggingface.co/docs/datasets/main/en/share#datasets-on-github-legacy
- [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md
- ...
This PR complements the work of:
- #5067
This PR is a follow-up of PRs:
- #3777
CC: @julien-c | https://github.com/huggingface/datasets/issues/5135 | closed | [
"documentation"
] | 2022-10-19T06:58:19Z | 2022-10-20T08:10:01Z | 0 | albertvillanova |
huggingface/accelerate | 771 | What is the best practice to do inference in bf16 with accelerate during training? | ### System Info
```Shell
Basically, I want to do training with mixed precision and evaluate the model with bfloat16.
I found the model is stored in fp32 after calling `accelerate.prepare()` and I have to convert it to bf16 for faster inference. Can I avoid explicitly converting the model and make the most use of accelerate?
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
,
### Expected behavior
```Shell
Ideally, we do not want manual model conversion.
```
| https://github.com/huggingface/accelerate/issues/771 | closed | [] | 2022-10-18T13:15:39Z | 2022-10-18T13:32:02Z | null | huchinlp |
huggingface/setfit | 110 | more metrics addition (e.g. F1 score, precision) in the trainer.evaluate() | I was just checking the code and saw only accuracy as a metric; are we planning to add more metrics? | https://github.com/huggingface/setfit/issues/110 | closed | [
"question"
] | 2022-10-18T11:03:18Z | 2023-06-26T14:49:05Z | null | snayan06 |
huggingface/setfit | 108 | Are checkpoints directly available with the SetFitTrainer? | Hi, just looking to see if checkpoints are implemented with the SetFitTrainer. Couldn't find it, unlike how the normal models in Hugging Face use `output_dir` for saving checkpoints when training a model. | https://github.com/huggingface/setfit/issues/108 | open | [
"question"
] | 2022-10-17T18:46:13Z | 2022-12-20T09:34:41Z | null | ajmcgrail |
huggingface/datasets | 5,118 | Installing `datasets` on M1 computers | ## Describe the bug
I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1?
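For illustration, the conditional could be expressed with PEP 508 environment markers in the requirements (the version bounds here are only a sketch):

```
tensorflow>=2.3,!=2.6.0,!=2.6.1; sys_platform != 'darwin' or platform_machine != 'arm64'
tensorflow-macos>=2.5; sys_platform == 'darwin' and platform_machine == 'arm64'
```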
## Steps to reproduce the bug
Fresh clone this project (on m1), create a virtualenv and run this:
```bash
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
| https://github.com/huggingface/datasets/issues/5118 | closed | [
"bug"
] | 2022-10-16T16:50:08Z | 2022-10-19T09:10:08Z | 1 | david1542 |
huggingface/setfit | 106 | Function to get probability values of predicted output (like sklearn's predict_proba)? | Hi! I wanted to ask if there was an in-built function to get the probability value of predicted output from a classification task, something like predict_proba() from sklearn?
From what I understand, currently the only way to get output is to run `SetFitModel([text])`, which works similarly to sklearn's `predict()`. | https://github.com/huggingface/setfit/issues/106 | closed | [
"question"
] | 2022-10-14T13:45:26Z | 2022-12-20T09:34:57Z | null | a-sharma123 |
huggingface/transformers | 19,592 | Sagemaker Estimator for fine tuning where all the transform code is in the train.py | ### Feature request
I work for a company that is a heavy user of AWS sagemaker. I am on a professional services team where I build a lot of examples for our data scientists to follow. I recently wanted to use the Sagemaker Huggingface estimator to fine tune a transformer and create a model for our custom NLP task.
I had csv data in S3. I found several examples of fine tuning that involved pulling nicely curated datasets from HF hub down to the SM notebook and then transforming it into arrow with `save_to_disk` and pushing it to S3 as a dataset that could be read in the train.py file.
I struggled mightily and never found a good example of how to start with just CSV files, use the existing HF tools to load the data, and then pass it to the estimator. Furthermore, the examples I found have the user pulling the data over to the notebook and doing the conversion to arrow there. That seems inefficient when the point of an estimator is to utilize a small instance to host your notebook and a large instance to do the work. If I had a large amount of data to convert to arrow and I followed the given examples, I would need a large notebook instance and a large estimator instance.
I wrote an example that puts all the transform code in the train.py and only invokes it from the notebook. In my train.py, I use load_dataset with the csv script to transform the data to arrow and do the save and load there. I wanted to use the arrow format for efficiency.
I propose that I update your documentation with this unique example.
### Motivation
I feel that the proposed documentation unifies several previously documented concepts into a single, useful example.
### Your contribution
I would be happy to build the example and have you guys approve it. I have never contributed to HF before, so I would need a bit of guidance to get started. | https://github.com/huggingface/transformers/issues/19592 | closed | [] | 2022-10-13T19:24:14Z | 2022-11-21T15:02:11Z | null | j2cunningham |
huggingface/setfit | 91 | Using Setfit for similarity classification | Hello,
I would like to test this promising framework on a similarity classification task. So basically, I have got a dataset with 3 columns: (sentence1,sentence2,label). From what I understand, currently it is only possible to train on a single sentence classification problem.
Is there a workaround to use SetFit for a sentence-pair classification problem? If not, would it be possible to add this feature in a future version?
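One common workaround, sketched below in plain Python (not SetFit-specific guidance), is to collapse the pair into a single text column before training:

```python
def to_single_text(example):
    # the separator string is an arbitrary choice here
    return {"text": example["sentence1"] + " [SEP] " + example["sentence2"]}

merged = to_single_text({"sentence1": "I like it", "sentence2": "It is great"})
print(merged)  # {'text': 'I like it [SEP] It is great'}
```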
Thank you in advance | https://github.com/huggingface/setfit/issues/91 | open | [
"question"
] | 2022-10-07T09:58:09Z | 2025-01-21T10:05:54Z | null | castafra |
huggingface/setfit | 86 | num_epochs range | Hi there!
I was wondering whether you can provide a range of typically "good" values to use/test for the argument num_epochs, both in the single-label classification case and the multi-label classification case. Of course, the best-performing number depends on the classes to be predicted and the dataset, but in non-FSL settings one typically uses a range between 2-5 (whereas many researchers may also stick to common defaults such as 3). I'm asking because I noticed that you instead use `num_epochs = 20` in your example scripts, so perhaps in general in setfit num_epochs should be higher than in non-FSL settings?
"question"
] | 2022-10-06T15:35:48Z | 2022-12-20T09:36:09Z | null | fhamborg |
huggingface/setfit | 83 | Running Evaluation | Hi,
Thanks for sharing this work.
I am wondering if it is possible to run evaluation on a validation dataset to tune hyperparameters.
The SetFitTrainer doesn't seem to accept arguments like 'evaluation_strategy', 'save_strategy', 'compute_metrics', etc.
Or perhaps I'm doing something wrong?
Thanks.
| https://github.com/huggingface/setfit/issues/83 | open | [
"question"
] | 2022-10-06T05:58:19Z | 2022-12-20T09:36:43Z | null | dhkhey |
huggingface/setfit | 81 | Fine-tuning for Question-Answering | Hello,
Can this library be used for fine-tuning a question-answering model with a small amount of data as well?
I have data in the same format as the SQuAD data. It has a small amount of context, question, and answer data.
Is it possible to use this library to fine-tune a question-answering model from Hugging Face (e.g. deepset/roberta-base-squad2) on my small data? If it is, how should I set the **column_mapping** argument of the **SetFitTrainer()** function?
"question"
] | 2022-10-04T17:47:10Z | 2022-12-20T09:36:55Z | null | ozyurtf |
huggingface/datasets | 5,053 | Intermittent JSON parse error when streaming the Pile | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent: when I rerun the program with the same random seed it does not crash in the same way. The exact point where this happens also varies - it happened 11B tokens and 4 days into a training run, and now just happened 2 minutes into one, but I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset(
    cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
    dataset, batch_size=cfg["batch_size"], num_workers=3)

for batch in train_data_loader:
    continue
```
`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text, separated by endoftext tokens, and reshape it to have length batch_size. I don't think this is related to tokenization:
```
import numpy as np
import einops
import torch
def tokenize_and_concatenate(examples):
    texts = examples["text"]
    full_text = tokenizer.eos_token.join(texts)
    div = 20
    length = len(full_text) // div
    text_list = [full_text[i * length: (i + 1) * length]
                 for i in range(div)]
    tokens = tokenizer(text_list, return_tensors="np", padding=True)[
        "input_ids"
    ].flatten()
    tokens = tokens[tokens != tokenizer.pad_token_id]
    n = len(tokens)
    curr_batch_size = n // (seq_len - 1)
    tokens = tokens[: (seq_len - 1) * curr_batch_size]
    tokens = einops.rearrange(
        tokens,
        "(batch_size seq) -> batch_size seq" | https://github.com/huggingface/datasets/issues/5053 | open | [
"bug"
] | 2022-10-02T11:56:46Z | 2022-10-04T17:59:03Z | 3 | neelnanda-io |
huggingface/datasets | 5,044 | integrate `load_from_disk` into `load_dataset` | **Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal, similar to `from_pretrained` in `transformers`, so that it can handle both the Hub and local-path datasets of all supported types?
Currently one has to choose a different loader depending on how the dataset has been created.
e.g. this won't work:
```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
```
both times the dataset is being loaded from disk. Why does it fail the second time?
Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?
e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset, that tells `load_dataset` to internally call `load_from_disk`. like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) and `load_dataset` will support that feature from saved datasets from new `datasets` versions. The old ones will still need to use `load_from_disk` explicitly. Unless the flag is not needed and one can immediately tell by looking at the saved dataset that it was saved via `save_to_disk` and thus use `load_from_disk` internally.
The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other which works but it's not smooth.
Thank you! | https://github.com/huggingface/datasets/issues/5044 | open | [
"enhancement"
] | 2022-09-29T17:37:12Z | 2025-06-28T09:00:44Z | 15 | stas00 |
huggingface/setfit | 72 | Few-Shot Named Entity Recognition work | Hi, really like your work, have you considered using this framework for few-shot named entity recognition work? or do you have an example code for it, looking forward to the progress in few-shot named entity recognition! | https://github.com/huggingface/setfit/issues/72 | open | [
"question"
] | 2022-09-29T09:32:11Z | 2022-12-20T09:37:02Z | null | zhanghaok |
huggingface/datasets | 5,013 | Would Hugging Face like to publish a C++ binding for the datasets package? | Hi:
I use a C++ environment with libtorch. I like Hugging Face, but the `datasets` package has no C++ binding. Would you like to publish a C++ binding for it?
thanks | https://github.com/huggingface/datasets/issues/5013 | closed | [
"wontfix"
] | 2022-09-23T07:42:49Z | 2023-02-24T16:20:57Z | 5 | mullerhai |
huggingface/datasets | 5,012 | Force JSON format regardless of file naming on S3 | I have a file on S3 created by Data Version Control, it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a json file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
```
However, I cannot go ahead and change the names of the S3 files. Is there a way to "force"-load an S3 URL with a certain decoder (JSON, CSV, etc.) regardless of the S3 URL naming?
"enhancement"
] | 2022-09-22T18:28:15Z | 2023-08-16T09:58:36Z | 4 | junwang-wish |
huggingface/datasets | 5,000 | Dataset Viewer issue for asapp/slue | ### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes | https://github.com/huggingface/datasets/issues/5000 | closed | [] | 2022-09-20T16:45:45Z | 2022-09-27T07:04:03Z | 9 | fwu-asapp |
huggingface/datasets | 4,990 | "no-token" is passed to `huggingface_hub` when token is `None` | ## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is the purpose of it? If there is no real purpose, I would prefer that the `None` value be sent directly to be handled by `huggingface_hub`. I feel that this currently works because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev` | https://github.com/huggingface/datasets/issues/4990 | closed | [
"bug"
] | 2022-09-19T15:14:40Z | 2022-09-30T09:16:00Z | 6 | Wauplin |
huggingface/datasets | 4,983 | How to convert torch.utils.data.Dataset to huggingface dataset? | I looked through the Hugging Face dataset docs, and it seems that there is no official function to convert a `torch.utils.data.Dataset` to a Hugging Face dataset. However, there is a way to convert a Hugging Face dataset to a `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset
data = [[1, 2],[3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")
ds[0]
ds[:2]
```
So is there something I'm missing, or is there really no function to convert a `torch.utils.data.Dataset` to a Hugging Face dataset? If so, is there any way to do this conversion?
Thanks. | https://github.com/huggingface/datasets/issues/4983 | closed | [
"enhancement"
] | 2022-09-16T09:15:10Z | 2023-12-14T20:54:15Z | 15 | DEROOCE |
huggingface/datasets | 4,981 | Can't create a dataset with `float16` features | ## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error.
The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases?
Thanks!
## Steps to reproduce the bug
All of the following raise the following error with the same exact (as far as I can tell) traceback:
```python
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
```python
from datasets import Dataset, Features, Value
Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))
import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))
import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```
## Expected results
A dataset with `float16` features is successfully created.
## Actual results
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
865 mapping = features.encode_batch(mapping)
866 mapping = {
867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
868 for col, data in mapping.items()
869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
871 if info.features is None:
872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
734 @classmethod
735 def from_pydict(cls, *args, **kwargs):
736 """
737 Construct a Table from Arrow arrays or columns
738
(...)
748 :class:`datasets.table.Table`:
749 """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
192 # otherwise we can finally use the user's type
193 elif type is not None:
194 # We use cast_array_to_feature to support casting to custom types like Audio and Image
195 # Also, when trying type "string", we don't want to convert integers or floats to "string".
196 # We only do it if trying_type is False - since this is what the user asks for.
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
198 return out
199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kw | https://github.com/huggingface/datasets/issues/4981 | open | [
"bug"
] | 2022-09-15T21:03:24Z | 2025-06-12T11:47:42Z | 8 | dconathan |
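Tangentially to the pyarrow cast failure above: since Python 3.6 the stdlib `struct` module supports the IEEE 754 binary16 format code `'e'`, which gives a pyarrow-free way to check which values a float16 round trip preserves — useful when deciding whether storing data in half precision would lose information. A stdlib sketch:

```python
import struct

def roundtrip_f16(x):
    """Pack a Python float into IEEE 754 binary16 and back."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

assert roundtrip_f16(1.0) == 1.0   # exactly representable
assert roundtrip_f16(0.5) == 0.5   # powers of two survive
lossy = roundtrip_f16(2.1)         # 2.1 has no exact binary16 form
assert lossy != 2.1 and abs(lossy - 2.1) < 1e-2
```

This doesn't fix the `double -> halffloat` cast in pyarrow, but it shows the precision at stake is small for values near 1.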
huggingface/dataset-viewer | 560 | Fill some of the dataset card info automatically? | See https://github.com/huggingface/datasets/issues/4977: `Providing dataset size`
Related issues: https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157 and https://github.com/huggingface/datasets/issues/4876 | https://github.com/huggingface/dataset-viewer/issues/560 | closed | [
"question",
"feature request"
] | 2022-09-14T16:20:30Z | 2023-06-14T12:15:54Z | null | severo |
huggingface/datasets | 4,944 | larger dataset, larger GPU memory in the training phase? Is that correct? | from datasets import set_caching_enabled
set_caching_enabled(False)
for ds_name in ["squad","newsqa","nqopen","narrativeqa"]:
    train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
    break
train_ds = concatenate_datasets([train_ds,train_ds,train_ds,train_ds]) #operation 1
trainer = QuestionAnsweringTrainer( #huggingface trainer
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset= None,
eval_examples=None,
answer_column_name=answer_column,
dataset_name="squad",
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)
With operation 1, the GPU memory usage increases from 16 GB to 23 GB | https://github.com/huggingface/datasets/issues/4944 | closed | [
"bug"
] | 2022-09-07T08:46:30Z | 2022-09-07T12:34:58Z | 2 | debby1103 |
huggingface/datasets | 4,942 | Trec Dataset has incorrect labels | ## Describe the bug
Both coarse and fine labels seem to be out of line.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```
## Expected results
text (string) | coarse_label (class label) | fine_label (class label)
-- | -- | --
How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist)
What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city)
Who was Galileo ? | 3 (HUM) | 31 (HUM:desc)
What is an atom ? | 2 (DESC) | 24 (DESC:def)
When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date)
## Actual results
index | label-coarse |label-fine | text
-- |-- | -- | --
0 | 4 | 40 | How far is it from Denver to Aspen ?
1 | 5 | 21 | What county is Modesto , California in ?
2 | 3 | 12 | Who was Galileo ?
3 | 0 | 7 | What is an atom ?
4 | 4 | 8 | When did Hawaii become a state ?
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| https://github.com/huggingface/datasets/issues/4942 | closed | [
"bug"
] | 2022-09-06T22:13:40Z | 2022-09-08T11:12:03Z | 1 | wmpauli |
huggingface/datasets | 4,936 | vivos (Vietnamese speech corpus) dataset not accessible | ## Describe the bug
VIVOS data is not accessible anymore, neither of these links work (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | https://github.com/huggingface/datasets/issues/4936 | closed | [
"dataset bug"
] | 2022-09-06T13:17:55Z | 2022-09-21T06:06:02Z | 3 | polinaeterna |
huggingface/datasets | 4,932 | Dataset Viewer issue for bigscience-biomedical/biosses | ### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be).
```
Status code: 400
Exception: ModuleNotFoundError
Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'
```
### Owner
Yes | https://github.com/huggingface/datasets/issues/4932 | closed | [] | 2022-09-05T22:40:32Z | 2022-09-06T14:24:56Z | 4 | galtay |
huggingface/datasets | 4,924 | Concatenate_datasets loads everything into RAM | ## Describe the bug
When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening, or is this intended behaviour? Thanks in advance
## Steps to reproduce the bug
```python
gcs = gcsfs.GCSFileSystem(project='project')
datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]
dataset = concatenate_datasets(datasets)
```
## Expected results
A concatenated dataset which is stored on my disk.
## Actual results
Concatenated dataset gets loaded into RAM and overflows it which gets the process killed.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.1
- Pandas version: 1.4.3 | https://github.com/huggingface/datasets/issues/4924 | closed | [
"bug"
] | 2022-09-01T10:25:17Z | 2022-09-01T11:50:54Z | 0 | louisdeneve |
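The distinction behind this report — memory-mapped Arrow files on local disk (cheap to concatenate) versus tables materialized in RAM after being pulled through a remote filesystem — can be illustrated with stdlib `mmap`: a mapped view exposes file bytes without copying the whole file into a Python object. This is an analogy, not `datasets` internals:

```python
import mmap
import os
import tempfile

# Write a 3000-byte file, then view it through mmap: the OS pages bytes
# in on demand instead of materializing the whole file as a Python object.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"abc" * 1000)
    path = f.name

with open(path, "rb") as f:
    view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = bytes(view[:3])   # copies only this 3-byte slice into memory
    size = view.size()        # file size known without reading the bytes
    view.close()

os.unlink(path)
assert first == b"abc" and size == 3000
```

In the same spirit, copying the remote slices to local disk first and calling `load_from_disk` locally keeps the tables memory-mapped rather than resident in RAM.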
huggingface/diffusers | 267 | Non-square image shape | Is it possible to use diffusers on non-square images?
That would be a very interesting feature! | https://github.com/huggingface/diffusers/issues/267 | closed | [
"question"
] | 2022-08-29T01:29:33Z | 2022-09-13T15:57:36Z | null | LucasSilvaFerreira |
huggingface/dataset-viewer | 534 | Store the cached responses on the Hub instead of mongodb? | The config and split info will be stored in the YAML of the dataset card (see https://github.com/huggingface/datasets/issues/4876), and the idea is to compute them and update the dataset card automatically. This means that storing the responses for `/splits` in the MongoDB is duplication.
If we store the responses for `/first-rows` in the Hub too (maybe in a special git ref), we might get rid of the MongoDB storage, or use another simpler cache mechanism if response time is an issue.
WDYT @huggingface/datasets-server @julien-c ?
| https://github.com/huggingface/dataset-viewer/issues/534 | closed | [
"question"
] | 2022-08-26T16:24:39Z | 2022-09-19T09:09:29Z | null | severo |
huggingface/datasets | 4,902 | Name the default config `default` | Currently, if a dataset has no configuration, a default configuration is created from the dataset name.
For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`.
It might be easier to handle if it were set to `default`, or another reserved word.
"enhancement",
"question"
] | 2022-08-26T16:16:22Z | 2023-07-24T21:15:31Z | null | severo |
huggingface/optimum | 362 | Unexpected GPU runtime behavior with ORTModelForSeq2SeqLM | ### System Info
```shell
OS: Ubuntu 20.04.4 LTS
CARD: RTX 3080
Libs:
python 3.10.4
onnx==1.12.0
onnxruntime-gpu==1.12.1
torch==1.12.1
transformers==4.21.2
```
### Who can help?
@lewtun @michaelbenayoun @JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Convert a public translation model from here: [vinai-translate-en2vi](https://huggingface.co/vinai/vinai-translate-en2vi)
```
from optimum.onnxruntime import ORTModelForSeq2SeqLM
save_directory = "models/en2vi_onnx"
# Load a model from transformers and export it through the ONNX format
model = ORTModelForSeq2SeqLM.from_pretrained('vinai/vinai-translate-en2vi', from_transformers=True)
# Save the onnx model and tokenizer
model.save_pretrained(save_directory)
```
2. Load the model with code modified from the [original creator's example](https://github.com/VinAIResearch/VinAI_Translate#english-to-vietnamese-translation)
```
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM
import torch
import time
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer_en2vi = AutoTokenizer.from_pretrained("vinai/vinai-translate-en2vi", src_lang="en_XX")
model_en2vi = ORTModelForSeq2SeqLM.from_pretrained("models/en2vi_onnx")
model_en2vi.to(device)
# onnx_en2vi = pipeline("translation_en_to_vi", model=model_en2vi, tokenizer=tokenizer_en2vi, device=0)
# en_text = '''It's very cold to go out.'''
# start = time.time()
# outpt = onnx_en2vi(en_text)
# end = time.time()
# print(outpt)
# print("time: ", end - start)
def translate_en2vi(en_text: str) -> str:
start = time.time()
input_ids = tokenizer_en2vi(en_text, return_tensors="pt").input_ids.to(device)
end = time.time()
print("Tokenize time: {:.2f}s".format(end - start))
# print(input_ids.shape)
# print(input_ids)
start = time.time()
output_ids = model_en2vi.generate(
input_ids,
do_sample=True,
top_k=100,
top_p=0.8,
decoder_start_token_id=tokenizer_en2vi.lang_code_to_id["vi_VN"],
num_return_sequences=1,
)
end = time.time()
print("Generate time: {:.2f}s".format(end - start))
vi_text = tokenizer_en2vi.batch_decode(output_ids, skip_special_tokens=True)
vi_text = " ".join(vi_text)
return vi_text
en_text = '''It's very cold to go out.''' # long paragraph
start = time.time()
result = translate_en2vi(en_text)
print(result)
end = time.time()
print('{:.2f} seconds'.format((end - start)))
```
I changed [line 167](https://github.com/huggingface/optimum/blob/661f4423097f580a06759ced557ecd638ab6b13a/optimum/onnxruntime/utils.py#L167) in optimum/onnxruntime/utils.py to _**return "CUDAExecutionProvider"**_ so the model runs on GPU instead of raising an error.
3. Run the [original creator's example](https://github.com/VinAIResearch/VinAI_Translate#english-to-vietnamese-translation) on GPU and compare runtimes
### Expected behavior
The ONNX model was expected to run faster, but the results are unexpected:
- Runtime of the original model on GPU is 3-5 s while using about 3.5 GB of GPU memory

- Runtime of the ONNX-converted model on GPU is 70-80 s while using about 7.7 GB of GPU memory

| https://github.com/huggingface/optimum/issues/362 | closed | [
"bug",
"inference",
"onnxruntime"
] | 2022-08-26T02:11:26Z | 2022-12-09T09:13:22Z | 3 | tranmanhdat |
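One caveat with the single-shot timings above: the first GPU call typically pays one-off costs (CUDA context creation, ONNX Runtime session/kernel warm-up), so a warm-up-then-median harness gives fairer numbers. A framework-agnostic stdlib sketch — the `model_en2vi.generate` usage in the comment refers to the snippet above:

```python
import statistics
import time

def benchmark(fn, warmup=2, repeats=5):
    """Run fn a few times untimed (warm-up), then report the median of timed runs."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Usage, e.g.: benchmark(lambda: model_en2vi.generate(input_ids), warmup=3, repeats=10)
median_s = benchmark(lambda: sum(range(10_000)))
assert median_s >= 0.0
```

Even with a fair harness the 70-80 s gap reported here would remain worth investigating, but warm-up removes one confound.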
huggingface/dataset-viewer | 528 | metrics: how to manage variability between the admin pods? | The metrics include one entry per uvicorn worker of the `admin` service, but they give different values.
<details>
<summary>Example of a response to https://datasets-server.huggingface.co/admin/metrics</summary>
<pre>
# HELP starlette_requests_in_progress Multiprocess metric
# TYPE starlette_requests_in_progress gauge
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="16"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="16"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="12"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="12"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="15"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="15"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="13"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="13"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="11"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="11"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="18"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="18"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="14"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="14"} 1.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="10"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="10"} 0.0
starlette_requests_in_progress{method="GET",path_template="/healthcheck",pid="17"} 0.0
starlette_requests_in_progress{method="GET",path_template="/metrics",pid="17"} 0.0
# HELP queue_jobs_total Multiprocess metric
# TYPE queue_jobs_total gauge
queue_jobs_total{pid="16",queue="/splits",status="waiting"} 0.0
queue_jobs_total{pid="16",queue="/splits",status="started"} 5.0
queue_jobs_total{pid="16",queue="/splits",status="success"} 71154.0
queue_jobs_total{pid="16",queue="/splits",status="error"} 41640.0
queue_jobs_total{pid="16",queue="/splits",status="cancelled"} 133.0
queue_jobs_total{pid="16",queue="/rows",status="waiting"} 372.0
queue_jobs_total{pid="16",queue="/rows",status="started"} 21.0
queue_jobs_total{pid="16",queue="/rows",status="success"} 300541.0
queue_jobs_total{pid="16",queue="/rows",status="error"} 121306.0
queue_jobs_total{pid="16",queue="/rows",status="cancelled"} 1500.0
queue_jobs_total{pid="16",queue="/splits-next",status="waiting"} 0.0
queue_jobs_total{pid="16",queue="/splits-next",status="started"} 4.0
queue_jobs_total{pid="16",queue="/splits-next",status="success"} 30896.0
queue_jobs_total{pid="16",queue="/splits-next",status="error"} 25611.0
queue_jobs_total{pid="16",queue="/splits-next",status="cancelled"} 92.0
queue_jobs_total{pid="16",queue="/first-rows",status="waiting"} 11406.0
queue_jobs_total{pid="16",queue="/first-rows",status="started"} 52.0
queue_jobs_total{pid="16",queue="/first-rows",status="success"} 142201.0
queue_jobs_total{pid="16",queue="/first-rows",status="error"} 30097.0
queue_jobs_total{pid="16",queue="/first-rows",status="cancelled"} 573.0
queue_jobs_total{pid="12",queue="/splits",status="waiting"} 0.0
queue_jobs_total{pid="12",queue="/splits",status="started"} 5.0
queue_jobs_total{pid="12",queue="/splits",status="success"} 71154.0
queue_jobs_total{pid="12",queue="/splits",status="error"} 41638.0
queue_jobs_total{pid="12",queue="/splits",status="cancelled"} 133.0
queue_jobs_total{pid="12",queue="/rows",status="waiting"} 424.0
queue_jobs_total{pid="12",queue="/rows",status="started"} 21.0
queue_jobs_total{pid="12",queue="/rows",status="success"} 300489.0
queue_jobs_total{pid="12",queue="/rows",status="error"} 121306.0
queue_jobs_total{pid="12",queue="/rows",status="cancelled"} 1500.0
queue_jobs_total{pid="12",queue="/splits-next",status="waiting"} 0.0
queue_jobs_total{pid="12",queue="/splits-next",status="started"} 4.0
queue_jobs_total{pid="12",queue="/splits-next",status="success"} 30896.0
queue_jobs_total{pid="12",queue="/splits-next",status="error"} 25610.0
queue_jobs_total{pid="12",queue="/splits-next",status="cancelled"} 92.0
queue_jobs_total{pid="12",queue="/first-rows",status="waiting"} 11470.0
queue_jobs_total{pid="12",queue="/first-rows",status="started"} 52.0
queue_jobs_total{pid="12",queue="/first-rows",status="success"} 142144.0
queue_jobs_total{pid="12",queue="/first-rows",status="error"} 30090.0
queue_jobs_total{pid="12",queue="/first-rows",status="cancelled"} 573.0
queue_jobs_total{pid="15",queue="/splits",status="waiting"} 0.0
queue_jobs_total{pid="15",queue="/splits",status="started"} 5.0
queue_jobs_total{pid="15",queue="/splits",status="success"} 71154.0
queue_jobs_total{pid="15",queue="/splits",status="error"} 41640.0
queue_jobs | https://github.com/huggingface/dataset-viewer/issues/528 | closed | [
"bug",
"question"
] | 2022-08-25T19:48:44Z | 2022-09-19T09:10:11Z | null | severo |
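One way to reconcile the per-`pid` values above on the scrape side is to group samples by everything except the `pid` label and combine them (e.g. `max` for gauges that are point-in-time snapshots, `sum` for counters). A stdlib sketch over the Prometheus text format — the naive label split assumes no commas inside label values, which holds for this dump:

```python
import re
from collections import defaultdict

sample_re = re.compile(r'^(\w+)\{(.*)\}\s+([0-9.eE+-]+)$')

def aggregate_without_pid(text, how=sum):
    """Group samples by (metric name, labels minus pid) and combine values."""
    groups = defaultdict(list)
    for line in text.splitlines():
        m = sample_re.match(line.strip())
        if not m:
            continue  # skip HELP/TYPE comments and blank lines
        name, raw_labels, value = m.groups()
        labels = dict(kv.split("=", 1) for kv in raw_labels.split(","))
        labels.pop("pid", None)
        key = (name, tuple(sorted(labels.items())))
        groups[key].append(float(value))
    return {key: how(values) for key, values in groups.items()}

dump = '''
queue_jobs_total{pid="16",queue="/splits",status="started"} 5.0
queue_jobs_total{pid="12",queue="/splits",status="started"} 5.0
'''
agg = aggregate_without_pid(dump, how=max)
assert list(agg.values()) == [5.0]
```

Whether the pods should disagree at all is a separate question; this only addresses presenting one number per metric.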
huggingface/datasets | 4,881 | Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:

(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen.
With appreciation of HFT, | https://github.com/huggingface/datasets/issues/4881 | open | [
"enhancement"
] | 2022-08-23T20:14:24Z | 2024-04-22T15:57:28Z | 49 | alexis-michaud |
huggingface/datasets | 4,878 | [not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
Maybe the third code sample behaves unexpectedly since it uses the non-default value `identical_ok = False`, but the argument is ignored. | https://github.com/huggingface/datasets/issues/4878 | closed | [
"help wanted",
"question"
] | 2022-08-23T17:09:55Z | 2022-09-13T14:00:06Z | null | severo |
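A common bridge for the situation described — a keyword that is accepted but ignored pending removal — is a `DeprecationWarning` shim, so call sites like the ones linked above get a nudge rather than a silent no-op. A generic stdlib sketch, not `huggingface_hub`'s actual implementation:

```python
import warnings

_UNSET = object()  # sentinel so we can tell "not passed" from "passed None"

def upload_file_shim(path, *, identical_ok=_UNSET):
    if identical_ok is not _UNSET:
        warnings.warn(
            "`identical_ok` is deprecated and has no effect; it will be removed.",
            DeprecationWarning,
            stacklevel=2,
        )
    return f"uploaded {path}"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    upload_file_shim("README.md", identical_ok=False)
assert caught and issubclass(caught[0].category, DeprecationWarning)
```

With such a shim in place, the `identical_ok=False` call site would warn loudly instead of being silently ignored.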