| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets-tagging | 28 | Why datasets version is pinned in requirements.txt? | In file `requirements.txt`, the version of `datasets` is pinned. Why? | https://github.com/huggingface/datasets-tagging/issues/28 | open | [
"question"
] | 2021-12-29T09:39:40Z | 2021-12-29T11:51:59Z | null | albertvillanova |
huggingface/transformers | 14,482 | where can I find the dataset bert-base-chinese is pretrained on? | https://github.com/huggingface/transformers/issues/14482 | closed | [] | 2021-11-22T09:22:51Z | 2021-12-30T15:02:07Z | null | BoomSky0416 | |
huggingface/transformers | 14,440 | What does "is_beam_sample_gen_mode" mean? | Hi, I find there are many ways of generating sequences in `Transformers` (when calling the `generate` method).
According to the code there:
https://github.com/huggingface/transformers/blob/01f8e639d35feb91f16fd3c31f035df11a726cc5/src/transformers/generation_utils.py#L947-L951
As far as I know:
`is_greedy_gen_mode`... | https://github.com/huggingface/transformers/issues/14440 | closed | [] | 2021-11-18T06:31:52Z | 2023-02-28T05:13:29Z | null | huhk-sysu |
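For context on the issue above: the mode flags in `generation_utils.py` are derived from the arguments passed to `generate()`. A minimal sketch of how the four basic modes are triggered, using GPT-2 purely for illustration (the flag-to-argument mapping reflects our reading of the linked code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The weather today is", return_tensors="pt")

# is_greedy_gen_mode: num_beams == 1 and do_sample is False
greedy = model.generate(**inputs, max_length=20)

# is_sample_gen_mode: num_beams == 1 and do_sample is True
sampled = model.generate(**inputs, max_length=20, do_sample=True)

# is_beam_gen_mode: num_beams > 1 and do_sample is False
beam = model.generate(**inputs, max_length=20, num_beams=4)

# is_beam_sample_gen_mode: num_beams > 1 and do_sample is True
# (tokens are sampled within each beam instead of taken greedily)
beam_sample = model.generate(**inputs, max_length=20, num_beams=4, do_sample=True)
```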
huggingface/sentence-transformers | 1,227 | What is the training data to train the checkpoint "nli-roberta-base-v2"? | Hi, I wonder what the training data is for the provided checkpoint "nli-roberta-base-v2"?
The checkpoint name indicates that the training data is related to the nli dataset, but I just want to clarify what it is.
Thanks in advance. | https://github.com/huggingface/sentence-transformers/issues/1227 | closed | [] | 2021-10-25T08:59:45Z | 2021-10-25T09:47:36Z | null | sh0416 |
huggingface/dataset-viewer | 71 | Download and cache the images and other files? | Fields with an image URL are detected, and the "ImageUrl" type is passed in the features, to let the client (moonlanding) put the URL in `<img src="..." />`.
This means that pages such as https://hf.co/datasets/severo/wit will download images directly from Wikipedia for example. Hotlinking presents various [issues](... | https://github.com/huggingface/dataset-viewer/issues/71 | closed | [
"question"
] | 2021-10-18T15:37:59Z | 2022-09-16T20:09:24Z | null | severo |
huggingface/datasets | 3,013 | Improve `get_dataset_infos`? | Using the dedicated function `get_dataset_infos` on a dataset that has no dataset-info.json file returns an empty info:
```
>>> from datasets import get_dataset_infos
>>> get_dataset_infos('wit')
{}
```
While it's totally possible to get it (regenerate it) with:
```
>>> from datasets import load_dataset_b... | https://github.com/huggingface/datasets/issues/3013 | closed | [
"question",
"dataset-viewer"
] | 2021-10-04T09:47:04Z | 2022-02-21T15:57:10Z | null | severo |
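For reference, a minimal sketch of the regeneration path the truncated snippet above appears to use (assuming the call is `load_dataset_builder`): instantiating the builder computes the info from the loading script instead of reading a possibly missing dataset-info.json.

```python
from datasets import load_dataset_builder

# Building the dataset builder regenerates the info from the script,
# even when no dataset-info.json is committed to the repo.
builder = load_dataset_builder("wit")
print(builder.info.features)
print(builder.info.splits)
```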
huggingface/dataset-viewer | 55 | Should the features be associated to a split, instead of a config? | For now, we assume that all the splits of a config will share the same features, but it seems that it's not necessarily the case (https://github.com/huggingface/datasets/issues/2968). Am I right @lhoestq ?
Is there any example of such a dataset on the hub or in the canonical ones? | https://github.com/huggingface/dataset-viewer/issues/55 | closed | [
"question"
] | 2021-10-01T18:14:53Z | 2021-10-05T09:25:04Z | null | severo |
huggingface/dataset-viewer | 52 | Regenerate dataset-info instead of loading it? | Currently, getting the rows with `/rows` requires a previous (internal) call to `/infos` to get the features (type of the columns). But sometimes the dataset-info.json file is missing, or not coherent with the dataset script (for example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main), while we are usi... | https://github.com/huggingface/dataset-viewer/issues/52 | closed | [
"question"
] | 2021-09-27T11:28:13Z | 2021-09-27T13:21:00Z | null | severo |
huggingface/transformers | 13,747 | I want to understand the source code of transformers. Where should I start? Is there a tutorial link? thank you very much! | I want to understand the source code of transformers. Where should I start? Is there a tutorial link? thank you very much! | https://github.com/huggingface/transformers/issues/13747 | closed | [
"Migration"
] | 2021-09-26T08:27:24Z | 2021-11-04T15:06:05Z | null | limengqigithub |
huggingface/accelerate | 174 | What is the recommended way of training GANs? | Currently, the examples folder doesn't contain any example of training a GAN. I wonder what the recommended way of handling multiple models and optimizers is when using accelerate.
In terms of interface, `Accelerator.prepare` can wrap an arbitrary number of models and optimizers at once. However, it seems to me that the ... | https://github.com/huggingface/accelerate/issues/174 | closed | [] | 2021-09-26T07:30:41Z | 2023-10-24T17:55:15Z | null | yuxinyuan |
huggingface/dataset-viewer | 48 | "flatten" the nested values? | See https://huggingface.co/docs/datasets/process.html#flatten | https://github.com/huggingface/dataset-viewer/issues/48 | closed | [
"question"
] | 2021-09-24T12:58:34Z | 2022-09-16T20:10:22Z | null | severo |
huggingface/dataset-viewer | 45 | use `environs` to manage the env vars? | https://pypi.org/project/environs/ instead of utils.py | https://github.com/huggingface/dataset-viewer/issues/45 | closed | [
"question"
] | 2021-09-24T08:05:38Z | 2022-09-19T08:49:33Z | null | severo |
huggingface/dataset-viewer | 41 | Move benchmark to a different repo? | It's a client of the API | https://github.com/huggingface/dataset-viewer/issues/41 | closed | [
"question"
] | 2021-09-23T10:44:08Z | 2021-10-12T08:49:11Z | null | severo |
huggingface/dataset-viewer | 35 | Refresh the cache? | Force a cache refresh on a regular basis (cron) | https://github.com/huggingface/dataset-viewer/issues/35 | closed | [
"question"
] | 2021-09-23T09:36:02Z | 2021-10-12T08:34:41Z | null | severo |
huggingface/dataset-viewer | 30 | Use FastAPI instead of only Starlette? | It would allow us to have docs, and surely a lot of other benefits | https://github.com/huggingface/dataset-viewer/issues/30 | closed | [
"question"
] | 2021-09-17T14:45:40Z | 2021-09-20T10:25:17Z | null | severo |
huggingface/datasets | 2,888 | v1.11.1 release date | Hello, I need to use the latest features in one of my packages, but there has been no new datasets release for two months.
When do you plan to publish the v1.11.1 release? | https://github.com/huggingface/datasets/issues/2888 | closed | [
"question"
] | 2021-09-09T21:53:15Z | 2021-09-12T20:18:35Z | null | fcakyon |
huggingface/dataset-viewer | 18 | CI: how to acknowledge a "safety" warning? | We use `safety` to check vulnerabilities in the dependencies. But in the case below, `tensorflow` is marked as insecure while the last published version on PyPI is still 2.6.0. What should we do in this case?
```
+==============================================================================+
| ... | https://github.com/huggingface/dataset-viewer/issues/18 | closed | [
"question"
] | 2021-09-01T07:20:45Z | 2021-09-15T11:58:56Z | null | severo |
huggingface/transformers | 13,331 | bert: What is the tf version corresponding to transformers? | I use python3.7, tf2.4.0, cuda11.1 and cudnn 8.0.4 to run bert-base-un and get an error
- albert, bert, xlm: @LysandreJik
- tensorflow: @Rocketkn
| https://github.com/huggingface/transformers/issues/13331 | closed | [] | 2021-08-30T11:42:36Z | 2021-08-30T15:46:16Z | null | xmcs111 |
huggingface/dataset-viewer | 15 | Add an endpoint to get the dataset card? | See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L427, `full` argument
The dataset card is the README.md. | https://github.com/huggingface/dataset-viewer/issues/15 | closed | [
"question"
] | 2021-08-26T13:43:29Z | 2022-09-16T20:15:52Z | null | severo |
huggingface/dataset-viewer | 12 | Install the datasets that require manual download | Some datasets require a manual download (https://huggingface.co/datasets/arxiv_dataset, for example). We might manually download them on the server, so that the backend returns the rows, instead of an error. | https://github.com/huggingface/dataset-viewer/issues/12 | closed | [
"question"
] | 2021-08-25T16:30:11Z | 2022-06-17T11:47:18Z | null | severo |
huggingface/dataset-viewer | 10 | Use /info as the source for configs and splits? | It's a refactor. As the dataset info contains the configs and splits, maybe the code can be factorized. Before doing it: review the errors for /info, /configs, and /splits (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading) and ensure we will not increase the number of erroneous datasets. | https://github.com/huggingface/dataset-viewer/issues/10 | closed | [
"question"
] | 2021-08-25T09:43:51Z | 2021-09-01T07:08:25Z | null | severo |
huggingface/dataset-viewer | 6 | Expand the purpose of this backend? | Depending on the evolution of https://github.com/huggingface/datasets, this project might disappear, or its features might be reduced, in particular, if one day it allows caching the data by self-generating:
- an arrow or a parquet data file (maybe with sharding and compression for the largest datasets)
- or a SQL ... | https://github.com/huggingface/dataset-viewer/issues/6 | closed | [
"question"
] | 2021-08-09T14:03:41Z | 2022-02-04T11:24:32Z | null | severo |
huggingface/transformers | 12,925 | How to reproduce XLNet correctly, and what is the config for fine-tuning XLNet? | I fine-tune an XLNet for English text classification. But it seems that I did something wrong, because xlnet-base is worse than bert-base in my case. I set it to report validation accuracy every 1/3 epoch. At the beginning, bert-base is about 0.50 while XLNet-base is only 0.24. The config I use for xlnet is listed as fol... | https://github.com/huggingface/transformers/issues/12925 | closed | [
"Migration"
] | 2021-07-28T01:16:19Z | 2021-07-29T05:50:07Z | null | sherlcok314159 |
huggingface/transformers | 12,805 | What is the data format of transformers language modeling run_clm.py fine-tuning? | I now use run_clm.py to fine-tune gpt2, the command is as follows:
```
python run_clm.py \
--model_name_or_path gpt2 \
--train_file train1.txt \
--validation_file validation1.txt \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
The training data is as follows:
[trai... | https://github.com/huggingface/transformers/issues/12805 | closed | [] | 2021-07-20T09:43:30Z | 2021-08-27T15:07:19Z | null | gongshaojie12 |
huggingface/sentence-transformers | 1,070 | What is the difference between training (https://www.sbert.net/docs/training/overview.html#training-data) and unsupervised learning | Hi,
I have a bunch of PDFs and I am building a QnA system from them. Currently, I am using the deepset/haystack repo for the same task.
My question is: if I want to generate embeddings for my text, which training should I do, and what is the difference, since both approaches mostly take sentences, right? | https://github.com/huggingface/sentence-transformers/issues/1070 | open | [] | 2021-07-15T12:13:37Z | 2021-07-15T12:41:22Z | null | SAIVENKATARAJU |
huggingface/transformers | 12,704 | Where is the causal mask when using BertLMHeadModel and setting config.is_decoder = True? | I hope to use BERT for the task of causal language modeling.
`BertLMHeadModel` seems to meet my needs, but I did not find any code snippets about the causal mask, even when I set `config.is_decoder=True`.
I only find the following related code in https://github.com/huggingface/transformers/blob/master/src/tr... | https://github.com/huggingface/transformers/issues/12704 | closed | [] | 2021-07-14T13:15:50Z | 2021-07-24T06:42:04Z | null | Doragd |
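A short sketch of how decoder mode is usually enabled; our reading (not confirmed in the issue) is that the causal mask is constructed generically in `ModuleUtilsMixin.get_extended_attention_mask` rather than in `modeling_bert.py`, which is why it is easy to miss there:

```python
from transformers import BertLMHeadModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# is_decoder=True switches on the causal (left-to-right) attention mask,
# which is built in ModuleUtilsMixin.get_extended_attention_mask.
model = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```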
huggingface/transformers | 12,105 | What is the correct way to pass labels to DetrForSegmentation? | The [current documentation](https://huggingface.co/transformers/master/model_doc/detr.html#transformers.DetrForSegmentation.forward) for `DetrModelForSegmentation.forward` says the following about `labels` kwarg:
> The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image... | https://github.com/huggingface/transformers/issues/12105 | closed | [] | 2021-06-10T22:15:23Z | 2021-06-17T14:37:54Z | null | nateraw |
huggingface/transformers | 12,005 | where is the code for DetrFeatureExtractor, DetrForObjectDetection | Hello my dear friend.
I am looking for the model at https://huggingface.co/facebook/detr-resnet-50
I cannot find its code in transformers==4.7.0.dev0 or 4.6.1. Please help me; much appreciated.
## Environment info
<!-- You can run the command `transformers-cli env` and copy... | https://github.com/huggingface/transformers/issues/12005 | closed | [] | 2021-06-03T09:28:27Z | 2021-06-10T07:06:59Z | null | zhangbo2008 |
huggingface/notebooks | 42 | what is the 'token classification head'? | https://github.com/huggingface/notebooks/issues/42 | closed | [] | 2021-05-25T09:17:49Z | 2021-05-29T11:36:11Z | null | zingxy | |
huggingface/pytorch-image-models | 572 | What is EfficientNetV2s? What is its relationship with EfficientNetV2? | https://github.com/huggingface/pytorch-image-models/issues/572 | closed | [
"enhancement"
] | 2021-04-21T07:24:51Z | 2021-04-21T15:51:02Z | null | chenyang9799 | |
huggingface/sentence-transformers | 875 | Where is the saved model after the training? | model.fit(train_objectives=[(train_dataloader, train_loss)], output_path=dir, epochs=1, warmup_steps=100)
I have specified the output_path for the model output, but I didn't see any files after training.
thank you. | https://github.com/huggingface/sentence-transformers/issues/875 | open | [] | 2021-04-17T00:45:41Z | 2021-04-17T09:54:52Z | null | Bulando |
huggingface/datasets | 2,196 | `load_dataset` caches two arrow files? | Hi,
I am using datasets to load a large JSON file of 587 GB.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | https://github.com/huggingface/datasets/issues/2196 | closed | [
"question"
] | 2021-04-09T03:49:19Z | 2021-04-12T05:25:29Z | null | hwijeen |
huggingface/datasets | 2,193 | Filtering/mapping on one column is very slow | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | https://github.com/huggingface/datasets/issues/2193 | closed | [
"question"
] | 2021-04-08T18:16:14Z | 2021-04-26T16:13:59Z | null | norabelrose |
huggingface/datasets | 2,187 | Question (potential issue?) related to datasets caching | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | https://github.com/huggingface/datasets/issues/2187 | open | [
"question"
] | 2021-04-08T00:16:28Z | 2023-01-03T18:30:38Z | null | ioana-blue |
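For reference, a sketch of what `set_caching_enabled(False)` does and does not cover, per our understanding of the docs: it disables the reuse of cached transform results, but `load_dataset` still writes the prepared dataset to the cache directory, which is why cache-related log lines can still appear.

```python
from datasets import load_dataset, set_caching_enabled

# Disables the fingerprint-based reuse of map()/filter() results.
# It does NOT stop load_dataset from preparing files in the cache dir.
set_caching_enabled(False)

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.map(lambda ex: {"len": len(ex["sentence1"])})  # recomputed every run
```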
huggingface/transformers | 11,057 | Difference in tokenizer output depending on where `add_prefix_space` is set. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version... | https://github.com/huggingface/transformers/issues/11057 | closed | [] | 2021-04-05T10:30:25Z | 2021-06-07T15:18:36Z | null | sai-prasanna |
huggingface/transformers | 10,960 | What is the score of trainer.predict()? | I want to know the meaning of the output of trainer.predict().
example:
`PredictionOutput(predictions=array([[-2.2704859, 2.442343 ]], dtype=float32), label_ids=array([1]), metrics={'eval_loss': 0.008939245715737343, 'eval_runtime': 0.0215, 'eval_samples_per_second': 46.56})`
What is this score? -> predictions=arra... | https://github.com/huggingface/transformers/issues/10960 | closed | [] | 2021-03-30T07:53:13Z | 2021-03-30T23:41:38Z | null | Yuukp |
huggingface/datasets | 2,108 | Is there a way to use a GPU only when training an Index in the process of add_faiss_index? | Motivation - Some FAISS indexes like IVF consist of a training step that clusters the dataset into a given number of indexes. It would be nice if we could use a GPU to do the training step and convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6... | https://github.com/huggingface/datasets/issues/2108 | open | [
"question"
] | 2021-03-24T21:32:16Z | 2021-03-25T06:31:43Z | null | shamanez |
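One way to get this, sketched under the assumption that training on GPU and searching on CPU is acceptable: build and train the index with faiss directly, move it back with `faiss.index_gpu_to_cpu`, and hand it to `add_faiss_index` via `custom_index` (requires `faiss-gpu`; the index type and sizes are illustrative).

```python
import faiss
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(10_000, 128).tolist()})
vectors = np.array(ds["embeddings"], dtype=np.float32)

# Train the IVF clustering step on GPU 0, then move the index back to CPU
# so that only the training step uses the GPU.
cpu_index = faiss.index_factory(128, "IVF256,Flat")
gpu_index = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, cpu_index)
gpu_index.train(vectors)
trained_index = faiss.index_gpu_to_cpu(gpu_index)

ds.add_faiss_index(column="embeddings", custom_index=trained_index)
```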
huggingface/datasets | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any in... | https://github.com/huggingface/datasets/issues/1973 | closed | [] | 2021-03-02T14:35:53Z | 2021-03-30T14:03:59Z | null | ioana-blue |
huggingface/sentence-transformers | 753 | What is 'sentence_embedding' of a Sentence Transformer Model? | Hey, I try to understand where this comes from. It is just mentioned here [link](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L144)
But it seems not to be used anywhere else. Because this feature is used in the losses like Onlin... | https://github.com/huggingface/sentence-transformers/issues/753 | open | [] | 2021-02-11T20:48:07Z | 2021-02-12T14:03:59Z | null | PaulForInvent |
huggingface/transformers | 9,961 | What is the correct way to use Adafactor? | Hi, from the papers I've seen that Adafactor is typically used with no learning rate (as in Pegasus paper), however, when I try to execute run_seq2seq.py or seq2seq/finetune_trainer.py from your examples, and set --adafactor parameter, without specifying learning rate (for no learning rate), it uses the default 3e-05. ... | https://github.com/huggingface/transformers/issues/9961 | closed | [
"wontfix"
] | 2021-02-02T15:42:08Z | 2021-03-06T00:12:07Z | null | avacaondata |
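For reference, a sketch of the "no learning rate" configuration from the Adafactor paper as exposed by transformers; passing the optimizer explicitly to the Trainer via its `optimizers` argument bypasses the default 3e-05 (the model name is illustrative only):

```python
from transformers import Adafactor, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# lr=None with relative_step=True lets Adafactor derive the step size from
# the update count, which is the paper's "no external learning rate" setup.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
```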
huggingface/datasets | 1,808 | writing Datasets in a human readable format | Hi
I see there is a save_to_disk function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format to a file like JSON? Thanks @lhoestq | https://github.com/huggingface/datasets/issues/1808 | closed | [
"enhancement",
"question"
] | 2021-02-02T02:55:40Z | 2022-06-01T15:38:13Z | null | ghost |
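A minimal sketch of the export methods that cover this (available in later datasets releases; the dataset name is illustrative):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="test")

# JSON Lines: one human-readable JSON record per line
ds.to_json("imdb_test.jsonl")

# CSV also works for flat schemas
ds.to_csv("imdb_test.csv")
```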
huggingface/transformers | 9,867 | where is position_embedding_type used | When I was using pytorch Electra Model, I read its source code but I didn't find where position_embedding_type is used.
So did I miss something? | https://github.com/huggingface/transformers/issues/9867 | closed | [] | 2021-01-28T08:29:08Z | 2021-01-29T02:00:07Z | null | awdrgyjilplij |
huggingface/datasets | 1,786 | How to use split dataset |
Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like Penn Treebank) but I am not able to achieve this. What I am doing is executing the lambada.py file in my pro... | https://github.com/huggingface/datasets/issues/1786 | closed | [
"question"
] | 2021-01-27T21:37:47Z | 2021-04-23T15:17:39Z | null | kkhan188 |
huggingface/sentence-transformers | 693 | What is 'Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and the gold labels'? | In your paper, you mention this
`we compute the Spearman’s rank
correlation between the cosine-similarity of the
sentence embeddings and the gold labels.`
in **section 4.1**
Here is my question: what do the `gold labels` mean, and can you provide an example to explain how to calculate the Spearman’s rank correlati... | https://github.com/huggingface/sentence-transformers/issues/693 | closed | [] | 2021-01-15T08:46:57Z | 2021-01-15T09:55:00Z | null | Gpwner |
huggingface/datasets | 1,733 | connection issue with glue, what is the data url for glue? | Hi
My code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test on the machines I am working on whether there is an issue on my side or not?
thanks | https://github.com/huggingface/datasets/issues/1733 | closed | [] | 2021-01-13T08:37:40Z | 2021-08-04T18:13:55Z | null | ghost |
huggingface/transformers | 9,556 | Where is convert_bert_original_tf_checkpoint_to_pytorch.py? | Hi:
I am getting the following error when implementing entity extraction in BERT. OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index']
I am very new to using BERT, and noted that [issue 2110](https://github.com/huggingface/transformers/issues/2110) had a similar issue. Issue 2... | https://github.com/huggingface/transformers/issues/9556 | closed | [
"wontfix",
"Migration"
] | 2021-01-13T02:49:48Z | 2021-03-06T00:13:15Z | null | sednaasil |
huggingface/transformers | 9,387 | Where is the impact when output_attentions=True? | Is there any impact regarding performance (training/fine-tuning time, GPU memory, batch size, etc.) when `output_attentions=True`?
```python
self.bert_encoder = BertModel.from_pretrained(
hparams.architecture, # "bert-base-uncased"
output_attentions=True)
``` | https://github.com/huggingface/transformers/issues/9387 | closed | [
"wontfix"
] | 2021-01-02T23:16:57Z | 2021-03-06T00:13:32Z | null | celsofranssa |
huggingface/sentence-transformers | 635 | sbert.net is down. Where can I view the list of pretrained models? | https://github.com/huggingface/sentence-transformers/issues/635 | closed | [] | 2020-12-19T12:16:46Z | 2020-12-19T14:10:36Z | null | mani-rai | |
huggingface/datasets | 1,600 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split' | The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no at... | https://github.com/huggingface/datasets/issues/1600 | closed | [
"question"
] | 2020-12-18T05:37:10Z | 2023-05-03T04:22:55Z | null | david-waterworth |
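The usual resolution, sketched below: `load_dataset` with CSV files returns a `DatasetDict` with a single "train" split, and `train_test_split` is a method of `Dataset`, so it must be called on the split itself.

```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="data.txt")

# Index into the "train" split first; the result is a new DatasetDict
# with "train" and "test" keys.
dataset = dataset["train"].train_test_split(test_size=0.1)
print(dataset)
```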
huggingface/datasets | 1,514 | how to get all the options of a property in datasets | Hi
Could you tell me how I can get all the unique options of a property of a dataset?
For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without getting all the training data labels and then forming a set? Thanks | https://github.com/huggingface/datasets/issues/1514 | closed | [
"question"
] | 2020-12-12T16:24:08Z | 2022-05-25T16:27:29Z | null | rabeehk |
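A sketch of the two usual answers, assuming boolq as hosted in super_glue: `Dataset.unique()` scans the Arrow column directly, and for `ClassLabel` columns the label names live in the schema itself.

```python
from datasets import load_dataset

ds = load_dataset("super_glue", "boolq", split="train")

# Unique values straight from the Arrow column, no Python set needed
print(ds.unique("label"))

# For ClassLabel features, the names are part of the schema
if hasattr(ds.features["label"], "names"):
    print(ds.features["label"].names)
```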
huggingface/datasets | 1,167 | ❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders | Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you c... | https://github.com/huggingface/datasets/issues/1167 | closed | [
"question",
"generic discussion"
] | 2020-12-05T17:02:56Z | 2023-07-20T15:49:42Z | null | pietrolesci |
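A sketch of the on-the-fly option that later datasets releases added for exactly this case: `set_transform` runs the tokenizer lazily on each accessed batch, so nothing is pre-tokenized or written to the cache.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("imdb", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# Applied lazily per accessed batch, e.g. inside DataLoader workers
ds.set_transform(tokenize)
print(ds[0].keys())  # input_ids, token_type_ids, attention_mask
```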
huggingface/datasets | 883 | Downloading/caching only a part of a datasets' dataset. | Hi,
I want to use the validation data *only* (of natural question).
I don't want to have the whole dataset cached in my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir | https://github.com/huggingface/datasets/issues/883 | open | [
"enhancement",
"question"
] | 2020-11-24T14:25:18Z | 2020-11-27T13:51:55Z | null | SapirWeissbuch |
huggingface/datasets | 878 | Loading Data From S3 Path in Sagemaker | In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files... | https://github.com/huggingface/datasets/issues/878 | open | [
"enhancement",
"question"
] | 2020-11-23T09:17:22Z | 2020-12-23T09:53:08Z | null | mahesh1amour |
huggingface/datasets | 861 | Possible Bug: Small training/dataset file creates gigantic output | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | https://github.com/huggingface/datasets/issues/861 | closed | [
"enhancement",
"question"
] | 2020-11-17T13:48:59Z | 2021-03-30T14:04:04Z | null | NebelAI |
huggingface/datasets | 853 | concatenate_datasets support axis=0 or 1? | I want to achieve the following result (screenshot omitted).
| https://github.com/huggingface/datasets/issues/853 | closed | [
"enhancement",
"help wanted",
"question"
] | 2020-11-16T02:46:23Z | 2021-04-19T16:07:18Z | null | renqingcolin |
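For the record, later datasets releases added exactly this `axis` argument; a minimal sketch:

```python
from datasets import Dataset, concatenate_datasets

left = Dataset.from_dict({"a": [1, 2, 3]})
right = Dataset.from_dict({"b": ["x", "y", "z"]})

# axis=0 stacks rows (schemas must match);
# axis=1 glues columns side by side (row counts must match)
wide = concatenate_datasets([left, right], axis=1)
print(wide.column_names)  # ['a', 'b']

tall = concatenate_datasets([left, left], axis=0)
print(len(tall))  # 6
```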
huggingface/pytorch-image-models | 261 | What is different from the paper for MobileNet V3 and EfficientNet | Thanks for your great work.
The results with your code show much higher accuracy compared to the reported accuracy (MobileNet V3 and EfficientNet).
I want to know what the main difference from the paper is.
| https://github.com/huggingface/pytorch-image-models/issues/261 | closed | [] | 2020-10-29T13:34:35Z | 2020-10-30T01:15:38Z | null | gksruf |
huggingface/sentence-transformers | 497 | What is the meaning of warmup_steps when I fine-tune the model, can I remove it? | ```python
evaluator = evaluation.EmbeddingSimilarityEvaluator(sentences1, sentences2, scores)
# Define your train dataset, the dataloader and the train loss
train_dataset = SentencesDataset(train_data, model)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=32)
train_loss = losses.CosineSimila... | https://github.com/huggingface/sentence-transformers/issues/497 | closed | [] | 2020-10-14T10:03:43Z | 2020-10-14T10:31:27Z | null | wmathor |
huggingface/sentence-transformers | 494 | what is the license for this repository? | https://github.com/huggingface/sentence-transformers/issues/494 | closed | [] | 2020-10-12T09:31:41Z | 2020-10-12T09:32:15Z | null | pinkeshbadjatiya | |
huggingface/transformers | 7,727 | what is the perplexity of distilbert-base-uncased ? | # ❓ Questions & Help
## Details
In the [readme](https://github.com/huggingface/transformers/tree/master/examples/distillation), it is said that distilbert-base-uncased is pretrained on the same data used to pretrain BERT, so I wonder what the final perplexity or cross entropy of the pretraining is.
| https://github.com/huggingface/transformers/issues/7727 | closed | [
"wontfix"
] | 2020-10-12T09:11:49Z | 2020-12-20T13:34:47Z | null | OleNet |
huggingface/transformers | 6,790 | What is the size of the context window in the 'openai-gpt' pre-trained model? | What is the size of the context window in the 'openai-gpt' pre-trained model?
# ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://... | https://github.com/huggingface/transformers/issues/6790 | closed | [
"wontfix"
] | 2020-08-28T09:17:02Z | 2020-11-07T05:42:47Z | null | lzl19971215 |
huggingface/tokenizers | 374 | where are the pre-built tokenizers for 'merge.txt and vocab.json' | or how to build my private version | https://github.com/huggingface/tokenizers/issues/374 | closed | [] | 2020-08-17T08:45:13Z | 2021-01-06T20:02:22Z | null | SeekPoint |
huggingface/sentence-transformers | 335 | What is the key difference between mean pooling BERT vs. mean pooling sentence-transformers? | Hi!
If I run sentence-transformers without pre-training, is it equivalent to applying mean pooling to the last layer of BERT?
For example, if I run the below code,
```python
# Use BERT for mapping tokens to embeddings
word_embedding_model = models.Transformer('bert-base-uncased')
# Apply mean pooling to get on... | https://github.com/huggingface/sentence-transformers/issues/335 | open | [] | 2020-08-01T02:35:56Z | 2020-08-01T08:39:45Z | null | yuwon |
huggingface/pytorch-image-models | 205 | when I use the old version the result is good, but after I update to the newest code the result is wrong. What's wrong? | Same dataset and same train scripts, with this:
`
./distributed_train.sh 2 /data/data/product/product --model swsl_resnet50 --epochs 20 --warmup-epochs 1 --lr 0.001 --batch-size 16 --img-size 224 --num-classes 30 --pretrained --amp
`
the old code result:
Train: 10 [ 0/185 ( 0%)] Loss: 0.866020 (0.8660) Ti... | https://github.com/huggingface/pytorch-image-models/issues/205 | closed | [] | 2020-07-31T09:54:39Z | 2020-08-03T09:45:04Z | null | runauto |
huggingface/transformers | 6,092 | I don't know what Trainer's Dataset is. | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a wh... | https://github.com/huggingface/transformers/issues/6092 | closed | [] | 2020-07-28T13:11:48Z | 2020-07-28T13:48:43Z | null | Ted8000 |
huggingface/pytorch-image-models | 201 | where is CheckpointSaver? | Hello, going over your repo
(thanks for the great repo, btw)
I can't find where the code for CheckpointSaver is,
nor do I find any checkpoint saved on my PC.
Where can I find them? | https://github.com/huggingface/pytorch-image-models/issues/201 | closed | [] | 2020-07-28T03:51:45Z | 2020-07-28T04:32:20Z | null | ooodragon94 |
huggingface/transformers | 5,940 | What is the difference between the function of add_tokens() and add_special_tokens() in tokenizer | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a wh... | https://github.com/huggingface/transformers/issues/5940 | closed | [] | 2020-07-21T15:29:14Z | 2025-03-05T20:33:05Z | null | kugwzk |
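A sketch of the practical difference as we understand it (the token strings are made up for illustration):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# add_tokens: plain new vocabulary entries
tokenizer.add_tokens(["thermoplast"])

# add_special_tokens: registered in the special-tokens map, so they are
# skipped by decode(..., skip_special_tokens=True) and exposed via
# attributes such as tokenizer.additional_special_tokens
tokenizer.add_special_tokens({"additional_special_tokens": ["<ent>"]})

# In both cases the model's embedding matrix must be resized afterwards:
# model.resize_token_embeddings(len(tokenizer))
print(tokenizer.tokenize("thermoplast <ent>"))
```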
huggingface/transformers | 5,682 | What is the decoder_input for encoder-decoder transformer in training time? | https://datascience.stackexchange.com/questions/76261/whats-the-input-dimension-for-transformer-decoder-during-training
Is the link's answer right?
Thank you very much! | https://github.com/huggingface/transformers/issues/5682 | closed | [] | 2020-07-11T10:48:07Z | 2020-07-12T03:32:38Z | null | guotong1988 |
huggingface/transformers | 5,564 | Where is the documentation on migrating to the 3.0 tokenizer API? | I see that you folks have completely changed the API to do tokenizing, e.g. for BertTokenizer. I have a lot of code using the two methods `encode_plus()` and `batch_encode_plus()`, and when I went to the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html) to look up an argument, I found tha... | https://github.com/huggingface/transformers/issues/5564 | closed | [] | 2020-07-07T03:17:26Z | 2020-07-07T21:15:04Z | null | githubrandomuser2017 |
huggingface/transformers | 5,447 | Where did "prepare_for_model" go? What is the replacement? | I'm working with already numericalized data (e.g., where the text has been converted to ids via `tokenizer.tokenize()`) and was using `prepare_for_model` to build the appropriate input dictionary ... ***but*** that method is gone in 3.0.
So ... what should I use/do now?
Thanks | https://github.com/huggingface/transformers/issues/5447 | closed | [] | 2020-07-01T19:20:34Z | 2020-07-03T14:51:22Z | null | ohmeow |
huggingface/transformers | 5,204 | T5 Model : What is maximum sequence length that can be used with pretrained T5 (3b model) checkpoint? | As the paper described, T5 uses a relative attention mechanism and the answer for this [issue](https://github.com/google-research/text-to-text-transfer-transformer/issues/273) says, T5 can use any sequence length were the only constraint is memory.
According to this, can I use T5 to summarize inputs that have more ... | https://github.com/huggingface/transformers/issues/5204 | closed | [] | 2020-06-23T02:36:22Z | 2023-08-29T21:43:31Z | null | shamanez |
huggingface/neuralcoref | 259 | getting a None value for `print(doc._.coref_clusters)` | Hey people, I have attached the code and the output. As you can see, I am getting a None value when I try to `print(doc._.coref_clusters)`, while the code above that line in the given program gives its output well and good. Why is this? Something related to new-version bugs or something like that? Please respond, thanks.... | https://github.com/huggingface/neuralcoref/issues/259 | closed | [
"question"
] | 2020-06-18T19:12:59Z | 2020-06-19T07:58:38Z | null | chettipalli |
huggingface/neuralcoref | 257 | Load new trained model | Dear guys,
Thank you so much for your interesting work. I was able to train a new model based on [this instruction](https://github.com/huggingface/neuralcoref/blob/master/neuralcoref/train/training.md) and this [blog post](https://medium.com/huggingface/how-to-train-a-neural-coreference-model-neuralcoref-2-7bb30c1a...
"question"
] | 2020-06-13T16:14:52Z | 2021-07-15T07:32:04Z | null | SysDevHayes |
huggingface/transformers | 4,937 | What is the different options for pooler_type in Bert config ? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make s... | https://github.com/huggingface/transformers/issues/4937 | closed | [] | 2020-06-11T14:26:20Z | 2020-06-18T07:26:02Z | null | ClementViricel |
huggingface/datasets | 246 | What is the best way to cache a dataset? | For example, if I want to use streamlit with an nlp dataset:
```
@st.cache
def load_data():
return nlp.load_dataset('squad')
```
This code raises the error "uncachable object"
Right now I just fixed it with a constant for my specific case:
```
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
```... | https://github.com/huggingface/datasets/issues/246 | closed | [] | 2020-06-06T11:02:07Z | 2020-07-09T09:15:07Z | null | Mistobaan |
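A common alternative to the hash_funcs workaround, sketched with the library names from the issue (`nlp` was later renamed `datasets`): `allow_output_mutation=True` tells Streamlit not to hash the returned object at all.

```python
import streamlit as st
import nlp  # later renamed to `datasets`

# Skipping output hashing sidesteps the "uncachable object" error
# raised for Arrow-backed dataset objects.
@st.cache(allow_output_mutation=True)
def load_data():
    return nlp.load_dataset("squad")

dataset = load_data()
```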
huggingface/transformers | 4,817 | Question: Where do I find the Transformer model from the paper "Attention is all you need" ? | Hello
Firstly, thanks for supporting all questions here.
I read the paper "Attention is all you need" and am wondering which class I should use in the HuggingFace library to get the Transformer architecture used in the paper.
Can you please advise?
Thanks
Abhishek | https://github.com/huggingface/transformers/issues/4817 | closed | [] | 2020-06-06T10:34:56Z | 2020-06-08T22:37:27Z | null | abhisheksgumadi |
huggingface/neuralcoref | 256 | Can't locate CorScorer.pm | Dear guys,
Thank you for your interesting work. I'm training the model for a new language (Dutch) using the SoNars corpus. Because SoNars was in MMAX, I used a modification of this script (https://github.com/andreasvc/dutchcoref/blob/master/mmaxconll.py) to convert it to CoNLL format.
After that, I ... | https://github.com/huggingface/neuralcoref/issues/256 | closed | [
"question"
] | 2020-05-31T10:33:59Z | 2021-11-02T14:06:49Z | null | SysDevHayes |
huggingface/swift-coreml-transformers | 19 | What GPT-2 model is distilled here? | Is it the gpt2-small (124M), gpt2-medium (345M), gpt2-large (774M), or the gpt-xl (1.5B) that this implementation uses out of the box? | https://github.com/huggingface/swift-coreml-transformers/issues/19 | closed | [] | 2020-05-05T02:08:49Z | 2023-04-01T18:01:45Z | null | philipkd |
huggingface/neuralcoref | 250 | How to improve processing speed? | Hi.
Could you give me some information about how to tune the parameters to make processing faster, even at the expense of accuracy?
How much impact does the `greedyness` parameter have on speed?
Thanks! | https://github.com/huggingface/neuralcoref/issues/250 | closed | [
"question",
"wontfix",
"perf / speed"
] | 2020-04-17T16:32:08Z | 2022-01-09T04:06:48Z | null | murphd40 |
huggingface/transformers | 3,424 | Where is the code for Bart fine-tuning? Thanks | https://github.com/huggingface/transformers/issues/3424 | closed | [] | 2020-03-25T01:54:34Z | 2020-04-16T15:03:10Z | null | qiunlp | |
huggingface/transformers | 3,283 | What is the most effective way to use BERT , ROBERTA , GPT-2 architectures as frozen feature extractors ? | We use pretrained self-supervised learning (SSL) models for NLP as feature extractors for downstream tasks like sentiment analysis. In most of such cases, we add a simple new classification layer and **fine-tune the whole model**. With the SSL models getting bigger and the amount of unsupervised training data is huge ... | https://github.com/huggingface/transformers/issues/3283 | closed | [
"Discussion",
"wontfix"
] | 2020-03-15T09:06:20Z | 2020-06-02T09:15:03Z | null | shamanez |
huggingface/neuralcoref | 248 | German Training not working | Hi, we tried to train your model for German. We used GloVe in German but it doesn't work.
How does the binary static_word_embeddings.npy need to be structured?
| https://github.com/huggingface/neuralcoref/issues/248 | closed | [
"question",
"wontfix",
"training",
"feat / coref"
] | 2020-03-11T10:25:36Z | 2022-01-09T04:06:40Z | null | SimonF89 |
huggingface/transformers | 3,205 | where are the position embeddings in bert for training a new model from scratch? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make s... | https://github.com/huggingface/transformers/issues/3205 | closed | [
"wontfix"
] | 2020-03-10T13:35:16Z | 2020-05-16T17:44:04Z | null | 2hip3ng |
huggingface/transformers | 3,193 | Where is the default download address for pre-trained weight | # ❓ Questions & Help
```
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
```
I can't find the downloaded file.
Thanks for your help | https://github.com/huggingface/transformers/issues/3193 | closed | [] | 2020-03-09T17:35:47Z | 2020-03-09T17:52:49Z | null | 649459021 |
huggingface/blog | 5 | Where is the CoNLL-2003 formatted Esperanto dataset ref. in the tutorial? | > Using a dataset of annotated Esperanto POS tags formatted in the CoNLL-2003 format
Where is this dataset?
Thanks! | https://github.com/huggingface/blog/issues/5 | open | [] | 2020-02-20T04:26:54Z | 2020-03-04T16:30:32Z | null | ohmeow |
huggingface/sentence-transformers | 120 | What is the expected number of epochs for training sentenceBERT | Hi,
Given a model in {BERT, XLM, XLNet, ...}, do you have a dictionary of the estimated best number of epochs for training your Siamese Network on the NLI dataset?
Else, what would be your suggestion on this? (other than just keep trying with different epochs parameters since it takes a lot of computational time 😞 )
... | https://github.com/huggingface/sentence-transformers/issues/120 | open | [] | 2020-02-04T14:17:22Z | 2020-06-08T19:48:20Z | null | MastafaF |
huggingface/transformers | 2,705 | What is the input for TFBertForSequenceClassification? | # ❓ Questions & Help
What is the input for TFBertForSequenceClassification?
## Details
I have a simple multiclass text data on which I want to train the BERT model.
From docs I have found the input format of data:
```a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: ... | https://github.com/huggingface/transformers/issues/2705 | closed | [] | 2020-02-01T10:20:29Z | 2020-03-12T08:41:25Z | null | sainimohit23 |
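A minimal sketch of the dict-of-tensors input that Keras accepts for this model, using the modern tokenizer call API (hyperparameters and texts are illustrative only):

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

texts = ["good movie", "terrible plot", "it was fine"]
labels = [2, 0, 1]

# input_ids / token_type_ids / attention_mask, in the documented order,
# packed as a dict that Keras passes straight to the model
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
```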
huggingface/transformers | 2,591 | What is the f1 score of Squad v2.0 on bert-base? I only got f1 score 74.78. | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello, I am doing some experiments with SQuAD v2.0 on bert-base (NOT bert-large).
According to the BERT paper, bert-large achieves an F1 score of 81.9 on SQuAD v2.0.
Since I couldn't find the official result for bert-base, I am not sure if I... | https://github.com/huggingface/transformers/issues/2591 | closed | [] | 2020-01-20T09:03:45Z | 2020-01-22T05:03:12Z | null | YJYJLee |
huggingface/tokenizers | 73 | Decoding to string | Hi, thanks for this awesome library!
I want to decode BPE back to *actual* text, so that I can calculate BLEU scores. When I use the tokenizer.decoder, I get a string without any whitespace. I understand I can use a `pre_tokenizer` to get whitespaces, but in that case the decoded output would be `i can feel the mag ... | https://github.com/huggingface/tokenizers/issues/73 | closed | [
"question",
"python"
] | 2020-01-15T12:58:44Z | 2020-01-20T15:38:29Z | null | davidstap |
huggingface/transformers | 2,411 | What is the difference between T5Model, T5WithLMHeadModel, T5PreTrainedModel? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I notice that for the T5 model, there are more choices (T5Model, T5WithLMHeadModel, T5PreTrainedModel) than for BERT or GPT. What is the difference between these three? I think all three are pre-trained models. We do not use T5PreTrainedModel in ... | https://github.com/huggingface/transformers/issues/2411 | closed | [
"wontfix"
] | 2020-01-06T07:01:32Z | 2020-03-13T08:09:42Z | null | g-jing |
huggingface/transformers | 2,372 | What is the "could not find answer" warning in squad.py | Hello,
I am trying to run run_squad.py for BERT (italian-cased) with an Italian version of SQuAD.
During the creation of features from the dataset, I got some answers skipped, like the following:
<img width="478" alt="Screenshot 2019-12-30 at 23 30 19" src="https://user-images.githubusercontent.com/26765504/71603... | https://github.com/huggingface/transformers/issues/2372 | closed | [
"wontfix"
] | 2019-12-30T22:31:58Z | 2020-08-29T15:05:37Z | null | cppntn |
huggingface/transformers | 2,278 | where is the script of a second step of knowledge distillation on SQuAD 1.0? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
In the Distil part, there is a paragraph description which is "distilbert-base-uncased-distilled-squad: A finetuned version of distilbert-base-uncased finetuned using (a second step of) knowledge distillation on SQuAD 1.0. This model reache... | https://github.com/huggingface/transformers/issues/2278 | closed | [
"wontfix"
] | 2019-12-23T09:13:26Z | 2020-05-08T15:29:08Z | null | c0derm4n |
huggingface/pytorch-image-models | 63 | what is the value range of magnitude in auto-augment when the MAX_LEVEL is set as 10. | Dear @rwightman , I have read the code about auto-augmentation and random-augmentation, and I noticed that the MAX_LEVEL is set as 10, same as the google's implementation. Also in the google implementation, they say an optimal magnitude is often in [5, 30]. But in your implementation you clip the input magnitude to be ... | https://github.com/huggingface/pytorch-image-models/issues/63 | closed | [] | 2019-12-23T08:49:19Z | 2019-12-26T23:40:49Z | null | cddlyf |
huggingface/transformers | 2,230 | what is the most efficient way to store all hidden layers' weights? | Hi,
I am following this [post](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) for getting all 12 hidden layers' weights for every token in a sentence.
Consider I have a short text with 2 sentences: `He stole money today. He is fishing on the Mississippi riverbank.`
I want to store 5 + 8 = 1... | https://github.com/huggingface/transformers/issues/2230 | closed | [
"wontfix"
] | 2019-12-19T19:41:00Z | 2020-02-24T20:38:46Z | null | vr25 |
huggingface/pytorch-image-models | 61 | where is your MixNet code? I can't find it. | https://github.com/huggingface/pytorch-image-models/issues/61 | closed | [] | 2019-12-17T02:49:04Z | 2019-12-17T05:30:46Z | null | xiebinghua | |
huggingface/transformers | 2,127 | Where is extract_features.py and run_classifier.py ? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello! I couldn't find extract_features.py and run_classifier.py. Have they been renamed? | https://github.com/huggingface/transformers/issues/2127 | closed | [] | 2019-12-10T17:14:27Z | 2019-12-13T15:09:01Z | null | JiangYanting |
huggingface/transformers | 2,013 | What are the real parameters to weight the triple loss (L_{ce}, L_{mlm}, L_{cos}) in DistilBert? | Hello! Thanks for your great work on DistilBert. I want to ask what the real "alpha" parameters are that you used in DistilBert to weight the triple loss (L_{ce}, L_{mlm}, L_{cos}).
You did not mention this detail in your NIPS workshop paper (http://arxiv.org/abs/1910.01108). In the [README](https://github.com/huggingface/tr... | https://github.com/huggingface/transformers/issues/2013 | closed | [] | 2019-12-01T16:49:05Z | 2019-12-02T15:37:37Z | null | voidism |
huggingface/neuralcoref | 228 | Integration of different word embeddings for prediction | Hi,
I am using SciSpacy with neuralcoref (by adding `ENTITY` to `ACCEPTED_ENTS`) and would also like to use the SciSpacy word vectors if possible.
I already have switched the `self.static_vectors` and `self.tuned_vectors` to point to the `self.vocab.vectors` in the `NeuralCoref` constructor. I also changed `SIZE... | https://github.com/huggingface/neuralcoref/issues/228 | closed | [
"question",
"wontfix",
"usage"
] | 2019-11-25T17:01:15Z | 2022-01-09T04:06:41Z | null | masonedmison |
huggingface/neuralcoref | 227 | What is the performance on CoNLL-2012 test set? | Hi,
Thank you for your excellent work. I am looking for an off-the-shelf tool to do some coref text processing. I am wondering about the model performance of this repo on the CoNLL-2012, such as the Avg. F1 score.
Would you please post it here or in the readme file? Thanks a lot. | https://github.com/huggingface/neuralcoref/issues/227 | closed | [
"question",
"perf / accuracy"
] | 2019-11-25T09:26:30Z | 2019-12-06T21:57:04Z | null | magic282 |
huggingface/transformers | 1,866 | BertForTokenClassification for NER. What is the conclusion of this output? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I'm trying to perform NER using BertForTokenClassification. I saw this sample code on the transformers GitHub page.
from transformers import BertForTokenClassification
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
... | https://github.com/huggingface/transformers/issues/1866 | closed | [
"wontfix"
] | 2019-11-19T09:23:23Z | 2020-02-04T21:23:21Z | null | AjitAntony |
huggingface/transformers | 1,834 | Where is Model2Model PreTrainedEncoderDecoder in run_summerization_finetune | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
| https://github.com/huggingface/transformers/issues/1834 | closed | [
"wontfix"
] | 2019-11-14T18:09:24Z | 2020-03-09T03:39:51Z | null | yeliu918 |