| url (stringlengths 58–61) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 72–75) | comments_url (stringlengths 67–70) | events_url (stringlengths 65–68) | html_url (stringlengths 48–51) | id (int64, 600M–1.08B) | node_id (stringlengths 18–24) | number (int64, 2–3.45k) | title (stringlengths 1–276) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (int64, 1,587B–1,640B) | updated_at (int64, 1,588B–1,640B) | closed_at (int64, 1,588B–1,640B ⌀) | author_association (stringclasses, 3 values) | active_lock_reason (null) | body (stringlengths 0–228k ⌀) | reactions (dict) | timeline_url (stringlengths 67–70) | performed_via_github_app (null) | draft (null) | pull_request (null) | is_pull_request (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2387/comments | https://api.github.com/repos/huggingface/datasets/issues/2387/events | https://github.com/huggingface/datasets/issues/2387 | 897,566,666 | MDU6SXNzdWU4OTc1NjY2NjY= | 2,387 | datasets 1.6 ignores cache | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:\r\n\r\n> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset(\"sst\", keep_in_m... | 1,621,555,978,000 | 1,622,045,274,000 | 1,622,045,274,000 | CONTRIBUTOR | null | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2387/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2386/comments | https://api.github.com/repos/huggingface/datasets/issues/2386/events | https://github.com/huggingface/datasets/issues/2386 | 897,560,049 | MDU6SXNzdWU4OTc1NjAwNDk= | 2,386 | Accessing Arrow dataset cache_files | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks @bhavitvyamalik for referencing the workaround. Setting `keep_in_memory=False` is working."
] | 1,621,555,063,000 | 1,621,624,683,000 | 1,621,624,683,000 | NONE | null | ## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried l... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2386/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2382/comments | https://api.github.com/repos/huggingface/datasets/issues/2382/events | https://github.com/huggingface/datasets/issues/2382 | 895,610,216 | MDU6SXNzdWU4OTU2MTAyMTY= | 2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | {
"login": "helloworld123-lab",
"id": 75953751,
"node_id": "MDQ6VXNlcjc1OTUzNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helloworld123-lab",
"html_url": "https://github.com/helloworld123-lab",
"followers_url": "https... | [] | closed | false | null | [] | null | [] | 1,621,439,388,000 | 1,622,381,176,000 | 1,622,381,176,000 | NONE | null | Hello everyone,
I try to use head_qa dataset in [https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en](url)
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I write above load_dataset(.), it throws the following:
```
Duplicated... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2382/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2378/comments | https://api.github.com/repos/huggingface/datasets/issues/2378/events | https://github.com/huggingface/datasets/issues/2378 | 895,131,774 | MDU6SXNzdWU4OTUxMzE3NzQ= | 2,378 | Add missing dataset_infos.json files | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github... | null | [] | 1,621,411,872,000 | 1,621,411,872,000 | null | MEMBER | null | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPat... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2378/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2377/comments | https://api.github.com/repos/huggingface/datasets/issues/2377/events | https://github.com/huggingface/datasets/issues/2377 | 894,918,927 | MDU6SXNzdWU4OTQ5MTg5Mjc= | 2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.\r\nMore info at #1933 "
] | 1,621,389,877,000 | 1,622,803,151,000 | null | NONE | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arro... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2377/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2373/comments | https://api.github.com/repos/huggingface/datasets/issues/2373/events | https://github.com/huggingface/datasets/issues/2373 | 894,499,909 | MDU6SXNzdWU4OTQ0OTk5MDk= | 2,373 | Loading dataset from local path | {
"login": "kolakows",
"id": 34172905,
"node_id": "MDQ6VXNlcjM0MTcyOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolakows",
"html_url": "https://github.com/kolakows",
"followers_url": "https://api.github.com/users/kol... | [] | closed | false | null | [] | null | [
"Version below works, checked again in the docs, and data_files should be a path.\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='/data/dir/corpus.txt', \r\n cache_dir='.')\r\n```"
] | 1,621,351,250,000 | 1,621,352,196,000 | 1,621,352,195,000 | NONE | null | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to u... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2373/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2371/comments | https://api.github.com/repos/huggingface/datasets/issues/2371/events | https://github.com/huggingface/datasets/issues/2371 | 894,193,403 | MDU6SXNzdWU4OTQxOTM0MDM= | 2,371 | Align question answering tasks with sub-domains | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github... | null | [] | 1,621,331,279,000 | 1,621,331,362,000 | null | MEMBER | null | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` bu... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2371/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2366/comments | https://api.github.com/repos/huggingface/datasets/issues/2366/events | https://github.com/huggingface/datasets/issues/2366 | 893,185,266 | MDU6SXNzdWU4OTMxODUyNjY= | 2,366 | Json loader fails if user-specified features don't match the json data fields order | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 1,621,247,168,000 | 1,623,840,469,000 | 1,623,840,469,000 | MEMBER | null | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then depending on the order of the features in the json data field it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if s... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2366/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2365/comments | https://api.github.com/repos/huggingface/datasets/issues/2365/events | https://github.com/huggingface/datasets/issues/2365 | 893,179,697 | MDU6SXNzdWU4OTMxNzk2OTc= | 2,365 | Missing ClassLabel encoding in Json loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title... | [] | 1,621,246,750,000 | 1,624,892,734,000 | 1,624,892,734,000 | MEMBER | null | Currently if you want to load a json dataset this way
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then if your features has ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2365/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2365/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2363/comments | https://api.github.com/repos/huggingface/datasets/issues/2363/events | https://github.com/huggingface/datasets/issues/2363 | 892,391,232 | MDU6SXNzdWU4OTIzOTEyMzI= | 2,363 | Trying to use metric.compute but get OSError | {
"login": "hyusterr",
"id": 52968111,
"node_id": "MDQ6VXNlcjUyOTY4MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/52968111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyusterr",
"html_url": "https://github.com/hyusterr",
"followers_url": "https://api.github.com/users/hyu... | [] | open | false | null | [] | null | [
"also, I test the function on some little data , get the same message:\r\n\r\n```\r\nPython 3.8.5 (default, Jan 27 2021, 15:41:15)\r\n[GCC 9.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_metric\r\n>>> metric = load_metric('accuracy'... | 1,621,067,946,000 | 1,630,936,866,000 | null | NONE | null | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2363/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2360/comments | https://api.github.com/repos/huggingface/datasets/issues/2360/events | https://github.com/huggingface/datasets/issues/2360 | 891,965,964 | MDU6SXNzdWU4OTE5NjU5NjQ= | 2,360 | Automatically detect datasets with compatible task schemas | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github... | null | [] | 1,621,002,220,000 | 1,621,002,220,000 | null | MEMBER | null | See description of #2255 for details.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2360/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2359/comments | https://api.github.com/repos/huggingface/datasets/issues/2359/events | https://github.com/huggingface/datasets/issues/2359 | 891,946,017 | MDU6SXNzdWU4OTE5NDYwMTc= | 2,359 | Allow model labels to be passed during task preparation | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [] | open | false | null | [] | null | [] | 1,621,000,708,000 | 1,621,000,708,000 | null | MEMBER | null | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example for sentiment classification on amazon reviews with you could have these labels:... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2359/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2356/comments | https://api.github.com/repos/huggingface/datasets/issues/2356/events | https://github.com/huggingface/datasets/issues/2356 | 890,511,019 | MDU6SXNzdWU4OTA1MTEwMTk= | 2,356 | How to Add New Metrics Guide | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! sorry for the late response \r\n\r\nIt would be fantastic to have a guide for adding metrics as well ! Currently we only have this template here:\r\nhttps://github.com/huggingface/datasets/blob/master/templates/new_metric_script.py\r\n\r\nWe can also include test utilities for metrics in the guide.\r\n\r\nWe ... | 1,620,855,726,000 | 1,622,486,975,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.
**Describe the solution you'd like**
I'd like for a guide in a similar style to the dataset guide ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2356/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2354/comments | https://api.github.com/repos/huggingface/datasets/issues/2354/events | https://github.com/huggingface/datasets/issues/2354 | 890,439,523 | MDU6SXNzdWU4OTA0Mzk1MjM= | 2,354 | Document DatasetInfo attributes | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/fo... | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github... | null | [] | 1,620,849,689,000 | 1,621,675,574,000 | 1,621,675,574,000 | MEMBER | null | **Is your feature request related to a problem? Please describe.**
As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2354/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2350/comments | https://api.github.com/repos/huggingface/datasets/issues/2350/events | https://github.com/huggingface/datasets/issues/2350 | 889,580,247 | MDU6SXNzdWU4ODk1ODAyNDc= | 2,350 | `FaissIndex.save` throws error on GPU | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```"
] | 1,620,790,916,000 | 1,621,258,901,000 | 1,621,258,901,000 | CONTRIBUTOR | null | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2350/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2347/comments | https://api.github.com/repos/huggingface/datasets/issues/2347/events | https://github.com/huggingface/datasets/issues/2347 | 887,404,868 | MDU6SXNzdWU4ODc0MDQ4Njg= | 2,347 | Add an API to access the language and pretty name of a dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugge... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\... | 1,620,742,208,000 | 1,621,589,206,000 | null | MEMBER | null | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2347/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2345/comments | https://api.github.com/repos/huggingface/datasets/issues/2345/events | https://github.com/huggingface/datasets/issues/2345 | 886,586,872 | MDU6SXNzdWU4ODY1ODY4NzI= | 2,345 | [Question] How to move and reuse preprocessed dataset? | {
"login": "AtmaHou",
"id": 15045402,
"node_id": "MDQ6VXNlcjE1MDQ1NDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AtmaHou",
"html_url": "https://github.com/AtmaHou",
"followers_url": "https://api.github.com/users/AtmaHo... | [] | closed | false | null | [] | null | [
"@lhoestq @LysandreJik",
"<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n",
"Also note that for the caching to work, you must re... | 1,620,724,157,000 | 1,623,386,351,000 | 1,623,386,351,000 | NONE | null | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2345/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2344/comments | https://api.github.com/repos/huggingface/datasets/issues/2344/events | https://github.com/huggingface/datasets/issues/2344 | 885,331,505 | MDU6SXNzdWU4ODUzMzE1MDU= | 2,344 | Is there a way to join multiple datasets in one? | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n"
] | 1,620,688,570,000 | 1,620,721,488,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
Id like to join them with a merge or join method, just like pandas dataframes.
**Add... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2344/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2343/comments | https://api.github.com/repos/huggingface/datasets/issues/2343/events | https://github.com/huggingface/datasets/issues/2343 | 883,208,539 | MDU6SXNzdWU4ODMyMDg1Mzk= | 2,343 | Columns are removed before or after map function applied? | {
"login": "taghizad3h",
"id": 8199406,
"node_id": "MDQ6VXNlcjgxOTk0MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taghizad3h",
"html_url": "https://github.com/taghizad3h",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,620,614,180,000 | 1,620,614,180,000 | null | NONE | null | ## Describe the bug
According to the documentation when applying map function the [remove_columns ](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2343/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2337/comments | https://api.github.com/repos/huggingface/datasets/issues/2337/events | https://github.com/huggingface/datasets/issues/2337 | 881,610,567 | MDU6SXNzdWU4ODE2MTA1Njc= | 2,337 | NonMatchingChecksumError for web_of_science dataset | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! "
] | 1,620,525,722,000 | 1,620,653,753,000 | 1,620,653,753,000 | NONE | null | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verfications=True` results... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2337/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2335/comments | https://api.github.com/repos/huggingface/datasets/issues/2335/events | https://github.com/huggingface/datasets/issues/2335 | 881,291,887 | MDU6SXNzdWU4ODEyOTE4ODc= | 2,335 | Index error in Dataset.map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,620,506,697,000 | 1,620,653,172,000 | 1,620,653,172,000 | CONTRIBUTOR | null | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2335/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | {
"login": "ktangri",
"id": 22266659,
"node_id": "MDQ6VXNlcjIyMjY2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktangri",
"html_url": "https://github.com/ktangri",
"followers_url": "https://api.github.com/users/ktangr... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,620,395,039,000 | 1,620,395,039,000 | null | NONE | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2330/comments | https://api.github.com/repos/huggingface/datasets/issues/2330/events | https://github.com/huggingface/datasets/issues/2330 | 878,490,927 | MDU6SXNzdWU4Nzg0OTA5Mjc= | 2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/fo... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6... | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?",
"I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, ... | 1,620,366,774,000 | 1,622,041,161,000 | 1,622,041,161,000 | CONTRIBUTOR | null | It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2330/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2327/comments | https://api.github.com/repos/huggingface/datasets/issues/2327/events | https://github.com/huggingface/datasets/issues/2327 | 877,565,831 | MDU6SXNzdWU4Nzc1NjU4MzE= | 2,327 | A syntax error in example | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymus... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"cc @beurkinger but I think this has been fixed internally and will soon be updated right ?",
"This issue has been fixed."
] | 1,620,311,684,000 | 1,621,479,859,000 | 1,621,479,859,000 | NONE | null | 
Sorry to report with an image, I can't find the template source code of this snippet. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2327/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2323/comments | https://api.github.com/repos/huggingface/datasets/issues/2323/events | https://github.com/huggingface/datasets/issues/2323 | 876,438,507 | MDU6SXNzdWU4NzY0Mzg1MDc= | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | {
"login": "ekeleshian",
"id": 33647474,
"node_id": "MDQ6VXNlcjMzNjQ3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekeleshian",
"html_url": "https://github.com/ekeleshian",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Upgrading datasets to version 1.6 fixes the issue",
"This bug was fixed in #1995. Upgrading the `datasets` should work! ",
"Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists."
] | 1,620,220,488,000 | 1,620,383,550,000 | 1,620,383,550,000 | NONE | null | ## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasant... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2323/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2322/comments | https://api.github.com/repos/huggingface/datasets/issues/2322/events | https://github.com/huggingface/datasets/issues/2322 | 876,383,853 | MDU6SXNzdWU4NzYzODM4NTM= | 2,322 | Calls to map are not cached. | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] ... | 1,620,216,687,000 | 1,623,179,402,000 | 1,623,179,301,000 | NONE | null | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2322/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2319/comments | https://api.github.com/repos/huggingface/datasets/issues/2319/events | https://github.com/huggingface/datasets/issues/2319 | 876,251,376 | MDU6SXNzdWU4NzYyNTEzNzY= | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | {
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/foll... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default ... | 1,620,206,572,000 | 1,620,212,251,000 | 1,620,211,855,000 | NONE | null | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2319/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2318/comments | https://api.github.com/repos/huggingface/datasets/issues/2318/events | https://github.com/huggingface/datasets/issues/2318 | 876,212,460 | MDU6SXNzdWU4NzYyMTI0NjA= | 2,318 | [api request] API to obtain "dataset_module" dynamic path? | {
"login": "richardliaw",
"id": 4529381,
"node_id": "MDQ6VXNlcjQ1MjkzODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardliaw",
"html_url": "https://github.com/richardliaw",
"followers_url": "https://api.github.com/us... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_... | 1,620,204,048,000 | 1,620,290,745,000 | 1,620,287,874,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
This is an awesome library.
It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2318/timeline | null | null | null | false |
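For reference, the helper quoted in the comment above can be exercised directly. The import path follows that comment; treat it as version-dependent rather than a stable public API:
```python
# Sketch based on the comment in the row above; the import location may
# vary across datasets versions.
from datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES

dynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)
print(dynamic_modules_path)  # e.g. ~/.cache/huggingface/modules/datasets_modules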
https://api.github.com/repos/huggingface/datasets/issues/2316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2316/comments | https://api.github.com/repos/huggingface/datasets/issues/2316/events | https://github.com/huggingface/datasets/issues/2316 | 875,756,353 | MDU6SXNzdWU4NzU3NTYzNTM= | 2,316 | Incorrect version specification for pyarrow | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed by #2317."
] | 1,620,155,711,000 | 1,620,209,403,000 | 1,620,209,403,000 | CONTRIBUTOR | null | ## Describe the bug
The pyarrow dependency is incorrectly specified in the setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2316/timeline | null | null | null | false |
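The bug in the row above is a missing comma between the two version bounds, so the intended upper limit is never applied. A small sketch with the third-party `packaging` library showing the malformed versus intended specifier; whether a given setuptools version rejects or silently mis-parses the original string may vary:
```python
from packaging.requirements import InvalidRequirement, Requirement

bad = "pyarrow>=1.0.0<4.0.0"    # missing comma: one malformed specifier
good = "pyarrow>=1.0.0,<4.0.0"  # ">=1.0.0" AND "<4.0.0", as intended

try:
    Requirement(bad)
except InvalidRequirement as err:
    print("rejected:", err)

print(Requirement(good).specifier)  # <4.0.0,>=1.0.0
```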
https://api.github.com/repos/huggingface/datasets/issues/2308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2308/comments | https://api.github.com/repos/huggingface/datasets/issues/2308/events | https://github.com/huggingface/datasets/issues/2308 | 874,559,846 | MDU6SXNzdWU4NzQ1NTk4NDY= | 2,308 | Add COCO evaluation metrics | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @NielsRogge, \r\nI'd like to contribute these metrics to datasets. Let's start with `CocoEvaluator` first? Currently how are are you sending the ground truths and predictions in coco_evaluator?\r\n",
"Great!\r\n\r\nHere's a notebook that illustrates how I'm using `CocoEvaluator`: https://drive.google.com/file... | 1,620,047,285,000 | 1,622,790,687,000 | null | NONE | null | I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2308/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2308/timeline | null | null | null | false |
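As context for the row above, a sketch of the plain `pycocotools` flow that `CocoEvaluator`-style wrappers build on; the file names are placeholders, and DETR's evaluator adds distributed-gathering logic on top of this:
```python
# Bare pycocotools evaluation loop; "annotations.json" and
# "predictions.json" are placeholder paths to COCO-format files.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations.json")             # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # model detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the standard COCO AP/AR table
```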
https://api.github.com/repos/huggingface/datasets/issues/2301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2301/comments | https://api.github.com/repos/huggingface/datasets/issues/2301/events | https://github.com/huggingface/datasets/issues/2301 | 873,941,266 | MDU6SXNzdWU4NzM5NDEyNjY= | 2,301 | Unable to setup dev env on Windows | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visu... | 1,619,961,642,000 | 1,620,055,081,000 | 1,620,055,054,000 | CONTRIBUTOR | null | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2301/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2300/comments | https://api.github.com/repos/huggingface/datasets/issues/2300/events | https://github.com/huggingface/datasets/issues/2300 | 873,928,169 | MDU6SXNzdWU4NzM5MjgxNjk= | 2,300 | Add VoxPopuli | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [
"I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alte... | 1,619,957,860,000 | 1,620,901,912,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli's raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2300/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2300/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2299/comments | https://api.github.com/repos/huggingface/datasets/issues/2299/events | https://github.com/huggingface/datasets/issues/2299 | 873,914,717 | MDU6SXNzdWU4NzM5MTQ3MTc= | 2,299 | My iPhone | {
"login": "Jasonbuchanan1983",
"id": 82856229,
"node_id": "MDQ6VXNlcjgyODU2MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jasonbuchanan1983",
"html_url": "https://github.com/Jasonbuchanan1983",
"followers_url": "https... | [] | closed | false | null | [] | null | [] | 1,619,953,871,000 | 1,627,032,256,000 | 1,620,029,858,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2299/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2296/comments | https://api.github.com/repos/huggingface/datasets/issues/2296/events | https://github.com/huggingface/datasets/issues/2296 | 872,974,907 | MDU6SXNzdWU4NzI5NzQ5MDc= | 2,296 | 1 | {
"login": "zinnyi",
"id": 82880142,
"node_id": "MDQ6VXNlcjgyODgwMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/82880142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zinnyi",
"html_url": "https://github.com/zinnyi",
"followers_url": "https://api.github.com/users/zinnyi/fo... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,619,805,229,000 | 1,620,029,851,000 | 1,620,029,851,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2296/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2294/comments | https://api.github.com/repos/huggingface/datasets/issues/2294/events | https://github.com/huggingface/datasets/issues/2294 | 872,136,075 | MDU6SXNzdWU4NzIxMzYwNzU= | 2,294 | Slow #0 when using map to tokenize. | {
"login": "VerdureChen",
"id": 31714566,
"node_id": "MDQ6VXNlcjMxNzE0NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/31714566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VerdureChen",
"html_url": "https://github.com/VerdureChen",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | [
"Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?\r\nThere are no difference between process 0 and the others except that it processes the first shard of the dataset.",
"Hi, I have found the reason of it. Before using the map function to tokenize the data,... | 1,619,769,633,000 | 1,620,126,011,000 | null | NONE | null | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2294/timeline | null | null | null | false |
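A self-contained sketch of the multiprocess tokenization pattern discussed in the row above; the dataset and tokenizer here are stand-ins, not the reporter's exact setup:
```python
# Illustrative num_proc > 1 map; each worker processes one shard, and
# shard 0 is handled by the process #0 mentioned in the report.
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True, max_length=128)

tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=4,
    remove_columns=["text"],
    load_from_cache_file=True,
)
```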
https://api.github.com/repos/huggingface/datasets/issues/2288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2288/comments | https://api.github.com/repos/huggingface/datasets/issues/2288/events | https://github.com/huggingface/datasets/issues/2288 | 871,111,235 | MDU6SXNzdWU4NzExMTEyMzU= | 2,288 | Load_dataset for local CSV files | {
"login": "sstojanoska",
"id": 17052700,
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstojanoska",
"html_url": "https://github.com/sstojanoska",
"followers_url": "https://api.github.com/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# lo... | 1,619,708,470,000 | 1,623,764,966,000 | 1,623,764,966,000 | NONE | null | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each holding a list of strings.
row example:
```
tokens | labels
['I', 'am', 'John'] | ['PRON', 'AUX', 'PROPN']
``... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2288/timeline | null | null | null | false |
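A fleshed-out version of the workaround sketched in the comment above (its snippet is truncated at `# lo...`): load the CSV, then parse the stringified lists with `ast.literal_eval`. The file name is a placeholder; the column names follow the example row:
```python
# Hedged completion of the comment's ast-based approach.
import ast
from datasets import load_dataset

dataset = load_dataset("csv", data_files={"train": "data.csv"})

def parse_list_columns(example):
    example["tokens"] = ast.literal_eval(example["tokens"])
    example["labels"] = ast.literal_eval(example["labels"])
    return example

dataset = dataset.map(parse_list_columns)
```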
https://api.github.com/repos/huggingface/datasets/issues/2285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2285/comments | https://api.github.com/repos/huggingface/datasets/issues/2285/events | https://github.com/huggingface/datasets/issues/2285 | 871,005,236 | MDU6SXNzdWU4NzEwMDUyMzY= | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | {
"login": "danieldiezmallo",
"id": 46021411,
"node_id": "MDQ6VXNlcjQ2MDIxNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danieldiezmallo",
"html_url": "https://github.com/danieldiezmallo",
"followers_url": "https://api... | [] | closed | false | null | [] | null | [
"\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for li... | 1,619,702,205,000 | 1,621,408,965,000 | 1,621,408,959,000 | NONE | null | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document on each line, meaning that each line exceeds the usual 512-token limit of most tokenizers.
I would like to understand what is the process to build a text datas... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2285/timeline | null | null | null | false |
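The forum answer quoted in the comment above is cut off mid-function. A hedged reconstruction of the line-by-line pattern it describes; the filter expression past the truncation is approximated from the standard run_mlm preprocessing, and "docs.txt" is a placeholder file with one document per line:
```python
# Approximate completion of the truncated answer, not its verbatim text.
from datasets import load_dataset
from transformers import AutoTokenizer

max_seq_length = 512
num_proc = 4
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("text", data_files={"train": "docs.txt"})

def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"]
                        if len(line) > 0 and not line.isspace()]
    return tokenizer(examples["text"], truncation=True, max_length=max_seq_length)

tokenized = dataset.map(tokenize_function, batched=True, num_proc=num_proc,
                        remove_columns=["text"])
```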
https://api.github.com/repos/huggingface/datasets/issues/2279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2279/comments | https://api.github.com/repos/huggingface/datasets/issues/2279/events | https://github.com/huggingface/datasets/issues/2279 | 870,431,662 | MDU6SXNzdWU4NzA0MzE2NjI= | 2,279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | {
"login": "tginart",
"id": 11379648,
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tginart",
"html_url": "https://github.com/tginart",
"followers_url": "https://api.github.com/users/tginar... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?",
"Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/is... | 1,619,647,687,000 | 1,619,682,162,000 | 1,619,682,162,000 | NONE | null | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2279/timeline | null | null | null | false |
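A quick way to check the local GLIBC version mentioned in the row above, using only the standard library:
```python
# Report the C library version; Ubuntu 18.04 typically shows ('glibc', '2.27').
import platform

print(platform.libc_ver())
```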
https://api.github.com/repos/huggingface/datasets/issues/2278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2278/comments | https://api.github.com/repos/huggingface/datasets/issues/2278/events | https://github.com/huggingface/datasets/issues/2278 | 870,088,059 | MDU6SXNzdWU4NzAwODgwNTk= | 2,278 | Loss result inGptNeoForCasual | {
"login": "Yossillamm",
"id": 51174606,
"node_id": "MDQ6VXNlcjUxMTc0NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yossillamm",
"html_url": "https://github.com/Yossillamm",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library"
] | 1,619,624,392,000 | 1,620,317,663,000 | 1,620,317,663,000 | NONE | null | Is there any way to get the "loss" and "logits" results in the GPT-Neo API? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2278/timeline | null | null | null | false |
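What the question above asks for is available in `transformers` itself; a sketch, assuming the usual causal-LM API where passing `labels` makes the model return a loss alongside the logits:
```python
# Sketch: loss and logits from GPT-Neo via transformers, not via datasets.
from transformers import AutoTokenizer, GPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)          # scalar language-modeling loss
print(outputs.logits.shape)  # (batch, sequence, vocab)
```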
https://api.github.com/repos/huggingface/datasets/issues/2276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2276/comments | https://api.github.com/repos/huggingface/datasets/issues/2276/events | https://github.com/huggingface/datasets/issues/2276 | 870,010,511 | MDU6SXNzdWU4NzAwMTA1MTE= | 2,276 | concatenate_datasets loads all the data into memory | {
"login": "TaskManager91",
"id": 7063207,
"node_id": "MDQ6VXNlcjcwNjMyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TaskManager91",
"html_url": "https://github.com/TaskManager91",
"followers_url": "https://api.github.... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceba... | 1,619,620,041,000 | 1,620,031,315,000 | 1,620,031,315,000 | NONE | null | ## Describe the bug
When I try to concatenate 2 datasets (10 GB each), all the data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or when concatenating it again.
`dataset_test_filter = dataset['test'].filter(... | 1,619,569,945,000 | 1,621,258,458,000 | 1,621,258,458,000 | NONE | null | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2275/timeline | null | null | null | false |
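A common way to handle the `-1` rows described in the SNLI report above, where `-1` marks examples without a gold label, is simply to filter them out; a minimal sketch:
```python
# Drop SNLI examples whose label is -1 (no gold annotation).
from datasets import load_dataset

snli = load_dataset("snli")
snli = snli.filter(lambda example: example["label"] != -1)
```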
https://api.github.com/repos/huggingface/datasets/issues/2272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2272/comments | https://api.github.com/repos/huggingface/datasets/issues/2272/events | https://github.com/huggingface/datasets/issues/2272 | 869,017,977 | MDU6SXNzdWU4NjkwMTc5Nzc= | 2,272 | Bug in Dataset.class_encode_column | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore"
] | 1,619,539,998,000 | 1,619,787,267,000 | 1,619,787,267,000 | MEMBER | null | ## Describe the bug
All the columns other than the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2272/timeline | null | null | null | false |
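A minimal repro sketch for the regression described above; after `class_encode_column`, every original column should still be present:
```python
# Expected behavior: only the "label" column's type changes; "text" stays.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": ["pos", "neg"]})
ds = ds.class_encode_column("label")
print(ds.column_names)             # expected: ['text', 'label']
print(ds.features["label"].names)  # ['neg', 'pos']
```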
https://api.github.com/repos/huggingface/datasets/issues/2271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2271/comments | https://api.github.com/repos/huggingface/datasets/issues/2271/events | https://github.com/huggingface/datasets/issues/2271 | 869,002,141 | MDU6SXNzdWU4NjkwMDIxNDE= | 2,271 | Synchronize table metadata with features | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"See PR #2274 "
] | 1,619,538,913,000 | 1,619,614,105,000 | null | MEMBER | null | **Is your feature request related to a problem? Please describe.**
As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767):
> Metadata stored in the schema is just redundant information regarding the feature types.
It is used when calling Dataset.from_file to kno... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2271/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2267/comments | https://api.github.com/repos/huggingface/datasets/issues/2267/events | https://github.com/huggingface/datasets/issues/2267 | 868,291,129 | MDU6SXNzdWU4NjgyOTExMjk= | 2,267 | DatasetDict save load Failing test in 1.6 not in 1.5 | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting ! We're looking into it",
"I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?",
"Hi, I just ran into a similar error. Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load... | 1,619,481,805,000 | 1,622,215,654,000 | null | NONE | null | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2267/timeline | null | null | null | false |
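The repro in the row above is truncated; a hedged reconstruction of the failing round-trip, with placeholder path and contents:
```python
# Save a DatasetDict and reload it; under the reported bug the schema of
# the reloaded dataset no longer matches the original.
from datasets import Dataset, DatasetDict, load_from_disk

ds_dict = DatasetDict({"train": Dataset.from_dict({"x": [1, 2, 3]})})
ds_dict.save_to_disk("/tmp/test_foo")

reloaded = load_from_disk("/tmp/test_foo")
assert reloaded["train"].features == ds_dict["train"].features
```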
https://api.github.com/repos/huggingface/datasets/issues/2262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2262/comments | https://api.github.com/repos/huggingface/datasets/issues/2262/events | https://github.com/huggingface/datasets/issues/2262 | 867,325,351 | MDU6SXNzdWU4NjczMjUzNTE= | 2,262 | NewsPH NLI dataset script fails to access test data. | {
"login": "jinmang2",
"id": 37775784,
"node_id": "MDQ6VXNlcjM3Nzc1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37775784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinmang2",
"html_url": "https://github.com/jinmang2",
"followers_url": "https://api.github.com/users/jin... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset."
] | 1,619,419,481,000 | 1,619,688,723,000 | 1,619,688,620,000 | NONE | null | In the Newsph-NLI dataset (#1192), the loading script fails to access the test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If yo... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2262/timeline | null | null | null | false |
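Until the fix reached a release, the comment above suggests pinning the dataset script to `master`; its usage, taken from that comment:
```python
# Load the fixed newsph_nli script from the master branch, per the comment.
from datasets import load_dataset

newsph_nli = load_dataset("newsph_nli", script_version="master")
```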
https://api.github.com/repos/huggingface/datasets/issues/2256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2256/comments | https://api.github.com/repos/huggingface/datasets/issues/2256/events | https://github.com/huggingface/datasets/issues/2256 | 866,708,609 | MDU6SXNzdWU4NjY3MDg2MDk= | 2,256 | Running `datase.map` with `num_proc > 1` uses a lot of memory | {
"login": "roskoN",
"id": 8143425,
"node_id": "MDQ6VXNlcjgxNDM0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roskoN",
"html_url": "https://github.com/roskoN",
"followers_url": "https://api.github.com/users/roskoN/foll... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Thanks for reporting ! We are working on this and we'll do a patch release very soon.",
"We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)"
] | 1,619,258,180,000 | 1,619,457,135,000 | 1,619,457,135,000 | NONE | null | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2256/timeline | null | null | null | false |
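The repro in the row above is truncated after the `load_dataset` call; a hedged sketch of the pattern it describes, with a trivial placeholder function standing in for the reporter's actual one:
```python
# num_proc > 1 map over the reported dataset; the identity mapping is a
# placeholder for the real processing function.
from datasets import load_dataset

dstc8_dataset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
dstc8_dataset = dstc8_dataset.map(lambda example: example, num_proc=4)
```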
https://api.github.com/repos/huggingface/datasets/issues/2252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2252/comments | https://api.github.com/repos/huggingface/datasets/issues/2252/events | https://github.com/huggingface/datasets/issues/2252 | 865,870,710 | MDU6SXNzdWU4NjU4NzA3MTA= | 2,252 | Slow dataloading with big datasets issue persists | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijee... | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi ! Sorry to hear that. This may come from another issue then.\r\n\r\nFirst can we check if this latency comes from the dataset itself ?\r\nYou can try to load your dataset and benchmark the speed of querying random examples inside it ?\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nfrom datasets import... | 1,619,165,900,000 | 1,637,776,195,000 | null | NONE | null | Hi,
I reported overly slow data fetching when the data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here are the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2252/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/2252/timeline | null | null | null | false |
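The comment above starts a random-access benchmark but is cut off; a hedged completion of that idea, with a placeholder dataset path:
```python
# Time random lookups into an on-disk dataset, as the comment proposes.
import time
import numpy as np
from datasets import load_from_disk

ds = load_from_disk("my_big_dataset")  # placeholder path
indices = np.random.randint(0, len(ds), size=1000)

start = time.time()
for i in indices:
    _ = ds[int(i)]
elapsed = time.time() - start
print(f"{elapsed / len(indices) * 1000:.3f} ms per example")
```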
https://api.github.com/repos/huggingface/datasets/issues/2251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2251/comments | https://api.github.com/repos/huggingface/datasets/issues/2251/events | https://github.com/huggingface/datasets/issues/2251 | 865,848,705 | MDU6SXNzdWU4NjU4NDg3MDU= | 2,251 | while running run_qa.py, ran into a value error | {
"login": "nlee0212",
"id": 44570724,
"node_id": "MDQ6VXNlcjQ0NTcwNzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44570724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nlee0212",
"html_url": "https://github.com/nlee0212",
"followers_url": "https://api.github.com/users/nle... | [] | open | false | null | [] | null | [] | 1,619,164,263,000 | 1,619,164,263,000 | null | NONE | null | command:
python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/
error:
ValueError: External fe... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2251/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2250/comments | https://api.github.com/repos/huggingface/datasets/issues/2250/events | https://github.com/huggingface/datasets/issues/2250 | 865,402,449 | MDU6SXNzdWU4NjU0MDI0NDk= | 2,250 | some issue in loading local txt file as Dataset for run_mlm.py | {
"login": "alighofrani95",
"id": 14968123,
"node_id": "MDQ6VXNlcjE0OTY4MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alighofrani95",
"html_url": "https://github.com/alighofrani95",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"Hi,\r\n\r\n1. try\r\n ```python\r\n dataset = load_dataset(\"text\", data_files={\"train\": [\"a1.txt\", \"b1.txt\"], \"test\": [\"c1.txt\"]})\r\n ```\r\n instead.\r\n\r\n Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the ... | 1,619,120,353,000 | 1,629,258,552,000 | null | NONE | null | 
First of all, I tried to load 3 .txt files as a dataset (I'm sure the directory and permissions are OK), but I ran into the error below.
> FileNotFoundError: [Errno 2] No such file or directory: 'c'
by ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2250/timeline | null | null | null | false |
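For completeness, the fix quoted in the comment above as a runnable snippet; the file names follow the report:
```python
# Pass all files in one load_dataset call instead of separate ones.
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files={"train": ["a1.txt", "b1.txt"], "test": ["c1.txt"]},
)
```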
https://api.github.com/repos/huggingface/datasets/issues/2243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2243/comments | https://api.github.com/repos/huggingface/datasets/issues/2243/events | https://github.com/huggingface/datasets/issues/2243 | 862,909,389 | MDU6SXNzdWU4NjI5MDkzODk= | 2,243 | Map is slow and processes batches one after another | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.",
"Hi @albe... | 1,618,930,700,000 | 1,620,064,473,000 | 1,620,064,472,000 | NONE | null | ## Describe the bug
This bug is somewhat unclear to me, as I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2243/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2242/comments | https://api.github.com/repos/huggingface/datasets/issues/2242/events | https://github.com/huggingface/datasets/issues/2242 | 862,870,205 | MDU6SXNzdWU4NjI4NzAyMDU= | 2,242 | Link to datasets viwer on Quick Tour page returns "502 Bad Gateway" | {
"login": "martavillegas",
"id": 6735707,
"node_id": "MDQ6VXNlcjY3MzU3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6735707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martavillegas",
"html_url": "https://github.com/martavillegas",
"followers_url": "https://api.github.... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This should be fixed now!\r\n\r\ncc @srush "
] | 1,618,928,391,000 | 1,618,930,965,000 | 1,618,930,965,000 | NONE | null | The link to the datasets viewer (https://huggingface.co/datasets/viewer/) on the Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway"
The same error occurs with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2242/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2239/comments | https://api.github.com/repos/huggingface/datasets/issues/2239/events | https://github.com/huggingface/datasets/issues/2239 | 861,904,306 | MDU6SXNzdWU4NjE5MDQzMDY= | 2,239 | Error loading wikihow dataset | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @odellus, thanks for reporting.\r\n\r\nThe `wikihow` dataset has 2 versions:\r\n- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.\r\n- `sep`: Consisting of each paragraph and its summary.\r\n\r\nTherefore, in order to load it, you have to s... | 1,618,866,151,000 | 1,618,936,391,000 | 1,618,936,391,000 | CONTRIBUTOR | null | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2239/timeline | null | null | null | false |
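The loading pattern the comment above implies: an explicit config name plus the manually downloaded data:
```python
# wikihow needs a config ("all" or "sep") and the manually downloaded files.
from datasets import load_dataset

dataset = load_dataset("wikihow", "all", data_dir="./wikihow")
```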
https://api.github.com/repos/huggingface/datasets/issues/2237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2237/comments | https://api.github.com/repos/huggingface/datasets/issues/2237/events | https://github.com/huggingface/datasets/issues/2237 | 861,427,439 | MDU6SXNzdWU4NjE0Mjc0Mzk= | 2,237 | Update Dataset.dataset_size after transformed with map | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!"
] | 1,618,845,578,000 | 1,618,928,525,000 | null | MEMBER | null | After loading a dataset, if we transform it using `.map`, its `dataset_size` attribute is not updated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2237/timeline | null | null | null | false |
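A minimal illustration of the staleness described above, assuming `dataset_size` is read straight from the dataset's cached info rather than recomputed:
```python
# Sketch: dataset_size is not recomputed after a transforming map.
from datasets import load_dataset

ds = load_dataset("sst", split="train")
print(ds.dataset_size)

ds2 = ds.map(lambda example: {"doubled": example["sentence"] * 2})
print(ds2.dataset_size)  # reported issue: unchanged despite the new column
```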
https://api.github.com/repos/huggingface/datasets/issues/2236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2236/comments | https://api.github.com/repos/huggingface/datasets/issues/2236/events | https://github.com/huggingface/datasets/issues/2236 | 861,388,145 | MDU6SXNzdWU4NjEzODgxNDU= | 2,236 | Request to add StrategyQA dataset | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarah... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,618,843,586,000 | 1,618,843,586,000 | null | NONE | null | ## Request to add StrategyQA dataset
- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that als... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2236/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2236/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2230/comments | https://api.github.com/repos/huggingface/datasets/issues/2230/events | https://github.com/huggingface/datasets/issues/2230 | 859,817,159 | MDU6SXNzdWU4NTk4MTcxNTk= | 2,230 | Keys yielded while generating dataset are not being checked | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.githu... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?",
"Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how... | 1,618,579,787,000 | 1,620,667,881,000 | 1,620,667,881,000 | CONTRIBUTOR | null | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from the `xnli` dataset generation:
https... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2230/timeline | null | null | null | false |
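An illustrative checker for the two properties discussed above, key type and uniqueness, over a script's `(key, example)` pairs; this is a sketch, not the verification the library later implemented:
```python
# Wrap a generator of (key, example) pairs and enforce str/int, unique keys.
def check_keys(pairs):
    seen = set()
    for key, example in pairs:
        if not isinstance(key, (str, int)):
            raise TypeError(f"key {key!r} is not str or int")
        if key in seen:
            raise ValueError(f"duplicate key: {key!r}")
        seen.add(key)
        yield key, example

examples = [(0, {"text": "a"}), (1, {"text": "b"})]
for key, example in check_keys(examples):
    print(key, example)
```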
https://api.github.com/repos/huggingface/datasets/issues/2229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2229/comments | https://api.github.com/repos/huggingface/datasets/issues/2229/events | https://github.com/huggingface/datasets/issues/2229 | 859,810,602 | MDU6SXNzdWU4NTk4MTA2MDI= | 2,229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)",
"@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!"
] | 1,618,579,313,000 | 1,618,822,602,000 | 1,618,822,602,000 | CONTRIBUTOR | null | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2229/timeline | null | null | null | false |
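A sketch of the fix direction: flatten the tuple into a single string key when yielding. The generator body here is illustrative, not the actual xnli script:
```python
# Yield "file_row" string keys instead of the (file_idx, row) tuple.
def generate_examples(batches):
    for file_idx, batch in enumerate(batches):
        for row_idx, text in enumerate(batch):
            yield f"{file_idx}_{row_idx}", {"text": text}

for key, example in generate_examples([["a", "b"], ["c"]]):
    print(key, example)
```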
https://api.github.com/repos/huggingface/datasets/issues/2226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2226/comments | https://api.github.com/repos/huggingface/datasets/issues/2226/events | https://github.com/huggingface/datasets/issues/2226 | 859,720,302 | MDU6SXNzdWU4NTk3MjAzMDI= | 2,226 | Batched map fails when removing all columns | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n... | 1,618,571,821,000 | 1,618,585,841,000 | null | NONE | null | Hi @lhoestq ,
I'm hijacking this issue because I'm currently trying to follow the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2226/timeline | null | null | null | false |
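Based on the first comment, the crash only appears after `set_format` has been called; a workaround sketch (assuming that resetting the format lets the batched map proceed) is to clear the format before removing all columns:
```python
from datasets import load_dataset

sst = load_dataset("sst")
sst.set_format("torch", columns=["label"], output_all_columns=True)
ds = sst["train"]

# Workaround sketch: clear the custom format before a batched map that
# removes all columns; re-apply set_format afterwards if needed.
ds.reset_format()
result = ds.map(
    lambda batch: {"n_chars": [len(s) for s in batch["sentence"]]},
    batched=True,
    remove_columns=ds.column_names,
)
```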
https://api.github.com/repos/huggingface/datasets/issues/2224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2224/comments | https://api.github.com/repos/huggingface/datasets/issues/2224/events | https://github.com/huggingface/datasets/issues/2224 | 857,983,361 | MDU6SXNzdWU4NTc5ODMzNjE= | 2,224 | Raise error if Windows max path length is not disabled | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | open | false | null | [] | null | [] | 1,618,412,240,000 | 1,618,412,353,000 | null | MEMBER | null | On startup, raise an error if Windows max path length is not disabled; ask the user to disable it.
Linked to discussion in #2220. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2224/timeline | null | null | null | false |
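What such a startup check could look like — this is a hypothetical sketch that reads the `LongPathsEnabled` registry value, not the library's actual implementation:
```python
import os

def check_windows_long_paths() -> None:
    # Hypothetical check: on Windows, raise if long paths are disabled.
    if os.name != "nt":
        return
    import winreg  # stdlib, Windows-only

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Control\FileSystem"
    )
    try:
        enabled, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
    finally:
        winreg.CloseKey(key)
    if not enabled:
        raise OSError(
            "Windows MAX_PATH (260 characters) limit is in effect; "
            "set LongPathsEnabled=1 and restart to use datasets reliably."
        )
```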
https://api.github.com/repos/huggingface/datasets/issues/2218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2218/comments | https://api.github.com/repos/huggingface/datasets/issues/2218/events | https://github.com/huggingface/datasets/issues/2218 | 857,238,435 | MDU6SXNzdWU4NTcyMzg0MzU= | 2,218 | Duplicates in the LAMA dataset | {
"login": "amarasovic",
"id": 7276193,
"node_id": "MDQ6VXNlcjcyNzYxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amarasovic",
"html_url": "https://github.com/amarasovic",
"followers_url": "https://api.github.com/users... | [] | open | false | null | [] | null | [
"Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', spl... | 1,618,340,389,000 | 1,618,436,547,000 | null | NONE | null | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2218/timeline | null | null | null | false |
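Completing the pandas-based deduplication from the truncated comment above (split and config names are assumed to be the defaults):
```python
from datasets import load_dataset, Dataset

dataset = load_dataset("lama", split="train")  # defaults to the "trex" config

# Round-trip through pandas to drop exact duplicate rows, then rebuild
# the arrow-backed dataset without the pandas index column.
df = dataset.to_pandas().drop_duplicates()
dataset = Dataset.from_pandas(df, preserve_index=False)
```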
https://api.github.com/repos/huggingface/datasets/issues/2214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2214/comments | https://api.github.com/repos/huggingface/datasets/issues/2214/events | https://github.com/huggingface/datasets/issues/2214 | 856,333,657 | MDU6SXNzdWU4NTYzMzM2NTc= | 2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | {
"login": "nsaphra",
"id": 414788,
"node_id": "MDQ6VXNlcjQxNDc4OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsaphra",
"html_url": "https://github.com/nsaphra",
"followers_url": "https://api.github.com/users/nsaphra/fo... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```",
"There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are ... | 1,618,259,161,000 | 1,619,191,202,000 | 1,619,191,202,000 | NONE | null | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2214/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2212/comments | https://api.github.com/repos/huggingface/datasets/issues/2212/events | https://github.com/huggingface/datasets/issues/2212 | 855,999,133 | MDU6SXNzdWU4NTU5OTkxMzM= | 2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | {
"login": "hanss0n",
"id": 21348833,
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanss0n",
"html_url": "https://github.com/hanss0n",
"followers_url": "https://api.github.com/users/hanss0... | [] | open | false | null | [] | null | [
"Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available",
"I saw this on their website when we request to download the dataset:\r\n\r\n\r\... | 1,618,235,396,000 | 1,621,289,826,000 | null | NONE | null | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2212/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2211/comments | https://api.github.com/repos/huggingface/datasets/issues/2211/events | https://github.com/huggingface/datasets/issues/2211 | 855,988,410 | MDU6SXNzdWU4NTU5ODg0MTA= | 2,211 | Getting checksum error when trying to load lc_quad dataset | {
"login": "hanss0n",
"id": 21348833,
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanss0n",
"html_url": "https://github.com/hanss0n",
"followers_url": "https://api.github.com/users/hanss0... | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n",
"Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you... | 1,618,234,738,000 | 1,618,407,745,000 | 1,618,407,745,000 | NONE | null | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, ge... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2211/timeline | null | null | null | false |
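Until the metadata fix lands, the checksum verification can typically be skipped; a sketch of the temporary workaround:
```python
from datasets import load_dataset

# Skip checksum/size verification until the dataset script's recorded
# checksums are updated upstream.
lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```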
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijee... | [] | closed | false | null | [] | null | [
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] | 1,618,216,382,000 | 1,618,279,385,000 | 1,618,279,385,000 | NONE | null | Hi,
When I use datasets with 600GB of data, dataloading becomes significantly slower.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle DDP training.
When looking at the pytorch... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2207/comments | https://api.github.com/repos/huggingface/datasets/issues/2207/events | https://github.com/huggingface/datasets/issues/2207 | 855,267,383 | MDU6SXNzdWU4NTUyNjczODM= | 2,207 | making labels consistent across the datasets | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features... | 1,618,135,436,000 | 1,618,408,920,000 | null | NONE | null | Hi
To access the labels, one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels, however, are sometimes not consistent with the actual labels; for instance, in the case of XNLI, the actual labels are 0,1,2, but if ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2207/timeline | null | null | null | false |
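Following the maintainer's reply, the integers are indices into the ClassLabel's `names` list, and `int2str`/`str2int` convert between the two; a short sketch:
```python
from datasets import load_dataset

ds = load_dataset("xnli", "en", split="test")
label_feature = ds.features["label"]

print(label_feature.int2str(0))          # 'entailment'
print(label_feature.str2int("neutral"))  # 1
print([label_feature.int2str(i) for i in ds["label"][:5]])
```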
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | {
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assume... | 1,618,130,409,000 | 1,636,546,710,000 | 1,636,545,868,000 | NONE | null | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2200/comments | https://api.github.com/repos/huggingface/datasets/issues/2200/events | https://github.com/huggingface/datasets/issues/2200 | 854,449,656 | MDU6SXNzdWU4NTQ0NDk2NTY= | 2,200 | _prepare_split will overwrite DatasetBuilder.info.features | {
"login": "Gforky",
"id": 4157614,
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gforky",
"html_url": "https://github.com/Gforky",
"followers_url": "https://api.github.com/users/Gforky/foll... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201",
"> Hi ! This might be related to #2153\r\n> \r\n> Yo... | 1,617,968,833,000 | 1,622,803,055,000 | 1,622,803,055,000 | NONE | null | Hi, here is my issue:
I initialized a Csv dataset builder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2200/timeline | null | null | null | false |
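Passing `features` directly to `load_dataset` is the usual way to pin a CSV schema; a sketch with assumed column names (the bug above is about such features being silently overwritten during `_prepare_split`):
```python
from datasets import load_dataset, ClassLabel, Features, Value

# Column names and label set here are assumptions for illustration.
features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["neg", "pos"]),
    }
)
ds = load_dataset("csv", data_files={"train": "train.csv"}, features=features)
```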
https://api.github.com/repos/huggingface/datasets/issues/2196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2196/comments | https://api.github.com/repos/huggingface/datasets/issues/2196/events | https://github.com/huggingface/datasets/issues/2196 | 854,126,114 | MDU6SXNzdWU4NTQxMjYxMTQ= | 2,196 | `load_dataset` caches two arrow files? | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijee... | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid havi... | 1,617,940,159,000 | 1,618,205,129,000 | 1,618,205,129,000 | NONE | null | Hi,
I am using datasets to load a large json file of 587GB.
I checked the cache folder and found that two arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2196/timeline | null | null | null | false |
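Per the maintainer's reply, `cache-*.arrow` files hold cached `map`/`filter` results while `json-train.arrow` is the original table; a sketch of producing and then cleaning them:
```python
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")  # json-train.arrow
ds2 = ds.map(lambda x: x)  # writes a cache-*.arrow with the transformed copy

# Remove the cached transform files; the original arrow file stays intact,
# so reloading the dataset keeps working.
n_removed = ds.cleanup_cache_files()
print(f"removed {n_removed} cache file(s)")
```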
https://api.github.com/repos/huggingface/datasets/issues/2195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2195/comments | https://api.github.com/repos/huggingface/datasets/issues/2195/events | https://github.com/huggingface/datasets/issues/2195 | 854,070,194 | MDU6SXNzdWU4NTQwNzAxOTQ= | 2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | {
"login": "samsontmr",
"id": 15007950,
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsontmr",
"html_url": "https://github.com/samsontmr",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...",
"Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues"
] | 1,617,932,232,000 | 1,617,962,109,000 | 1,617,962,079,000 | NONE | null | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2195/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [] | closed | false | null | [] | null | [
"\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n"
] | 1,617,915,768,000 | 1,617,987,410,000 | 1,617,933,177,000 | CONTRIBUTOR | null | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | {
"login": "norabelrose",
"id": 39116809,
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norabelrose",
"html_url": "https://github.com/norabelrose",
"followers_url": "https://api.github.com/... | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoi... | 1,617,905,774,000 | 1,619,453,639,000 | 1,619,453,639,000 | CONTRIBUTOR | null | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | null | null | false |
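Until the fast `filter` lands, a common workaround is to read just the one column, compute the indices to keep, and call `select`, which avoids rewriting every column; `dataset` and the `num_tokens` column below are assumed from the setup described in this issue:
```python
import numpy as np

# `dataset` already has a num_tokens column added by a previous .map().
num_tokens = np.array(dataset["num_tokens"])  # loads only this column
keep = np.flatnonzero(num_tokens <= 512)      # threshold assumed
filtered = dataset.select(keep)               # no per-column rewrite
```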
https://api.github.com/repos/huggingface/datasets/issues/2190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2190/comments | https://api.github.com/repos/huggingface/datasets/issues/2190/events | https://github.com/huggingface/datasets/issues/2190 | 853,181,564 | MDU6SXNzdWU4NTMxODE1NjQ= | 2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | {
"login": "anassalamah",
"id": 8571003,
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anassalamah",
"html_url": "https://github.com/anassalamah",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [
"Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```",
"Hello @albertvillanova, \r\n\r\nThanks for... | 1,617,868,423,000 | 1,621,850,635,000 | 1,621,850,635,000 | NONE | null | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2190/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2189/comments | https://api.github.com/repos/huggingface/datasets/issues/2189/events | https://github.com/huggingface/datasets/issues/2189 | 853,052,891 | MDU6SXNzdWU4NTMwNTI4OTE= | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/sha... | [] | open | false | null | [] | null | [
"Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"
] | 1,617,856,973,000 | 1,618,408,625,000 | null | NONE | null | As the example below shows, it saves the entire dataset instead of just the selected shard.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk, concatenate_datasets
loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n = 20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2189/timeline | null | null | null | false |
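A workaround sketch for older versions: materialize each shard before saving, on the assumption that `shard()` only sets an indices mapping over the full table and `flatten_indices()` copies out just the selected rows (the output path is illustrative):
```python
from datasets import load_from_disk

loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n = 20
for i in range(n):
    shard = loaded_data.shard(n, i, contiguous=True)
    # flatten_indices() rewrites the shard into its own arrow table,
    # so save_to_disk writes only this shard's rows.
    shard.flatten_indices().save_to_disk(f'/home/gsir059/shards/shard_{i}')
```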
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | {
"login": "BHM-RB",
"id": 78190188,
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BHM-RB",
"html_url": "https://github.com/BHM-RB",
"followers_url": "https://api.github.com/users/BHM-RB/fo... | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n"
] | 1,617,855,714,000 | 1,617,883,999,000 | 1,617,883,999,000 | NONE | null | I ran a simple piece of code to list all the texts in the Timit dataset, and the texts were all the same.
Is this dataset corrupted?
**Code:**
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2187/comments | https://api.github.com/repos/huggingface/datasets/issues/2187/events | https://github.com/huggingface/datasets/issues/2187 | 852,939,736 | MDU6SXNzdWU4NTI5Mzk3MzY= | 2,187 | Question (potential issue?) related to datasets caching | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [
"An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out ... | 1,617,840,988,000 | 1,618,412,158,000 | null | NONE | null | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2187/timeline | null | null | null | false |
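A sketch separating the two caches involved here: `set_caching_enabled(False)` only stops reusing transformed (`map`/`filter`) results, while the prepared dataset itself is still cached per custom data configuration, which is what the warning above refers to; forcing that part to be rebuilt is a separate knob (the string mode name is an assumption for this version of the library):
```python
from datasets import load_dataset, set_caching_enabled

# Stops fingerprint-based reuse of map/filter results only.
set_caching_enabled(False)

# The raw dataset is still prepared and cached once per configuration;
# pass a download mode to rebuild it from scratch as well.
ds = load_dataset(
    "csv",
    data_files={"train": "train.csv"},
    download_mode="force_redownload",
)
```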
https://api.github.com/repos/huggingface/datasets/issues/2185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2185/comments | https://api.github.com/repos/huggingface/datasets/issues/2185/events | https://github.com/huggingface/datasets/issues/2185 | 852,684,395 | MDU6SXNzdWU4NTI2ODQzOTU= | 2,185 | .map() and distributed training | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seem... | 1,617,819,734,000 | 1,634,973,075,000 | 1,617,982,711,000 | MEMBER | null | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2185/timeline | null | null | null | false |
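A sketch of the rank-0 workaround from the first comment: run the expensive `.map()` once, save it, and make the other processes wait and load (assumes `torch.distributed` is already initialized):
```python
import torch.distributed as dist
from datasets import load_from_disk

def prepare_tokenized(datasets, tokenize_function, path="tokenized_ds"):
    # Only rank 0 tokenizes and saves...
    if dist.get_rank() == 0:
        tokenized = datasets.map(tokenize_function, batched=True, num_proc=4)
        tokenized.save_to_disk(path)
    # ...all other ranks block here, then every rank loads the same files.
    dist.barrier()
    return load_from_disk(path)
```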
https://api.github.com/repos/huggingface/datasets/issues/2181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2181/comments | https://api.github.com/repos/huggingface/datasets/issues/2181/events | https://github.com/huggingface/datasets/issues/2181 | 852,261,607 | MDU6SXNzdWU4NTIyNjE2MDc= | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijee... | [] | closed | false | null | [] | null | [
"Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well... | 1,617,791,206,000 | 1,618,211,755,000 | 1,618,211,755,000 | NONE | null | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2181/timeline | null | null | null | false |
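The maintainer's suggestion from the comments, written out as a runnable call (file name assumed):
```python
from datasets import load_dataset

block_size_10MB = 10 << 20  # 10 MiB read blocks for pyarrow's JSON reader
ds = load_dataset(
    "json",
    data_files="huge_file.json",
    block_size=block_size_10MB,
    split="train",
)
```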
https://api.github.com/repos/huggingface/datasets/issues/2179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2179/comments | https://api.github.com/repos/huggingface/datasets/issues/2179/events | https://github.com/huggingface/datasets/issues/2179 | 852,237,957 | MDU6SXNzdWU4NTIyMzc5NTc= | 2,179 | Load small datasets in-memory instead of using memory map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6... | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 1,617,789,496,000 | 1,618,913,044,000 | 1,618,913,043,000 | MEMBER | null | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the cach... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2179/timeline | null | null | null | false |
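The opt-in form of this behavior, as discussed for small datasets (sketch):
```python
from datasets import load_dataset

# Small dataset: skip the memory-mapped cache file and keep rows in RAM,
# which makes downstream computations/queries faster.
sst = load_dataset("sst", keep_in_memory=True)
```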
https://api.github.com/repos/huggingface/datasets/issues/2176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2176/comments | https://api.github.com/repos/huggingface/datasets/issues/2176/events | https://github.com/huggingface/datasets/issues/2176 | 851,865,795 | MDU6SXNzdWU4NTE4NjU3OTU= | 2,176 | Converting a Value to a ClassLabel | {
"login": "nelson-liu",
"id": 7272031,
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nelson-liu",
"html_url": "https://github.com/nelson-liu",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class... | 1,617,749,656,000 | 1,618,827,034,000 | null | NONE | null | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2176/timeline | null | null | null | false |
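Expanding the recipe from the comment above into a full sketch (the column name and CSV source are assumptions):
```python
from datasets import ClassLabel, Features, load_dataset

ds = load_dataset("csv", data_files="data.csv", split="train")
col = "label"  # assumed string column

class_feature = ClassLabel(names=sorted(ds.unique(col)))

# 1) map the string values to their integer ids...
ds = ds.map(lambda ex: {col: class_feature.str2int(ex[col])})

# 2) ...then cast the column so the schema records it as a ClassLabel.
ds = ds.cast(Features({**ds.features, col: class_feature}))
```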
https://api.github.com/repos/huggingface/datasets/issues/2175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2175/comments | https://api.github.com/repos/huggingface/datasets/issues/2175/events | https://github.com/huggingface/datasets/issues/2175 | 851,836,096 | MDU6SXNzdWU4NTE4MzYwOTY= | 2,175 | dataset.search_batch() function outputs all -1 indices sometime. | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/sha... | [] | closed | false | null | [] | null | [
"Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.",
"@lhoestq @patrickvonplaten \r\n\r\nI also found another short... | 1,617,745,849,000 | 1,618,575,676,000 | 1,618,575,675,000 | NONE | null | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2175/timeline | null | null | null | false |
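Per the faiss FAQ linked in the comments, `-1` ids mean the probed lists held fewer than `k` vectors; a defensive sketch (the index name, query array, and `k` are assumptions):
```python
# `dataset` is assumed to carry a faiss index named "embeddings"
# (added via dataset.add_faiss_index) and `question_embeddings` is a
# (batch, dim) float32 array.
scores, ids = dataset.search_batch("embeddings", question_embeddings, k=8)

# Drop the -1 padding before looking up rows.
valid_ids = [[int(i) for i in row if i != -1] for row in ids]
docs = [dataset[row_ids] for row_ids in valid_ids]
```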
https://api.github.com/repos/huggingface/datasets/issues/2170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2170/comments | https://api.github.com/repos/huggingface/datasets/issues/2170/events | https://github.com/huggingface/datasets/issues/2170 | 850,913,228 | MDU6SXNzdWU4NTA5MTMyMjg= | 2,170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | {
"login": "leezu",
"id": 946903,
"node_id": "MDQ6VXNlcjk0NjkwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leezu",
"html_url": "https://github.com/leezu",
"followers_url": "https://api.github.com/users/leezu/followers"... | [] | open | false | null | [] | null | [
"It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the fi... | 1,617,678,798,000 | 1,623,805,850,000 | null | NONE | null | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2170/timeline | null | null | null | false |
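The user-side fix quoted in the comments, written out; the date must be a dump that still exists on the mirror, and non-preprocessed configs need a beam runner:
```python
import datasets

# Pick a date that is still listed at https://dumps.wikimedia.org/kowiki/
# (the value below is an example, not guaranteed to be available).
dataset = datasets.load_dataset(
    "wikipedia",
    language="ko",
    date="20210320",
    beam_runner="DirectRunner",
)
```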
https://api.github.com/repos/huggingface/datasets/issues/2167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2167/comments | https://api.github.com/repos/huggingface/datasets/issues/2167/events | https://github.com/huggingface/datasets/issues/2167 | 849,944,891 | MDU6SXNzdWU4NDk5NDQ4OTE= | 2,167 | Split type not preserved when reloading the dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [] | 1,617,564,594,000 | 1,618,823,335,000 | 1,618,823,335,000 | CONTRIBUTOR | null | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<cla... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2167/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2166/comments | https://api.github.com/repos/huggingface/datasets/issues/2166/events | https://github.com/huggingface/datasets/issues/2166 | 849,778,545 | MDU6SXNzdWU4NDk3Nzg1NDU= | 2,166 | Regarding Test Sets for the GEM datasets | {
"login": "vyraun",
"id": 17217068,
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyraun",
"html_url": "https://github.com/vyraun",
"followers_url": "https://api.github.com/users/vyraun/fo... | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | null | [] | null | [
"Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of... | 1,617,501,765,000 | 1,617,696,792,000 | 1,617,696,792,000 | NONE | null | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2166/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2165/comments | https://api.github.com/repos/huggingface/datasets/issues/2165/events | https://github.com/huggingface/datasets/issues/2165 | 849,771,665 | MDU6SXNzdWU4NDk3NzE2NjU= | 2,165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | {
"login": "y-rokutan",
"id": 24562381,
"node_id": "MDQ6VXNlcjI0NTYyMzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y-rokutan",
"html_url": "https://github.com/y-rokutan",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r... | 1,617,498,108,000 | 1,629,820,535,000 | 1,617,807,964,000 | NONE | null | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2165/timeline | null | null | null | false |
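Completing the wrapper from the truncated comment above, plus DataLoader usage (`train_ds` is the split from the snippet in the issue body):
```python
from torch.utils.data import DataLoader, Dataset

class HFDataset(Dataset):
    """Thin torch wrapper around a datasets.arrow_dataset.Dataset."""

    def __init__(self, dset):
        self.dset = dset

    def __getitem__(self, idx):
        return self.dset[idx]

    def __len__(self):
        return len(self.dset)

train_loader = DataLoader(HFDataset(train_ds), batch_size=8, num_workers=2)
```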
https://api.github.com/repos/huggingface/datasets/issues/2162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2162/comments | https://api.github.com/repos/huggingface/datasets/issues/2162/events | https://github.com/huggingface/datasets/issues/2162 | 849,129,201 | MDU6SXNzdWU4NDkxMjkyMDE= | 2,162 | visualization for cc100 is broken | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?",
"Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself bu... | 1,617,358,273,000 | 1,617,800,467,000 | null | NONE | null | Hi
the visualization through the dataset viewer for cc100 is broken:
https://huggingface.co/datasets/viewer/
thanks a lot
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2162/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"Not yet but it’s on the short/mid-term roadmap (requested by many indeed).",
"oh, great, really awesome feature to have, thank you very much for the great, fabulous work",
"We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)",
"thanks a lot Quentin, this would be... | 1,617,358,006,000 | 1,625,239,169,000 | null | NONE | null | Hi
Some of the datasets I need, like cc100, are very large, so I wonder if I can download just the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | null | null | false |
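Streaming mode, added in later `datasets` releases, addresses exactly this; a sketch (assuming the cc100 script supports streaming in your version):
```python
from datasets import load_dataset

# Streaming downloads records lazily, so the first X samples can be taken
# without fetching the whole corpus first (datasets >= 1.9).
ds = load_dataset("cc100", lang="en", split="train", streaming=True)
first_1000 = list(ds.take(1000))
```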
https://api.github.com/repos/huggingface/datasets/issues/2160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2160/comments | https://api.github.com/repos/huggingface/datasets/issues/2160/events | https://github.com/huggingface/datasets/issues/2160 | 849,052,921 | MDU6SXNzdWU4NDkwNTI5MjE= | 2,160 | data_args.preprocessing_num_workers almost freezes | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ ... | 1,617,350,173,000 | 1,617,358,472,000 | 1,617,358,471,000 | NONE | null | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus, but this moves ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2160/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"closing since I think this is cc100, just the name has been changed. thanks "
] | 1,617,319,716,000 | 1,617,357,919,000 | 1,617,357,919,000 | NONE | null | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite importan... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2158/comments | https://api.github.com/repos/huggingface/datasets/issues/2158/events | https://github.com/huggingface/datasets/issues/2158 | 848,506,746 | MDU6SXNzdWU4NDg1MDY3NDY= | 2,158 | viewer "fake_news_english" error | {
"login": "emanuelevivoli",
"id": 9447991,
"node_id": "MDQ6VXNlcjk0NDc5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emanuelevivoli",
"html_url": "https://github.com/emanuelevivoli",
"followers_url": "https://api.gith... | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly"
] | 1,617,286,400,000 | 1,617,791,169,000 | null | NONE | null | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2158/timeline | null | null | null | false |
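A minimal local reproduction sketch, assuming the viewer's missing dependency is the same optional pandas Excel backend:

```python
# fake_news_english ships its data as an .xlsx file, which pandas reads
# through the optional openpyxl backend; install it first:
#   pip install openpyxl
from datasets import load_dataset

# "train" is assumed to be the dataset's only split.
ds = load_dataset("fake_news_english", split="train")
print(ds[0])
```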
https://api.github.com/repos/huggingface/datasets/issues/2153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2153/comments | https://api.github.com/repos/huggingface/datasets/issues/2153/events | https://github.com/huggingface/datasets/issues/2153 | 846,181,502 | MDU6SXNzdWU4NDYxODE1MDI= | 2,153 | load_dataset ignoring features | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.gi... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201",
"Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.",
"Hi :) We're indeed working on tutorials that we will add to the docs... | 1,617,179,409,000 | 1,630,077,838,000 | null | NONE | null | First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the C... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2153/timeline | null | null | null | false |
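A hedged sketch of passing (and, as an interim workaround, enforcing) features when loading a CSV; the file and column names are placeholders, and `cast` on the loaded dataset assumes a version where that API is available:

```python
from datasets import Features, Value, load_dataset

# Placeholder schema and file name for illustration.
features = Features({"text": Value("string"), "label": Value("int64")})

ds = load_dataset("csv", data_files="train.csv", features=features)

# If the passed features are silently ignored (the bug reported above),
# casting afterwards is one possible workaround:
ds = ds.cast(features)
print(ds["train"].features)
```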
https://api.github.com/repos/huggingface/datasets/issues/2149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2149/comments | https://api.github.com/repos/huggingface/datasets/issues/2149/events | https://github.com/huggingface/datasets/issues/2149 | 844,734,076 | MDU6SXNzdWU4NDQ3MzQwNzY= | 2,149 | Telugu subset missing for xtreme tatoeba dataset | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | [
"Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this"
] | 1,617,117,994,000 | 1,617,791,015,000 | null | CONTRIBUTOR | null | ```python
from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
# raises: ValueError: BuilderConfig tatoeba.tel not found.
```
but language tel is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
def tatoeba_preprocess(args):
    lang3_dict ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2149/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2148/comments | https://api.github.com/repos/huggingface/datasets/issues/2148/events | https://github.com/huggingface/datasets/issues/2148 | 844,700,910 | MDU6SXNzdWU4NDQ3MDA5MTA= | 2,148 | Add configurable options to `seqeval` metric | {
"login": "marrodion",
"id": 44571847,
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrodion",
"html_url": "https://github.com/marrodion",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `impor... | 1,617,116,646,000 | 1,618,494,586,000 | 1,618,494,586,000 | CONTRIBUTOR | null | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2148/timeline | null | null | null | false |
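A hedged sketch of what the requested usage looks like; passing `mode` and `scheme` through `compute()` assumes the enhancement has been merged:

```python
from datasets import load_metric

metric = load_metric("seqeval")

predictions = [["O", "B-PER", "I-PER", "O"]]
references = [["O", "B-PER", "I-PER", "O"]]

# mode/scheme are the seqeval options this issue asks to expose.
results = metric.compute(
    predictions=predictions,
    references=references,
    mode="strict",
    scheme="IOB2",
)
print(results["overall_f1"])
```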
https://api.github.com/repos/huggingface/datasets/issues/2146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2146/comments | https://api.github.com/repos/huggingface/datasets/issues/2146/events | https://github.com/huggingface/datasets/issues/2146 | 844,673,244 | MDU6SXNzdWU4NDQ2NzMyNDQ= | 2,146 | Dataset file size on disk is very large with 3D Array | {
"login": "jblemoine",
"id": 22685854,
"node_id": "MDQ6VXNlcjIyNjg1ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jblemoine",
"html_url": "https://github.com/jblemoine",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for exampl... | 1,617,115,569,000 | 1,618,578,422,000 | null | NONE | null | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D arrays with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2146/timeline | null | null | null | false |
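A back-of-the-envelope check of the raw-uint8 size estimate from the comment above (the numbers are illustrative, not taken from the reporter's dataset):

```python
# 10,000 RGB images of 224 x 224 stored as raw uint8 (1 byte per value):
n_images, height, width, channels = 10_000, 224, 224, 3
size_bytes = n_images * height * width * channels
print(size_bytes / 1024**2, "MB")  # ~1435 MB, far larger than JPEG/PNG encodings
```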
https://api.github.com/repos/huggingface/datasets/issues/2144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2144/comments | https://api.github.com/repos/huggingface/datasets/issues/2144/events | https://github.com/huggingface/datasets/issues/2144 | 844,352,067 | MDU6SXNzdWU4NDQzNTIwNjc= | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | {
"login": "TomPyonsuke",
"id": 26637405,
"node_id": "MDQ6VXNlcjI2NjM3NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomPyonsuke",
"html_url": "https://github.com/TomPyonsuke",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | [
"That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```",
"Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa... | 1,617,100,711,000 | 1,617,268,877,000 | null | NONE | null | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2144/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2139/comments | https://api.github.com/repos/huggingface/datasets/issues/2139/events | https://github.com/huggingface/datasets/issues/2139 | 843,662,613 | MDU6SXNzdWU4NDM2NjI2MTM= | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | {
"login": "PedroMLF",
"id": 22480495,
"node_id": "MDQ6VXNlcjIyNDgwNDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PedroMLF",
"html_url": "https://github.com/PedroMLF",
"followers_url": "https://api.github.com/users/Ped... | [] | closed | false | null | [] | null | [
"Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!"
] | 1,617,042,234,000 | 1,617,095,573,000 | 1,617,095,573,000 | NONE | null | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2139/timeline | null | null | null | false |
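A hedged reconstruction of the reproduction plus an interim workaround; `imdb` and the bounds stand in for whichever dataset and slice the reporter used:

```python
from datasets import ReadInstruction, load_dataset

ri = ReadInstruction("train", from_=0, to=10, unit="abs")
ds = load_dataset("imdb", split=ri)

# On affected versions, save_to_disk raised:
#   TypeError: Object of type ReadInstruction is not JSON serializable
# An equivalent string split avoids the ReadInstruction object entirely:
ds = load_dataset("imdb", split="train[:10]")
ds.save_to_disk("imdb_first10")
```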
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | closed | false | null | [] | null | [
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, ... | 1,617,014,870,000 | 1,617,099,623,000 | 1,617,099,623,000 | CONTRIBUTOR | null | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2134/comments | https://api.github.com/repos/huggingface/datasets/issues/2134/events | https://github.com/huggingface/datasets/issues/2134 | 843,242,849 | MDU6SXNzdWU4NDMyNDI4NDk= | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | {
"login": "prokopCerny",
"id": 5815801,
"node_id": "MDQ6VXNlcjU4MTU4MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prokopCerny",
"html_url": "https://github.com/prokopCerny",
"followers_url": "https://api.github.com/us... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_... | 1,617,014,595,000 | 1,620,064,761,000 | 1,620,064,761,000 | NONE | null | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium-to-large datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes) and have found that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2134/timeline | null | null | null | false |
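Until the in-memory limit is lifted, one possible workaround is to write the dataset in several smaller pieces; `save_sharded` below is a hypothetical helper, not a datasets API:

```python
import os

def save_sharded(dataset, path, num_shards):
    """Write `dataset` as several smaller save_to_disk folders.

    Hypothetical workaround for the >4GB in-memory serialization limit;
    each shard stays safely below the limit if num_shards is large enough.
    """
    os.makedirs(path, exist_ok=True)
    for i in range(num_shards):
        shard = dataset.shard(num_shards=num_shards, index=i)
        shard.save_to_disk(os.path.join(path, f"shard_{i:05d}"))
```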
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u064... | 1,617,008,589,000 | 1,617,126,057,000 | 1,617,126,057,000 | NONE | null | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | null | null | false |
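A one-liner confirming the diagnosis in the comment above: the question strings are ordinary JSON `\uXXXX` escapes, not corruption:

```python
import json

# Decoding the escaped string yields readable Arabic.
raw = '"\\u0645\\u062a\\u0649"'
print(json.loads(raw))  # -> "متى" ("when")
```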
https://api.github.com/repos/huggingface/datasets/issues/2132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2132/comments | https://api.github.com/repos/huggingface/datasets/issues/2132/events | https://github.com/huggingface/datasets/issues/2132 | 843,142,822 | MDU6SXNzdWU4NDMxNDI4MjI= | 2,132 | TydiQA dataset is mixed and is not split per language | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\... | 1,617,008,181,000 | 1,617,530,235,000 | null | NONE | null | Hi @lhoestq
Currently TydiQA is mixed, and the user can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes the dataset hard to use. This is much convenien... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2132/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2131/comments | https://api.github.com/repos/huggingface/datasets/issues/2131/events | https://github.com/huggingface/datasets/issues/2131 | 843,133,112 | MDU6SXNzdWU4NDMxMzMxMTI= | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | {
"login": "andy-yangz",
"id": 23011317,
"node_id": "MDQ6VXNlcjIzMDExMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andy-yangz",
"html_url": "https://github.com/andy-yangz",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue",
"The PR got merged :)\r\nFeel free to try it out on the `master` br... | 1,617,007,558,000 | 1,618,052,935,000 | 1,618,052,935,000 | NONE | null | version: 1.5.0
met a very strange error, I am training large scale language model, and need train on 2 machines(workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py"... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2131/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2130/comments | https://api.github.com/repos/huggingface/datasets/issues/2130/events | https://github.com/huggingface/datasets/issues/2130 | 843,111,936 | MDU6SXNzdWU4NDMxMTE5MzY= | 2,130 | wikiann dataset is missing columns | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ",
"Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined ... | 1,617,006,180,000 | 1,630,075,458,000 | 1,630,075,458,000 | NONE | null | Hi
The WikiANN dataset needs a "spans" column, which is necessary to be able to use this dataset, but this column is missing from the huggingface datasets version; could you please have a look? Thank you @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2130/timeline | null | null | null | false |
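Since `datasets` does not ship a span converter, a minimal sketch of deriving spans from IOB2 NER tags; this is an illustration, not necessarily equivalent to the `tags_to_spans` helper referenced in the comments:

```python
def ner_tags_to_spans(tokens, tags):
    """Convert IOB2 string tags to (label, text) spans."""
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:
            # "O" tags close any open span; dangling I- tags without a
            # preceding B- are dropped in this sketch.
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

print(ner_tags_to_spans(["Barack", "Obama", "visited", "Paris"],
                        ["B-PER", "I-PER", "O", "B-LOC"]))
# [('PER', 'Barack Obama'), ('LOC', 'Paris')]
```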