url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.1B | node_id stringlengths 18 32 | number int64 1 3.54k | title stringlengths 1 276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments list | created_at int64 1,587B 1,642B | updated_at int64 1,587B 1,642B | closed_at int64 1,587B 1,641B ⌀ | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/809/comments | https://api.github.com/repos/huggingface/datasets/issues/809/events | https://github.com/huggingface/datasets/issues/809 | 737,832,701 | MDU6SXNzdWU3Mzc4MzI3MDE= | 809 | Add Google Taskmaster dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?",
"You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/h... | 1,604,675,441,000 | 1,618,924,166,000 | 1,618,924,166,000 | MEMBER | null | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/809/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/808/comments | https://api.github.com/repos/huggingface/datasets/issues/808/events | https://github.com/huggingface/datasets/pull/808 | 737,638,942 | MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0 | 808 | dataset(dgs): initial dataset loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @AmitMY, \r\n\r\nWere you able to figure this out?",
"I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as ... | 1,604,657,683,000 | 1,616,480,335,000 | 1,616,480,335,000 | CONTRIBUTOR | null | When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data.
I am not sure how to manually create the dummy_data (what exactly it should contain)
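For context, the pattern the error message asks for looks roughly like this (a sketch; the class name, URL, and fields are hypothetical):
```python
import datasets

class Dgs(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Only resolve paths here; do not open any files yet.
        path = dl_manager.download("https://example.org/dgs_corpus.txt")  # hypothetical URL
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": path})]

    def _generate_examples(self, filepath):
        # Open files only here, so the dummy-data tooling can substitute its own paths.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```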
Also note, this library says:
> ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance'
When you actually need to `pip install pympi-ling`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/808",
"html_url": "https://github.com/huggingface/datasets/pull/808",
"diff_url": "https://github.com/huggingface/datasets/pull/808.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/808.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/807/comments | https://api.github.com/repos/huggingface/datasets/issues/807/events | https://github.com/huggingface/datasets/issues/807 | 737,509,954 | MDU6SXNzdWU3Mzc1MDk5NTQ= | 807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | {
"login": "shexuan",
"id": 25664170,
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shexuan",
"html_url": "https://github.com/shexuan",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"repos_url": "https://api.github.com/users/shexuan/repos",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?",
"> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does y... | 1,604,644,384,000 | 1,610,328,627,000 | 1,605,331,834,000 | NONE | null | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
import datasets
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=False)
print('datasets version: ', datasets.__version__)
print('pytorch version: ', torch.__version__)
print('transformers version: ', transformers.__version__)
# output:
datasets version: 1.1.2
pytorch version: 1.5.0
transformers version: 3.2.0
```
when I load data through `dataset`:
```
dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
```
Error infos:
```
ConnectionError Traceback (most recent call last)
<ipython-input-17-bbdadb9a0c78> in <module>
----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
588 # Download/copy dataset processing script
589 module_path, hash = prepare_module(
--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
591 )
592
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
270 if script_version is not None:
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
306 user_agent=download_config.user_agent,
307 local_files_only=download_config.local_files_only,
--> 308 use_etag=download_config.use_etag,
309 )
310 elif os.path.exists(url_or_filename):
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
```
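As the trace shows, even for a local CSV the loader first fetches the `csv.py` processing script from `raw.githubusercontent.com` (see `prepare_module` above), so the call fails on machines that cannot reach that host. A network-free workaround is to build the dataset directly from a DataFrame (a minimal sketch using the public `Dataset.from_pandas` API):
```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv("./test.csv", header=None)
dataset = Dataset.from_pandas(df)  # builds the dataset locally, no script download
```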
And I try to connect to the site with requests:
```
import requests
requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
```
Similarly Error occurs:
```
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
159 conn = connection.create_connection(
--> 160 (self._dns_host, self.port), self.timeout, **extra_kw
161 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
676 headers=headers,
--> 677 chunked=chunked,
678 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
380 try:
--> 381 self._validate_conn(conn)
382 except (SocketTimeout, BaseSSLError) as e:
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 976 conn.connect()
977
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self)
307 # Add certificate verification
--> 308 conn = self._new_conn()
309 hostname = self.host
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
171 raise NewConnectionError(
--> 172 self, "Failed to establish a new connection: %s" % e
173 )
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
724 retries = retries.increment(
--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
726 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
438 if new_retry.is_exhausted():
--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))
440
MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-20-18cc3eb4a049> in <module>
1 import requests
2
----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs)
102
103 kwargs.setdefault('allow_redirects', False)
--> 104 return request('head', url, **kwargs)
105
106
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 }
529 send_kwargs.update(settings)
--> 530 resp = self.send(prep, **send_kwargs)
531
532 return resp
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs)
641
642 # Send the request
--> 643 r = adapter.send(request, **kwargs)
644
645 # Total elapsed time of the request (approximately)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/807/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/806/comments | https://api.github.com/repos/huggingface/datasets/issues/806/events | https://github.com/huggingface/datasets/issues/806 | 737,215,430 | MDU6SXNzdWU3MzcyMTU0MzA= | 806 | Quail dataset urls are out of date | {
"login": "ngdodd",
"id": 4889636,
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngdodd",
"html_url": "https://github.com/ngdodd",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ",
"Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata ... | 1,604,605,219,000 | 1,605,016,971,000 | 1,605,016,971,000 | CONTRIBUTOR | null | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per the [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering have changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 files are still being pointed to, and they no longer exist. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/806/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/805/comments | https://api.github.com/repos/huggingface/datasets/issues/805/events | https://github.com/huggingface/datasets/issues/805 | 737,019,360 | MDU6SXNzdWU3MzcwMTkzNjA= | 805 | On loading a metric from datasets, I get the following error | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] | 1,604,589,278,000 | 1,604,913,155,000 | null | NONE | null | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
```
    210 class _ArrayXDExtensionType(pa.PyExtensionType):
    211
    212     ndims: int = None

AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
```
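As the comment above suggests, `pa.PyExtensionType` only exists in newer pyarrow releases; a quick sanity check (a minimal sketch):
```python
import pyarrow as pa

print(pa.__version__)  # `pa.PyExtensionType` requires pyarrow > 0.17.1
assert hasattr(pa, "PyExtensionType"), "run: pip install --upgrade pyarrow"
```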
Any help will be appreciated. Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/805/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/804/comments | https://api.github.com/repos/huggingface/datasets/issues/804/events | https://github.com/huggingface/datasets/issues/804 | 736,858,507 | MDU6SXNzdWU3MzY4NTg1MDc= | 804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @yjernite is this expected ?",
"Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface... | 1,604,576,281,000 | 1,604,931,299,000 | 1,604,931,298,000 | CONTRIBUTOR | null | # The issue
It's all in the title: the outputs appear to be fine on the train and validation sets.
Is there some kind of mapping to do, as for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md)?
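For the train and validation splits, the README's question mapping can be reproduced roughly as below (a sketch; it assumes KILT rows carry an `id` equal to TriviaQA's `question_id`, and it cannot recover the private test answers):
```python
from datasets import load_dataset

kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

# Build a lookup from TriviaQA question ids to row indices.
idx = {qid: i for i, qid in enumerate(trivia_qa["train"]["question_id"])}

kilt_train = kilt_tasks["train_triviaqa"].filter(lambda x: x["id"] in idx)
kilt_train = kilt_train.map(lambda x: {"input": trivia_qa["train"][idx[x["id"]]]["question"]})
```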
# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False
# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True
# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False
In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False
In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True
In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/804/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/803/comments | https://api.github.com/repos/huggingface/datasets/issues/803/events | https://github.com/huggingface/datasets/pull/803 | 736,818,917 | MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2 | 803 | fix: typos in tutorial to map KILT and TriviaQA | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,572,920,000 | 1,604,999,287,000 | 1,604,999,287,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/803/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/803",
"html_url": "https://github.com/huggingface/datasets/pull/803",
"diff_url": "https://github.com/huggingface/datasets/pull/803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/803.patch",
"merged_at": 1604999287000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/802/comments | https://api.github.com/repos/huggingface/datasets/issues/802/events | https://github.com/huggingface/datasets/pull/802 | 736,296,343 | MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0 | 802 | Add XGlue | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc."
] | 1,604,510,994,000 | 1,606,838,308,000 | 1,606,838,307,000 | MEMBER | null | Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for NER:
```python
load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ...
```
=> therefore one can load a single language test via
```python
load_dataset("xglue", "ner", split="test.es")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/802/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/802",
"html_url": "https://github.com/huggingface/datasets/pull/802",
"diff_url": "https://github.com/huggingface/datasets/pull/802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/802.patch",
"merged_at": 1606838307000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/801/comments | https://api.github.com/repos/huggingface/datasets/issues/801/events | https://github.com/huggingface/datasets/issues/801 | 735,790,876 | MDU6SXNzdWU3MzU3OTA4NzY= | 801 | How to join two datasets? | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi this is also my question. thanks ",
"Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n",
"Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining dataset... | 1,604,461,991,000 | 1,608,732,178,000 | 1,608,732,178,000 | NONE | null | Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a sentence pair using `.map()` where the second sentence is **not** the sentence that follows the first (i.e., it comes from a different article).
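One pattern that works, following the suggestion in the comments, is to map over one dataset with indices and pick rows from the other (a minimal sketch; the file and column names are hypothetical):
```python
from datasets import load_dataset

# Two preprocessed datasets with the same number of rows (hypothetical files).
ds_a = load_dataset("csv", data_files="a.csv")["train"]
ds_b = load_dataset("csv", data_files="b.csv")["train"]

# Add ds_b's column to ds_a, row by row, via the index.
joined = ds_a.map(lambda example, i: {"sentence_b": ds_b[i]["text"]}, with_indices=True)
```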
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/801/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/800/comments | https://api.github.com/repos/huggingface/datasets/issues/800/events | https://github.com/huggingface/datasets/pull/800 | 735,772,775 | MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3 | 800 | Update loading_metrics.rst | {
"login": "ayushidalmia",
"id": 5400513,
"node_id": "MDQ6VXNlcjU0MDA1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5400513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushidalmia",
"html_url": "https://github.com/ayushidalmia",
"followers_url": "https://api.github.com/users/ayushidalmia/followers",
"following_url": "https://api.github.com/users/ayushidalmia/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushidalmia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushidalmia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushidalmia/subscriptions",
"organizations_url": "https://api.github.com/users/ayushidalmia/orgs",
"repos_url": "https://api.github.com/users/ayushidalmia/repos",
"events_url": "https://api.github.com/users/ayushidalmia/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushidalmia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,458,631,000 | 1,605,108,512,000 | 1,605,108,512,000 | CONTRIBUTOR | null | Minor bug | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/800/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/800",
"html_url": "https://github.com/huggingface/datasets/pull/800",
"diff_url": "https://github.com/huggingface/datasets/pull/800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/800.patch",
"merged_at": 1605108512000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/799/comments | https://api.github.com/repos/huggingface/datasets/issues/799/events | https://github.com/huggingface/datasets/pull/799 | 735,551,165 | MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx | 799 | switch amazon reviews class label order | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,428,738,000 | 1,604,429,054,000 | 1,604,429,050,000 | CONTRIBUTOR | null | Switches the label order to be more intuitive for amazon reviews, #791. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/799",
"html_url": "https://github.com/huggingface/datasets/pull/799",
"diff_url": "https://github.com/huggingface/datasets/pull/799.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/799.patch",
"merged_at": 1604429050000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/798/comments | https://api.github.com/repos/huggingface/datasets/issues/798/events | https://github.com/huggingface/datasets/issues/798 | 735,518,805 | MDU6SXNzdWU3MzU1MTg4MDU= | 798 | Cannot load TREC dataset: ConnectionError | {
"login": "kaletap",
"id": 25740957,
"node_id": "MDQ6VXNlcjI1NzQwOTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/25740957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaletap",
"html_url": "https://github.com/kaletap",
"followers_url": "https://api.github.com/users/kaletap/followers",
"following_url": "https://api.github.com/users/kaletap/following{/other_user}",
"gists_url": "https://api.github.com/users/kaletap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaletap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaletap/subscriptions",
"organizations_url": "https://api.github.com/users/kaletap/orgs",
"repos_url": "https://api.github.com/users/kaletap/repos",
"events_url": "https://api.github.com/users/kaletap/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaletap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi ! Indeed there's an issue with those links.\r\nWe should probably use the target urls of the redirections instead",
"Hi, the same issue here, could you tell me how to download it through datasets? thanks ",
"Same issue. ",
"Actually it's already fixed on the master branch since #740 \r\nI'll do the 1.1.3 ... | 1,604,425,522,000 | 1,637,322,472,000 | null | NONE | null | ## Problem
I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`
* Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address
* Increasing max_redirects to 100 doesn't help
Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant.
* datasets.__version__ == '1.1.2'
* requests.__version__ == '2.24.0'
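The redirect probes described in the list above can be reproduced with this short script (a sketch):
```python
import requests

url = "http://cogcomp.org/Data/QA/QC/train_5500.label"
print(requests.head(url).status_code)  # 302: the file has moved
try:
    requests.head(url, allow_redirects=True)
except requests.exceptions.TooManyRedirects as exc:
    print(exc)  # Exceeded 30 redirects.
```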
## Error trace
```
>>> import datasets
>>> datasets.__version__
'1.1.2'
>>> dataset = load_dataset("trec", split="train")
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
I would appreciate some suggestions here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/798/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/798/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/797/comments | https://api.github.com/repos/huggingface/datasets/issues/797/events | https://github.com/huggingface/datasets/issues/797 | 735,420,332 | MDU6SXNzdWU3MzU0MjAzMzI= | 797 | Token classification labels are strings and we don't have the list of labels | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067401494,
"node_id": "MDU6... | open | false | null | [] | null | [
"Indeed. Pinging @stefan-it here if he want to give an expert opinion :)",
"Related is https://github.com/huggingface/datasets/pull/636",
"Should definitely be a ClassLabel 👍 "
] | 1,604,417,610,000 | 1,605,017,231,000 | null | MEMBER | null | Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the like are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some type that gives easy access to the underlying labels.
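For reference, the kind of typed feature declaration being asked for (a sketch using the public `datasets.Features` API; the label names are placeholders):
```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features(
    {
        "tokens": Sequence(Value("string")),
        "ner_tags": Sequence(ClassLabel(names=["O", "B-PER", "I-PER", "B-LOC", "I-LOC"])),
    }
)
# ClassLabel stores integer ids and exposes str2int / int2str for conversion:
# features["ner_tags"].feature.str2int("B-PER")
```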
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/797/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/796/comments | https://api.github.com/repos/huggingface/datasets/issues/796/events | https://github.com/huggingface/datasets/issues/796 | 735,414,881 | MDU6SXNzdWU3MzU0MTQ4ODE= | 796 | Seq2Seq Metrics QOL: Bleu, Rouge | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for letting us know your experience :) \r\nWe should at least improve the error messages indeed",
"So what is the right way to add a batch to compute BLEU?",
"prediction = [['Hey', 'how', 'are', 'you', '?']] \r\nreference=[['Hey', 'how', 'are', 'you', '?']]\r\nbleu.compute(predictions=prediction,r... | 1,604,417,189,000 | 1,611,843,228,000 | null | CONTRIBUTOR | null | Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just kwarg it like sacrebleu?
+ different signatures mean that I would have had to add a lot of conditionals plus pre- and post-processing if I were going to replace the `calculate_rouge` and `calculate_bleu` functions here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L61
#### What I tried
Rouge experience:
```python
rouge = load_metric('rouge')
rouge.add_batch(['hi im sam'], ['im daniel']) # fails
rouge.add_batch(predictions=['hi im sam'], references=['im daniel']) # works
rouge.compute() # huge messy output, but reasonable. Not worth integrating b/c don't want to rewrite all the postprocessing.
```
BLEU experience:
```python
bleu = load_metric('bleu')
bleu.add_batch(predictions=['hi im sam'], references=['im daniel'])
bleu.add_batch(predictions=[['hi im sam']], references=[['im daniel']])
bleu.add_batch(predictions=[['hi im sam']], references=[['im daniel']])
```
All of these raise `ValueError: Got a string but expected a list instead: 'im daniel'`
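For reference, the nesting that `bleu` appears to expect per its feature spec (a hedged sketch; tokenization here is plain whitespace splitting):
```python
from datasets import load_metric

bleu = load_metric("bleu")
predictions = [["hi", "im", "sam"]]   # one tokenized prediction
references = [[["im", "daniel"]]]     # one list of tokenized references per prediction
bleu.add_batch(predictions=predictions, references=references)
print(bleu.compute())
```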
#### Doc Typo
This says `dataset=load_metric(...)`, which seems wrong and will cause a `NameError`.

cc @lhoestq, feel free to ignore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/796/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/796/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/795/comments | https://api.github.com/repos/huggingface/datasets/issues/795/events | https://github.com/huggingface/datasets/issues/795 | 735,198,265 | MDU6SXNzdWU3MzUxOTgyNjU= | 795 | Descriptions of raw and processed versions of wikitext are inverted | {
"login": "fraboniface",
"id": 16835358,
"node_id": "MDQ6VXNlcjE2ODM1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/16835358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fraboniface",
"html_url": "https://github.com/fraboniface",
"followers_url": "https://api.github.com/users/fraboniface/followers",
"following_url": "https://api.github.com/users/fraboniface/following{/other_user}",
"gists_url": "https://api.github.com/users/fraboniface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fraboniface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fraboniface/subscriptions",
"organizations_url": "https://api.github.com/users/fraboniface/orgs",
"repos_url": "https://api.github.com/users/fraboniface/repos",
"events_url": "https://api.github.com/users/fraboniface/events{/privacy}",
"received_events_url": "https://api.github.com/users/fraboniface/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Yes indeed ! Thanks for reporting"
] | 1,604,399,091,000 | 1,605,017,145,000 | null | NONE | null | Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.
Also it would be nice if those descriptions appeared in the dataset explorer.
https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/795/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/794/comments | https://api.github.com/repos/huggingface/datasets/issues/794/events | https://github.com/huggingface/datasets/issues/794 | 735,158,725 | MDU6SXNzdWU3MzUxNTg3MjU= | 794 | self.options cannot be converted to a Python object for pickling | {
"login": "hzqjyyx",
"id": 9635713,
"node_id": "MDQ6VXNlcjk2MzU3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9635713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hzqjyyx",
"html_url": "https://github.com/hzqjyyx",
"followers_url": "https://api.github.com/users/hzqjyyx/followers",
"following_url": "https://api.github.com/users/hzqjyyx/following{/other_user}",
"gists_url": "https://api.github.com/users/hzqjyyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hzqjyyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hzqjyyx/subscriptions",
"organizations_url": "https://api.github.com/users/hzqjyyx/orgs",
"repos_url": "https://api.github.com/users/hzqjyyx/repos",
"events_url": "https://api.github.com/users/hzqjyyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/hzqjyyx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon"
] | 1,604,395,654,000 | 1,605,807,338,000 | 1,605,807,338,000 | NONE | null | Hi,
Currently I am trying to load a CSV file with customized `read_options`, and the latest master seems broken if we pass a `ReadOptions` object.
Here is a code snippet
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
```
The error is `self.options cannot be converted to a Python object for pickling`.
Would you mind taking a look? Thanks!
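In the meantime, a possible workaround is to read the file with pyarrow directly and wrap the result (a sketch; this sidesteps the config hashing rather than fixing it):
```python
import pyarrow.csv as pv
from datasets import Dataset

# File name taken from the snippet above.
read_options = pv.ReadOptions(block_size=16 * 1024 * 1024)
table = pv.read_csv("out.csv", read_options=read_options)
dataset = Dataset.from_pandas(table.to_pandas())
```
The full traceback: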
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ab83fec2ded4> in <module>
----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
/tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
/tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
162 name,
163 custom_features=features,
--> 164 **config_kwargs,
165 )
166
/tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
281 )
282 else:
--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
284
285 if builder_config.data_files is not None:
/tmp/datasets/src/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/usr/lib/python3.6/pickle.py in dump(self, obj)
407 if self.proto >= 4:
408 self.framer.start_framing()
--> 409 self.save(obj)
410 self.write(STOP)
411 self.framer.end_framing()
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
474 f = self.dispatch.get(t)
475 if f is not None:
--> 476 f(self, obj) # Call unbound method with explicit self
477 return
478
~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/usr/lib/python3.6/pickle.py in save_dict(self, obj)
819
820 self.memoize(obj)
--> 821 self._batch_setitems(obj.items())
822
823 dispatch[dict] = save_dict
/usr/lib/python3.6/pickle.py in _batch_setitems(self, items)
850 k, v = tmp[0]
851 save(k)
--> 852 save(v)
853 write(SETITEM)
854 # else tmp is empty, and we're done
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
494 reduce = getattr(obj, "__reduce_ex__", None)
495 if reduce is not None:
--> 496 rv = reduce(self.proto)
497 else:
498 reduce = getattr(obj, "__reduce__", None)
~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()
TypeError: self.options cannot be converted to a Python object for pickling
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/794/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/793/comments | https://api.github.com/repos/huggingface/datasets/issues/793/events | https://github.com/huggingface/datasets/pull/793 | 735,105,907 | MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5 | 793 | [Datasets] fix discofuse links | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,390,625,000 | 1,604,391,401,000 | 1,604,391,400,000 | MEMBER | null | The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558.
The old links are broken.
I changed the links and created the new dataset_infos.json.
Pinging @thomwolf @lhoestq for notification. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/793",
"html_url": "https://github.com/huggingface/datasets/pull/793",
"diff_url": "https://github.com/huggingface/datasets/pull/793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/793.patch",
"merged_at": 1604391400000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/792/comments | https://api.github.com/repos/huggingface/datasets/issues/792/events | https://github.com/huggingface/datasets/issues/792 | 734,693,652 | MDU6SXNzdWU3MzQ2OTM2NTI= | 792 | KILT dataset: empty string in triviaqa input field | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))"
] | 1,604,338,434,000 | 1,604,572,499,000 | 1,604,572,499,000 | CONTRIBUTOR | null | # What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have an empty string in their `input` field (unlike the natural questions dataset, part of the same benchmark).
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine; output removed for better readability
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
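For reference, a hedged sketch of how the missing `input` values could be reconstructed from the original `trivia_qa` dataset, assuming its fields are `question_id` and `question` (this mirrors the mapping approach described in the kilt_tasks README):
```python
from datasets import load_dataset

kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

# build a map from trivia_qa question ids to row indices
qid_to_idx = {qid: i for i, qid in enumerate(trivia_qa["train"]["question_id"])}

# keep only KILT rows with a matching trivia_qa question, then fill in the input
train = kilt_tasks["train_triviaqa"].filter(lambda row: row["id"] in qid_to_idx)
train = train.map(lambda row: {"input": trivia_qa["train"][qid_to_idx[row["id"]]]["question"]})
print(train["input"][:3])
```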
Stay safe :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/792/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/791/comments | https://api.github.com/repos/huggingface/datasets/issues/791/events | https://github.com/huggingface/datasets/pull/791 | 734,656,518 | MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5 | 791 | add amazon reviews | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@patrickvonplaten Yeah this is adapted from tfds so a lot is just how they wrote the code. Addressed your comments and also simplified the weird `AmazonUSReviewsConfig` definition. Will merge once tests pass.",
"Thanks for checking this one :) \r\nLooks good to me \r\n\r\nJust one question : is there a particula... | 1,604,335,377,000 | 1,604,434,506,000 | 1,604,421,837,000 | CONTRIBUTOR | null | Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/791/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/791",
"html_url": "https://github.com/huggingface/datasets/pull/791",
"diff_url": "https://github.com/huggingface/datasets/pull/791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/791.patch",
"merged_at": 1604421837000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/790/comments | https://api.github.com/repos/huggingface/datasets/issues/790/events | https://github.com/huggingface/datasets/issues/790 | 734,470,197 | MDU6SXNzdWU3MzQ0NzAxOTc= | 790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | {
"login": "shawwn",
"id": 59632,
"node_id": "MDQ6VXNlcjU5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shawwn",
"html_url": "https://github.com/shawwn",
"followers_url": "https://api.github.com/users/shawwn/followers",
"following_url": "https://api.github.com/users/shawwn/following{/other_user}",
"gists_url": "https://api.github.com/users/shawwn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shawwn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shawwn/subscriptions",
"organizations_url": "https://api.github.com/users/shawwn/orgs",
"repos_url": "https://api.github.com/users/shawwn/repos",
"events_url": "https://api.github.com/users/shawwn/events{/privacy}",
"received_events_url": "https://api.github.com/users/shawwn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now",
"Closing this one.\r\nFeel free to re-open if you still have issues"
] | 1,604,320,595,000 | 1,605,017,102,000 | 1,605,017,102,000 | NONE | null | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".[dev]"
```


Python 3.7.7
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/790/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/789/comments | https://api.github.com/repos/huggingface/datasets/issues/789/events | https://github.com/huggingface/datasets/pull/789 | 734,237,839 | MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0 | 789 | dataset(ncslgr): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @AmitMY, sorry for leaving you hanging for a minute :) \r\n\r\nWe've developed a new pipeline for adding datasets with a few extra steps, including adding a dataset card. You can find the full process [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)\r\n\r\nWould you be up for addin... | 1,604,299,810,000 | 1,606,830,097,000 | 1,606,830,096,000 | CONTRIBUTOR | null | Its a small dataset, but its heavily annotated
https://www.bu.edu/asllrp/ncslgr.html

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/789/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/789",
"html_url": "https://github.com/huggingface/datasets/pull/789",
"diff_url": "https://github.com/huggingface/datasets/pull/789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/789.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/788/comments | https://api.github.com/repos/huggingface/datasets/issues/788/events | https://github.com/huggingface/datasets/issues/788 | 734,136,124 | MDU6SXNzdWU3MzQxMzYxMjQ= | 788 | failed to reuse cache | {
"login": "WangHexie",
"id": 31768052,
"node_id": "MDQ6VXNlcjMxNzY4MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WangHexie",
"html_url": "https://github.com/WangHexie",
"followers_url": "https://api.github.com/users/WangHexie/followers",
"following_url": "https://api.github.com/users/WangHexie/following{/other_user}",
"gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions",
"organizations_url": "https://api.github.com/users/WangHexie/orgs",
"repos_url": "https://api.github.com/users/WangHexie/repos",
"events_url": "https://api.github.com/users/WangHexie/events{/privacy}",
"received_events_url": "https://api.github.com/users/WangHexie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,284,956,000 | 1,604,319,975,000 | 1,604,319,975,000 | NONE | null | I wrapped `load_dataset` in a class method and cached the data in a directory. But when I import the class and call the method, the data still has to be downloaded again. The message logged to the terminal (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) shows that the path points to the cache directory, but the files are downloaded again anyway. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/788/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/787/comments | https://api.github.com/repos/huggingface/datasets/issues/787/events | https://github.com/huggingface/datasets/pull/787 | 734,070,162 | MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz | 787 | Adding nli_tr dataset | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you @lhoestq for the time you take to review our pull request. We appreciate your help.\r\n\r\nWe've made the changes you described. Hope that it is ready for being merged. Please let me know if you have any additional requests for revisions. "
] | 1,604,267,384,000 | 1,605,207,962,000 | 1,605,207,962,000 | CONTRIBUTOR | null | Hello,
In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The dataset will be presented in a full paper at EMNLP 2020 this month. [[arXiv link] ](https://arxiv.org/pdf/2004.14963.pdf)
The dataset is a neural machine translation of the SNLI and MultiNLI datasets into Turkish, so we followed a format similar to that of the original datasets hosted in the HuggingFace datasets hub.
Our dataset follows the interface of the GLUE dataset, which provides multiple datasets through a single loader, and is designed to be accessed as follows:
```
from datasets import load_dataset
multinli_tr = load_dataset("nli_tr", "multinli_tr")
snli_tr = load_dataset("nli_tr", "snli_tr")
```
Thanks for your help in reviewing our pull request. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/787/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/787",
"html_url": "https://github.com/huggingface/datasets/pull/787",
"diff_url": "https://github.com/huggingface/datasets/pull/787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/787.patch",
"merged_at": 1605207962000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/786/comments | https://api.github.com/repos/huggingface/datasets/issues/786/events | https://github.com/huggingface/datasets/issues/786 | 733,761,717 | MDU6SXNzdWU3MzM3NjE3MTc= | 786 | feat(dataset): multiprocessing _generate_examples | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik"
] | 1,604,163,136,000 | 1,604,911,118,000 | null | CONTRIBUTOR | null | forking this out of #741, this issue is only regarding multiprocessing
I'd love it if there were a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it is `>1`, `_generate_examples` could also receive the `pool` and return an iterable built using the pool.
In my use case, I would instead of:
```python
for datum in data:
yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
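A minimal sketch of what this could look like inside a builder; the `workers` parameter and pool handling are hypothetical, not an existing `datasets` API, and they assume `load_datum` and its inputs are picklable:
```python
from multiprocessing import Pool

class MyDatasetBuilder:  # hypothetical builder
    def load_datum(self, datum):
        ...  # the expensive per-row work (~10s per row here)

    def _generate_examples(self, data, workers=1):
        if workers <= 1:
            # current single-process behavior
            for idx, datum in enumerate(data):
                yield idx, self.load_datum(datum)
        else:
            with Pool(workers) as pool:
                # imap streams results back in order instead of materializing the full list
                for idx, example in enumerate(pool.imap(self.load_datum, data)):
                    yield idx, example
```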
As an example, the dataset in question has **only** 7000 rows but takes 10 seconds on average to load each row, so it takes almost 20 hours to load the entire dataset.
If this were a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing with, for example, 40 cores could speed it up dramatically; for this dataset, hopefully to a full load in under an hour. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/786/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/785/comments | https://api.github.com/repos/huggingface/datasets/issues/785/events | https://github.com/huggingface/datasets/pull/785 | 733,719,419 | MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1 | 785 | feat(aslg_pc12): add dev and test data splits | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.g... | 1,604,150,738,000 | 1,605,022,170,000 | 1,605,022,170,000 | CONTRIBUTOR | null | For reproducibility sake, it's best if there are defined dev and test splits.
The original paper author did not define splits for the entire dataset, not for the sample loaded via this library, so I decided to define:
- 5/7th for train
- 1/7th for dev
- 1/7th for test
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/785/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/785",
"html_url": "https://github.com/huggingface/datasets/pull/785",
"diff_url": "https://github.com/huggingface/datasets/pull/785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/785.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/784/comments | https://api.github.com/repos/huggingface/datasets/issues/784/events | https://github.com/huggingface/datasets/issues/784 | 733,700,463 | MDU6SXNzdWU3MzM3MDA0NjM= | 784 | Issue with downloading Wikipedia data for low resource language | {
"login": "SamuelCahyawijaya",
"id": 2826602,
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelCahyawijaya",
"html_url": "https://github.com/SamuelCahyawijaya",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ?",
"@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n... | 1,604,144,400,000 | 1,624,584,931,000 | 1,606,318,933,000 | NONE | null | Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these two languages:
Javanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json
```
Sundanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json
```
I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, the dumps are directly downloaded and parsed from the Wikipedia dump site, but both `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid.
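A hedged workaround sketch based on the comment above, assuming the `wikipedia` builder accepts `language` and `date` keyword arguments and that a dump for the chosen date still exists:
```python
import datasets

# the date below is hypothetical; pick one listed at https://dumps.wikimedia.org/jvwiki/
jv_wiki = datasets.load_dataset(
    "wikipedia",
    language="jv",
    date="20201101",
    beam_runner="DirectRunner",
)
```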
Any suggestions on how to handle this issue? Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/784/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/784/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/783/comments | https://api.github.com/repos/huggingface/datasets/issues/783/events | https://github.com/huggingface/datasets/pull/783 | 733,536,254 | MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz | 783 | updated links to v1.3 of quail, fixed the description | {
"login": "annargrs",
"id": 1450322,
"node_id": "MDQ6VXNlcjE0NTAzMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1450322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/annargrs",
"html_url": "https://github.com/annargrs",
"followers_url": "https://api.github.com/users/annargrs/followers",
"following_url": "https://api.github.com/users/annargrs/following{/other_user}",
"gists_url": "https://api.github.com/users/annargrs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/annargrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annargrs/subscriptions",
"organizations_url": "https://api.github.com/users/annargrs/orgs",
"repos_url": "https://api.github.com/users/annargrs/repos",
"events_url": "https://api.github.com/users/annargrs/events{/privacy}",
"received_events_url": "https://api.github.com/users/annargrs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"we're using quail 1.3 now thanks.\r\nclosing this one"
] | 1,604,094,453,000 | 1,606,691,119,000 | 1,606,691,118,000 | NONE | null | updated links to v1.3 of quail, fixed the description | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/783/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/783",
"html_url": "https://github.com/huggingface/datasets/pull/783",
"diff_url": "https://github.com/huggingface/datasets/pull/783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/783.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/782/comments | https://api.github.com/repos/huggingface/datasets/issues/782/events | https://github.com/huggingface/datasets/pull/782 | 733,316,463 | MDExOlB1bGxSZXF1ZXN0NTEzMTE2MTM0 | 782 | Fix metric deletion when attributes are missing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,074,570,000 | 1,604,076,473,000 | 1,604,076,472,000 | MEMBER | null | When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted.
I just added `if hasattr(...)` to make sure it doesn't crash. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/782",
"html_url": "https://github.com/huggingface/datasets/pull/782",
"diff_url": "https://github.com/huggingface/datasets/pull/782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/782.patch",
"merged_at": 1604076472000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/781/comments | https://api.github.com/repos/huggingface/datasets/issues/781/events | https://github.com/huggingface/datasets/pull/781 | 733,168,609 | MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw | 781 | Add XNLI train set | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,604,064,113,000 | 1,604,946,170,000 | 1,604,946,169,000 | MEMBER | null | I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"merged_at": 1604946169000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/780/comments | https://api.github.com/repos/huggingface/datasets/issues/780/events | https://github.com/huggingface/datasets/pull/780 | 732,738,647 | MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0 | 780 | Add ASNQ dataset | {
"login": "mkserge",
"id": 2992022,
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkserge",
"html_url": "https://github.com/mkserge",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"repos_url": "https://api.github.com/users/mkserge/repos",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Very nice !\r\nWhat do the `sentence1` and `sentence2` correspond to exactly ?\r\nAlso maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)",
"> What do the `sentence1` and `sentence2` correspon... | 1,604,014,316,000 | 1,605,000,383,000 | 1,605,000,383,000 | CONTRIBUTOR | null | This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118
The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti.
_Please note that I have no affiliation with the authors._
Repo: https://github.com/alexa/wqa_tanda
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/780/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/780",
"html_url": "https://github.com/huggingface/datasets/pull/780",
"diff_url": "https://github.com/huggingface/datasets/pull/780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/780.patch",
"merged_at": 1605000383000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/779/comments | https://api.github.com/repos/huggingface/datasets/issues/779/events | https://github.com/huggingface/datasets/pull/779 | 732,514,887 | MDExOlB1bGxSZXF1ZXN0NTEyNDQzMjY0 | 779 | Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales | {
"login": "rathoreanirudh",
"id": 11327413,
"node_id": "MDQ6VXNlcjExMzI3NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11327413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rathoreanirudh",
"html_url": "https://github.com/rathoreanirudh",
"followers_url": "https://api.github.com/users/rathoreanirudh/followers",
"following_url": "https://api.github.com/users/rathoreanirudh/following{/other_user}",
"gists_url": "https://api.github.com/users/rathoreanirudh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rathoreanirudh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rathoreanirudh/subscriptions",
"organizations_url": "https://api.github.com/users/rathoreanirudh/orgs",
"repos_url": "https://api.github.com/users/rathoreanirudh/repos",
"events_url": "https://api.github.com/users/rathoreanirudh/events{/privacy}",
"received_events_url": "https://api.github.com/users/rathoreanirudh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! This looks interesting, thanks for adding it :) \r\n\r\nFor metrics there should only be two features fields: references and predictions.\r\nBoth of them can be defined as you want using nested structures if you need to.\r\nAlso I'm not sure what goes into references and what goes into predictions, could you ... | 1,603,992,674,000 | 1,605,291,082,000 | null | NONE | null | This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/779/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/779",
"html_url": "https://github.com/huggingface/datasets/pull/779",
"diff_url": "https://github.com/huggingface/datasets/pull/779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/779.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/778/comments | https://api.github.com/repos/huggingface/datasets/issues/778/events | https://github.com/huggingface/datasets/issues/778 | 732,449,652 | MDU6SXNzdWU3MzI0NDk2NTI= | 778 | Unexpected behavior when loading cached csv file? | {
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)",
"Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! "
] | 1,603,987,570,000 | 1,604,006,487,000 | 1,604,006,487,000 | CONTRIBUTOR | null | I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be nice if information such as which `delimiter` or which `column_names` were used influenced the identifier of the cached dataset.
Small snippet to reproduce the behavior:
```python
import datasets
with open("dummy_data.csv", "w") as file:
file.write("test,this;text\n")
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names)
# ["test", "this;text"]
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names)
# still ["test", "this;text"]
```
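Until the fix is released, a workaround sketch based on the `download_mode` flag mentioned above:
```python
import datasets

ds = datasets.load_dataset(
    "csv",
    data_files="dummy_data.csv",
    split="train",
    delimiter=";",
    download_mode="force_redownload",  # bypass the stale cached dataset
)
print(ds.column_names)  # now reflects the ";" delimiter
```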
By the way, thanks a lot for this amazing library! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/778/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/777/comments | https://api.github.com/repos/huggingface/datasets/issues/777/events | https://github.com/huggingface/datasets/pull/777 | 732,376,648 | MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2 | 777 | Better error message for uninitialized metric | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,982,570,000 | 1,603,984,706,000 | 1,603,984,704,000 | MEMBER | null | When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message.
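A minimal repro sketch of the situation this improves; the `accuracy` metric here is just an example:
```python
from datasets import load_metric

metric = load_metric("accuracy")
# no add()/add_batch() calls before compute():
# this used to fail with a cryptic error and should now raise a clear message
metric.compute()
```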
Fix #729 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/777",
"html_url": "https://github.com/huggingface/datasets/pull/777",
"diff_url": "https://github.com/huggingface/datasets/pull/777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/777.patch",
"merged_at": 1603984703000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/776/comments | https://api.github.com/repos/huggingface/datasets/issues/776/events | https://github.com/huggingface/datasets/pull/776 | 732,343,550 | MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx | 776 | Allow custom split names in text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!"
] | 1,603,980,246,000 | 1,604,065,605,000 | 1,604,064,232,000 | MEMBER | null | The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`.
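For example, a usage sketch with an arbitrary split name (the file names are hypothetical):
```python
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files={"train": "train.txt", "backtranslated": "bt.txt"},
)
print(dataset["backtranslated"][0])
```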
Fix #735 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/776",
"html_url": "https://github.com/huggingface/datasets/pull/776",
"diff_url": "https://github.com/huggingface/datasets/pull/776.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/776.patch",
"merged_at": 1604064232000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/775/comments | https://api.github.com/repos/huggingface/datasets/issues/775/events | https://github.com/huggingface/datasets/pull/775 | 732,287,504 | MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3 | 775 | Properly delete metrics when a process is killed | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,975,927,000 | 1,603,980,080,000 | 1,603,980,079,000 | MEMBER | null | Tests are flaky when using metrics in distributed setup.
This is because of one test that makes sure that using two possibly incompatible metric computations (same experiment id) either works or raises the right error.
However, if the error is raised, all the metric's processes are killed, and the open files (arrow + lock files) are not closed correctly. This causes a PermissionError on Windows when deleting the temporary directory.
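A minimal sketch of the cleanup pattern used for the fix described below; the worker function is hypothetical, not the actual test code:
```python
import datasets

def metric_worker(exp_id, rank, num_process, preds, refs):
    metric = datasets.load_metric(
        "accuracy", experiment_id=exp_id, num_process=num_process, process_id=rank
    )
    try:
        metric.add_batch(predictions=preds, references=refs)
        return metric.compute()
    finally:
        # runs even when an error is raised above, so the arrow file and
        # .lock files are closed before the process exits
        del metric
```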
To fix that I added a `finally` clause in the function passed to multiprocess to properly close the files when the process exits. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/775",
"html_url": "https://github.com/huggingface/datasets/pull/775",
"diff_url": "https://github.com/huggingface/datasets/pull/775.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/775.patch",
"merged_at": 1603980079000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/774/comments | https://api.github.com/repos/huggingface/datasets/issues/774/events | https://github.com/huggingface/datasets/pull/774 | 732,265,741 | MDExOlB1bGxSZXF1ZXN0NTEyMjM0NjA0 | 774 | [ROUGE] Add description to Rouge metric | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,973,972,000 | 1,603,994,150,000 | 1,603,994,148,000 | MEMBER | null | Add information about case sensitivity to ROUGE. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/774/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/774",
"html_url": "https://github.com/huggingface/datasets/pull/774",
"diff_url": "https://github.com/huggingface/datasets/pull/774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/774.patch",
"merged_at": 1603994148000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/773/comments | https://api.github.com/repos/huggingface/datasets/issues/773/events | https://github.com/huggingface/datasets/issues/773 | 731,684,153 | MDU6SXNzdWU3MzE2ODQxNTM= | 773 | Adding CC-100: Monolingual Datasets from Web Crawl Data | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followe... | null | [
"cc @aconneau ;) "
] | 1,603,909,241,000 | 1,607,941,208,000 | 1,607,941,207,000 | MEMBER | null | ## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large-scale multilingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of Common Crawl.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/773/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/772/comments | https://api.github.com/repos/huggingface/datasets/issues/772/events | https://github.com/huggingface/datasets/pull/772 | 731,612,430 | MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx | 772 | Fix metric with cache dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,903,393,000 | 1,603,964,084,000 | 1,603,964,083,000 | MEMBER | null | The cache_dir provided by the user was concatenated twice, which caused FileNotFound errors.
The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter).
I removed the double concatenation and fixed the tests.
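For illustration only, a hedged sketch of the kind of bug removed here (the helper names and path layout are made up):
```python
import os

def metric_path_buggy(cache_dir: str, metric_name: str) -> str:
    data_dir = os.path.join(cache_dir, "metrics", metric_name)
    # Bug: cache_dir is prepended a second time, producing a path such as
    # "my_cache/my_cache/metrics/rouge" that does not exist.
    return os.path.join(cache_dir, data_dir)

def metric_path_fixed(cache_dir: str, metric_name: str) -> str:
    # Fix: concatenate the user-provided cache_dir exactly once.
    return os.path.join(cache_dir, "metrics", metric_name)

print(metric_path_buggy("my_cache", "rouge"))  # my_cache/my_cache/metrics/rouge
print(metric_path_fixed("my_cache", "rouge"))  # my_cache/metrics/rouge
```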
Fix #728 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/772/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/772",
"html_url": "https://github.com/huggingface/datasets/pull/772",
"diff_url": "https://github.com/huggingface/datasets/pull/772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/772.patch",
"merged_at": 1603964082000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/771/comments | https://api.github.com/repos/huggingface/datasets/issues/771/events | https://github.com/huggingface/datasets/issues/771 | 731,482,213 | MDU6SXNzdWU3MzE0ODIyMTM= | 771 | Using `Dataset.map` with `n_proc>1` print multiple progress bars | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar"
] | 1,603,894,407,000 | 1,603,894,697,000 | null | MEMBER | null | When using `Dataset.map` with `num_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `num_proc` progress bars are printed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/771/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/770/comments | https://api.github.com/repos/huggingface/datasets/issues/770/events | https://github.com/huggingface/datasets/pull/770 | 731,445,222 | MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1 | 770 | Fix custom builder caching | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,891,944,000 | 1,603,964,163,000 | 1,603,964,161,000 | MEMBER | null | The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset).
To fix that, the cache directory name now has a suffix that depends on all of them.
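As a hedged illustration of the idea (not the actual implementation), the suffix can be derived by hashing every parameter that affects generation:
```python
import hashlib
import json

def config_suffix(config_kwargs: dict) -> str:
    """Illustrative only: hash the parameters that change the built dataset."""
    payload = json.dumps(config_kwargs, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

# Two `text` datasets loaded with different encodings now get distinct dirs:
assert config_suffix({"encoding": "utf-8"}) != config_suffix({"encoding": "latin-1"})
```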
Fix #730
Fix #750 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/770/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/770",
"html_url": "https://github.com/huggingface/datasets/pull/770",
"diff_url": "https://github.com/huggingface/datasets/pull/770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/770.patch",
"merged_at": 1603964161000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/769/comments | https://api.github.com/repos/huggingface/datasets/issues/769/events | https://github.com/huggingface/datasets/issues/769 | 731,257,104 | MDU6SXNzdWU3MzEyNTcxMDQ= | 769 | How to choose proper download_mode in function load_dataset? | {
"login": "jzq2000",
"id": 48550398,
"node_id": "MDQ6VXNlcjQ4NTUwMzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/48550398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzq2000",
"html_url": "https://github.com/jzq2000",
"followers_url": "https://api.github.com/users/jzq2000/followers",
"following_url": "https://api.github.com/users/jzq2000/following{/other_user}",
"gists_url": "https://api.github.com/users/jzq2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzq2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzq2000/subscriptions",
"organizations_url": "https://api.github.com/users/jzq2000/orgs",
"repos_url": "https://api.github.com/users/jzq2000/repos",
"events_url": "https://api.github.com/users/jzq2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzq2000/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work.\r\nThis makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing",
"Can we just use `features=...` in `load_dataset` for this @lhoestq?",
"Indeed you should use `features` in this case. \r\n```python... | 1,603,876,579,000 | 1,603,881,299,000 | null | NONE | null | Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```
First, I try to use this command to load my csv file.
``` python
dataset=load_dataset('csv', data_files=['sst_test.csv'])
```
It seems good, but when I try to override the convert_options to convert the 'label' column from int64 to float32 like this:
``` python
import pyarrow as pa
from pyarrow import csv
read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,
parse_options=parse_options, convert_options=convert_options)
```
The schema stays the same:
```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```
I think this issue is caused by the parameter "download_mode", which defaults to REUSE_DATASET_IF_EXISTS: after I delete the cache_dir, loading works as expected.
Is it a bug? How should I choose the proper download_mode to avoid this issue?
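For reference, a sketch combining the two suggestions from the comments above: `features` to declare the schema, and `FORCE_REDOWNLOAD` to bypass the stale cache (API names as exposed by datasets 1.x):
```python
from datasets import Features, GenerateMode, Value, load_dataset

features = Features({"text": Value("string"), "label": Value("float32")})
dataset = load_dataset(
    "csv",
    data_files=["sst_test.csv"],
    features=features,
    download_mode=GenerateMode.FORCE_REDOWNLOAD,  # ignore the cached copy
)
```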
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/769/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/768/comments | https://api.github.com/repos/huggingface/datasets/issues/768/events | https://github.com/huggingface/datasets/issues/768 | 730,908,060 | MDU6SXNzdWU3MzA5MDgwNjA= | 768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"This is cool! I think some aspects to think about and decide in terms of API are:\r\n- do we allow several methods (chained i guess)\r\n- how do we inspect the currently set method(s)\r\n- how do we control/reset them"
] | 1,603,837,983,000 | 1,603,875,493,000 | null | MEMBER | null | The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives); a sketch of this is shown below.
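A minimal sketch of what such lazy behaviour could look like (hypothetical: `lazy_map` does not exist, this is only an illustration of the request):
```python
import random

class LazyMapped:
    """Hypothetical wrapper: fn runs when an item is accessed, not ahead of time."""

    def __init__(self, dataset, fn):
        self.dataset = dataset
        self.fn = fn

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        return self.fn(self.dataset[i])  # fresh output on every access/epoch

def random_mask(example):
    words = example["text"].split()
    if words:
        words[random.randrange(len(words))] = "[MASK]"
    return {**example, "text": " ".join(words)}

# lazy_train = LazyMapped(ds["train"], random_mask)  # different masks each epoch
```
| {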
"url": "https://api.github.com/repos/huggingface/datasets/issues/768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/768/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/767/comments | https://api.github.com/repos/huggingface/datasets/issues/767/events | https://github.com/huggingface/datasets/issues/767 | 730,771,610 | MDU6SXNzdWU3MzA3NzE2MTA= | 767 | Add option for named splits when using ds.train_test_split | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090... | 1,603,828,784,000 | 1,605,017,121,000 | null | CONTRIBUTOR | null | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, its kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
this is my hack for dealin with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/767/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/766/comments | https://api.github.com/repos/huggingface/datasets/issues/766/events | https://github.com/huggingface/datasets/issues/766 | 730,669,596 | MDU6SXNzdWU3MzA2Njk1OTY= | 766 | [GEM] add DART data-to-text generation dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Is this a duplicate of #924 ?",
"Yup, closing! Haven't been keeping track of the solved issues during the sprint."
] | 1,603,820,044,000 | 1,607,002,638,000 | 1,607,002,638,000 | MEMBER | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/766/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/765/comments | https://api.github.com/repos/huggingface/datasets/issues/765/events | https://github.com/huggingface/datasets/issues/765 | 730,668,332 | MDU6SXNzdWU3MzA2NjgzMzI= | 765 | [GEM] Add DART data-to-text generation dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,603,819,943,000 | 1,603,820,061,000 | 1,603,820,061,000 | MEMBER | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** It will likely be included in the GEM generation evaluation benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/765/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/764/comments | https://api.github.com/repos/huggingface/datasets/issues/764/events | https://github.com/huggingface/datasets/pull/764 | 730,617,828 | MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2 | 764 | Adding Issue Template for Dataset Requests | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,816,628,000 | 1,603,819,526,000 | 1,603,819,525,000 | MEMBER | null | adding .github/ISSUE_TEMPLATE/add-dataset.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/764",
"html_url": "https://github.com/huggingface/datasets/pull/764",
"diff_url": "https://github.com/huggingface/datasets/pull/764.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/764.patch",
"merged_at": 1603819525000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/763/comments | https://api.github.com/repos/huggingface/datasets/issues/763/events | https://github.com/huggingface/datasets/pull/763 | 730,593,631 | MDExOlB1bGxSZXF1ZXN0NTEwODcyMDYx | 763 | Fixed errors in bertscore related to custom baseline | {
"login": "juanjucm",
"id": 36761132,
"node_id": "MDQ6VXNlcjM2NzYxMTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/36761132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juanjucm",
"html_url": "https://github.com/juanjucm",
"followers_url": "https://api.github.com/users/juanjucm/followers",
"following_url": "https://api.github.com/users/juanjucm/following{/other_user}",
"gists_url": "https://api.github.com/users/juanjucm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juanjucm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juanjucm/subscriptions",
"organizations_url": "https://api.github.com/users/juanjucm/orgs",
"repos_url": "https://api.github.com/users/juanjucm/repos",
"events_url": "https://api.github.com/users/juanjucm/events{/privacy}",
"received_events_url": "https://api.github.com/users/juanjucm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,814,915,000 | 1,603,907,965,000 | 1,603,907,965,000 | CONTRIBUTOR | null | [bertscore version 0.3.6](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added an extra argument `baseline_path` to the BERTScorer class, as well as an extra boolean parameter `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`.
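For reference, a sketch of the 0.3.6 surface being matched; the `get_hash` signature is the one quoted above, and the import path is an assumption made for illustration:
```python
import bert_score
from bert_score.utils import get_hash  # import path assumed

scorer = bert_score.BERTScorer(
    lang="en",
    rescale_with_baseline=True,
    baseline_path=None,  # new in 0.3.6: optional custom baseline file
)
# get_hash now also expects use_custom_baseline (last positional argument):
hash_value = get_hash("roberta-large", 17, False, True, False)
```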
This PR fixes those matching errors in the bertscore metric implementation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/763",
"html_url": "https://github.com/huggingface/datasets/pull/763",
"diff_url": "https://github.com/huggingface/datasets/pull/763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/763.patch",
"merged_at": 1603907965000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/762/comments | https://api.github.com/repos/huggingface/datasets/issues/762/events | https://github.com/huggingface/datasets/issues/762 | 730,586,972 | MDU6SXNzdWU3MzA1ODY5NzI= | 762 | [GEM] Add Czech Restaurant data-to-text generation dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,603,814,447,000 | 1,607,002,664,000 | 1,607,002,664,000 | MEMBER | null | - Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/762/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/761/comments | https://api.github.com/repos/huggingface/datasets/issues/761/events | https://github.com/huggingface/datasets/issues/761 | 729,898,867 | MDU6SXNzdWU3Mjk4OTg4Njc= | 761 | Downloaded datasets are not usable offline | {
"login": "ghazi-f",
"id": 25091538,
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghazi-f",
"html_url": "https://github.com/ghazi-f",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Yes currently you need an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.\r\n\r\nIf we add a way to store the etag/hash locally after the first download, it would allow users to first download the dataset with an internet con... | 1,603,745,686,000 | 1,603,807,469,000 | null | CONTRIBUTOR | null | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach for the online dataset.
Is this the intended behavior?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/761/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/760/comments | https://api.github.com/repos/huggingface/datasets/issues/760/events | https://github.com/huggingface/datasets/issues/760 | 729,637,917 | MDU6SXNzdWU3Mjk2Mzc5MTc= | 760 | Add meta-data to the HANS dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 2067388877,
"node_... | closed | false | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "htt... | null | [] | 1,603,724,213,000 | 1,607,002,714,000 | 1,607,002,714,000 | MEMBER | null | The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/760/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/759/comments | https://api.github.com/repos/huggingface/datasets/issues/759/events | https://github.com/huggingface/datasets/issues/759 | 729,046,916 | MDU6SXNzdWU3MjkwNDY5MTY= | 759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Are you running the script on a machine with an internet connection ?",
"Yes , I can browse the url through Google Chrome.",
"Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests ... | 1,603,640,097,000 | 1,628,100,609,000 | 1,628,100,609,000 | NONE | null | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I wrote the code like this:
```python
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
```
And I got the following errors:
```
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
    module_path, hash = prepare_module(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
    output_path = get_from_cache(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
```
How can I fix this?
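A quick connectivity check along the lines of the maintainers' truncated suggestion above (plain `requests`, nothing datasets-specific):
```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py"
print(requests.head(url, timeout=10).status_code)  # 200 means the URL is reachable
```
| {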
"url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/759/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/758/comments | https://api.github.com/repos/huggingface/datasets/issues/758/events | https://github.com/huggingface/datasets/issues/758 | 728,638,559 | MDU6SXNzdWU3Mjg2Mzg1NTk= | 758 | Process 0 very slow when using num_procs with map to tokenizer | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocess... | 1,603,507,220,000 | 1,603,857,586,000 | 1,603,857,585,000 | NONE | null | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is:
```python
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
```
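One quick way to test the hypothesis from the comment above (uneven text lengths across shards) is to compare the contiguous shards that `map` would hand to each process. A sketch, meant to run on the dataset before tokenization:
```python
num_proc = 8
for rank in range(num_proc):
    shard = dataset.shard(num_shards=num_proc, index=rank, contiguous=True)
    avg_len = sum(len(t) for t in shard["text"]) / len(shard)
    print(f"shard {rank}: {len(shard)} rows, avg text length {avg_len:.0f}")
```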
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/758/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/757/comments | https://api.github.com/repos/huggingface/datasets/issues/757/events | https://github.com/huggingface/datasets/issues/757 | 728,241,494 | MDU6SXNzdWU3MjgyNDE0OTQ= | 757 | CUDA out of memory | {
"login": "li1117heex",
"id": 47059217,
"node_id": "MDQ6VXNlcjQ3MDU5MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/47059217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li1117heex",
"html_url": "https://github.com/li1117heex",
"followers_url": "https://api.github.com/users/li1117heex/followers",
"following_url": "https://api.github.com/users/li1117heex/following{/other_user}",
"gists_url": "https://api.github.com/users/li1117heex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li1117heex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li1117heex/subscriptions",
"organizations_url": "https://api.github.com/users/li1117heex/orgs",
"repos_url": "https://api.github.com/users/li1117heex/repos",
"events_url": "https://api.github.com/users/li1117heex/events{/privacy}",
"received_events_url": "https://api.github.com/users/li1117heex/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Could you provide more details ? What's the code you ran ?",
"```python\r\ntokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')\r\n\r\ndef tokenize(batch):\r\n return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)\r\n\r\ndataset = load_dataset(\"bookcorpus\",... | 1,603,461,420,000 | 1,608,732,389,000 | 1,608,732,389,000 | NONE | null | In your dataset ,cuda run out of memory as long as the trainer begins:
However, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/757/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/756/comments | https://api.github.com/repos/huggingface/datasets/issues/756/events | https://github.com/huggingface/datasets/pull/756 | 728,211,373 | MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3 | 756 | Start community-provided dataset docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oh, really cool @sshleifer!"
] | 1,603,459,061,000 | 1,603,716,920,000 | 1,603,716,919,000 | CONTRIBUTOR | null | Continuation of #736 with clean fork.
#### Old description
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
In Slack, @thomwolf called it a user-namespace dataset, but the docs call it a community dataset.
I think the first naming is clearer, but I didn't address that here.
I didn't add metadata, will try that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/756/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/756",
"html_url": "https://github.com/huggingface/datasets/pull/756",
"diff_url": "https://github.com/huggingface/datasets/pull/756.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/756.patch",
"merged_at": 1603716919000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/755/comments | https://api.github.com/repos/huggingface/datasets/issues/755/events | https://github.com/huggingface/datasets/pull/755 | 728,203,821 | MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2 | 755 | Start community-provided dataset docs V2 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,458,450,000 | 1,603,458,937,000 | 1,603,458,937,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/755/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/755",
"html_url": "https://github.com/huggingface/datasets/pull/755",
"diff_url": "https://github.com/huggingface/datasets/pull/755.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/755.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/754/comments | https://api.github.com/repos/huggingface/datasets/issues/754/events | https://github.com/huggingface/datasets/pull/754 | 727,863,105 | MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2 | 754 | Use full released xsum dataset | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I took a shot at addressing your comments but the build scripts seem to be complaining about not being able to open dummy files. How do I resolve those errors without copying the full dataset into the dummy dir?",
"Could you check that the names of the dummy data files are right ?\r\nYou can use \r\n```... | 1,603,423,789,000 | 1,609,470,716,000 | 1,603,717,018,000 | CONTRIBUTOR | null | #672 Fix xsum to expand coverage and include IDs
Code is based on the parser from an older version of `datasets/xsum/xsum.py`.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/754/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/754",
"html_url": "https://github.com/huggingface/datasets/pull/754",
"diff_url": "https://github.com/huggingface/datasets/pull/754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/754.patch",
"merged_at": 1603717018000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/753/comments | https://api.github.com/repos/huggingface/datasets/issues/753/events | https://github.com/huggingface/datasets/pull/753 | 727,434,935 | MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0 | 753 | Fix doc links to viewer | {
"login": "Pierrci",
"id": 5020707,
"node_id": "MDQ6VXNlcjUwMjA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pierrci",
"html_url": "https://github.com/Pierrci",
"followers_url": "https://api.github.com/users/Pierrci/followers",
"following_url": "https://api.github.com/users/Pierrci/following{/other_user}",
"gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions",
"organizations_url": "https://api.github.com/users/Pierrci/orgs",
"repos_url": "https://api.github.com/users/Pierrci/repos",
"events_url": "https://api.github.com/users/Pierrci/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pierrci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,376,416,000 | 1,603,442,531,000 | 1,603,442,531,000 | MEMBER | null | It seems #733 forgot some links in the doc :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/753/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/753",
"html_url": "https://github.com/huggingface/datasets/pull/753",
"diff_url": "https://github.com/huggingface/datasets/pull/753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/753.patch",
"merged_at": 1603442531000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/752/comments | https://api.github.com/repos/huggingface/datasets/issues/752/events | https://github.com/huggingface/datasets/issues/752 | 726,917,801 | MDU6SXNzdWU3MjY5MTc4MDE= | 752 | Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning | {
"login": "ogabrielluiz",
"id": 24829397,
"node_id": "MDQ6VXNlcjI0ODI5Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ogabrielluiz",
"html_url": "https://github.com/ogabrielluiz",
"followers_url": "https://api.github.com/users/ogabrielluiz/followers",
"following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}",
"gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions",
"organizations_url": "https://api.github.com/users/ogabrielluiz/orgs",
"repos_url": "https://api.github.com/users/ogabrielluiz/repos",
"events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ogabrielluiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the report, can reproduce. Will fix",
"Fixed now @ogabrielluiz "
] | 1,603,320,983,000 | 1,603,383,582,000 | 1,603,383,582,000 | NONE | null | Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this.
Searching for a metric in https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page.
Thanks for all the great work! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/752/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/751/comments | https://api.github.com/repos/huggingface/datasets/issues/751/events | https://github.com/huggingface/datasets/issues/751 | 726,820,191 | MDU6SXNzdWU3MjY4MjAxOTE= | 751 | Error loading ms_marco v2.1 using load_dataset() | {
"login": "JainSahit",
"id": 30478979,
"node_id": "MDQ6VXNlcjMwNDc4OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/30478979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JainSahit",
"html_url": "https://github.com/JainSahit",
"followers_url": "https://api.github.com/users/JainSahit/followers",
"following_url": "https://api.github.com/users/JainSahit/following{/other_user}",
"gists_url": "https://api.github.com/users/JainSahit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JainSahit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JainSahit/subscriptions",
"organizations_url": "https://api.github.com/users/JainSahit/orgs",
"repos_url": "https://api.github.com/users/JainSahit/repos",
"events_url": "https://api.github.com/users/JainSahit/events{/privacy}",
"received_events_url": "https://api.github.com/users/JainSahit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. Could you try to clear your cache and download the dataset again ?",
"I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixe... | 1,603,310,083,000 | 1,604,539,917,000 | 1,604,539,917,000 | NONE | null | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/751/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/750/comments | https://api.github.com/repos/huggingface/datasets/issues/750/events | https://github.com/huggingface/datasets/issues/750 | 726,589,446 | MDU6SXNzdWU3MjY1ODk0NDY= | 750 | load_dataset doesn't include `features` in its hash | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,293,401,000 | 1,603,964,161,000 | 1,603,964,161,000 | MEMBER | null | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI, so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
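# Caveat: because `features` is not part of the cache hash, the call above may
# silently return the previously cached dataset. A possible workaround is to
# re-cast after loading (an assumption: this relies on the `cast` method being
# available on the loaded object, as in newer versions of `datasets`):
dataset = dataset.cast(features)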
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/750/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/749/comments | https://api.github.com/repos/huggingface/datasets/issues/749/events | https://github.com/huggingface/datasets/issues/749 | 726,366,062 | MDU6SXNzdWU3MjYzNjYwNjI= | 749 | [XGLUE] Adding new dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [
"Amazing! ",
"Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language ... | 1,603,277,496,000 | 1,609,927,376,000 | 1,609,927,375,000 | MEMBER | null | XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/749/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/748/comments | https://api.github.com/repos/huggingface/datasets/issues/748/events | https://github.com/huggingface/datasets/pull/748 | 726,196,589 | MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3 | 748 | New version of CompGuessWhat?! with refined annotations | {
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"No worries. Always happy to help and thanks for your support in fixing the issue :)"
] | 1,603,263,341,000 | 1,603,270,362,000 | 1,603,269,979,000 | CONTRIBUTOR | null | This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/748",
"html_url": "https://github.com/huggingface/datasets/pull/748",
"diff_url": "https://github.com/huggingface/datasets/pull/748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/748.patch",
"merged_at": 1603269979000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/747/comments | https://api.github.com/repos/huggingface/datasets/issues/747/events | https://github.com/huggingface/datasets/pull/747 | 725,884,704 | MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4 | 747 | Add Quail question answering dataset | {
"login": "sai-prasanna",
"id": 3595526,
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sai-prasanna",
"html_url": "https://github.com/sai-prasanna",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,222,394,000 | 1,603,269,315,000 | 1,603,269,315,000 | CONTRIBUTOR | null | QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019).
https://text-machine-lab.github.io/blog/2020/quail/ @annargrs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/747/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/747",
"html_url": "https://github.com/huggingface/datasets/pull/747",
"diff_url": "https://github.com/huggingface/datasets/pull/747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/747.patch",
"merged_at": 1603269315000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/746/comments | https://api.github.com/repos/huggingface/datasets/issues/746/events | https://github.com/huggingface/datasets/pull/746 | 725,627,235 | MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw | 746 | dataset(ngt): add ngt dataset initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,202,698,000 | 1,616,480,378,000 | 1,616,480,378,000 | CONTRIBUTOR | null | Currently only making the paths to the annotation ELAN (eaf) file and videos available.
This is the first accessible way to download this dataset that does not require manual, file-by-file downloads.
Only the necessary files are downloaded: the annotation files are very small (20MB for all of them), while the video files are large (100GB in total), saved in `mpg` format.
I do not intend to actually store these as an uncompressed array of frames, because it will be huge.
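As an illustration, here is a minimal sketch of how the ELAN files could be parsed once their paths are loaded (the `pympi` dependency and the `"eaf_path"` field name are assumptions for illustration, not part of this script):
```python
import pympi  # pip install pympi-ling


def read_annotations(example):
    """Parse one ELAN file into {tier_name: [(start_ms, end_ms, value), ...]}."""
    # "eaf_path" is a hypothetical field name for the annotation file path
    eaf = pympi.Elan.Eaf(example["eaf_path"])
    return {tier: eaf.get_annotation_data_for_tier(tier) for tier in eaf.get_tier_names()}
```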
Future updates may add pose estimation files for all videos, making it easier to work with this data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/746",
"html_url": "https://github.com/huggingface/datasets/pull/746",
"diff_url": "https://github.com/huggingface/datasets/pull/746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/746.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/745/comments | https://api.github.com/repos/huggingface/datasets/issues/745/events | https://github.com/huggingface/datasets/pull/745 | 725,589,352 | MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0 | 745 | Fix emotion description | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number? \r\nThank you in advanc... | 1,603,200,519,000 | 1,619,102,851,000 | 1,603,269,507,000 | MEMBER | null | Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.
I also took the liberty to make use of `ClassLabel` for the emotion labels. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/745",
"html_url": "https://github.com/huggingface/datasets/pull/745",
"diff_url": "https://github.com/huggingface/datasets/pull/745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/745.patch",
"merged_at": 1603269507000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/744/comments | https://api.github.com/repos/huggingface/datasets/issues/744/events | https://github.com/huggingface/datasets/issues/744 | 724,918,448 | MDU6SXNzdWU3MjQ5MTg0NDg= | 744 | Dataset Explorer Doesn't Work for squad_es and squad_it | {
"login": "gaotongxiao",
"id": 22607038,
"node_id": "MDQ6VXNlcjIyNjA3MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaotongxiao",
"html_url": "https://github.com/gaotongxiao",
"followers_url": "https://api.github.com/users/gaotongxiao/followers",
"following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}",
"gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions",
"organizations_url": "https://api.github.com/users/gaotongxiao/orgs",
"repos_url": "https://api.github.com/users/gaotongxiao/repos",
"events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaotongxiao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Oups wrong click.\r\nThis one is for you @srush"
] | 1,603,136,052,000 | 1,603,730,177,000 | 1,603,730,177,000 | NONE | null | https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/744/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/743/comments | https://api.github.com/repos/huggingface/datasets/issues/743/events | https://github.com/huggingface/datasets/issues/743 | 724,703,980 | MDU6SXNzdWU3MjQ3MDM5ODA= | 743 | load_dataset for CSV files not working | {
"login": "iliemihai",
"id": 2815308,
"node_id": "MDQ6VXNlcjI4MTUzMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliemihai",
"html_url": "https://github.com/iliemihai",
"followers_url": "https://api.github.com/users/iliemihai/followers",
"following_url": "https://api.github.com/users/iliemihai/following{/other_user}",
"gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions",
"organizations_url": "https://api.github.com/users/iliemihai/orgs",
"repos_url": "https://api.github.com/users/iliemihai/repos",
"events_url": "https://api.github.com/users/iliemihai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliemihai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thank you !\r\nCould you provide a csv file that reproduces the error ?\r\nIt doesn't have to be one of your dataset. As long as it reproduces the error\r\nThat would help a lot !",
"I think another good example is the following:\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv... | 1,603,119,231,000 | 1,631,212,006,000 | null | CONTRIBUTOR | null | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
```
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
```
I should mention that when I tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with `\r` characters, so I removed them from the custom dataset, but the problem still remains.
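For reference, a minimal sketch of the kind of `\r` cleanup I mean (file names are placeholders):
```python
# normalize stray carriage returns before handing the file to load_dataset
with open("sample_data.csv", "r", encoding="utf-8", newline="") as f:
    text = f.read().replace("\r\n", "\n").replace("\r", "\n")
with open("sample_data_clean.csv", "w", encoding="utf-8", newline="\n") as f:
    f.write(text)
```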
I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Is there any workaround for it?
Thank you | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/743/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/742/comments | https://api.github.com/repos/huggingface/datasets/issues/742/events | https://github.com/huggingface/datasets/pull/742 | 724,509,974 | MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3 | 742 | Add OCNLI, a new CLUE dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks :) merging it"
] | 1,603,105,593,000 | 1,603,383,589,000 | 1,603,383,588,000 | MEMBER | null | OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected by closely following the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize that we did not use human/machine translation in creating the dataset, and thus
our Chinese texts are original and not translated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/742/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/742",
"html_url": "https://github.com/huggingface/datasets/pull/742",
"diff_url": "https://github.com/huggingface/datasets/pull/742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/742.patch",
"merged_at": 1603383587000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/741/comments | https://api.github.com/repos/huggingface/datasets/issues/741/events | https://github.com/huggingface/datasets/issues/741 | 723,924,275 | MDU6SXNzdWU3MjM5MjQyNzU= | 741 | Creating dataset consumes too much memory | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for reporting.\r\nIn theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.\r\n\r\nCould you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?\r\nYou can just copy paste wh... | 1,603,001,226,000 | 1,617,097,628,000 | null | CONTRIBUTOR | null | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examples. """
filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
with open(filepath, "r", encoding="utf-8") as f:
data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
for row in data:
frames_path = os.path.join(images_path, row["video"])[:-7]
np_frames = []
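            # all of this video's frames are accumulated in RAM before the example is yielded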
for frame_name in os.listdir(frames_path):
frame_path = os.path.join(frames_path, frame_name)
im = Image.open(frame_path)
np_frames.append(np.asarray(im))
im.close()
yield row["name"], {"video": np_frames}
```
The dataset creation process goes out of memory on a machine with 500GB RAM.
I was under the impression that the "generator" here exists exactly for that purpose: to avoid memory constraints.
However, even if you want the entire dataset in memory, it would be in the worst case
`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
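As a quick sanity check on that number:
```python
frame_bytes = 260 * 210 * 3         # one uint8 frame
example_bytes = frame_bytes * 400   # worst-case sequence length
total_bytes = example_bytes * 7000  # number of samples
print(total_bytes / 1e9)            # -> 458.64 (GB)
```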
And the dataset creation fails after 170 examples on a machine with 120GB RAM, and after 672 examples on a machine with 500GB RAM.
---
## Info that might help:
Iterating over examples is extremely slow.

If I perform this iteration in my own custom loop (without saving to a file), it runs at 8-9 examples/sec.
And you can see that in this state it is using 94% of the memory:

And it is only using one CPU core, which is probably why it's so slow:

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/741/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/740/comments | https://api.github.com/repos/huggingface/datasets/issues/740/events | https://github.com/huggingface/datasets/pull/740 | 723,047,958 | MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0 | 740 | Fix TREC urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,839,488,000 | 1,603,097,677,000 | 1,603,097,676,000 | MEMBER | null | The old TREC urls are now redirections.
I updated the urls to the new ones, since we don't support redirections for downloads.
Fix #737 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/740",
"html_url": "https://github.com/huggingface/datasets/pull/740",
"diff_url": "https://github.com/huggingface/datasets/pull/740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/740.patch",
"merged_at": 1603097675000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/739/comments | https://api.github.com/repos/huggingface/datasets/issues/739/events | https://github.com/huggingface/datasets/pull/739 | 723,044,066 | MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3 | 739 | Add wiki dpr multiset embeddings | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I still have to compute the dataset_infos, and build + host the indexes",
"update: I'm computing the metadata, will update the PR soon",
"Finally all green and ready to merge :)"
] | 1,602,839,149,000 | 1,606,399,370,000 | 1,606,399,369,000 | MEMBER | null | There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
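For illustration, a hedged usage sketch of the new option (assuming builder config kwargs such as `embeddings_name` are forwarded through `load_dataset`, as described below):
```python
from datasets import load_dataset

# "multiset" selects the embeddings from the hybrid-trained encoder
wiki = load_dataset("wiki_dpr", embeddings_name="multiset", split="train")
```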
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/739",
"html_url": "https://github.com/huggingface/datasets/pull/739",
"diff_url": "https://github.com/huggingface/datasets/pull/739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/739.patch",
"merged_at": 1606399369000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/738/comments | https://api.github.com/repos/huggingface/datasets/issues/738/events | https://github.com/huggingface/datasets/pull/738 | 723,033,923 | MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4 | 738 | Replace seqeval code with original classification_report for simplicity | {
"login": "Hironsan",
"id": 6737785,
"node_id": "MDQ6VXNlcjY3Mzc3ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6737785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hironsan",
"html_url": "https://github.com/Hironsan",
"followers_url": "https://api.github.com/users/Hironsan/followers",
"following_url": "https://api.github.com/users/Hironsan/following{/other_user}",
"gists_url": "https://api.github.com/users/Hironsan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hironsan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hironsan/subscriptions",
"organizations_url": "https://api.github.com/users/Hironsan/orgs",
"repos_url": "https://api.github.com/users/Hironsan/repos",
"events_url": "https://api.github.com/users/Hironsan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hironsan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello,\r\n\r\nI ran https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh\r\n\r\nAnd received this error:\r\n```\r\n100%|██████████| 407/407 [21:37<00:00, 3.44s/it]Traceback (most recent call last):\r\n File \"run_ner.py\", line 445, in <module>\r\n main()\r\n File \"ru... | 1,602,838,305,000 | 1,611,245,235,000 | 1,603,103,472,000 | CONTRIBUTOR | null | Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary.
This PR replaces the current code with the original function (`classification_report`) to simplify it.
Also, the original code has been updated to fix #352.
- Related issue: https://github.com/chakki-works/seqeval/pull/38
```python
from datasets import load_metric
metric = load_metric("seqeval")
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
metric.compute(predictions=y_pred, references=y_true)
# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/738",
"html_url": "https://github.com/huggingface/datasets/pull/738",
"diff_url": "https://github.com/huggingface/datasets/pull/738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/738.patch",
"merged_at": 1603103471000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/737/comments | https://api.github.com/repos/huggingface/datasets/issues/737/events | https://github.com/huggingface/datasets/issues/737 | 722,463,923 | MDU6SXNzdWU3MjI0NjM5MjM= | 737 | Trec Dataset Connection Error | {
"login": "aychang95",
"id": 10554495,
"node_id": "MDQ6VXNlcjEwNTU0NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aychang95",
"html_url": "https://github.com/aychang95",
"followers_url": "https://api.github.com/users/aychang95/followers",
"following_url": "https://api.github.com/users/aychang95/following{/other_user}",
"gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aychang95/subscriptions",
"organizations_url": "https://api.github.com/users/aychang95/orgs",
"repos_url": "https://api.github.com/users/aychang95/repos",
"events_url": "https://api.github.com/users/aychang95/events{/privacy}",
"received_events_url": "https://api.github.com/users/aychang95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url"
] | 1,602,777,473,000 | 1,603,097,676,000 | 1,603,097,676,000 | NONE | null | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/737/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/736/comments | https://api.github.com/repos/huggingface/datasets/issues/736/events | https://github.com/huggingface/datasets/pull/736 | 722,348,191 | MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy | 736 | Start community-provided dataset docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"can you also reference the `--organization` flag like in https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.rst#upload-your-model-with-the-cli ?",
"done!",
"Not sure if the changes in `datasets/wmt_t2t/wmt_utils.py` are intentional.\r\nIf you want to add more configs to wmt, coul... | 1,602,769,299,000 | 1,603,458,928,000 | 1,603,458,928,000 | CONTRIBUTOR | null | This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
+ In Slack, @thomwolf called it a `user-namespace` dataset, but the docs call it a `community dataset`.
I think the first naming is clearer, but I didn't address that here.
+ I didn't add metadata; I will try that.
"url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/736",
"html_url": "https://github.com/huggingface/datasets/pull/736",
"diff_url": "https://github.com/huggingface/datasets/pull/736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/736.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/735/comments | https://api.github.com/repos/huggingface/datasets/issues/735/events | https://github.com/huggingface/datasets/issues/735 | 722,225,270 | MDU6SXNzdWU3MjIyMjUyNzA= | 735 | Throw error when an unexpected key is used in data_files | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nWe'll add support for other keys"
] | 1,602,759,327,000 | 1,604,064,232,000 | 1,604,064,232,000 | CONTRIBUTOR | null | I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other key, the attached files are silently ignored, which leads to unexpected behaviour for users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
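For illustration, if the keys stay restricted, this is the kind of error I would expect (a sketch, not the library's actual code):
```python
allowed_keys = {"train", "validation", "test"}
data_files = {"train": "train.txt", "valid": "valid.txt"}  # hypothetical paths

unexpected = set(data_files) - allowed_keys
if unexpected:
    raise ValueError(f"Unexpected keys in data_files: {sorted(unexpected)}")
```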
I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/735/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/734/comments | https://api.github.com/repos/huggingface/datasets/issues/734/events | https://github.com/huggingface/datasets/pull/734 | 721,767,848 | MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz | 734 | Fix GLUE metric description | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,708,254,000 | 1,602,754,063,000 | 1,602,754,062,000 | MEMBER | null | Small typo: the description says translation instead of prediction. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/734",
"html_url": "https://github.com/huggingface/datasets/pull/734",
"diff_url": "https://github.com/huggingface/datasets/pull/734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/734.patch",
"merged_at": 1602754062000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/733/comments | https://api.github.com/repos/huggingface/datasets/issues/733/events | https://github.com/huggingface/datasets/pull/733 | 721,366,744 | MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw | 733 | Update link to dataset viewer | {
"login": "negedng",
"id": 12969168,
"node_id": "MDQ6VXNlcjEyOTY5MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/12969168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/negedng",
"html_url": "https://github.com/negedng",
"followers_url": "https://api.github.com/users/negedng/followers",
"following_url": "https://api.github.com/users/negedng/following{/other_user}",
"gists_url": "https://api.github.com/users/negedng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/negedng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/negedng/subscriptions",
"organizations_url": "https://api.github.com/users/negedng/orgs",
"repos_url": "https://api.github.com/users/negedng/repos",
"events_url": "https://api.github.com/users/negedng/events{/privacy}",
"received_events_url": "https://api.github.com/users/negedng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,674,003,000 | 1,602,684,451,000 | 1,602,684,451,000 | CONTRIBUTOR | null | Change 404 error links in quick tour to working ones | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/733",
"html_url": "https://github.com/huggingface/datasets/pull/733",
"diff_url": "https://github.com/huggingface/datasets/pull/733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/733.patch",
"merged_at": 1602684451000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/732/comments | https://api.github.com/repos/huggingface/datasets/issues/732/events | https://github.com/huggingface/datasets/pull/732 | 721,359,448 | MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy | 732 | dataset(wlasl): initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Followup: \r\nFrom the info in https://github.com/huggingface/datasets/pull/722, I probably should load the videos as array of frames directly into the database. \r\nThis will make the dataset generation time very long, but will make working with the dataset much easier.",
"When I run:\r\n```\r\npython datasets-... | 1,602,673,302,000 | 1,616,480,383,000 | 1,616,480,383,000 | CONTRIBUTOR | null | takes like 9-10 hours to download all of the videos for the dataset, but it does finish :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/732/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/732",
"html_url": "https://github.com/huggingface/datasets/pull/732",
"diff_url": "https://github.com/huggingface/datasets/pull/732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/732.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/731/comments | https://api.github.com/repos/huggingface/datasets/issues/731/events | https://github.com/huggingface/datasets/pull/731 | 721,142,985 | MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4 | 731 | dataset(aslg_pc12): initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks @lhoestq \r\nAre there any guidelines for the dummy data?\r\nIn this particular case for example, the dataset fetches from two hardcoded URLs. \r\nDo I just `head -n 10` both files and zip them?\r\n\r\n",
"> Thanks @lhoestq\r\n> Are there any guidelines for the dummy data?\r\n> In this particular case for... | 1,602,652,477,000 | 1,603,898,826,000 | 1,603,898,826,000 | CONTRIBUTOR | null | This contains the only current public part of this corpus.
The rest of the corpus has not yet been made public, but this sample is still being used by researchers. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/731/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/731",
"html_url": "https://github.com/huggingface/datasets/pull/731",
"diff_url": "https://github.com/huggingface/datasets/pull/731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/731.patch",
"merged_at": 1603898826000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/730/comments | https://api.github.com/repos/huggingface/datasets/issues/730/events | https://github.com/huggingface/datasets/issues/730 | 721,073,812 | MDU6SXNzdWU3MjEwNzM4MTI= | 730 | Possible caching bug | {
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, does this bug be fixed? when I load JSON fi... | 1,602,640,954,000 | 1,638,109,737,000 | 1,603,964,161,000 | NONE | null | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
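In the meantime, a workaround sketch (pure stdlib; the cache path below is copied from the log above, so adjust it to your machine):
```python
import shutil
import datasets

# The cache fingerprint ignores config kwargs such as `encoding`, so the second
# call reuses the first call's cache. Deleting the cached build forces a rebuild.
shutil.rmtree(
    "/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059",
    ignore_errors=True,
)
dataset = datasets.load_dataset("text", data_files=["test1.txt"], split="train", encoding="utf-8")
```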
Tested with datasets==1.1.2 and python==3.8.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/730/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/729/comments | https://api.github.com/repos/huggingface/datasets/issues/729/events | https://github.com/huggingface/datasets/issues/729 | 719,558,876 | MDU6SXNzdWU3MTk1NTg4NzY= | 729 | Better error message when one forgets to call `add_batch` before `compute` | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,525,562,000 | 1,603,984,704,000 | 1,603,984,704,000 | MEMBER | null | When using metrics, if for some reason a user forgets to call `add_batch` on a metric before calling `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}


metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    pass  # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
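For reference, a minimal sketch of the kind of early check that would make this clearer (hypothetical code, not a proposed patch; the attribute name is an assumption based on the traceback above):
```python
def _check_has_data(cache_file_name):
    """Fail with an actionable message instead of the `None + ".lock"` TypeError."""
    if cache_file_name is None:
        raise ValueError(
            "compute() was called before any predictions/references were added. "
            "Did you forget to call add() or add_batch()?"
        )

# usage sketch, e.g. at the top of Metric._finalize():
# _check_has_data(self.cache_file_name)
```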
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/729/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/728/comments | https://api.github.com/repos/huggingface/datasets/issues/728/events | https://github.com/huggingface/datasets/issues/728 | 719,555,780 | MDU6SXNzdWU3MTk1NTU3ODA= | 728 | Passing `cache_dir` to a metric does not work | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,525,314,000 | 1,603,964,082,000 | 1,603,964,082,000 | MEMBER | null | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}


metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
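For what it's worth, the doubled path in the error above looks like the metric's relative data dir being joined onto itself; a minimal illustration (hypothetical, it only reproduces the shape of the bad path):
```python
import os

cache_dir = "test-metric"
data_dir = os.path.join(cache_dir, "gather_metric", "default")

# joining the relative data_dir against itself yields exactly the path from the traceback
print(os.path.join(data_dir, data_dir, "default_experiment-1-0.arrow"))
# test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow
```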
The code works when we remove the `cache_dir=...` from the metric. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/728/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/727/comments | https://api.github.com/repos/huggingface/datasets/issues/727/events | https://github.com/huggingface/datasets/issues/727 | 719,386,366 | MDU6SXNzdWU3MTkzODYzNjY= | 727 | Parallel downloads progress bar flickers | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,602,509,765,000 | 1,602,509,765,000 | null | MEMBER | null | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that, we could simply specify `position=i` (for i = 0 to n-1, where n is the number of files to download) when instantiating each tqdm progress bar.
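A minimal standalone sketch of that first option (a plain tqdm demo, not the download manager code):
```python
import time
from tqdm.auto import tqdm

files = ["file_a.zip", "file_b.zip", "file_c.zip"]  # stand-ins for the real downloads
# `position=i` pins each bar to its own terminal line so concurrent updates don't collide
bars = [tqdm(total=100, desc=name, position=i) for i, name in enumerate(files)]
for _ in range(100):
    for bar in bars:
        bar.update(1)
    time.sleep(0.01)
for bar in bars:
    bar.close()
```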
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows its current download. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/727/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/726/comments | https://api.github.com/repos/huggingface/datasets/issues/726/events | https://github.com/huggingface/datasets/issues/726 | 719,313,754 | MDU6SXNzdWU3MTkzMTM3NTQ= | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | {
"login": "SparkJiao",
"id": 16469472,
"node_id": "MDQ6VXNlcjE2NDY5NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SparkJiao",
"html_url": "https://github.com/SparkJiao",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions",
"organizations_url": "https://api.github.com/users/SparkJiao/orgs",
"repos_url": "https://api.github.com/users/SparkJiao/repos",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/SparkJiao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to re... | 1,602,503,110,000 | 1,633,830,741,000 | null | NONE | null | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused by a change in the released dataset. Or should I download the dataset manually?
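In the meantime, a possible workaround (assuming the downloaded archive itself is fine and only the recorded checksum is stale) is to skip the verification step that raises the error:
```python
from datasets import load_dataset

# bypasses the checksum/size checks that raise NonMatchingChecksumError
dataset = load_dataset("openwebtext", ignore_verifications=True)
```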
Sorry for releasing the unfinished issue by mistake. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/726/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/725/comments | https://api.github.com/repos/huggingface/datasets/issues/725/events | https://github.com/huggingface/datasets/pull/725 | 718,985,641 | MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1 | 725 | pretty print dataset objects | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Great, as you found it useful I improved the code a bit to automate indentation in the parent class, so that the child repr doesn't need to guess the indentation level, while repr'ing nicely on its own.\r\n\r\n- do we want indent=4 or 2?\r\n- do we want `{` ... `}` or w/o?\r\n\r\ncurrently it's indent4 and w/ curl... | 1,602,468,226,000 | 1,603,470,275,000 | 1,603,443,646,000 | CONTRIBUTOR | null | Currently, if I do:
```
from datasets import load_dataset
load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/")
```
I get:
```
DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),
'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',
id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text':
Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test':
Dataset(features: {'text': Value(dtype='string', id=None), 'headline':
Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)},
num_rows: 5577)})
```
This is not very readable.
Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object?
Here is my very simple attempt. With this PR, it produces:
```
DatasetDict({
train: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 157252
})
validation: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5599
})
test: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5577
})
})
```
I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too.
Note that this PR also fixes an inconsistency in the output: on master, the enclosing `{}` is missing for `Dataset` but present for `DatasetDict` - or perhaps that was by design.
I'm not at all attached to this format; I just want something more readable. One approach could be to serialize with `json.dumps` or something similar, which would make the indentation simpler.
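For illustration, a minimal standalone sketch of the nested layout (a toy helper, not the code in this PR):
```python
def pretty_repr(name, features, num_rows, indent=4):
    pad = " " * indent
    return f"{name}({{\n{pad}features: {features},\n{pad}num_rows: {num_rows}\n}})"

print(pretty_repr("Dataset", ["text", "headline", "title"], 157252))
# Dataset({
#     features: ['text', 'headline', 'title'],
#     num_rows: 157252
# })
```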
Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/725/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/725",
"html_url": "https://github.com/huggingface/datasets/pull/725",
"diff_url": "https://github.com/huggingface/datasets/pull/725.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/725.patch",
"merged_at": 1603443646000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Should be fixed now: \r\n\r\n\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* htt... | 1,602,457,932,000 | 1,602,694,812,000 | 1,602,694,812,000 | CONTRIBUTOR | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked: the old page was nicely formatted with the links marked up, while the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/723/comments | https://api.github.com/repos/huggingface/datasets/issues/723/events | https://github.com/huggingface/datasets/issues/723 | 718,926,723 | MDU6SXNzdWU3MTg5MjY3MjM= | 723 | Adding pseudo-labels to datasets | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api... | null | [
"Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n",
"They can be used as training data for a smaller model.",
"Sounds just like a regular dataset to me then, no?",
... | 1,602,450,345,000 | 1,627,967,511,000 | 1,627,967,511,000 | CONTRIBUTOR | null | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory such as `xsum_bart_pseudolabels` for each set of pseudo-labels, or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
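To make the second option concrete, here is a rough sketch of what such a parametrization could look like (hypothetical config class and field names):
```python
import datasets

class XsumConfig(datasets.BuilderConfig):
    """Hypothetical config that can swap gold summaries for model generations."""

    def __init__(self, pseudo_labels_url=None, **kwargs):
        super().__init__(**kwargs)
        self.pseudo_labels_url = pseudo_labels_url

# load_dataset("xsum", "bart-pseudolabels") could then pick a config whose
# pseudo_labels_url points at the generations on s3
```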
What do you think @lhoestq ?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/723/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/722/comments | https://api.github.com/repos/huggingface/datasets/issues/722/events | https://github.com/huggingface/datasets/pull/722 | 718,689,117 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This might be interesting to @kayoyin the author of https://github.com/kayoyin/transformer-slt – pinging you just in case :)",
"Thanks Amit, this is a great idea! I'm thinking of porting the SLT models from my paper here as well, having this dataset would be perfect for that :)"
] | 1,602,359,048,000 | 1,609,830,411,000 | null | CONTRIBUTOR | null | This is the first sign language dataset in this repo as far as I know.
This follows up on an old issue I opened: https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/722/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/722",
"html_url": "https://github.com/huggingface/datasets/pull/722",
"diff_url": "https://github.com/huggingface/datasets/pull/722.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/722.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/721/comments | https://api.github.com/repos/huggingface/datasets/issues/721/events | https://github.com/huggingface/datasets/issues/721 | 718,647,147 | MDU6SXNzdWU3MTg2NDcxNDc= | 721 | feat(dl_manager): add support for ftp downloads | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the ... | 1,602,345,020,000 | 1,603,531,473,000 | null | CONTRIBUTOR | null | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
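For reference, Python's standard library can already fetch `ftp://` URLs, so the extra work may mostly be routing such URLs to something like this (a sketch, not the dl_manager internals):
```python
import shutil
import urllib.request

url = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
# urllib ships with an FTP handler, so ftp:// URLs open much like http:// ones
with urllib.request.urlopen(url) as response, open("phoenix-2014-T.v3.tar.gz", "wb") as out_file:
    shutil.copyfileobj(response, out_file)
```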
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/721/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/720/comments | https://api.github.com/repos/huggingface/datasets/issues/720/events | https://github.com/huggingface/datasets/issues/720 | 716,581,266 | MDU6SXNzdWU3MTY1ODEyNjY= | 720 | OSError: Cannot find data file when not using the dummy dataset in RAG | {
"login": "josemlopez",
"id": 4112135,
"node_id": "MDQ6VXNlcjQxMTIxMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josemlopez",
"html_url": "https://github.com/josemlopez",
"followers_url": "https://api.github.com/users/josemlopez/followers",
"following_url": "https://api.github.com/users/josemlopez/following{/other_user}",
"gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions",
"organizations_url": "https://api.github.com/users/josemlopez/orgs",
"repos_url": "https://api.github.com/users/josemlopez/repos",
"events_url": "https://api.github.com/users/josemlopez/events{/privacy}",
"received_events_url": "https://api.github.com/users/josemlopez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n... | 1,602,080,833,000 | 1,608,732,271,000 | 1,608,732,271,000 | NONE | null | ## Environment info
- transformers version: 3.3.1
- Platform: Linux-4.19
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things) this is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
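As a stopgap, the initial `UnpicklingError: pickle data was truncated` suggests the vectors file was only partially downloaded; deleting the partial file (path taken from the traceback above) and re-running should trigger a fresh download:
```python
import os

partial = (
    "/workspace/notebooks/POCs/cache/downloads/"
    "f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448"
)
if os.path.exists(partial):
    os.remove(partial)  # also remove the matching .lock file if one exists
```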
Thanks
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/720/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/719/comments | https://api.github.com/repos/huggingface/datasets/issues/719/events | https://github.com/huggingface/datasets/pull/719 | 716,492,263 | MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2 | 719 | Fix train_test_split output format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,074,341,000 | 1,602,077,888,000 | 1,602,077,886,000 | MEMBER | null | There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split.
This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split).
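Concretely (illustrative values):
```python
# a Dataset exposes a flat list of column names...
dataset_column_names = ["text", "label"]

# ...while the DatasetDict returned by train_test_split exposes one list per split,
# which is the case the wrapper mishandled
datasetdict_column_names = {
    "train": ["text", "label"],
    "test": ["text", "label"],
}
```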
This should fix @timothyjlaurent's issue in #620 and fix #676
I added tests for `transmit_format` so that it doesn't happen again | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/719/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/719",
"html_url": "https://github.com/huggingface/datasets/pull/719",
"diff_url": "https://github.com/huggingface/datasets/pull/719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/719.patch",
"merged_at": 1602077886000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/718/comments | https://api.github.com/repos/huggingface/datasets/issues/718/events | https://github.com/huggingface/datasets/pull/718 | 715,694,709 | MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw | 718 | Don't use tqdm 4.50.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,991,953,000 | 1,601,992,164,000 | 1,601,992,162,000 | MEMBER | null | tqdm 4.50.0 introduced permission errors on Windows
see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details.
For now I just added `<4.50.0` in the setup.py
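For reference, the pin looks roughly like this in `setup.py` (illustrative excerpt; the variable name and surrounding requirements are assumptions):
```python
REQUIRED_PKGS = [
    # tqdm 4.50.0 raises PermissionError on Windows, so cap below it for now
    "tqdm>=4.27,<4.50.0",
]
```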
Hopefully we can find what's wrong with this version soon | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/718",
"html_url": "https://github.com/huggingface/datasets/pull/718",
"diff_url": "https://github.com/huggingface/datasets/pull/718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/718.patch",
"merged_at": 1601992162000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/717/comments | https://api.github.com/repos/huggingface/datasets/issues/717/events | https://github.com/huggingface/datasets/pull/717 | 714,959,268 | MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2 | 717 | Fixes #712 Error in the Overview.ipynb notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"repos_url": "https://api.github.com/users/subhrm/repos",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,913,041,000 | 1,601,965,903,000 | 1,601,915,141,000 | CONTRIBUTOR | null | Fixes #712 Error in the Overview.ipynb notebook by adding the `with_details=True` parameter to the `list_datasets` call in cell 3 of the **Overview** notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/717",
"html_url": "https://github.com/huggingface/datasets/pull/717",
"diff_url": "https://github.com/huggingface/datasets/pull/717.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/717.patch",
"merged_at": 1601915140000
} | true |
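As a usage note for this fix, here is a hedged sketch of the repaired notebook cell. It assumes, as the PR does, that `list_datasets()` returns plain name strings while `list_datasets(with_details=True)` returns metadata objects in the same order:

```python
from pprint import pprint
from datasets import list_datasets

datasets = list_datasets()                    # plain dataset names (str)
detailed = list_datasets(with_details=True)   # objects carrying metadata

# Index the detailed list by the position of "squad" in the name list.
squad_dataset = detailed[datasets.index("squad")]
pprint(squad_dataset.__dict__)  # a simple python dataclass
```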
https://api.github.com/repos/huggingface/datasets/issues/716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/716/comments | https://api.github.com/repos/huggingface/datasets/issues/716/events | https://github.com/huggingface/datasets/pull/716 | 714,952,888 | MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw | 716 | Fixes #712 Attribute error in cell 3 of the overview notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"repos_url": "https://api.github.com/users/subhrm/repos",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Referencing the wrong issue # in the commit message. Closing this to fix it again."
] | 1,601,912,529,000 | 1,601,912,798,000 | 1,601,912,792,000 | CONTRIBUTOR | null | Fixes the Attribute error in cell 3 of the overview notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/716",
"html_url": "https://github.com/huggingface/datasets/pull/716",
"diff_url": "https://github.com/huggingface/datasets/pull/716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/716.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/715/comments | https://api.github.com/repos/huggingface/datasets/issues/715/events | https://github.com/huggingface/datasets/pull/715 | 714,690,192 | MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2 | 715 | Use python read for text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"One thing though, could we try to read the files in parallel?",
"We could but I'm not sure this would help a lot since the bottleneck is the drive IO if the files are big enough.\r\nIt could make sense for very small files.",
"Looks like windows is not a big fan of this approach\r\nI'm working on a fix",
"I ... | 1,601,891,275,000 | 1,601,903,598,000 | 1,601,903,597,000 | MEMBER | null | As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file.
Instead I switched to pure python using `open` and `read`.
From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/715/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/715/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/715",
"html_url": "https://github.com/huggingface/datasets/pull/715",
"diff_url": "https://github.com/huggingface/datasets/pull/715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/715.patch",
"merged_at": 1601903596000
} | true |
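A rough sketch of the pure-python reading strategy this PR describes (not the PR's actual code). Opening the file in text mode gives universal-newline handling, so `\r`, `\n`, and `\r\n` are all normalized before splitting:

```python
def iter_text_examples(path, encoding="utf-8"):
    # Text mode normalizes \r and \r\n to \n (universal newlines),
    # which sidesteps the tokenizer problem reported in #622.
    with open(path, encoding=encoding) as f:
        for line in f.read().splitlines():
            yield {"text": line}

# Hypothetical usage with an arbitrary file name:
examples = list(iter_text_examples("corpus.txt"))
```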
https://api.github.com/repos/huggingface/datasets/issues/714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/714/comments | https://api.github.com/repos/huggingface/datasets/issues/714/events | https://github.com/huggingface/datasets/pull/714 | 714,487,881 | MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx | 714 | Add the official dependabot implementation | {
"login": "ALazyMeme",
"id": 12804673,
"node_id": "MDQ6VXNlcjEyODA0Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALazyMeme",
"html_url": "https://github.com/ALazyMeme",
"followers_url": "https://api.github.com/users/ALazyMeme/followers",
"following_url": "https://api.github.com/users/ALazyMeme/following{/other_user}",
"gists_url": "https://api.github.com/users/ALazyMeme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ALazyMeme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ALazyMeme/subscriptions",
"organizations_url": "https://api.github.com/users/ALazyMeme/orgs",
"repos_url": "https://api.github.com/users/ALazyMeme/repos",
"events_url": "https://api.github.com/users/ALazyMeme/events{/privacy}",
"received_events_url": "https://api.github.com/users/ALazyMeme/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,869,785,000 | 1,602,503,361,000 | 1,602,503,361,000 | NONE | null | This will keep dependencies up to date. It requires a PR label named `dependencies` to be created in order to function correctly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/714/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/714",
"html_url": "https://github.com/huggingface/datasets/pull/714",
"diff_url": "https://github.com/huggingface/datasets/pull/714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/714.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/713/comments | https://api.github.com/repos/huggingface/datasets/issues/713/events | https://github.com/huggingface/datasets/pull/713 | 714,475,732 | MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy | 713 | Fix reading text files with carriage return symbols | {
"login": "mozharovsky",
"id": 6762769,
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozharovsky",
"html_url": "https://github.com/mozharovsky",
"followers_url": "https://api.github.com/users/mozharovsky/followers",
"following_url": "https://api.github.com/users/mozharovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions",
"organizations_url": "https://api.github.com/users/mozharovsky/orgs",
"repos_url": "https://api.github.com/users/mozharovsky/repos",
"events_url": "https://api.github.com/users/mozharovsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/mozharovsky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Discussed in #622, fixed in #715. Closing the issue. Thanks @lhoestq, it works now! 👍 "
] | 1,601,867,223,000 | 1,602,223,105,000 | 1,601,905,769,000 | NONE | null | The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`).
It fails with the following error message:
```
...
File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
```
___
I figured out that pandas uses those symbols as line terminators, and this eventually causes the error. Explicitly specifying the `lineterminator` fixes that issue and everything works fine.
Please consider this PR, as this seems to be a common issue worth solving.
"url": "https://api.github.com/repos/huggingface/datasets/issues/713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/713/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/713",
"html_url": "https://github.com/huggingface/datasets/pull/713",
"diff_url": "https://github.com/huggingface/datasets/pull/713.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/713.patch",
"merged_at": null
} | true |
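To make the `lineterminator` workaround from this PR concrete, here is a small self-contained sketch. It assumes a single-column input with no delimiter collisions; the real patch may differ in detail:

```python
import io
import pandas as pd

raw = "first\rstill first\nsecond\n"  # a bare \r embedded in the first record

# With lineterminator pinned to "\n", the C tokenizer no longer treats
# the stray \r as a row break, so the first record stays intact.
df = pd.read_csv(io.StringIO(raw), names=["text"], header=None, lineterminator="\n")
print(df["text"].tolist())  # expected: ['first\rstill first', 'second']
```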
https://api.github.com/repos/huggingface/datasets/issues/712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/712/comments | https://api.github.com/repos/huggingface/datasets/issues/712/events | https://github.com/huggingface/datasets/issues/712 | 714,242,316 | MDU6SXNzdWU3MTQyNDIzMTY= | 712 | Error in the notebooks/Overview.ipynb notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"repos_url": "https://api.github.com/users/subhrm/repos",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```",
"Thanks! This worked. I have created a PR to fix this in the notebook. "
] | 1,601,791,111,000 | 1,601,915,140,000 | 1,601,915,140,000 | CONTRIBUTOR | null | Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in Google Colab. I used the [link](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in Colab.
```python
# You can access various attributes of the datasets before downloading them
squad_dataset = list_datasets()[datasets.index('squad')]
pprint(squad_dataset.__dict__) # It's a simple python dataclass
```
Error message
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-8dc805c4949c> in <module>()
2 squad_dataset = list_datasets()[datasets.index('squad')]
3
----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass
AttributeError: 'str' object has no attribute '__dict__'
```
The object `squad_dataset` is a `str`, not a `dataclass`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/712/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/710/comments | https://api.github.com/repos/huggingface/datasets/issues/710/events | https://github.com/huggingface/datasets/pull/710 | 714,186,999 | MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0 | 710 | fix README typos/ consistency | {
"login": "discdiver",
"id": 7703961,
"node_id": "MDQ6VXNlcjc3MDM5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7703961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/discdiver",
"html_url": "https://github.com/discdiver",
"followers_url": "https://api.github.com/users/discdiver/followers",
"following_url": "https://api.github.com/users/discdiver/following{/other_user}",
"gists_url": "https://api.github.com/users/discdiver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/discdiver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/discdiver/subscriptions",
"organizations_url": "https://api.github.com/users/discdiver/orgs",
"repos_url": "https://api.github.com/users/discdiver/repos",
"events_url": "https://api.github.com/users/discdiver/events{/privacy}",
"received_events_url": "https://api.github.com/users/discdiver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,763,656,000 | 1,602,928,365,000 | 1,602,928,365,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/710/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/710",
"html_url": "https://github.com/huggingface/datasets/pull/710",
"diff_url": "https://github.com/huggingface/datasets/pull/710.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/710.patch",
"merged_at": 1602928365000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/709/comments | https://api.github.com/repos/huggingface/datasets/issues/709/events | https://github.com/huggingface/datasets/issues/709 | 714,067,902 | MDU6SXNzdWU3MTQwNjc5MDI= | 709 | How to use similarity settings other than "BM25" in an Elasticsearch index? | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration p... | 1,601,723,929,000 | 1,626,634,975,000 | null | NONE | null | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context:**
========
I used the latest Elasticsearch server version 7.9.2
When I set DFR, one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, here is the DFR setting I tried first in the mappings:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
As another option, I tried declaring a `"my_similarity"` entry within the settings and then assigning `"my_similarity"` inside the mappings, as below:
```
es_config = {
    "settings": {
        "number_of_shards": 1,
        "similarity": "my_similarity": {
            "type": "DFR",
            "basic_model": "g",
            "after_effect": "l",
            "normalization": "h2",
            "normalization.h2.c": "3.0"
        },
        "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}},
    },
    "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}
```
For this, I got the following error:
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/709/timeline | null | null | null | false |
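Following up on the (truncated) maintainer reply above, here is a hedged sketch of the usual way a custom DFR similarity is declared: nested as an object under `settings.similarity` rather than assigned as a flat string. It uses the `elasticsearch` Python client; the host, index name, and field layout are assumptions for illustration:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local instance

es_config = {
    "settings": {
        "number_of_shards": 1,
        # Custom similarities live in an object keyed by their name.
        "similarity": {
            "my_similarity": {
                "type": "DFR",
                "basic_model": "g",
                "after_effect": "l",
                "normalization": "h2",
                "normalization.h2.c": "3.0",
            }
        },
    },
    "mappings": {
        "properties": {
            "text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}
        }
    },
}

# Custom similarities must exist at index-creation time, so create the
# index up front and point the datasets search helpers at it afterwards.
es.indices.create(index="my_text_index", body=es_config)
```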