Column schema for the issue/PR records below (each record lists these fields in order):

| column | dtype | values |
|---|---|---|
| id | int64 | 599M to 3.26B |
| number | int64 | 1 to 7.7k |
| title | string | lengths 1 to 290 |
| body | string | lengths 0 to 228k, nullable |
| state | string | 2 classes |
| html_url | string | lengths 46 to 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-07-23 16:44:42, nullable |
| user | dict | |
| labels | list | lengths 0 to 4 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 to 0 |
760,867,325
| 1,435
|
Add FreebaseQA dataset
|
This PR adds the FreebaseQA dataset: A Trivia-type QA Data Set over the Freebase Knowledge Graph
Repo: https://github.com/kelvin-jiang/FreebaseQA
Paper: https://www.aclweb.org/anthology/N19-1028.pdf
## TODO: create dummy data
Error encountered when running `python datasets-cli dummy_data datasets/freebase_qa --auto_generate`
```
f"Couldn't parse columns {list(json_data.keys())}. "
ValueError: Couldn't parse columns ['Dataset', 'Version', 'Questions']. Maybe specify which json field must be used to read the data with --json_field <my_field>.
```
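Per the hint in the error itself, the likely fix is to point the CLI at the right JSON field (assuming the QA records live under the `Questions` key):
```shell
python datasets-cli dummy_data datasets/freebase_qa --auto_generate --json_field Questions
```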
|
closed
|
https://github.com/huggingface/datasets/pull/1435
| 2020-12-10T04:03:27
| 2021-02-05T09:47:30
| 2021-02-05T09:47:30
|
{
"login": "anaerobeth",
"id": 3663322,
"type": "User"
}
|
[] | true
|
[] |
760,821,474
| 1,434
|
add_sofc_materials_articles
|
adding [SOFC-Exp Corpus](https://arxiv.org/abs/2006.03039)
|
closed
|
https://github.com/huggingface/datasets/pull/1434
| 2020-12-10T02:15:02
| 2020-12-17T09:59:54
| 2020-12-17T09:59:54
|
{
"login": "ZacharySBrown",
"id": 7950786,
"type": "User"
}
|
[] | true
|
[] |
760,813,539
| 1,433
|
Adding the ASSIN 2 dataset
|
Adding the ASSIN 2 dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
|
closed
|
https://github.com/huggingface/datasets/pull/1433
| 2020-12-10T01:57:02
| 2020-12-11T14:32:56
| 2020-12-11T14:32:56
|
{
"login": "jonatasgrosman",
"id": 5097052,
"type": "User"
}
|
[] | true
|
[] |
760,808,449
| 1,432
|
Adding journalists questions dataset
|
This is my first dataset to be added to HF.
|
closed
|
https://github.com/huggingface/datasets/pull/1432
| 2020-12-10T01:44:47
| 2020-12-14T13:51:05
| 2020-12-14T13:51:04
|
{
"login": "MaramHasanain",
"id": 3918663,
"type": "User"
}
|
[] | true
|
[] |
760,791,019
| 1,431
|
Ar cov19
|
Adding the ArCOV-19 dataset. ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27 January to 30 April 2020. It is the first publicly available Arabic Twitter dataset covering the COVID-19 pandemic, and includes over 1M tweets alongside the propagation networks of the most popular subset of them (i.e., the most retweeted and liked). The propagation networks include both retweets and conversational threads (i.e., threads of replies). ArCOV-19 is designed to enable research in several domains, including natural language processing, information retrieval, and social computing.
|
closed
|
https://github.com/huggingface/datasets/pull/1431
| 2020-12-10T00:59:34
| 2020-12-11T15:01:23
| 2020-12-11T15:01:23
|
{
"login": "Fatima-Haouari",
"id": 71061623,
"type": "User"
}
|
[] | true
|
[] |
760,779,666
| 1,430
|
Add 1.5 billion words Arabic corpus
|
Needs https://github.com/huggingface/datasets/pull/1429 to work.
|
closed
|
https://github.com/huggingface/datasets/pull/1430
| 2020-12-10T00:32:18
| 2020-12-22T10:03:59
| 2020-12-22T10:03:59
|
{
"login": "zaidalyafeai",
"id": 15667714,
"type": "User"
}
|
[] | true
|
[] |
760,737,818
| 1,429
|
extract rar files
|
Unfortunately, I didn't find any native Python libraries for extracting rar files. The user has to install `unrar` manually (e.g., `sudo apt-get install unrar`). Discussion with @yjernite is in the Slack channel.
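For reference, a minimal sketch of how a loading script can shell out to the installed binary (the helper name is mine, not part of this PR):
```python
import subprocess

def extract_rar(rar_path: str, output_dir: str) -> None:
    """Extract a .rar archive via the external `unrar` binary."""
    # `x` extracts with full paths; `-o+` overwrites existing files.
    subprocess.run(["unrar", "x", "-o+", rar_path, output_dir], check=True)
```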
|
closed
|
https://github.com/huggingface/datasets/pull/1429
| 2020-12-09T23:01:10
| 2020-12-18T15:03:37
| 2020-12-18T15:03:37
|
{
"login": "zaidalyafeai",
"id": 15667714,
"type": "User"
}
|
[] | true
|
[] |
760,736,726
| 1,428
|
Add twi wordsim353
|
Add twi WordSim 353
|
closed
|
https://github.com/huggingface/datasets/pull/1428
| 2020-12-09T22:59:19
| 2020-12-11T13:57:32
| 2020-12-11T13:57:32
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[] | true
|
[] |
760,736,703
| 1,427
|
Hebrew project BenYehuda
|
Added Hebrew corpus from https://github.com/projectbenyehuda/public_domain_dump
|
closed
|
https://github.com/huggingface/datasets/pull/1427
| 2020-12-09T22:59:17
| 2020-12-11T17:39:23
| 2020-12-11T17:39:23
|
{
"login": "imvladikon",
"id": 10088963,
"type": "User"
}
|
[] | true
|
[] |
760,735,763
| 1,426
|
init commit for MultiReQA for third PR with all issues fixed
|
Third PR, following PR #1349, with all the issues fixed, since #1349 had uploaded other files along with the multi_re_qa dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/1426
| 2020-12-09T22:57:41
| 2020-12-11T13:37:08
| 2020-12-11T13:37:08
|
{
"login": "Karthik-Bhaskar",
"id": 13200370,
"type": "User"
}
|
[] | true
|
[] |
760,733,638
| 1,425
|
Add german common crawl dataset
|
Adding a subpart of the Common Crawl which was extracted with this repo https://github.com/facebookresearch/cc_net and additionally filtered for duplicates
|
closed
|
https://github.com/huggingface/datasets/pull/1425
| 2020-12-09T22:54:12
| 2022-10-03T09:39:02
| 2022-10-03T09:39:02
|
{
"login": "Phil1108",
"id": 39518904,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
760,724,914
| 1,424
|
Add yoruba wordsim353
|
Added WordSim-353 evaluation dataset for Yoruba
|
closed
|
https://github.com/huggingface/datasets/pull/1424
| 2020-12-09T22:37:42
| 2020-12-09T22:39:45
| 2020-12-09T22:39:45
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[] | true
|
[] |
760,712,421
| 1,423
|
Imppres
|
2nd PR ever! Hopefully I'm starting to get the hang of this. This is for the IMPPRES dataset. Please let me know of any corrections or changes that need to be made.
|
closed
|
https://github.com/huggingface/datasets/pull/1423
| 2020-12-09T22:14:12
| 2020-12-17T18:27:14
| 2020-12-17T18:27:14
|
{
"login": "aclifton314",
"id": 53267795,
"type": "User"
}
|
[] | true
|
[] |
760,707,113
| 1,422
|
Can't map dataset (loaded from csv)
|
Hello! I am trying to load a single csv file with two columns: ('label': str, 'text': str), where label is a str of two possible classes.
The steps below are similar to [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where a BERT model and tokenizer are used to classify the IMDB dataset. The only difference is that the dataset is loaded from a .csv file.
Here is how I load it:
```python
import pandas as pd
from datasets import ClassLabel, Dataset, Features, Value

data_path = 'data.csv'
data = pd.read_csv(data_path)
# process class names into indices
classes = ['neg', 'pos']
class_to_idx = {cl: i for i, cl in enumerate(classes)}
# now data is like {'label': int, 'text': str}
data['label'] = data['label'].apply(lambda x: class_to_idx[x])
# load the dataset and map it with the defined `tokenize` function
target, feature = 'label', 'text'
features = Features({
    target: ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None),
    feature: Value(dtype='string', id=None),
})
dataset = Dataset.from_pandas(data, features=features)
dataset.map(tokenize, batched=True, batch_size=len(dataset))
```
It fails on the last line with the following error:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-112-32b6275ce418> in <module>()
9 })
10 dataset = Dataset.from_pandas(data, features=features)
---> 11 dataset.map(tokenizer, batched=True, batch_size=len(dataset))
2 frames
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1237 test_inputs = self[:2] if batched else self[0]
1238 test_indices = [0, 1] if batched else 0
-> 1239 update_data = does_function_return_dict(test_inputs, test_indices)
1240 logger.info("Testing finished, running the mapping function on the dataset")
1241
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1208 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1209 processed_inputs = (
-> 1210 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1211 )
1212 does_return_dict = isinstance(processed_inputs, Mapping)
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2281 )
2282 ), (
-> 2283 "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
2284 "or `List[List[str]]` (batch of pretokenized examples)."
2285 )
AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```
which I think is not expected. I also tried the same steps using `Dataset.from_csv`, which resulted in the same error.
To reproduce this, I used [this dataset from Kaggle](https://www.kaggle.com/team-ai/spam-text-message-classification)
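Possibly relevant: the traceback shows `dataset.map(tokenizer, ...)` while the snippet calls `dataset.map(tokenize, ...)`. Since `map` feeds a dict of columns to its function, passing the tokenizer directly would trip exactly this `str`/`List[str]` assertion. A minimal sketch of a mapping function that avoids it (assuming the text column is named `text`):
```python
def tokenize(batch):
    # `batch` is a dict of lists, e.g. {"label": [...], "text": [...]};
    # hand only the text column to the tokenizer.
    return tokenizer(batch["text"], padding=True, truncation=True)

dataset = dataset.map(tokenize, batched=True, batch_size=len(dataset))
```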
|
closed
|
https://github.com/huggingface/datasets/issues/1422
| 2020-12-09T22:05:42
| 2020-12-17T18:13:40
| 2020-12-17T18:13:40
|
{
"login": "SolomidHero",
"id": 28161779,
"type": "User"
}
|
[] | false
|
[] |
760,706,851
| 1,421
|
adding fake-news-english-2
|
closed
|
https://github.com/huggingface/datasets/pull/1421
| 2020-12-09T22:05:13
| 2020-12-13T00:48:49
| 2020-12-13T00:48:49
|
{
"login": "MisbahKhan789",
"id": 15351802,
"type": "User"
}
|
[] | true
|
[] |
|
760,700,388
| 1,420
|
Add dataset yoruba_wordsim353
|
Contains loading script as well as dataset card including YAML tags.
|
closed
|
https://github.com/huggingface/datasets/pull/1420
| 2020-12-09T21:54:29
| 2020-12-11T13:34:04
| 2020-12-11T13:34:04
|
{
"login": "michael-aloys",
"id": 1858628,
"type": "User"
}
|
[] | true
|
[] |
760,673,716
| 1,419
|
Add Turkish News Category Dataset (270K)
|
This PR adds the Turkish News Category Dataset (270K), a text classification dataset by me and @yavuzKomecoglu. It is a Turkish news dataset consisting of **273,601 news articles** in **17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https://www.interpress.com/) media monitoring company.
|
closed
|
https://github.com/huggingface/datasets/pull/1419
| 2020-12-09T21:08:33
| 2020-12-11T14:02:31
| 2020-12-11T14:02:31
|
{
"login": "basakbuluz",
"id": 41359672,
"type": "User"
}
|
[] | true
|
[] |
760,672,320
| 1,418
|
Add arabic dialects
|
Data loading script and dataset card for Dialectal Arabic Resources dataset.
Fixed git issues from PR #976
|
closed
|
https://github.com/huggingface/datasets/pull/1418
| 2020-12-09T21:06:07
| 2020-12-17T09:40:56
| 2020-12-17T09:40:56
|
{
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
}
|
[] | true
|
[] |
760,660,918
| 1,417
|
WIP: Vinay/add peer read dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1417
| 2020-12-09T20:49:52
| 2020-12-11T18:43:31
| 2020-12-11T18:43:31
|
{
"login": "vinaykudari",
"id": 34424769,
"type": "User"
}
|
[] | true
|
[] |
|
760,653,971
| 1,416
|
Add Shrinked Turkish NER from Kaggle.
|
Add Shrinked Turkish NER from [Kaggle](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar).
|
closed
|
https://github.com/huggingface/datasets/pull/1416
| 2020-12-09T20:38:35
| 2020-12-11T11:23:31
| 2020-12-11T11:23:31
|
{
"login": "bhctsntrk",
"id": 22636672,
"type": "User"
}
|
[] | true
|
[] |
760,642,786
| 1,415
|
Add Hate Speech and Offensive Language Detection dataset
|
Add [Hate Speech and Offensive Language Detection dataset](https://github.com/t-davidson/hate-speech-and-offensive-language) from [this paper](https://arxiv.org/abs/1703.04009).
|
closed
|
https://github.com/huggingface/datasets/pull/1415
| 2020-12-09T20:22:12
| 2020-12-14T18:06:44
| 2020-12-14T16:25:31
|
{
"login": "hugoabonizio",
"id": 1206395,
"type": "User"
}
|
[] | true
|
[] |
760,622,133
| 1,414
|
Adding BioCreative II Gene Mention corpus
|
Adding BioCreative II Gene Mention corpus
|
closed
|
https://github.com/huggingface/datasets/pull/1414
| 2020-12-09T19:49:28
| 2020-12-11T11:17:40
| 2020-12-11T11:17:40
|
{
"login": "mahajandiwakar",
"id": 10516432,
"type": "User"
}
|
[] | true
|
[] |
760,615,090
| 1,413
|
Add OffComBR
|
Add [OffComBR](https://github.com/rogersdepelle/OffComBR) from [Offensive Comments in the Brazilian Web: a dataset and baseline results](https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222) paper.
But I'm having a hard time generating dummy data since the original dataset extension is `.arff`, which the [_create_dummy_data function](https://github.com/huggingface/datasets/blob/a4aeaf911240057286a01bff1b1d75a89aedd57b/src/datasets/commands/dummy_data.py#L185) doesn't support.
|
closed
|
https://github.com/huggingface/datasets/pull/1413
| 2020-12-09T19:38:08
| 2020-12-14T18:06:45
| 2020-12-14T16:51:10
|
{
"login": "hugoabonizio",
"id": 1206395,
"type": "User"
}
|
[] | true
|
[] |
760,607,959
| 1,412
|
Adding the ASSIN dataset
|
Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
|
closed
|
https://github.com/huggingface/datasets/pull/1412
| 2020-12-09T19:27:06
| 2020-12-11T10:41:10
| 2020-12-11T10:41:10
|
{
"login": "jonatasgrosman",
"id": 5097052,
"type": "User"
}
|
[] | true
|
[] |
760,606,290
| 1,411
|
2 typos
|
Corrected 2 typos
|
closed
|
https://github.com/huggingface/datasets/pull/1411
| 2020-12-09T19:24:34
| 2020-12-11T10:39:05
| 2020-12-11T10:39:05
|
{
"login": "dezow",
"id": 47401160,
"type": "User"
}
|
[] | true
|
[] |
760,597,092
| 1,410
|
Add penn treebank dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1410
| 2020-12-09T19:11:33
| 2020-12-16T09:38:23
| 2020-12-16T09:38:23
|
{
"login": "harshalmittal4",
"id": 24206326,
"type": "User"
}
|
[] | true
|
[] |
|
760,593,932
| 1,409
|
Adding the ASSIN dataset
|
Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
|
closed
|
https://github.com/huggingface/datasets/pull/1409
| 2020-12-09T19:07:00
| 2020-12-09T19:18:12
| 2020-12-09T19:15:52
|
{
"login": "jonatasgrosman",
"id": 5097052,
"type": "User"
}
|
[] | true
|
[] |
760,590,589
| 1,408
|
adding fake-news-english
|
closed
|
https://github.com/huggingface/datasets/pull/1408
| 2020-12-09T19:02:07
| 2020-12-13T00:49:19
| 2020-12-13T00:49:19
|
{
"login": "MisbahKhan789",
"id": 15351802,
"type": "User"
}
|
[] | true
|
[] |
|
760,581,756
| 1,407
|
Add Tweet Eval Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1407
| 2020-12-09T18:48:57
| 2023-09-24T09:52:03
| 2021-02-26T08:54:04
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,581,330
| 1,406
|
Add Portuguese Hate Speech dataset
|
Binary Portuguese Hate Speech dataset from [this paper](https://www.aclweb.org/anthology/W19-3510/).
|
closed
|
https://github.com/huggingface/datasets/pull/1406
| 2020-12-09T18:48:16
| 2020-12-14T18:06:42
| 2020-12-14T16:22:20
|
{
"login": "hugoabonizio",
"id": 1206395,
"type": "User"
}
|
[] | true
|
[] |
760,578,035
| 1,405
|
Adding TaPaCo Dataset with README.md
|
closed
|
https://github.com/huggingface/datasets/pull/1405
| 2020-12-09T18:42:58
| 2020-12-13T19:11:18
| 2020-12-13T19:11:18
|
{
"login": "pacman100",
"id": 13534540,
"type": "User"
}
|
[] | true
|
[] |
|
760,575,473
| 1,404
|
Add Acronym Identification Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1404
| 2020-12-09T18:38:54
| 2020-12-14T13:12:01
| 2020-12-14T13:12:00
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,571,419
| 1,403
|
Add dataset clickbait_news_bg
|
Adding a new dataset - clickbait_news_bg
|
closed
|
https://github.com/huggingface/datasets/pull/1403
| 2020-12-09T18:32:12
| 2020-12-10T09:16:44
| 2020-12-10T09:16:43
|
{
"login": "tsvm",
"id": 1083319,
"type": "User"
}
|
[] | true
|
[] |
760,538,325
| 1,402
|
adding covid-tweets-japanese (again)
|
I mistakenly used git rebase and was in a hurry to fix it. However, I didn't fully consider the use of git reset, so I unintentionally closed PR #1367 altogether. Sorry about that.
I'll make a new PR.
|
closed
|
https://github.com/huggingface/datasets/pull/1402
| 2020-12-09T17:46:46
| 2020-12-13T17:54:14
| 2020-12-13T17:47:36
|
{
"login": "forest1988",
"id": 2755894,
"type": "User"
}
|
[] | true
|
[] |
760,525,949
| 1,401
|
Add reasoning_bg
|
Adding reading comprehension dataset for Bulgarian language
|
closed
|
https://github.com/huggingface/datasets/pull/1401
| 2020-12-09T17:30:49
| 2020-12-17T16:50:43
| 2020-12-17T16:50:42
|
{
"login": "saradhix",
"id": 1351362,
"type": "User"
}
|
[] | true
|
[] |
760,514,215
| 1,400
|
Add European Union Education and Culture Translation Memory (EAC-TM) dataset
|
Adding the EAC Translation Memory dataset : https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory
|
closed
|
https://github.com/huggingface/datasets/pull/1400
| 2020-12-09T17:14:52
| 2020-12-14T13:06:48
| 2020-12-14T13:06:47
|
{
"login": "SBrandeis",
"id": 33657802,
"type": "User"
}
|
[] | true
|
[] |
760,499,576
| 1,399
|
Add HoVer Dataset
|
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
https://arxiv.org/abs/2011.03088
|
closed
|
https://github.com/huggingface/datasets/pull/1399
| 2020-12-09T16:55:39
| 2020-12-14T10:57:23
| 2020-12-14T10:57:22
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
760,497,024
| 1,398
|
Add Neural Code Search Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1398
| 2020-12-09T16:52:16
| 2020-12-09T18:02:27
| 2020-12-09T18:02:27
|
{
"login": "vinaykudari",
"id": 34424769,
"type": "User"
}
|
[] | true
|
[] |
|
760,467,501
| 1,397
|
datasets card-creator link added
|
The dataset card creator link has been added.
Link: https://huggingface.co/datasets/card-creator/
|
closed
|
https://github.com/huggingface/datasets/pull/1397
| 2020-12-09T16:15:18
| 2020-12-09T16:47:48
| 2020-12-09T16:47:48
|
{
"login": "tanmoyio",
"id": 33005287,
"type": "User"
}
|
[] | true
|
[] |
760,455,295
| 1,396
|
initial commit for MultiReQA for second PR
|
Since the last PR (#1349) had some issues passing the tests, a new PR was created.
|
closed
|
https://github.com/huggingface/datasets/pull/1396
| 2020-12-09T16:00:35
| 2020-12-10T18:20:12
| 2020-12-10T18:20:11
|
{
"login": "Karthik-Bhaskar",
"id": 13200370,
"type": "User"
}
|
[] | true
|
[] |
760,448,255
| 1,395
|
Add WikiSource Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1395
| 2020-12-09T15:52:06
| 2020-12-14T10:24:14
| 2020-12-14T10:24:13
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,436,365
| 1,394
|
Add OfisPublik Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1394
| 2020-12-09T15:37:45
| 2020-12-14T10:23:30
| 2020-12-14T10:23:29
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,436,267
| 1,393
|
Add script_version suggestion when dataset/metric not found
|
Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like:
> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1/metrics/blah/blah.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics/blah/blah.py.
> If the dataset was added recently, you may need to pass script_version="master" to find the loading script on the master branch.
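From the user side, the suggested workaround would look like this sketch ("blah" is a placeholder name):
```python
from datasets import load_dataset

# script_version="master" loads the script from the master branch
# instead of the pinned release.
dataset = load_dataset("blah", script_version="master")
```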
|
closed
|
https://github.com/huggingface/datasets/pull/1393
| 2020-12-09T15:37:38
| 2020-12-10T18:17:05
| 2020-12-10T18:17:05
|
{
"login": "joeddav",
"id": 9353833,
"type": "User"
}
|
[] | true
|
[] |
760,432,261
| 1,392
|
Add KDE4 Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1392
| 2020-12-09T15:32:58
| 2020-12-14T10:22:33
| 2020-12-14T10:22:32
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,432,041
| 1,391
|
Add MultiParaCrawl Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1391
| 2020-12-09T15:32:46
| 2020-12-10T18:39:45
| 2020-12-10T18:39:44
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,431,051
| 1,390
|
Add SPC Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1390
| 2020-12-09T15:31:51
| 2020-12-14T11:13:53
| 2020-12-14T11:13:52
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,402,224
| 1,389
|
add amazon polarity dataset
|
This corresponds to the Amazon binary dataset requested in https://github.com/huggingface/datasets/issues/353
|
closed
|
https://github.com/huggingface/datasets/pull/1389
| 2020-12-09T14:58:21
| 2020-12-11T11:45:39
| 2020-12-11T11:41:01
|
{
"login": "hfawaz",
"id": 29229602,
"type": "User"
}
|
[] | true
|
[] |
760,373,136
| 1,388
|
hind_encorp
|
resubmit of hind_encorp file changes
|
closed
|
https://github.com/huggingface/datasets/pull/1388
| 2020-12-09T14:22:59
| 2020-12-09T14:46:51
| 2020-12-09T14:46:37
|
{
"login": "rahul-art",
"id": 56379013,
"type": "User"
}
|
[] | true
|
[] |
760,368,355
| 1,387
|
Add LIAR dataset
|
Add LIAR dataset from [“Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection](https://www.aclweb.org/anthology/P17-2067/).
|
closed
|
https://github.com/huggingface/datasets/pull/1387
| 2020-12-09T14:16:55
| 2020-12-14T18:06:43
| 2020-12-14T16:23:59
|
{
"login": "hugoabonizio",
"id": 1206395,
"type": "User"
}
|
[] | true
|
[] |
760,365,505
| 1,386
|
Add RecipeNLG Dataset (manual download)
|
closed
|
https://github.com/huggingface/datasets/pull/1386
| 2020-12-09T14:13:19
| 2020-12-10T16:58:22
| 2020-12-10T16:58:21
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,351,405
| 1,385
|
add best2009
|
`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are not provided publicly.
|
closed
|
https://github.com/huggingface/datasets/pull/1385
| 2020-12-09T13:56:09
| 2020-12-14T10:59:08
| 2020-12-14T10:59:08
|
{
"login": "cstorm125",
"id": 15519308,
"type": "User"
}
|
[] | true
|
[] |
760,331,767
| 1,384
|
Add News Commentary Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1384
| 2020-12-09T13:30:36
| 2020-12-10T16:54:08
| 2020-12-10T16:54:07
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,331,480
| 1,383
|
added conv ai 2
|
Dataset : https://github.com/DeepPavlov/convai/tree/master/2018
|
closed
|
https://github.com/huggingface/datasets/pull/1383
| 2020-12-09T13:30:12
| 2020-12-13T18:54:42
| 2020-12-13T18:54:41
|
{
"login": "rkc007",
"id": 22396042,
"type": "User"
}
|
[] | true
|
[] |
760,325,077
| 1,382
|
adding UNPC
|
Adding United Nations Parallel Corpus
http://opus.nlpl.eu/UNPC.php
|
closed
|
https://github.com/huggingface/datasets/pull/1382
| 2020-12-09T13:21:41
| 2020-12-09T17:53:06
| 2020-12-09T17:53:06
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
760,320,960
| 1,381
|
Add twi text c3
|
Added Twi texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/
|
closed
|
https://github.com/huggingface/datasets/pull/1381
| 2020-12-09T13:16:38
| 2020-12-13T18:39:27
| 2020-12-13T18:39:27
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[] | true
|
[] |
760,320,494
| 1,380
|
Add Tatoeba Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1380
| 2020-12-09T13:16:04
| 2020-12-10T16:54:28
| 2020-12-10T16:54:27
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,320,487
| 1,379
|
Add yoruba text c3
|
Added Yoruba texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/
|
closed
|
https://github.com/huggingface/datasets/pull/1379
| 2020-12-09T13:16:03
| 2020-12-13T18:45:12
| 2020-12-13T18:37:33
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[] | true
|
[] |
760,313,108
| 1,378
|
Add FACTCK.BR dataset
|
This PR adds [FACTCK.BR](https://github.com/jghm-f/FACTCK.BR) dataset from [FACTCK.BR: a new dataset to study fake news](https://dl.acm.org/doi/10.1145/3323503.3361698).
|
closed
|
https://github.com/huggingface/datasets/pull/1378
| 2020-12-09T13:06:22
| 2020-12-17T12:38:45
| 2020-12-15T15:34:11
|
{
"login": "hugoabonizio",
"id": 1206395,
"type": "User"
}
|
[] | true
|
[] |
760,309,435
| 1,377
|
adding marathi-wiki dataset
|
Adding marathi-wiki-articles dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/1377
| 2020-12-09T13:01:20
| 2022-10-03T09:39:09
| 2022-10-03T09:39:09
|
{
"login": "ekdnam",
"id": 40426312,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
760,309,300
| 1,376
|
Add SETimes Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1376
| 2020-12-09T13:01:08
| 2020-12-10T16:11:57
| 2020-12-10T16:11:56
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,294,931
| 1,375
|
Add OPUS EMEA Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1375
| 2020-12-09T12:39:44
| 2020-12-10T16:11:09
| 2020-12-10T16:11:08
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,288,291
| 1,374
|
Add OPUS Tilde Model Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1374
| 2020-12-09T12:29:23
| 2020-12-10T16:11:29
| 2020-12-10T16:11:28
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,280,869
| 1,373
|
Add OPUS ECB Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1373
| 2020-12-09T12:18:22
| 2020-12-10T15:25:55
| 2020-12-10T15:25:54
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,274,046
| 1,372
|
Add OPUS Books Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1372
| 2020-12-09T12:08:49
| 2020-12-14T09:56:28
| 2020-12-14T09:56:27
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,270,116
| 1,371
|
Adding Scielo
|
Adding Scielo: Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO
https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
|
closed
|
https://github.com/huggingface/datasets/pull/1371
| 2020-12-09T12:02:48
| 2020-12-09T17:53:37
| 2020-12-09T17:53:37
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
760,264,132
| 1,370
|
Add OPUS PHP Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1370
| 2020-12-09T11:53:30
| 2020-12-10T15:37:25
| 2020-12-10T15:37:24
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
760,227,776
| 1,369
|
Use passed --cache_dir for modules cache
|
When the `--cache_dir` arg is passed:
```shell
python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>
```
it is not used for caching the modules, which are cached in the default location at `.cache/huggingface/modules`.
With this fix, the modules will be cached at `<my-cache-dir>/modules`.
|
open
|
https://github.com/huggingface/datasets/pull/1369
| 2020-12-09T10:59:59
| 2022-07-06T15:19:47
| null |
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
760,222,616
| 1,368
|
Re-adding narrativeqa dataset
|
An update of #309.
|
closed
|
https://github.com/huggingface/datasets/pull/1368
| 2020-12-09T10:53:09
| 2020-12-11T13:30:59
| 2020-12-11T13:30:59
|
{
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
}
|
[] | true
|
[] |
760,208,191
| 1,367
|
adding covid-tweets-japanese
|
Adding COVID-19 Japanese Tweets Dataset as part of the sprint.
Testing with dummy data is not working (the file is said to not exist). Sorry for the incomplete PR.
|
closed
|
https://github.com/huggingface/datasets/pull/1367
| 2020-12-09T10:34:01
| 2020-12-09T17:25:14
| 2020-12-09T17:25:14
|
{
"login": "forest1988",
"id": 2755894,
"type": "User"
}
|
[] | true
|
[] |
760,205,506
| 1,366
|
Adding Hope EDI dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1366
| 2020-12-09T10:30:23
| 2020-12-14T14:27:57
| 2020-12-14T14:27:57
|
{
"login": "jamespaultg",
"id": 7421838,
"type": "User"
}
|
[] | true
|
[] |
|
760,188,457
| 1,365
|
Add Mkqa dataset
|
# MKQA: Multilingual Knowledge Questions & Answers Dataset
Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉
There are no official data splits, so I added just a `train` split.
Differences from the original:
- the answer:type field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)
- the answer:entity field has a default value of empty string '' (since this key is not available for all examples in the original)
- the answer:alias field has a default value of []
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
|
closed
|
https://github.com/huggingface/datasets/pull/1365
| 2020-12-09T10:06:33
| 2020-12-10T15:37:56
| 2020-12-10T15:37:56
|
{
"login": "cceyda",
"id": 15624271,
"type": "User"
}
|
[] | true
|
[] |
760,164,558
| 1,364
|
Narrative QA (Manual Download Stories) Dataset
|
Narrative QA with manual download for stories.
|
closed
|
https://github.com/huggingface/datasets/pull/1364
| 2020-12-09T09:33:59
| 2021-01-25T15:31:51
| 2021-01-25T15:31:31
|
{
"login": "rsanjaykamath",
"id": 18527321,
"type": "User"
}
|
[] | true
|
[] |
760,160,944
| 1,363
|
Adding OPUS MultiUN
|
Adding UnMulti
http://www.euromatrixplus.net/multi-un/
|
closed
|
https://github.com/huggingface/datasets/pull/1363
| 2020-12-09T09:29:01
| 2020-12-09T17:54:20
| 2020-12-09T17:54:20
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
760,138,233
| 1,362
|
adding opus_infopankki
|
Adding opus_infopankki
http://opus.nlpl.eu/infopankki-v1.php
|
closed
|
https://github.com/huggingface/datasets/pull/1362
| 2020-12-09T08:57:10
| 2020-12-09T18:16:20
| 2020-12-09T18:13:48
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
760,101,728
| 1,361
|
adding bprec
|
Brand-Product Relation Extraction Corpora in Polish
|
closed
|
https://github.com/huggingface/datasets/pull/1361
| 2020-12-09T08:02:45
| 2020-12-16T17:04:44
| 2020-12-16T17:04:44
|
{
"login": "kldarek",
"id": 15803781,
"type": "User"
}
|
[] | true
|
[] |
760,088,419
| 1,360
|
add wisesight1000
|
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment` dataset, tokenized by human annotators: 250 samples each for the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question). Some texts were removed because they look like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
|
closed
|
https://github.com/huggingface/datasets/pull/1360
| 2020-12-09T07:41:30
| 2020-12-10T14:28:41
| 2020-12-10T14:28:41
|
{
"login": "cstorm125",
"id": 15519308,
"type": "User"
}
|
[] | true
|
[] |
760,055,969
| 1,359
|
Add JNLPBA
|
closed
|
https://github.com/huggingface/datasets/pull/1359
| 2020-12-09T06:48:51
| 2020-12-10T14:24:36
| 2020-12-10T14:24:36
|
{
"login": "edugp",
"id": 17855740,
"type": "User"
}
|
[] | true
|
[] |
|
760,031,131
| 1,358
|
Add spider dataset
|
This PR adds the Spider dataset, a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
Dataset website: https://yale-lily.github.io/spider
Paper link: https://www.aclweb.org/anthology/D18-1425/
|
closed
|
https://github.com/huggingface/datasets/pull/1358
| 2020-12-09T06:06:18
| 2020-12-10T15:12:31
| 2020-12-10T15:12:31
|
{
"login": "olinguyen",
"id": 4341867,
"type": "User"
}
|
[] | true
|
[] |
760,023,525
| 1,357
|
Youtube caption corrections
|
This PR adds a new dataset of YouTube captions, errors, and corrections. This dataset was created in just the last week, inspired by this sprint!
|
closed
|
https://github.com/huggingface/datasets/pull/1357
| 2020-12-09T05:52:34
| 2020-12-15T18:12:56
| 2020-12-15T18:12:56
|
{
"login": "2dot71mily",
"id": 21292059,
"type": "User"
}
|
[] | true
|
[] |
759,994,457
| 1,356
|
Add StackOverflow StackSample dataset
|
This PR adds the StackOverflow StackSample dataset from Kaggle: https://www.kaggle.com/stackoverflow/stacksample
I ran through all of the steps. However, since my dataset requires manually downloading the data, I was unable to run pytest on the real dataset (the dummy-data pytest passed).
|
closed
|
https://github.com/huggingface/datasets/pull/1356
| 2020-12-09T04:59:51
| 2020-12-21T14:48:21
| 2020-12-21T14:48:21
|
{
"login": "ncoop57",
"id": 7613470,
"type": "User"
}
|
[] | true
|
[] |
759,994,208
| 1,355
|
Addition of py_ast dataset
|
@lhoestq as discussed in PR #1195
|
closed
|
https://github.com/huggingface/datasets/pull/1355
| 2020-12-09T04:59:17
| 2020-12-09T16:19:49
| 2020-12-09T16:19:48
|
{
"login": "reshinthadithyan",
"id": 36307201,
"type": "User"
}
|
[] | true
|
[] |
759,987,763
| 1,354
|
Add TweetQA dataset
|
This PR adds the TweetQA dataset, the first dataset for QA on social media data by leveraging news media and crowdsourcing.
Paper: https://arxiv.org/abs/1907.06292
Repository: https://tweetqa.github.io/
|
closed
|
https://github.com/huggingface/datasets/pull/1354
| 2020-12-09T04:44:01
| 2020-12-10T15:10:30
| 2020-12-10T15:10:30
|
{
"login": "anaerobeth",
"id": 3663322,
"type": "User"
}
|
[] | true
|
[] |
759,980,004
| 1,353
|
New instruction for how to generate dataset_infos.json
|
Adds additional instructions for how to generate dataset_infos.json for manual-download datasets. Information courtesy of `Taimur Ibrahim` from the Slack channel.
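For context, the command in question looks roughly like this; pointing `--data_dir` at the manually downloaded files is my reading of the added instructions, not a quote of them:
```shell
python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --data_dir <path-to-manual-data>
```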
|
closed
|
https://github.com/huggingface/datasets/pull/1353
| 2020-12-09T04:24:40
| 2020-12-10T13:45:15
| 2020-12-10T13:45:15
|
{
"login": "ncoop57",
"id": 7613470,
"type": "User"
}
|
[] | true
|
[] |
759,978,543
| 1,352
|
change url for prachathai67k to internet archive
|
`prachathai67k` is currently downloaded from git-lfs of PyThaiNLP github. Since the size is quite large (~250MB), I moved the URL to archive.org in order to prevent rate limit issues.
|
closed
|
https://github.com/huggingface/datasets/pull/1352
| 2020-12-09T04:20:37
| 2020-12-10T13:42:17
| 2020-12-10T13:42:17
|
{
"login": "cstorm125",
"id": 15519308,
"type": "User"
}
|
[] | true
|
[] |
759,902,770
| 1,351
|
added craigslist_bargains
|
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
(cleaned-up version of #1278)
|
closed
|
https://github.com/huggingface/datasets/pull/1351
| 2020-12-09T01:02:31
| 2020-12-10T14:14:34
| 2020-12-10T14:14:34
|
{
"login": "ZacharySBrown",
"id": 7950786,
"type": "User"
}
|
[] | true
|
[] |
759,879,789
| 1,350
|
add LeNER-Br dataset
|
Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition
|
closed
|
https://github.com/huggingface/datasets/pull/1350
| 2020-12-09T00:06:38
| 2020-12-10T14:11:33
| 2020-12-10T14:11:33
|
{
"login": "jonatasgrosman",
"id": 5097052,
"type": "User"
}
|
[] | true
|
[] |
759,870,664
| 1,349
|
initial commit for MultiReQA
|
Added MultiReQA, which is a dataset containing the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA.
|
closed
|
https://github.com/huggingface/datasets/pull/1349
| 2020-12-08T23:44:34
| 2020-12-09T16:46:37
| 2020-12-09T16:46:37
|
{
"login": "Karthik-Bhaskar",
"id": 13200370,
"type": "User"
}
|
[] | true
|
[] |
759,869,849
| 1,348
|
add Yoruba NER dataset
|
Added Yoruba GV dataset based on this paper
|
closed
|
https://github.com/huggingface/datasets/pull/1348
| 2020-12-08T23:42:35
| 2020-12-10T14:30:25
| 2020-12-10T14:09:43
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[] | true
|
[] |
759,845,231
| 1,347
|
Add spanish billion words corpus
|
Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
|
closed
|
https://github.com/huggingface/datasets/pull/1347
| 2020-12-08T22:51:38
| 2020-12-11T11:26:39
| 2020-12-11T11:15:28
|
{
"login": "mariagrandury",
"id": 57645283,
"type": "User"
}
|
[] | true
|
[] |
759,844,137
| 1,346
|
Add MultiBooked dataset
|
Add dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/1346
| 2020-12-08T22:49:36
| 2020-12-15T17:02:09
| 2020-12-15T17:02:09
|
{
"login": "albertvillanova",
"id": 8515462,
"type": "User"
}
|
[] | true
|
[] |
759,835,486
| 1,345
|
First commit of NarrativeQA Dataset
|
Added the NarrativeQA dataset, with a manual download option for the stories using the original scripts provided by the authors.
|
closed
|
https://github.com/huggingface/datasets/pull/1345
| 2020-12-08T22:31:59
| 2021-01-25T15:31:52
| 2020-12-09T09:29:52
|
{
"login": "rsanjaykamath",
"id": 18527321,
"type": "User"
}
|
[] | true
|
[] |
759,831,925
| 1,344
|
Add hausa ner corpus
|
Added Hausa VOA NER data
|
closed
|
https://github.com/huggingface/datasets/pull/1344
| 2020-12-08T22:25:04
| 2020-12-08T23:11:55
| 2020-12-08T23:11:55
|
{
"login": "dadelani",
"id": 23586676,
"type": "User"
}
|
[] | true
|
[] |
759,809,999
| 1,343
|
Add LiveQA
|
This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf).
|
closed
|
https://github.com/huggingface/datasets/pull/1343
| 2020-12-08T21:52:36
| 2020-12-14T09:40:28
| 2020-12-14T09:40:28
|
{
"login": "j-chim",
"id": 22435209,
"type": "User"
}
|
[] | true
|
[] |
759,794,121
| 1,342
|
[yaml] Fix metadata according to pre-specified scheme
|
@lhoestq @yjernite
|
closed
|
https://github.com/huggingface/datasets/pull/1342
| 2020-12-08T21:26:34
| 2020-12-09T15:37:27
| 2020-12-09T15:37:26
|
{
"login": "julien-c",
"id": 326577,
"type": "User"
}
|
[] | true
|
[] |
759,784,557
| 1,341
|
added references to only data card creator to all guides
|
We can now use the wonderful online form for dataset cards created by @evrardts
|
closed
|
https://github.com/huggingface/datasets/pull/1341
| 2020-12-08T21:11:11
| 2020-12-08T21:36:12
| 2020-12-08T21:36:11
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[] | true
|
[] |
759,765,408
| 1,340
|
:fist: ¡Viva la Independencia!
|
Adds the Catalonia Independence Corpus for stance-detection of Tweets.
Ready for review!
|
closed
|
https://github.com/huggingface/datasets/pull/1340
| 2020-12-08T20:43:43
| 2020-12-14T10:36:01
| 2020-12-14T10:36:01
|
{
"login": "lewtun",
"id": 26859204,
"type": "User"
}
|
[] | true
|
[] |
759,744,088
| 1,339
|
hate_speech_18 initial commit
|
closed
|
https://github.com/huggingface/datasets/pull/1339
| 2020-12-08T20:10:08
| 2020-12-12T16:17:32
| 2020-12-12T16:17:32
|
{
"login": "czabo",
"id": 75574105,
"type": "User"
}
|
[] | true
|
[] |
|
759,725,770
| 1,338
|
Add GigaFren Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1338
| 2020-12-08T19:42:04
| 2020-12-14T10:03:47
| 2020-12-14T10:03:46
|
{
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
}
|
[] | true
|
[] |
|
759,710,482
| 1,337
|
Add spanish billion words
|
Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources from the web.
The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: unknown size, total: 10.22 GiB). The test using dummy data passes, but my laptop isn't able to run it on the real data (I left it running for over 8 hours and it didn't finish).
|
closed
|
https://github.com/huggingface/datasets/pull/1337
| 2020-12-08T19:18:02
| 2020-12-08T22:59:38
| 2020-12-08T21:15:27
|
{
"login": "mariagrandury",
"id": 57645283,
"type": "User"
}
|
[] | true
|
[] |
759,706,932
| 1,336
|
Add dataset Yoruba BBC Topic Classification
|
Added new dataset Yoruba BBC Topic Classification
Contains loading script as well as dataset card including YAML tags.
|
closed
|
https://github.com/huggingface/datasets/pull/1336
| 2020-12-08T19:12:18
| 2020-12-10T11:27:41
| 2020-12-10T11:27:41
|
{
"login": "michael-aloys",
"id": 1858628,
"type": "User"
}
|
[] | true
|
[] |