Schema: title (string, 1–290 chars) · body (string, 0–228k chars) · html_url (string, 46–51 chars) · comments (list) · pull_request (dict) · number (int64, 1–5.59k) · is_pull_request (bool, 2 classes)
adding ted_talks_iwslt
UPDATE 2 (Jan 2nd): Wrote a long writeup on the Slack channel. I don't think this approach is correct: it created 109*108 language-pair configs, and running `pytest` took more than 40 hours and was still going. I'm working on a different approach, such that the number of configs equals the number of languages, and will make a new pull request with that. UPDATE: This requires a manual dataset download. This is a draft version.
https://github.com/huggingface/datasets/pull/1608
[ "Closing this with reference to the new approach #1676 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1608", "html_url": "https://github.com/huggingface/datasets/pull/1608", "diff_url": "https://github.com/huggingface/datasets/pull/1608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1608.patch", "merged_at": null }
1,608
true
modified tweets hate speech detection
https://github.com/huggingface/datasets/pull/1607
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1607", "html_url": "https://github.com/huggingface/datasets/pull/1607", "diff_url": "https://github.com/huggingface/datasets/pull/1607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1607.patch", "merged_at": "2020-12-21T16:08:48" }
1,607
true
added Semantic Scholar Open Research Corpus
I picked up this dataset, [Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc), but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB of space). For 6000 files it would occupy ~900GB, which I don't have. Can someone from the HF team with that much disk space help me generate dataset_infos and dummy_data?
https://github.com/huggingface/datasets/pull/1606
[ "I think we’ll need complete dataset_infos.json to create YAML tags. I ran the script again with 100 files after going through your comments and it was occupying ~16 GB space. So in total it should take ~960GB and I don’t have this much memory available with me. Also, I'll have to download the whole dataset for gen...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1606", "html_url": "https://github.com/huggingface/datasets/pull/1606", "diff_url": "https://github.com/huggingface/datasets/pull/1606.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1606.patch", "merged_at": "2021-02-03T09:30:59" }
1,606
true
Navigation version breaking
Hi, when navigating docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png) **Edit:** this actually happens _only_ if you open a link to a concrete subsection. IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to: ``` let label = (version in versionMapping) ? version : stableVersion ``` which delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case. I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping` which might be more robust. I'd add a PR myself but I'm by no means competent in JS :) I also have a side question wrt. docs versioning: I'm trying to make docs for a project which are versioned alike to your dropdown versioning. I was wondering how do you handle storage of multiple doc versions on your server? Do you update what `https://huggingface.co/docs/datasets` points to for every stable release & manually create new folders for each released version? So far I'm building & publishing (scping) the docs to the server with a github action which works well for a single version, but would ideally need to reorder the public files triggered on a new release.
https://github.com/huggingface/datasets/issues/1605
[ "Not relevant for our current docs :)." ]
null
1,605
false
Add tests for the download functions ?
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some to ensure behavior is as expected.
https://github.com/huggingface/datasets/issues/1604
[ "We have some tests now for it under `tests/test_download_manager.py`." ]
null
1,604
false
Add retries to HTTP requests
## What does this PR do ? Adds retries to HTTP GET & HEAD requests when they fail with a `ConnectTimeout` exception. The "canonical" way to do this is to use [urllib's Retry class](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry) and wrap it in an [HttpAdapter](https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter). That seems a bit overkill to me, plus it forces us to use the `requests.Session` object, so I prefer this simpler implementation. I'm open to remarks and suggestions @lhoestq @yjernite Fixes #1102
https://github.com/huggingface/datasets/pull/1603
[ "merging this one then :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1603", "html_url": "https://github.com/huggingface/datasets/pull/1603", "diff_url": "https://github.com/huggingface/datasets/pull/1603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1603.patch", "merged_at": "2020-12-22T15:34:06" }
1,603
true
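The simpler retry approach the PR above describes can be sketched in a few lines. This is an illustrative, stdlib-only helper (the names `request_with_retries`, `do_request`, and the backoff values are assumptions for the sketch, not the library's actual API; `do_request` stands in for a `requests.get`/`requests.head` call):

```python
import time

def request_with_retries(do_request, max_retries=3, backoff=0.5):
    """Retry `do_request` when it raises a connection-level error,
    sleeping a little longer before each subsequent attempt."""
    for attempt in range(max_retries):
        try:
            return do_request()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(backoff * (attempt + 1))

# A flaky call that fails twice, then succeeds, to exercise the helper.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated timeout")
    return "ok"

result = request_with_retries(flaky, max_retries=3, backoff=0.0)
```

The actual PR wraps the library's GET/HEAD helpers directly; the point of the sketch is only the retry-then-raise control flow.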
second update of id_newspapers_2018
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
https://github.com/huggingface/datasets/pull/1602
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1602", "html_url": "https://github.com/huggingface/datasets/pull/1602", "diff_url": "https://github.com/huggingface/datasets/pull/1602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1602.patch", "merged_at": "2020-12-22T10:41:14" }
1,602
true
second update of the id_newspapers_2018
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
https://github.com/huggingface/datasets/pull/1601
[ "I'm closing this PR, since it is based on a 1-week-old repo. I will create a new one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1601", "html_url": "https://github.com/huggingface/datasets/pull/1601", "diff_url": "https://github.com/huggingface/datasets/pull/1601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1601.patch", "merged_at": null }
1,601
true
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong? ``` from datasets import load_dataset dataset = load_dataset('csv', data_files='data.txt') dataset = dataset.train_test_split(test_size=0.1) ``` > AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
https://github.com/huggingface/datasets/issues/1600
[ "Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so y...
null
1,600
false
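As the comment above explains, `load_dataset("csv", ...)` returns a `DatasetDict`, so the split must be selected before calling `train_test_split`. A corrected version of the snippet, wrapped in a hypothetical helper (`split_csv` is an illustrative name; the import is deferred so the sketch stays self-contained):

```python
def split_csv(path, test_size=0.1):
    """Corrected usage of the snippet in the issue above: select the
    'train' split from the DatasetDict before calling train_test_split."""
    from datasets import load_dataset  # deferred: needs `datasets` installed
    dataset = load_dataset("csv", data_files=path)  # a DatasetDict
    return dataset["train"].train_test_split(test_size=test_size)
```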
add Korean Sarcasm Dataset
https://github.com/huggingface/datasets/pull/1599
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1599", "html_url": "https://github.com/huggingface/datasets/pull/1599", "diff_url": "https://github.com/huggingface/datasets/pull/1599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1599.patch", "merged_at": "2020-12-23T17:25:59" }
1,599
true
made suggested changes in fake-news-english
https://github.com/huggingface/datasets/pull/1598
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1598", "html_url": "https://github.com/huggingface/datasets/pull/1598", "diff_url": "https://github.com/huggingface/datasets/pull/1598.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1598.patch", "merged_at": "2020-12-18T09:43:57" }
1,598
true
adding hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1597
[ "made suggested changes and opened PR https://github.com/huggingface/datasets/pull/1628" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1597", "html_url": "https://github.com/huggingface/datasets/pull/1597", "diff_url": "https://github.com/huggingface/datasets/pull/1597.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1597.patch", "merged_at": null }
1,597
true
made suggested changes to hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1596
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1596", "html_url": "https://github.com/huggingface/datasets/pull/1596", "diff_url": "https://github.com/huggingface/datasets/pull/1596.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1596.patch", "merged_at": null }
1,596
true
Logiqa en
LogiQA in English.
https://github.com/huggingface/datasets/pull/1595
[ "I'm getting an error when I try to create the dummy data:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ python datasets-cli dummy_data ./datasets/logiqa_en/ --auto_generate \r\n2021-01-07 10:50:12.024791: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic l...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1595", "html_url": "https://github.com/huggingface/datasets/pull/1595", "diff_url": "https://github.com/huggingface/datasets/pull/1595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1595.patch", "merged_at": null }
1,595
true
connection error
Hi I am hitting to this error, thanks ``` > Traceback (most recent call last): File "finetune_t5_trainer.py", line 379, in <module> main() File "finetune_t5_trainer.py", line 208, in main if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO File "finetune_t5_trainer.py", line 207, in <dictcomp> for task in data_args.eval_tasks} File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset return datasets.load_dataset(self.task.name, split=split, script_version="master") File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED ```
https://github.com/huggingface/datasets/issues/1594
[ "This happen quite often when they are too many concurrent requests to github.\r\n\r\ni can understand it’s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ?", "Yes currently there's no retry afaik. We should add ...
null
1,594
false
Access to key in DatasetDict map
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be nice if there can be a flag, similar to `with_indices`, that allows the callable to know the key inside `DatasetDict`.
https://github.com/huggingface/datasets/issues/1593
[ "Indeed that would be cool\r\n\r\nAlso FYI right now the easiest way to do this is\r\n```python\r\ndataset_dict[\"train\"] = dataset_dict[\"train\"].map(my_transform_for_the_train_set)\r\ndataset_dict[\"test\"] = dataset_dict[\"test\"].map(my_transform_for_the_test_set)\r\n```", "I don't feel like adding an extra...
null
1,593
false
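The workaround quoted in the comments above (mapping each split separately) generalizes to a loop over the mapping. A runnable sketch of a key-aware map, with plain lists of dicts standing in for `Dataset` objects so it executes standalone (with the real library the inner call would be `ds.map(lambda ex: fn(key, ex))`):

```python
def map_with_key(dataset_dict, fn):
    """Apply fn(key, example) to every example of every split,
    so the transform can depend on the split name."""
    return {key: [fn(key, ex) for ex in ds]
            for key, ds in dataset_dict.items()}

dd = {"train": [{"x": 1}], "test": [{"x": 2}]}
out = map_with_key(dd, lambda key, ex: {**ex, "split": key})
```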
IWSLT-17 Link Broken
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
https://github.com/huggingface/datasets/issues/1591
[ "Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list.", "Closing this since its a duplicate" ]
null
1,591
false
Add helper to resolve namespace collision
Many projects use a module called `datasets`, which is incompatible with huggingface datasets. It would be great if there were some helper or similar function to resolve such a common conflict.
https://github.com/huggingface/datasets/issues/1590
[ "Do you have an example?", "I was thinking about using something like [importlib](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly) to over-ride the collision. \r\n\r\n**Reason requested**: I use the [following template](https://github.com/jramapuram/ml_base/) repo where I house a...
null
1,590
false
Update doc2dial.py
Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper.
https://github.com/huggingface/datasets/pull/1589
[ "Thanks for adding the `doc2dial_rc` config :) \r\n\r\nIt looks like you're missing the dummy data for this config though. Could you add them please ?\r\nAlso to fix the CI you'll need to format the code with `make style`" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1589", "html_url": "https://github.com/huggingface/datasets/pull/1589", "diff_url": "https://github.com/huggingface/datasets/pull/1589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1589.patch", "merged_at": null }
1,589
true
Modified hind encorp
Description added, unnecessary comments removed from the .py file, and README.md reformatted. @lhoestq, for #1584
https://github.com/huggingface/datasets/pull/1588
[ "welcome, awesome " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1588", "html_url": "https://github.com/huggingface/datasets/pull/1588", "diff_url": "https://github.com/huggingface/datasets/pull/1588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1588.patch", "merged_at": "2020-12-16T17:20:28" }
1,588
true
Add nq_open question answering dataset
This PR is a copy of #1506, due to the messed-up git history in that PR.
https://github.com/huggingface/datasets/pull/1587
[ "@SBrandeis all checks passing" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1587", "html_url": "https://github.com/huggingface/datasets/pull/1587", "diff_url": "https://github.com/huggingface/datasets/pull/1587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1587.patch", "merged_at": "2020-12-17T16:07:10" }
1,587
true
added irc disentangle dataset
added irc disentanglement dataset
https://github.com/huggingface/datasets/pull/1586
[ "@lhoestq sorry, this was the only way I was able to fix the pull request ", "@lhoestq Thank you for the feedback. I wondering whether I should be passing an 'id' field in the dictionary since the 'connections' reference the 'id' of the linked messages. This 'id' would just be the same as the id_ that is in the y...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1586", "html_url": "https://github.com/huggingface/datasets/pull/1586", "diff_url": "https://github.com/huggingface/datasets/pull/1586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1586.patch", "merged_at": "2021-01-29T10:28:53" }
1,586
true
FileNotFoundError for `amazon_polarity`
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ```
https://github.com/huggingface/datasets/issues/1585
[ "Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`" ...
null
1,585
false
Load hind encorp
Code reformatted and well documented, YAML tags added.
https://github.com/huggingface/datasets/pull/1584
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1584", "html_url": "https://github.com/huggingface/datasets/pull/1584", "diff_url": "https://github.com/huggingface/datasets/pull/1584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1584.patch", "merged_at": null }
1,584
true
Update metrics docstrings.
#1478 Correcting the argument descriptions for metrics. Let me know if there are any issues.
https://github.com/huggingface/datasets/pull/1583
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1583", "html_url": "https://github.com/huggingface/datasets/pull/1583", "diff_url": "https://github.com/huggingface/datasets/pull/1583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1583.patch", "merged_at": "2020-12-18T18:39:06" }
1,583
true
Adding wiki lingua dataset as new branch
Adding the dataset as new branch as advised here: #1470
https://github.com/huggingface/datasets/pull/1582
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1582", "html_url": "https://github.com/huggingface/datasets/pull/1582", "diff_url": "https://github.com/huggingface/datasets/pull/1582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1582.patch", "merged_at": "2020-12-17T18:06:45" }
1,582
true
Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`: ``` $ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash ________ _______________ ___ __/__________________________________ ____/__ /________ __ __ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / / _ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ / /_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/ You are running this container as user with ID 1000 and group 1000, which should map to the ID and group for your user on the Docker host. Great! tf-docker /root > python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import transformers 2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module> from .trainer_utils import EvaluationStrategy File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module> import datasets # noqa: F401 File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module> from .arrow_reader import ArrowReader File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module> from .utils import cached_path, logging File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module> from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module> os.makedirs(HF_MODULES_CACHE, exist_ok=True) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) PermissionError: [Errno 
13] Permission denied: '/.cache' ``` I've pinned the problem to `RUN pip install datasets`, and by commenting it you can actually import transformers correctly. Another workaround I've found is creating the directory and giving permissions to it directly on the Dockerfile. ``` FROM tensorflow/tensorflow:latest-gpu-jupyter WORKDIR /root EXPOSE 80 EXPOSE 8888 EXPOSE 6006 ENV SHELL /bin/bash ENV PATH="/root/.local/bin:${PATH}" ENV CUDA_CACHE_PATH="/root/cache/cuda" ENV CUDA_CACHE_MAXSIZE="4294967296" ENV TFHUB_CACHE_DIR="/root/cache/tfhub" RUN pip install --upgrade pip RUN apt update -y && apt upgrade -y RUN pip install transformers #Installing datasets will throw the error, try commenting and rebuilding RUN pip install datasets #Another workaround is creating the directory and give permissions explicitly #RUN mkdir /.cache #RUN chmod 777 /.cache ```
https://github.com/huggingface/datasets/issues/1581
[ "Thanks for reporting !\r\nYou can override the directory in which cache file are stored using for example\r\n```\r\nENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n```\r\n\r\nThis way both `transformers` and `datasets` will use this directory instead of the default `.cache`", "Great, thanks. I didn't see documentat...
null
1,581
false
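Per the maintainer's reply above, the cache location can be overridden with the `HF_HOME` environment variable before the libraries are imported, avoiding the `/.cache` permission error. A minimal sketch (the path is illustrative; in the Docker setup it would be set via `ENV` in the Dockerfile instead):

```python
import os

# Point the Hugging Face cache somewhere writable *before* importing
# transformers/datasets; both libraries honor HF_HOME.
os.environ["HF_HOME"] = "/tmp/hf_cache_home"

# import datasets  # would now create its cache under /tmp/hf_cache_home
```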
made suggested changes in diplomacy_detection.py
https://github.com/huggingface/datasets/pull/1580
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1580", "html_url": "https://github.com/huggingface/datasets/pull/1580", "diff_url": "https://github.com/huggingface/datasets/pull/1580.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1580.patch", "merged_at": "2020-12-16T10:27:52" }
1,580
true
Adding CLIMATE-FEVER dataset
This PR requests the addition of the CLIMATE-FEVER dataset: a dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected from the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute, or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets, and disputed cases where both supporting and refuting evidence are present. More information can be found at: - Homepage: <http://climatefever.ai> - Paper: <https://arxiv.org/abs/2012.00614>
https://github.com/huggingface/datasets/pull/1579
[ "I `git rebase`ed my branch to `upstream/master` as suggested in point 7 of <https://huggingface.co/docs/datasets/share_dataset.html> and subsequently used `git pull` to be able to push to my remote branch. However, I think this messed up the history.\r\n\r\nPlease let me know if I should create a clean new PR with...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1579", "html_url": "https://github.com/huggingface/datasets/pull/1579", "diff_url": "https://github.com/huggingface/datasets/pull/1579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1579.patch", "merged_at": null }
1,579
true
update multiwozv22 checksums
a file was updated on the GitHub repo for the dataset
https://github.com/huggingface/datasets/pull/1578
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1578", "html_url": "https://github.com/huggingface/datasets/pull/1578", "diff_url": "https://github.com/huggingface/datasets/pull/1578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1578.patch", "merged_at": "2020-12-15T17:06:29" }
1,578
true
Add comet metric
Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of the available metrics. COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is the highest performing metric, so far, on the WMT19 benchmark. We also participated in the [WMT20 Metrics shared task ](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf) where once again COMET was validated as a top-performing metric. I hope that this metric will help researcher's and industry workers to better validate their MT systems in the future 🤗 ! Cheers, Ricardo
https://github.com/huggingface/datasets/pull/1577
[ "I also thought a bit about the fact that \"sources\" can't be added to the batch.. but changing that would require a lot more changes. And I agree that the idea of adding them as part of the references is not ideal. Conceptually they are not references.\r\n\r\nI would keep it like this for now.. And in the future,...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1577", "html_url": "https://github.com/huggingface/datasets/pull/1577", "diff_url": "https://github.com/huggingface/datasets/pull/1577.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1577.patch", "merged_at": "2021-01-14T13:33:10" }
1,577
true
Remove the contributors section
sourcerer is down
https://github.com/huggingface/datasets/pull/1576
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1576", "html_url": "https://github.com/huggingface/datasets/pull/1576", "diff_url": "https://github.com/huggingface/datasets/pull/1576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1576.patch", "merged_at": "2020-12-15T12:53:46" }
1,576
true
Hind_Encorp all done
https://github.com/huggingface/datasets/pull/1575
[ "ALL TEST PASSED locally @yjernite ", "@rahul-art kindly run the following from the datasets folder \r\n\r\n```\r\nmake style \r\nflake8 datasets\r\n\r\n```\r\n", "@skyprince999 I did that before it says all done \r\n", "I did that again it gives the same output all done and then I synchronized my changes ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1575", "html_url": "https://github.com/huggingface/datasets/pull/1575", "diff_url": "https://github.com/huggingface/datasets/pull/1575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1575.patch", "merged_at": null }
1,575
true
Diplomacy detection 3
https://github.com/huggingface/datasets/pull/1574
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1574", "html_url": "https://github.com/huggingface/datasets/pull/1574", "diff_url": "https://github.com/huggingface/datasets/pull/1574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1574.patch", "merged_at": null }
1,574
true
adding dataset for diplomacy detection-2
https://github.com/huggingface/datasets/pull/1573
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1573", "html_url": "https://github.com/huggingface/datasets/pull/1573", "diff_url": "https://github.com/huggingface/datasets/pull/1573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1573.patch", "merged_at": null }
1,573
true
add Gnad10 dataset
reference [PR#1317](https://github.com/huggingface/datasets/pull/1317)
https://github.com/huggingface/datasets/pull/1572
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1572", "html_url": "https://github.com/huggingface/datasets/pull/1572", "diff_url": "https://github.com/huggingface/datasets/pull/1572.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1572.patch", "merged_at": "2020-12-16T16:52:30" }
1,572
true
Fixing the KILT tasks to match our current standards
This introduces a few changes to the Knowledge Intensive Learning task benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task
https://github.com/huggingface/datasets/pull/1571
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1571", "html_url": "https://github.com/huggingface/datasets/pull/1571", "diff_url": "https://github.com/huggingface/datasets/pull/1571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1571.patch", "merged_at": "2020-12-14T23:07:41" }
1,571
true
Documentation for loading CSV datasets misleads the user
Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting. There are two problems here: (i) `quote_char` is misspelled and must be `quotechar`; (ii) the documentation should mention `quoting`.
https://github.com/huggingface/datasets/pull/1570
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1570", "html_url": "https://github.com/huggingface/datasets/pull/1570", "diff_url": "https://github.com/huggingface/datasets/pull/1570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1570.patch", "merged_at": "2020-12-21T13:47:09" }
1,570
true
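The `quotechar`/`quoting` distinction the PR above fixes can be seen directly with Python's stdlib `csv` module, whose semantics the CSV loader's parameters mirror: `quotechar` only names the quote character, while `quoting=csv.QUOTE_NONE` is what actually disables quote handling.

```python
import csv
import io

raw = 'a,"b,c",d\n'

# Default quoting: the quote character groups "b,c" into one field.
quoted = next(csv.reader(io.StringIO(raw)))

# QUOTE_NONE disables quote processing entirely: the quote characters
# become ordinary text and the embedded comma splits the field.
unquoted = next(csv.reader(io.StringIO(raw), quoting=csv.QUOTE_NONE))
```

Here `quoted` is `['a', 'b,c', 'd']` while `unquoted` is `['a', '"b', 'c"', 'd']`, which is why the docs need to mention `quoting` rather than implying `quotechar=False` suffices.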
added un_ga dataset
Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset. With suggested changes in #1330
https://github.com/huggingface/datasets/pull/1569
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1569", "html_url": "https://github.com/huggingface/datasets/pull/1569", "diff_url": "https://github.com/huggingface/datasets/pull/1569.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1569.patch", "merged_at": "2020-12-15T15:28:58" }
1,569
true
Added the dataset clickbait_news_bg
There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.
https://github.com/huggingface/datasets/pull/1568
[ "Hi @tsvm Great work! \r\nSince you have raised a clean PR could you close the earlier one - #1445 ? \r\n", "> Hi @tsvm Great work!\r\n> Since you have raised a clean PR could you close the earlier one - #1445 ?\r\n\r\nDone." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1568", "html_url": "https://github.com/huggingface/datasets/pull/1568", "diff_url": "https://github.com/huggingface/datasets/pull/1568.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1568.patch", "merged_at": "2020-12-15T18:28:56" }
1,568
true
[wording] Update Readme.md
Make the features of the library clearer.
https://github.com/huggingface/datasets/pull/1567
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1567", "html_url": "https://github.com/huggingface/datasets/pull/1567", "diff_url": "https://github.com/huggingface/datasets/pull/1567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1567.patch", "merged_at": "2020-12-15T12:54:06" }
1,567
true
Add Microsoft Research Sequential Question Answering (SQA) Dataset
For more information: https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2
https://github.com/huggingface/datasets/pull/1566
[ "I proposed something a few weeks ago in #898 (un-merged) but I think that the way that @mattbui added the dataset in the present PR is smarter and simpler should replace my PR #898.\r\n\r\n(Narrator voice: *And it was around that time that Thomas realized that the community was now a lot smarter than him and he sh...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1566", "html_url": "https://github.com/huggingface/datasets/pull/1566", "diff_url": "https://github.com/huggingface/datasets/pull/1566.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1566.patch", "merged_at": "2020-12-15T15:24:22" }
1,566
true
Create README.md
https://github.com/huggingface/datasets/pull/1565
[ "@ManuelFay thanks you so much for adding a dataset card, this is such a cool contribution!\r\n\r\nThis looks like it uses an old template for the card we've moved things around a bit and we have an app you should be using to get the tags and the structure of the Data Fields paragraph :) Would you mind moving your ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1565", "html_url": "https://github.com/huggingface/datasets/pull/1565", "diff_url": "https://github.com/huggingface/datasets/pull/1565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1565.patch", "merged_at": "2021-03-25T14:01:49" }
1,565
true
added saudinewsnet
I'm having issues in creating the dummy data. I'm still investigating how to fix it. I'll close the PR if I couldn't find a solution
https://github.com/huggingface/datasets/pull/1564
[ "Hi @abdulelahsm - This is an interesting dataset! But there are multiple issues with the PR. Some of them are listed below: \r\n- default builder config is not defined. There should be atleast one builder config \r\n- URL is incorrectly constructed so the data files are not being downloaded \r\n- dataset_info.jso...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1564", "html_url": "https://github.com/huggingface/datasets/pull/1564", "diff_url": "https://github.com/huggingface/datasets/pull/1564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1564.patch", "merged_at": "2020-12-22T09:51:04" }
1,564
true
adding tmu-gfm-dataset
Adding TMU-GFM-Dataset for Grammatical Error Correction. https://github.com/tmu-nlp/TMU-GFM-Dataset A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](https://www.aclweb.org/anthology/2020.coling-main.573.pdf).
https://github.com/huggingface/datasets/pull/1563
[ "@lhoestq Thank you for your code review! I think I could do the necessary corrections. Could you please check it again when you have time?", "Thank you for merging!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1563", "html_url": "https://github.com/huggingface/datasets/pull/1563", "diff_url": "https://github.com/huggingface/datasets/pull/1563.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1563.patch", "merged_at": "2020-12-21T10:07:13" }
1,563
true
Add dataset COrpus of Urdu News TExt Reuse (COUNTER).
https://github.com/huggingface/datasets/pull/1562
[ "Just a small revision from simon's review: 20KB for the dummy_data.zip is fine, you can keep them this way.", "Also the CI is failing because of an error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` that is not related to your dataset and is fixed on master. You can ignore it", "merging since the ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1562", "html_url": "https://github.com/huggingface/datasets/pull/1562", "diff_url": "https://github.com/huggingface/datasets/pull/1562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1562.patch", "merged_at": "2020-12-21T13:14:46" }
1,562
true
Lama
This the LAMA dataset for probing facts and common sense from language models. See https://github.com/facebookresearch/LAMA for more details.
https://github.com/huggingface/datasets/pull/1561
[ "Let me know why the pyarrow test is failing. For one of the config \"trex\", I had to load an initial datafile for a dictionary which is used to augment the rest of the datasets. In the dummy data, the dictionary file was truncated so I had to fudge that. I'm not sure if that is the issue.\r\n", "@ontocord it ju...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1561", "html_url": "https://github.com/huggingface/datasets/pull/1561", "diff_url": "https://github.com/huggingface/datasets/pull/1561.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1561.patch", "merged_at": "2020-12-28T09:51:47" }
1,561
true
Adding the BrWaC dataset
Adding the BrWaC dataset, a large corpus of Portuguese language texts
https://github.com/huggingface/datasets/pull/1560
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1560", "html_url": "https://github.com/huggingface/datasets/pull/1560", "diff_url": "https://github.com/huggingface/datasets/pull/1560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1560.patch", "merged_at": "2020-12-18T15:56:55" }
1,560
true
adding dataset card information to CONTRIBUTING.md
Added a documentation line and link to the full sprint guide in the "How to add a dataset" section, and a section on how to contribute to the dataset card of an existing dataset. And a thank you note at the end :hugs:
https://github.com/huggingface/datasets/pull/1559
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1559", "html_url": "https://github.com/huggingface/datasets/pull/1559", "diff_url": "https://github.com/huggingface/datasets/pull/1559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1559.patch", "merged_at": "2020-12-14T17:55:03" }
1,559
true
Adding Igbo NER data
This PR adds the Igbo NER dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner
https://github.com/huggingface/datasets/pull/1558
[ "Thanks for the PR @purvimisal. \r\n\r\nFew comments below. ", "Hi, @lhoestq Thank you for the review. I have made all the changes. PTAL! ", "the CI error is not related to your dataset, merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1558", "html_url": "https://github.com/huggingface/datasets/pull/1558", "diff_url": "https://github.com/huggingface/datasets/pull/1558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1558.patch", "merged_at": "2020-12-21T14:38:20" }
1,558
true
HindEncorp again commited
https://github.com/huggingface/datasets/pull/1557
[ "Yes this has the right files!!!\r\n\r\nI'll close the previous one then :) \r\n\r\nNow to pass the tests, you will need to:\r\n- `make style` and run `flake8 datasets` from your repository root directory\r\n- fix the dummy data\r\n\r\nDid you generate the dummy data with the auto-generation tool (see the guide) or...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1557", "html_url": "https://github.com/huggingface/datasets/pull/1557", "diff_url": "https://github.com/huggingface/datasets/pull/1557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1557.patch", "merged_at": null }
1,557
true
add bswac
https://github.com/huggingface/datasets/pull/1556
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1556", "html_url": "https://github.com/huggingface/datasets/pull/1556", "diff_url": "https://github.com/huggingface/datasets/pull/1556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1556.patch", "merged_at": "2020-12-18T15:14:27" }
1,556
true
Added Opus TedTalks
Dataset : http://opus.nlpl.eu/TedTalks.php
https://github.com/huggingface/datasets/pull/1555
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1555", "html_url": "https://github.com/huggingface/datasets/pull/1555", "diff_url": "https://github.com/huggingface/datasets/pull/1555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1555.patch", "merged_at": "2020-12-18T09:44:43" }
1,555
true
Opus CAPES added
Dataset : http://opus.nlpl.eu/CAPES.php
https://github.com/huggingface/datasets/pull/1554
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "Hi @rkc007 , thanks for the contribution.\r\nUnfortunately, the CAPES dataset has already been added here: #1307\r\nI'm closing the PR ", "@lhoestq F...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1554", "html_url": "https://github.com/huggingface/datasets/pull/1554", "diff_url": "https://github.com/huggingface/datasets/pull/1554.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1554.patch", "merged_at": null }
1,554
true
added air_dialogue
UPDATE2 (3797ce5): Updated for multi-configs UPDATE (7018082): manually created the dummy_datasets. All tests were cleared locally. Pushed it to origin/master DRAFT VERSION (57fdb20): (_no longer draft_) Uploaded the air_dialogue database. dummy_data creation was failing in local, since the original downloaded file has some nested folders. Pushing it since the tests with real data was cleared. Will re-check & update via manually creating some dummy_data
https://github.com/huggingface/datasets/pull/1553
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1553", "html_url": "https://github.com/huggingface/datasets/pull/1553", "diff_url": "https://github.com/huggingface/datasets/pull/1553.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1553.patch", "merged_at": "2020-12-23T11:20:39" }
1,553
true
Added OPUS ParaCrawl
Dataset : http://opus.nlpl.eu/ParaCrawl.php
https://github.com/huggingface/datasets/pull/1552
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "@rkc007 @lhoestq just noticed a dataset named para_crawl has been added a long time ago: #91.", "They're not exactly the same so it's ok to have both...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1552", "html_url": "https://github.com/huggingface/datasets/pull/1552", "diff_url": "https://github.com/huggingface/datasets/pull/1552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1552.patch", "merged_at": "2020-12-21T09:50:25" }
1,552
true
Monero
Biomedical Romanian dataset :)
https://github.com/huggingface/datasets/pull/1551
[ "Hi @iliemihai - you need to add the Readme file! Otherwise seems good. \r\nAlso don't forget to run `make style` & `flake8 datasets` locally, from the datasets folder", "@skyprince999 I will add the README.d for it. Thank you :D ", "Thanks for your contribution, @iliemihai. Are you still interested in adding ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1551", "html_url": "https://github.com/huggingface/datasets/pull/1551", "diff_url": "https://github.com/huggingface/datasets/pull/1551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1551.patch", "merged_at": null }
1,551
true
Add offensive langauge dravidian dataset
https://github.com/huggingface/datasets/pull/1550
[ "Thanks much!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1550", "html_url": "https://github.com/huggingface/datasets/pull/1550", "diff_url": "https://github.com/huggingface/datasets/pull/1550.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1550.patch", "merged_at": "2020-12-18T14:25:30" }
1,550
true
Generics kb new branch
Datasets need manual downloads. Have thus created dummy data as well. But pytest on real and dummy data are failing. I have completed the readme , tags and other required things. I need to create the metadata json once tests get successful. Opening a PR while working with Yacine Jernite to resolve my pytest issues.
https://github.com/huggingface/datasets/pull/1549
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1549", "html_url": "https://github.com/huggingface/datasets/pull/1549", "diff_url": "https://github.com/huggingface/datasets/pull/1549.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1549.patch", "merged_at": "2020-12-21T13:55:09" }
1,549
true
Fix `🤗Datasets` - `tfds` differences link + a few aesthetics
https://github.com/huggingface/datasets/pull/1548
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1548", "html_url": "https://github.com/huggingface/datasets/pull/1548", "diff_url": "https://github.com/huggingface/datasets/pull/1548.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1548.patch", "merged_at": "2020-12-15T12:55:27" }
1,548
true
Adding PolEval2019 Machine Translation Task dataset
Facing an error with pytest in training. Dummy data is passing. README has to be updated.
https://github.com/huggingface/datasets/pull/1547
[ "**NOTE:**\r\n\r\n- Train and Dev: Manually downloaded (auto download is repeatedly giving `ConnectionError` for one of the files), Test: Auto Download\r\n- Dummy test is passing\r\n- The json file has been created with hard-coded paths for the manual downloads _(hardcoding has been removed from the final uploaded ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1547", "html_url": "https://github.com/huggingface/datasets/pull/1547", "diff_url": "https://github.com/huggingface/datasets/pull/1547.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1547.patch", "merged_at": "2020-12-21T16:13:21" }
1,547
true
Add persian ner dataset
Adding the following dataset: https://github.com/HaniehP/PersianNER
https://github.com/huggingface/datasets/pull/1546
[ "HI @SBrandeis. Thanks for all the comments - very helpful. I realised that the tests had failed and had been trying to figure out what was causing them to do so. All the tests pass when I run the load_real_dataset test however when I run `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1546", "html_url": "https://github.com/huggingface/datasets/pull/1546", "diff_url": "https://github.com/huggingface/datasets/pull/1546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1546.patch", "merged_at": "2020-12-23T09:53:03" }
1,546
true
add hrwac
https://github.com/huggingface/datasets/pull/1545
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1545", "html_url": "https://github.com/huggingface/datasets/pull/1545", "diff_url": "https://github.com/huggingface/datasets/pull/1545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1545.patch", "merged_at": "2020-12-18T13:35:17" }
1,545
true
Added Wiki Summary Dataset
Wiki Summary: Dataset extracted from Persian Wikipedia into the form of articles and highlights. Link: https://github.com/m3hrdadfi/wiki-summary
https://github.com/huggingface/datasets/pull/1544
[ "@lhoestq why my tests are not running?", "Maybe an issue with CircleCI, let me try to make them run", "The CI error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to this dataset and is fixed on master, you can ignore it", "what I need to do now", "Now the delimiter of the csv rea...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1544", "html_url": "https://github.com/huggingface/datasets/pull/1544", "diff_url": "https://github.com/huggingface/datasets/pull/1544.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1544.patch", "merged_at": "2020-12-18T16:17:18" }
1,544
true
adding HindEncorp
adding Hindi Wikipedia corpus
https://github.com/huggingface/datasets/pull/1543
[ "@lhoestq I have created a new PR by reforking and creating a new branch ", "@rahul-art unfortunately this didn't quite work, here's how you can try again:\r\n- `git checkout master` to go back to the main branch\r\n- `git pull upstream master` to make it up to date\r\n- `git checkout -b add_hind_encorp` to creat...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1543", "html_url": "https://github.com/huggingface/datasets/pull/1543", "diff_url": "https://github.com/huggingface/datasets/pull/1543.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1543.patch", "merged_at": null }
1,543
true
fix typo readme
https://github.com/huggingface/datasets/pull/1542
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1542", "html_url": "https://github.com/huggingface/datasets/pull/1542", "diff_url": "https://github.com/huggingface/datasets/pull/1542.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1542.patch", "merged_at": "2020-12-13T17:16:40" }
1,542
true
connection issue while downloading data
Hi I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. thanks ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) ```
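As a stopgap for transient connection timeouts like the one in the traceback, a generic retry wrapper can be placed around the failing call. This is only a sketch: the backoff values are arbitrary, and catching `OSError` (which `requests` exceptions subclass) is an assumption about the caller's environment, not part of the `datasets` API:

```python
import time

def with_retries(fn, max_retries=5, base_delay=1.0):
    """Call fn(); on a connection-related error, retry with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except (ConnectionError, OSError):
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical usage around the failing call from the traceback:
# with_retries(lambda: datasets.load_dataset("glue", "cola", split="train"))
```

Pre-downloading the data once and pointing the code at the local cache, as suggested in the comments, remains the more robust fix.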
https://github.com/huggingface/datasets/issues/1541
[ "could you tell me how I can avoid download, by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks\r\n@lhoestq ", "Does your instance have an internet connection ?\r\n\r\nIf you don't have an ...
null
1,541
false
added TTC4900: A Benchmark Data for Turkish Text Categorization dataset
This PR adds the TTC4900 dataset which is a Turkish Text Categorization dataset by me and @basakbuluz. Homepage: [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900) Point of Contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
https://github.com/huggingface/datasets/pull/1540
[ "@lhoestq, can you help with creating dummy_data?\r\n", "Hi @yavuzKomecoglu did you manage to build the dummy data ?", "> Hi @yavuzKomecoglu did you manage to build the dummy data ?\r\n\r\nHi, sorry for the return. I've created dummy_data.zip manually.", "> Nice thank you !\r\n> \r\n> Before we merge can you ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1540", "html_url": "https://github.com/huggingface/datasets/pull/1540", "diff_url": "https://github.com/huggingface/datasets/pull/1540.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1540.patch", "merged_at": "2020-12-18T10:09:01" }
1,540
true
Added Wiki Asp dataset
Hello, I have added Wiki Asp dataset. Please review the PR.
https://github.com/huggingface/datasets/pull/1539
[ "> Awesome thank you !\r\n> \r\n> I just left one comment.\r\n> \r\n> Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> Can you try to reduce their sizes please ? Ideally they should be <20KB each\r\n> \r\n> To do so feel free to take a look inside them and in the jsonl files only k...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1539", "html_url": "https://github.com/huggingface/datasets/pull/1539", "diff_url": "https://github.com/huggingface/datasets/pull/1539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1539.patch", "merged_at": null }
1,539
true
tweets_hate_speech_detection
https://github.com/huggingface/datasets/pull/1538
[ "Hi @lhoestq I have added this new dataset for tweet's hate speech detection. \r\n\r\nPlease if u could review it. \r\n\r\nThank you", "Hi @darshan-gandhi have you add a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me when you're ready for the final review", "Closing in favor of #1607" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1538", "html_url": "https://github.com/huggingface/datasets/pull/1538", "diff_url": "https://github.com/huggingface/datasets/pull/1538.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1538.patch", "merged_at": null }
1,538
true
added ohsumed
UPDATE2: PR passed all tests. Now waiting for review. UPDATE: pushed a new version. cross fingers that it should complete all the tests! :) If it passes all tests then it's not a draft version. This is a draft version
https://github.com/huggingface/datasets/pull/1537
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1537", "html_url": "https://github.com/huggingface/datasets/pull/1537", "diff_url": "https://github.com/huggingface/datasets/pull/1537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1537.patch", "merged_at": "2020-12-17T18:28:16" }
1,537
true
Add Hippocorpus Dataset
https://github.com/huggingface/datasets/pull/1536
[ "> Before we merge can you try to reduce the size of the dummy_data.zip file ?\r\n> \r\n> To do so feel free to only keep a few lines of the csv files ans also remove unnecessary chunks of texts (for example keep only the first sentences of a story).\r\n\r\nHi @lhoestq, I have reduced the size of the dummy_data.zip...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1536", "html_url": "https://github.com/huggingface/datasets/pull/1536", "diff_url": "https://github.com/huggingface/datasets/pull/1536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1536.patch", "merged_at": "2020-12-15T13:40:11" }
1,536
true
Adding Igbo monolingual dataset
This PR adds the Igbo Monolingual dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling Paper: https://arxiv.org/abs/2004.00648
https://github.com/huggingface/datasets/pull/1535
[ "@lhoestq Thank you for the review. I have made all the changes you mentioned. PTAL! " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1535", "html_url": "https://github.com/huggingface/datasets/pull/1535", "diff_url": "https://github.com/huggingface/datasets/pull/1535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1535.patch", "merged_at": "2020-12-21T14:39:48" }
1,535
true
adding dataset for diplomacy detection
https://github.com/huggingface/datasets/pull/1534
[ "Requested changes made and new PR submitted here: https://github.com/huggingface/datasets/pull/1580 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1534", "html_url": "https://github.com/huggingface/datasets/pull/1534", "diff_url": "https://github.com/huggingface/datasets/pull/1534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1534.patch", "merged_at": null }
1,534
true
add id_panl_bppt, a parallel corpus for en-id
Parallel Text Corpora for English - Indonesian
https://github.com/huggingface/datasets/pull/1533
[ "Hi @lhoestq, thanks for the review. I will have a look and update it accordingly.", "Strange error message :-)\r\n\r\n```\r\n> tf_context = tf.python.context.context() # eager mode context\r\nE AttributeError: module 'tensorflow' has no attribute 'python'\r\n```\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1533", "html_url": "https://github.com/huggingface/datasets/pull/1533", "diff_url": "https://github.com/huggingface/datasets/pull/1533.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1533.patch", "merged_at": "2020-12-21T10:40:36" }
1,533
true
adding hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1532
[ "made suggested changes and a new PR created here : https://github.com/huggingface/datasets/pull/1597" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1532", "html_url": "https://github.com/huggingface/datasets/pull/1532", "diff_url": "https://github.com/huggingface/datasets/pull/1532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1532.patch", "merged_at": null }
1,532
true
adding hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1531
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1531", "html_url": "https://github.com/huggingface/datasets/pull/1531", "diff_url": "https://github.com/huggingface/datasets/pull/1531.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1531.patch", "merged_at": null }
1,531
true
add indonlu benchmark datasets
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU. This is a new clean PR from [#1322](https://github.com/huggingface/datasets/pull/1322)
https://github.com/huggingface/datasets/pull/1530
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1530", "html_url": "https://github.com/huggingface/datasets/pull/1530", "diff_url": "https://github.com/huggingface/datasets/pull/1530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1530.patch", "merged_at": "2020-12-16T11:11:43" }
1,530
true
Ro sent
Movies reviews dataset for Romanian language.
https://github.com/huggingface/datasets/pull/1529
[ "Hi @iliemihai, it looks like this PR holds changes from your previous PR #1493 .\r\nWould you mind removing them from the branch please ?", "@SBrandeis I am sorry. Yes I will remove them. Thank you :D ", "Hi @lhoestq @SBrandeis @iliemihai\r\n\r\nIs this still in progress or can I take over this one?\r\n\r\nTha...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1529", "html_url": "https://github.com/huggingface/datasets/pull/1529", "diff_url": "https://github.com/huggingface/datasets/pull/1529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1529.patch", "merged_at": null }
1,529
true
initial commit for Common Crawl Domain Names
https://github.com/huggingface/datasets/pull/1528
[ "Thank you :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1528", "html_url": "https://github.com/huggingface/datasets/pull/1528", "diff_url": "https://github.com/huggingface/datasets/pull/1528.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1528.patch", "merged_at": "2020-12-18T10:22:32" }
1,528
true
Add : Conv AI 2 (Messed up original PR)
@lhoestq Sorry I messed up the previous 2 PR's -> https://github.com/huggingface/datasets/pull/1462 -> https://github.com/huggingface/datasets/pull/1383. So created a new one. Also, everything is fixed in this PR. Can you please review it ? Thanks in advance.
https://github.com/huggingface/datasets/pull/1527
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1527", "html_url": "https://github.com/huggingface/datasets/pull/1527", "diff_url": "https://github.com/huggingface/datasets/pull/1527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1527.patch", "merged_at": "2020-12-13T19:14:24" }
1,527
true
added Hebrew thisworld corpus
added corpus from https://thisworld.online/ , https://github.com/thisworld1/thisworld.online
https://github.com/huggingface/datasets/pull/1526
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1526", "html_url": "https://github.com/huggingface/datasets/pull/1526", "diff_url": "https://github.com/huggingface/datasets/pull/1526.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1526.patch", "merged_at": "2020-12-18T10:47:30" }
1,526
true
Adding a second branch for Atomic to fix git errors
Adding the Atomic common sense dataset. See https://homes.cs.washington.edu/~msap/atomic/
https://github.com/huggingface/datasets/pull/1525
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1525", "html_url": "https://github.com/huggingface/datasets/pull/1525", "diff_url": "https://github.com/huggingface/datasets/pull/1525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1525.patch", "merged_at": "2020-12-28T15:51:11" }
1,525
true
ADD: swahili dataset for language modeling
Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.
https://github.com/huggingface/datasets/pull/1524
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1524", "html_url": "https://github.com/huggingface/datasets/pull/1524", "diff_url": "https://github.com/huggingface/datasets/pull/1524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1524.patch", "merged_at": "2020-12-17T16:37:16" }
1,524
true
Add eHealth Knowledge Discovery dataset
This Spanish dataset can be used to mine knowledge from unstructured health texts. In particular, for: - Entity recognition - Relation extraction
https://github.com/huggingface/datasets/pull/1523
[ "Thank you very much for your review @lewtun ! \r\n\r\nI've updated the script metadata, created the README and fixed the two details you commented.\r\n\r\nReady for another review! 🤗 ", "I've updated the task tag as we discussed and also added a couple of lines of code to make sure I include all the available e...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1523", "html_url": "https://github.com/huggingface/datasets/pull/1523", "diff_url": "https://github.com/huggingface/datasets/pull/1523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1523.patch", "merged_at": "2020-12-17T16:48:56" }
1,523
true
Add semeval 2020 task 11
Adding the propaganda detection task (task 11) from SemEval 2020
https://github.com/huggingface/datasets/pull/1522
[ "@SBrandeis : Thanks for the feedback! Just updated to use context manager for the `open`s and removed the placeholder text from the `README`!", "Great, thanks @ZacharySBrown !\r\nFailing tests seem to be unrelated to your changes, merging the current master branch into yours should fix them.\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1522", "html_url": "https://github.com/huggingface/datasets/pull/1522", "diff_url": "https://github.com/huggingface/datasets/pull/1522.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1522.patch", "merged_at": "2020-12-15T16:48:52" }
1,522
true
Atomic
This is the ATOMIC common sense dataset. More info can be found here: * README.md still to be created.
https://github.com/huggingface/datasets/pull/1521
[ "I had to create a new PR to fix git errors. See: https://github.com/huggingface/datasets/pull/1525\r\n\r\nI'm closing this PR. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1521", "html_url": "https://github.com/huggingface/datasets/pull/1521", "diff_url": "https://github.com/huggingface/datasets/pull/1521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1521.patch", "merged_at": null }
1,521
true
ru_reviews dataset adding
RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian
https://github.com/huggingface/datasets/pull/1520
[ "Hi @lhoestq \r\n\r\nI have added the readme as well \r\n\r\nPlease do have a look at it when suitable ", "Chatted with @darshan-gandhi on Slack about parsing examples into a separate text and sentiment field", "Thanks for your contribution, @darshan-gandhi. Are you still interested in adding this dataset?\r\n...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1520", "html_url": "https://github.com/huggingface/datasets/pull/1520", "diff_url": "https://github.com/huggingface/datasets/pull/1520.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1520.patch", "merged_at": null }
1,520
true
Initial commit for AQuaMuSe
There is an issue in generation of dummy data. Tests on real data have passed locally.
https://github.com/huggingface/datasets/pull/1519
[ "@yjernite Thank you for your help, generating the dummy data 🤗 Having that all the tests have passed 👍🏻", "merging since the CI is fixed on master", "Thank you :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1519", "html_url": "https://github.com/huggingface/datasets/pull/1519", "diff_url": "https://github.com/huggingface/datasets/pull/1519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1519.patch", "merged_at": "2020-12-17T17:03:30" }
1,519
true
Add twi text
Add Twi texts
https://github.com/huggingface/datasets/pull/1518
[ "Hii please follow me", "thank you" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1518", "html_url": "https://github.com/huggingface/datasets/pull/1518", "diff_url": "https://github.com/huggingface/datasets/pull/1518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1518.patch", "merged_at": "2020-12-13T18:53:37" }
1,518
true
Kd conv smangrul
https://github.com/huggingface/datasets/pull/1517
[ "Hii please follow me", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1517", "html_url": "https://github.com/huggingface/datasets/pull/1517", "diff_url": "https://github.com/huggingface/datasets/pull/1517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1517.patch", "merged_at": "2020-12-16T14:56:14" }
1,517
true
adding wrbsc
https://github.com/huggingface/datasets/pull/1516
[ "@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated. ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1516", "html_url": "https://github.com/huggingface/datasets/pull/1516", "diff_url": "https://github.com/huggingface/datasets/pull/1516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1516.patch", "merged_at": "2020-12-18T09:41:33" }
1,516
true
Add yoruba text
Adding Yoruba text C3
https://github.com/huggingface/datasets/pull/1515
[ "closing since #1379 got merged" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1515", "html_url": "https://github.com/huggingface/datasets/pull/1515", "diff_url": "https://github.com/huggingface/datasets/pull/1515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1515.patch", "merged_at": null }
1,515
true
how to get all the options of a property in datasets
Hi, could you tell me how I can get all the unique options of a property of a dataset? For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without gathering all the training data labels and then forming a set? Thanks
https://github.com/huggingface/datasets/issues/1514
[ "In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes).", "I think the `features` attribute of the dataset object is what you are looking for:\r\n```\r\n>>> datase...
null
1,514
false
app_reviews_by_users
Software Applications User Reviews
https://github.com/huggingface/datasets/pull/1513
[ "Hi @lhoestq \r\n\r\nI have added the readme file as well, please if you could check it once \r\n\r\nThank you " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1513", "html_url": "https://github.com/huggingface/datasets/pull/1513", "diff_url": "https://github.com/huggingface/datasets/pull/1513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1513.patch", "merged_at": "2020-12-14T20:45:24" }
1,513
true
Add Hippocorpus Dataset
https://github.com/huggingface/datasets/pull/1512
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1512", "html_url": "https://github.com/huggingface/datasets/pull/1512", "diff_url": "https://github.com/huggingface/datasets/pull/1512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1512.patch", "merged_at": null }
1,512
true
poleval cyberbullying
https://github.com/huggingface/datasets/pull/1511
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1511", "html_url": "https://github.com/huggingface/datasets/pull/1511", "diff_url": "https://github.com/huggingface/datasets/pull/1511.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1511.patch", "merged_at": "2020-12-17T16:19:58" }
1,511
true
Add Dataset for (qa_srl)Question-Answer Driven Semantic Role Labeling
- Added tags and README file - Added code changes
https://github.com/huggingface/datasets/pull/1510
[ "Hii please follow me", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1510", "html_url": "https://github.com/huggingface/datasets/pull/1510", "diff_url": "https://github.com/huggingface/datasets/pull/1510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1510.patch", "merged_at": "2020-12-17T16:06:22" }
1,510
true
Added dataset Makhzan
Need help with the dummy data.
https://github.com/huggingface/datasets/pull/1509
[ "The only CI error comes from \r\n```\r\nFAILED tests/test_file_utils.py::TempSeedTest::test_tensorflow\r\n```\r\n\r\nwhich is not related to your PR and is fixed on master.\r\n\r\nYou can ignore it", "@lhoestq I've made the changes. Please review and merge. \r\n\r\nI have a similar PR https://github.com/huggingf...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1509", "html_url": "https://github.com/huggingface/datasets/pull/1509", "diff_url": "https://github.com/huggingface/datasets/pull/1509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1509.patch", "merged_at": "2020-12-16T15:04:51" }
1,509
true
Fix namedsplit docs
Fixes a broken link and `DatasetInfoMixin.split`'s docstring.
https://github.com/huggingface/datasets/pull/1508
[ "Hii please follow me", "Thanks @mariosasko!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1508", "html_url": "https://github.com/huggingface/datasets/pull/1508", "diff_url": "https://github.com/huggingface/datasets/pull/1508.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1508.patch", "merged_at": "2020-12-15T12:57:48" }
1,508
true