id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
instruction-tuning-sd/cartoonization | instruction-tuning-sd | 2023-05-11T15:16:08Z | 35 | 8 | null | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-05-11T15:16:08Z | 2023-03-17T09:13:34.000Z | 2023-03-17T09:13:34 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: cartoonized_image
dtype: image
splits:
- name: train
num_bytes: 3257571330
num_examples: 5000
download_size: 3296272284
dataset_size: 3257571330
size_categories:
- 1K<n<10K
language:
- en
task_categories:
- image-to-image
---
# Instruction-prompted cartoonization dataset
This dataset was created from 5000 images randomly sampled from the [Imagenette dataset](https://github.com/fastai/imagenette). For more
details on how the dataset was created, check out [this directory](https://github.com/sayakpaul/instruction-tuned-sd/tree/main/data_preparation).
The following figure depicts the data preparation workflow:
<p align="center">
<img src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/cartoonization_data_wheel.png" width=600/>
</p>
## Known limitations and biases
The dataset was derived from Imagenette, which, in turn, was derived from [ImageNet](https://www.image-net.org/). So, naturally, this
dataset inherits the limitations and biases of ImageNet.
## Licensing
The dataset was derived from Imagenette, which, in turn, was derived from [ImageNet](https://www.image-net.org/). So, this dataset's license
is the same as ImageNet.
WxWx/ChatGPT-Detector-Bias | WxWx | 2023-04-10T00:48:06Z | 35 | 8 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"ChatGPT",
"GPT Detector",
"ChatGPT Detector",
"arxiv:2304.02819",
"region:us"
] | 2023-04-10T00:48:06Z | 2023-04-05T20:57:48.000Z | 2023-04-05T20:57:48 | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- ChatGPT
- GPT Detector
- ChatGPT Detector
size_categories:
- n<1K
---
# GPT Detectors Are Biased Against Non-Native English Writers
[](https://lbesson.mit-license.org/)
[](https://www.python.org/downloads/release/python-390/)
[](https://jupyter.org/try)
This repository contains the data and supplementary materials for our paper:
**GPT Detectors Are Biased Against Non-Native English Writers**\
Weixin Liang*, Mert Yuksekgonul*, Yining Mao*, Eric Wu*, James Zou\
arXiv: [2304.02819](https://arxiv.org/abs/2304.02819)
```bibtex
@article{liang2023gpt,
title={GPT detectors are biased against non-native English writers},
author={Weixin Liang and Mert Yuksekgonul and Yining Mao and Eric Wu and James Zou},
year={2023},
eprint={2304.02819},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Abstract
*The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.*
<p align='center'>
<img width="636" src="https://user-images.githubusercontent.com/32794044/230640445-8d1221d4-8651-4cf4-b6d7-b6d440d6e0f5.png">
<br>
<b>Figure 1: Bias in GPT detectors against non-native English writing samples.</b>
</p>
(a) Performance comparison of seven widely-used GPT detectors. More than half of the non-native-authored TOEFL (Test of English as a Foreign Language) essays are incorrectly classified as "AI-generated," while detectors exhibit near-perfect accuracy for college essays.
Using ChatGPT-4 to improve the word choices in TOEFL essays (Prompt: "Enhance the word choices to sound more like that of a native speaker.") significantly reduces misclassification as AI-generated text.
(b) TOEFL essays unanimously misclassified as AI-generated show significantly lower perplexity compared to others, suggesting that GPT detectors might penalize authors with limited linguistic expressions.
<p align='center'>
<img width="100%" src="https://user-images.githubusercontent.com/32794044/230640270-e6c3d0ca-aabd-4d13-8527-15fed1491050.png">
<br>
<b>Figure 2: Simple prompts effectively bypass GPT detectors.</b>
</p>
(a) For ChatGPT-3.5 generated college admission essays, the performance of seven widely-used GPT detectors declines markedly when a second-round self-edit prompt ("Elevate the provided text by employing literary language") is applied, with detection rates dropping from up to 100% to up to 13%.
(b) ChatGPT-3.5 generated essays initially exhibit notably low perplexity; however, applying the self-edit prompt leads to a significant increase in perplexity.
(c) Similarly, in detecting ChatGPT-3.5 generated scientific abstracts, a second-round self-edit prompt ("Elevate the provided text by employing advanced technical language") leads to a reduction in detection rates from up to 68% to up to 28%.
(d) ChatGPT-3.5 generated abstracts have slightly higher perplexity than the generated essays but remain low. Again, the self-edit prompt significantly increases the perplexity.
## Repo Structure Overview
```
.
├── README.md
├── data/
├── human_data/
├── TOEFL_real_91/
├── name.json
├── data.json
├── TOEFL_gpt4polished_91/
├── ...
├── CollegeEssay_real_70/
├── CS224N_real_145/
├── gpt_data/
├── CollegeEssay_gpt3_31/
├── CollegeEssay_gpt3PromptEng_31/
├── CS224N_gpt3_145/
├── CS224N_gpt3PromptEng_145/
```
The `data` folder contains the human-written and AI-generated datasets used in our study. Each subfolder contains a `name.json` file, which provides the metadata, and a `data.json` file, which contains the text samples.
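The subfolder names in the tree above appear to encode the data source and the number of samples (e.g. `TOEFL_real_91` holds 91 essays). A small helper to parse this convention — note the convention itself is an assumption inferred from the directory listing, not documented by the authors:

```python
# Parse a data subfolder name of the form "<source>_<count>",
# e.g. "TOEFL_real_91" -> source "TOEFL_real", 91 samples.
# The naming convention is inferred from the repo tree above.
def parse_folder(name: str) -> dict:
    source, count = name.rsplit("_", 1)
    return {"source": source, "count": int(count)}

print(parse_folder("TOEFL_real_91"))    # {'source': 'TOEFL_real', 'count': 91}
print(parse_folder("CS224N_gpt3_145"))  # {'source': 'CS224N_gpt3', 'count': 145}
```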
## Reference
```bibtex
@article{liang2023gpt,
title={GPT detectors are biased against non-native English writers},
author={Weixin Liang and Mert Yuksekgonul and Yining Mao and Eric Wu and James Zou},
year={2023},
eprint={2304.02819},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
j0selit0/insurance-qa-en | j0selit0 | 2023-04-07T09:33:50Z | 35 | 3 | null | [
"region:us"
] | 2023-04-07T09:33:50Z | 2023-04-06T13:38:01.000Z | 2023-04-06T13:38:01 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: topic_en
dtype: string
- name: question_en
dtype: string
splits:
- name: train
num_bytes: 1044899
num_examples: 12888
- name: test
num_bytes: 162551
num_examples: 1999
- name: valid
num_bytes: 162498
num_examples: 1999
download_size: 126622
dataset_size: 1369948
---
# Dataset Card for "insurance-qa-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mstz/ozone | mstz | 2023-04-16T17:57:24Z | 35 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"ozone",
"tabular_classification",
"binary_classification",
"region:us"
] | 2023-04-16T17:57:24Z | 2023-04-06T21:44:22.000Z | 2023-04-06T21:44:22 | ---
language:
- en
tags:
- ozone
- tabular_classification
- binary_classification
pretty_name: Ozone
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- 8hr
- 1hr
license: cc
---
# Ozone
The [Ozone dataset](https://archive.ics.uci.edu/ml/datasets/Ozone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| 8hr | Binary classification | Is this an ozone day? (8-hour peak set) |
| 1hr | Binary classification | Is this an ozone day? (1-hour peak set) |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/ozone", "8hr")["train"]
```
Nan-Do/code-search-net-python | Nan-Do | 2023-05-15T00:55:15Z | 35 | 14 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"python",
"CodeSearchNet",
"region:us"
] | 2023-05-15T00:55:15Z | 2023-05-14T00:42:57.000Z | 2023-05-14T00:42:57 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1772584117
num_examples: 455243
download_size: 598837908
dataset_size: 1772584117
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
tags:
- code
- python
- CodeSearchNet
pretty_name: Python CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Python portion of CodeSearchNet, annotated with a summary column.
The CodeSearchNet dataset includes open-source functions with accompanying comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are written in Python.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
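Since the splits are encoded in a `partition` column rather than as separate dataset splits, recovering them takes a small grouping pass. A minimal sketch (the sample rows below are illustrative placeholders, not real dataset records):

```python
from collections import defaultdict

# Illustrative records mirroring the dataset schema: each row carries
# a "partition" column marking its split membership.
rows = [
    {"func_name": "parse_args", "partition": "train"},
    {"func_name": "load_config", "partition": "test"},
    {"func_name": "save_model", "partition": "train"},
    {"func_name": "run_eval", "partition": "valid"},
]

# Group records into splits keyed by the partition column.
splits = defaultdict(list)
for row in rows:
    splits[row["partition"]].append(row)

print(sorted(splits))        # ['test', 'train', 'valid']
print(len(splits["train"]))  # 2
```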
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful for training LLMs.
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column with a short description of each function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to remove repetitions and meaningless summaries, though some may still be present in the dataset.
### Licensing Information
Apache 2.0
hkust-nlp/felm | hkust-nlp | 2023-10-03T17:29:57Z | 35 | 9 | null | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2310.00741",
"region:us"
] | 2023-10-03T17:29:57Z | 2023-06-14T07:24:44.000Z | 2023-06-14T07:24:44 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
pretty_name: FELM
---
# Dataset Card for FELM
## Table of Contents
- [Dataset Card for FELM](#dataset-card-for-FELM)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Initial Data Collection and Clean](#initial-data-collection-and-clean)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/hkust-nlp/felm
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[[Paper]](https://arxiv.org/abs/2310.00741) [[Github Repo]](https://github.com/hkust-nlp/felm)
FELM is a meta-benchmark to evaluate factuality evaluation for large language models.<br>
The benchmark comprises 847 questions that span five distinct domains: world knowledge, science/technology, writing/recommendation, reasoning, and math. We gather prompts for each domain from various sources, including standard datasets like TruthfulQA, online platforms like GitHub repositories, ChatGPT generation, and prompts drafted by the authors.<br>
We then obtain responses from ChatGPT for these prompts. For each response, we employ fine-grained annotation at the segment level, which includes reference links, identified error types, and the reasons behind these errors as provided by our annotators.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
An example looks as follows:
```python
{"index": "0",
"source": "quora",
"prompt": "Which country or city has the maximum number of nuclear power plants?",
"response": "The United States has the highest number of nuclear power plants in the world, with 94 operating reactors. Other countries with a significant number of nuclear power plants include France, China, Russia, and South Korea.",
"segmented_response": ["The United States has the highest number of nuclear power plants in the world, with 94 operating reactors.", "Other countries with a significant number of nuclear power plants include France, China, Russia, and South Korea."],
"labels": [false, true],
"comment": ["As of December 2022, there were 92 operable nuclear power reactors in the United States.", ""],
"type": ["knowledge_error", null],
"ref": ["https://www.eia.gov/tools/faqs/faq.php?id=207&t=3"]}
```
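Each FELM record pairs every response segment with a factuality label and, where applicable, an error type. A minimal sketch of deriving segment-level counts from the example instance shown above:

```python
# Trimmed copy of the example record above: per-segment factuality
# labels (False = factual error) and error types (None = no error).
record = {
    "segmented_response": [
        "The United States has the highest number of nuclear power plants in the world, with 94 operating reactors.",
        "Other countries with a significant number of nuclear power plants include France, China, Russia, and South Korea.",
    ],
    "labels": [False, True],
    "type": ["knowledge_error", None],
}

num_segments = len(record["segmented_response"])
num_errors = sum(1 for label in record["labels"] if not label)
error_types = [t for t in record["type"] if t is not None]

print(num_segments)  # 2
print(num_errors)    # 1
print(error_types)   # ['knowledge_error']
```

Segment-level accuracy of a factuality evaluator can then be scored against `labels` in the same way.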
### Data Fields
| Field Name | Field Value | Description |
| ----------- | ----------- | ------------------------------------------- |
| index | Integer | the order number of the data point |
| source | string | the prompt source |
| prompt | string | the prompt for generating response |
| response | string | the response of ChatGPT for prompt |
| segmented_response | list | segments of the response |
| labels | list | factuality labels for segmented_response |
| comment | list | error reasons for segments with factual error |
| type | list | error types for segments with factual error |
| ref | list | reference links |
## Dataset Creation
### Source Data
#### Initial Data Collection and Clean
We gather prompts for each domain from various sources, including standard datasets like TruthfulQA, online platforms like GitHub repositories, ChatGPT generation, and prompts drafted by the authors.
The data was cleaned by the authors.
### Annotations
#### Annotation process
We have developed an annotation tool and established annotation guidelines. All annotations undergo a double-check process, which involves review by both other annotators and an expert reviewer.
#### Who are the annotators?
The authors of the paper; Yuzhen Huang, Yikai Zhang, Tangjun Su.
## Additional Information
### Licensing Information
This dataset is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@inproceedings{
chen2023felm,
title={FELM: Benchmarking Factuality Evaluation of Large Language Models},
author={Chen, Shiqi and Zhao, Yiran and Zhang, Jinghan and Chern, I-Chun and Gao, Siyang and Liu, Pengfei and He, Junxian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={http://arxiv.org/abs/2310.00741}
}
```
### Contributions
[Needs More Information]
mcipriano/stackoverflow-kubernetes-questions | mcipriano | 2023-10-10T18:21:03Z | 35 | 11 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"Kubernetes",
"Stackoverflow",
"region:us"
] | 2023-10-10T18:21:03Z | 2023-06-19T23:31:32.000Z | 2023-06-19T23:31:32 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- Kubernetes
- Stackoverflow
size_categories:
- 10K<n<100K
---
This dataset is intended for training and fine-tuning language models. The `data` folder contains the dataset in Parquet format, one of the formats commonly used for these processes.
In case it may be useful for other purposes, I have also included the dataset in CSV format.
All data in this dataset were retrieved from the Stack Exchange network using the Stack Exchange Data Explorer tool (https://github.com/StackExchange/StackExchange.DataExplorer). The dataset contains all the question-answer pairs from Stack Overflow tagged with Kubernetes; for each question, the selected answer is the one with the highest positive score. Posts on Stack Overflow with negative scores have been excluded from the dataset.
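The answer-selection rule described in the card (keep the answer with the maximum positive score, exclude negatively scored posts) can be sketched as follows; the candidate answers below are illustrative placeholders, not real Stack Overflow data:

```python
# Illustrative candidate answers for one question. Only answers with
# a positive score are eligible; the highest-scoring one is kept.
answers = [
    {"body": "Use kubectl describe pod", "score": 12},
    {"body": "Restart the node", "score": -3},
    {"body": "Check events with kubectl get events", "score": 7},
]

eligible = [a for a in answers if a["score"] > 0]
best = max(eligible, key=lambda a: a["score"]) if eligible else None

print(best["score"])  # 12
```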
chloechia/maskloveda | chloechia | 2023-06-21T23:39:55Z | 35 | 0 | null | [
"region:us"
] | 2023-06-21T23:39:55Z | 2023-06-21T23:37:26.000Z | 2023-06-21T23:37:26 | Entry not found
causal-lm/natural_instructions | causal-lm | 2023-07-13T14:22:18Z | 35 | 0 | null | [
"language:en",
"region:us"
] | 2023-07-13T14:22:18Z | 2023-06-25T06:05:21.000Z | 2023-06-25T06:05:21 | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3794173036
num_examples: 4530011
- name: validation
num_bytes: 421548790
num_examples: 503335
download_size: 2165828372
dataset_size: 4215721826
---
# Dataset Card for "natural_instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Vezora/Mini_Orca_Code_Uncencored_alpaca_Format | Vezora | 2023-08-14T04:51:11Z | 35 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-14T04:51:11Z | 2023-07-12T04:21:22.000Z | 2023-07-12T04:21:22 | ---
license: apache-2.0
---
This dataset is a modified version of psmathur's Mini Orca dataset, formatted in the Alpaca format and uncensored.
The dataset is filtered to feature only coding instructions, around 50k code examples.
For Alpaca LoRA users:
Modules you can target with LoRA: "gate_proj", "down_proj", "up_proj", "q_proj", "v_proj", "k_proj", "o_proj"
Most LoRA models use: "q_proj", "v_proj", "k_proj", "o_proj"
Platypus, which got terrific results, used: "gate_proj", "down_proj", "up_proj"
Research on targeting certain modules still needs to be done, but if you don't want to train over a previously trained model's newly learned abilities, target different modules than the ones used for the original training.
Hyperparameters used by Platypus:
Hyperparameters for the 13B and 70B models:

| Hyperparameter | Platypus2-13B / 70B |
|---|---|
| batch size | 16 |
| micro batch size | 1 |
| num epochs | 1 |
| learning rate | 4e-4 / 3e-4 |
| cutoff len | 4096 |
| lora rank | 16 |
| lora alpha | 16 |
| lora dropout | 0.05 |
| lora target modules | gate_proj, down_proj, up_proj |
| train on inputs | False |
| add eos token | False |
| group by length | False |
| prompt template | alpaca |
| lr scheduler | cosine |
| warmup steps | 100 |
I would recommend using a batch size of 4-10 and a cutoff length of ≤ 2048 to avoid VRAM issues on a single 24 GB card, with load_in_4bit, NormalFloat quantization, and bf16.
If training with oobabooga, you must edit the "training.py" file in the "oobabooga_windows\text-generation-webui\modules" folder. On line 49, change the standard modules to the modules you would like to target.
If training with Alpaca LoRA, use the argument --lora_target_modules when running the train.py command. To load in 4-bit you must edit the train file, adding load in 4-bit, bf16, and normal float quantization.
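The card's advice about avoiding previously trained modules amounts to set subtraction over the module names it lists. A small sketch (the "previously trained" set here assumes the common default mentioned above):

```python
# All targetable LoRA modules named in the card.
all_modules = {"gate_proj", "down_proj", "up_proj",
               "q_proj", "v_proj", "k_proj", "o_proj"}

# Assumed prior run using the common default targets.
previously_trained = {"q_proj", "v_proj", "k_proj", "o_proj"}

# Target the complement to avoid overwriting newly learned abilities;
# this recovers exactly the set Platypus used.
new_targets = sorted(all_modules - previously_trained)
print(new_targets)  # ['down_proj', 'gate_proj', 'up_proj']
```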
AdiOO7/Bank_Complaints | AdiOO7 | 2023-07-12T07:37:36Z | 35 | 1 | null | [
"task_categories:table-question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"finance",
"region:us"
] | 2023-07-12T07:37:36Z | 2023-07-12T07:04:50.000Z | 2023-07-12T07:04:50 | ---
license: apache-2.0
task_categories:
- table-question-answering
language:
- en
tags:
- finance
size_categories:
- 1K<n<10K
---
FelipeBandeiraPoatek/invoices-donut-data-v1 | FelipeBandeiraPoatek | 2023-07-20T21:20:06Z | 35 | 0 | null | [
"region:us"
] | 2023-07-20T21:20:06Z | 2023-07-20T20:23:47.000Z | 2023-07-20T20:23:47 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 234466949.0
num_examples: 425
- name: test
num_bytes: 15053216.0
num_examples: 26
- name: validation
num_bytes: 26678659.0
num_examples: 50
download_size: 197788456
dataset_size: 276198824.0
---
# Dataset Card for "invoices-donut-data-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
EgilKarlsen/BGL | EgilKarlsen | 2023-07-21T01:13:21Z | 35 | 0 | null | [
"region:us"
] | 2023-07-21T01:13:21Z | 2023-07-21T01:12:56.000Z | 2023-07-21T01:12:56 | ---
dataset_info:
features:
- name: log
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 830216355
num_examples: 4753370
- name: test
num_bytes: 237311703
num_examples: 1358106
- name: validation
num_bytes: 118629381
num_examples: 679054
download_size: 476009078
dataset_size: 1186157439
---
# Dataset Card for "BGL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
izumi-lab/open-text-books | izumi-lab | 2023-08-01T05:12:00Z | 35 | 5 | null | [
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-08-01T05:12:00Z | 2023-08-01T05:09:51.000Z | 2023-08-01T05:09:51 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 281723992
num_examples: 149700
download_size: 152345811
dataset_size: 281723992
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
language:
- en
---
# Dataset Card for "open-text-books"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reginaboateng/Bioasq7b_6b_list | reginaboateng | 2023-08-02T14:57:12Z | 35 | 0 | null | [
"region:us"
] | 2023-08-02T14:57:12Z | 2023-08-02T14:57:11.000Z | 2023-08-02T14:57:11 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 27573422
num_examples: 16239
download_size: 5435398
dataset_size: 27573422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Bioasq7b_6b_list"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bdpc/rvl_cdip_n_mp | bdpc | 2023-11-24T14:11:43Z | 35 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-24T14:11:43Z | 2023-08-11T09:24:28.000Z | 2023-08-11T09:24:28 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: id
dtype: string
- name: file
dtype: binary
- name: labels
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: test
num_bytes: 1349159996
num_examples: 991
download_size: 0
dataset_size: 1349159996
---
# Dataset Card for RVL-CDIP-N_MultiPage
## Extension
The data loader provides support for loading RVL_CDIP-N in its extended multipage format.
Big kudos to the original authors (first in CITATION) for collecting the RVL-CDIP-N dataset.
We stand on the shoulders of giants :)
## Required installation
```bash
pip3 install pypdf2 pdf2image
sudo apt-get install poppler-utils
```
duxprajapati/symptom-disease-dataset | duxprajapati | 2023-08-22T12:39:19Z | 35 | 0 | null | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-08-22T12:39:19Z | 2023-08-22T12:38:28.000Z | 2023-08-22T12:38:28 | ---
task_categories:
- text-classification
language:
- en
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Asor/guanaco-llama2-200 | Asor | 2023-08-26T14:23:50Z | 35 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-26T14:23:50Z | 2023-08-26T14:23:10.000Z | 2023-08-26T14:23:10 | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 201257
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mickume/harry_potter_tiny | mickume | 2023-08-30T12:46:15Z | 35 | 0 | null | [
"region:us"
] | 2023-08-30T12:46:15Z | 2023-08-30T12:46:08.000Z | 2023-08-30T12:46:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1234764
num_examples: 7481
download_size: 747534
dataset_size: 1234764
---
# Dataset Card for "harrypotter_tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.570440411567688,
-0.23345258831977844,
0.11073896288871765,
0.20174819231033325,
-0.17680327594280243,
-0.24477210640907288,
0.0589459091424942,
0.02752583660185337,
1.0105894804000854,
0.2540740668773651,
-0.7066097855567932,
-0.43946757912635803,
-0.5013206005096436,
-0.14279182255268... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlabonne/medical-cases-fr | mlabonne | 2023-09-09T16:24:18Z | 35 | 0 | null | [
"region:us"
] | 2023-09-09T16:24:18Z | 2023-09-09T12:59:55.000Z | 2023-09-09T12:59:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: Specialite
dtype: string
- name: Serie
dtype: int64
- name: Question
dtype: int64
- name: N_Question
dtype: int64
- name: Answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 38355502
num_examples: 8134
- name: eval
num_bytes: 1479803
num_examples: 366
download_size: 7807273
dataset_size: 39835305
---
# Dataset Card for "medical-cases-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3542307913303375,
-0.3168134391307831,
0.5333114266395569,
0.20610135793685913,
-0.3463818430900574,
-0.003688626689836383,
0.3669486939907074,
-0.1464201956987381,
0.9280486702919006,
0.4522055685520172,
-0.8381602764129639,
-0.8384590148925781,
-0.6181100606918335,
-0.2482649683952331... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deven367/babylm-10M-cbt | deven367 | 2023-09-15T17:06:48Z | 35 | 0 | null | [
"region:us"
] | 2023-09-15T17:06:48Z | 2023-09-15T17:06:43.000Z | 2023-09-15T17:06:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2705697
num_examples: 26000
- name: valid
num_bytes: 1220938
num_examples: 12747
- name: test
num_bytes: 1578682
num_examples: 16646
download_size: 3370383
dataset_size: 5505317
---
# Dataset Card for "babylm-10M-cbt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5524458885192871,
-0.3137492537498474,
-0.022751761600375175,
0.3987836539745331,
-0.47291964292526245,
0.08539687842130661,
0.3186398446559906,
-0.12037637829780579,
0.6882618069648743,
0.450034499168396,
-0.9581199288368225,
-0.7397446632385254,
-0.6667347550392151,
-0.375313133001327... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Trelis/touch-rugby-rules-unsupervised | Trelis | 2023-09-20T14:39:47Z | 35 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"touch rugby",
"region:us"
] | 2023-09-20T14:39:47Z | 2023-09-20T13:28:02.000Z | 2023-09-20T13:28:02 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- touch rugby
size_categories:
- n<1K
---
# Touch Rugby Rules Dataset
train.csv is taken from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf).
All text is chunked to a length of 250 tokens, aiming to keep sentences whole where possible.
For educational and non-commercial use only. | [
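The chunking step described above can be sketched roughly as follows. This is a simplified illustration, not the script used to build train.csv: it counts whitespace tokens instead of model tokens, and `chunk_text` is a hypothetical helper name.

```python
def chunk_text(text: str, max_tokens: int = 250) -> list[str]:
    """Greedily pack whole sentences into chunks of at most max_tokens
    whitespace-delimited tokens, keeping sentences intact."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current, current_len = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and current_len + n > max_tokens:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(sent)
        current_len += n
    if current:
        chunks.append(" ".join(current))
    return chunks

# 100 five-token sentences -> two chunks of exactly 250 tokens each.
chunks = chunk_text("The ball must be touched. " * 100, max_tokens=250)
print(len(chunks))
```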
-0.312634140253067,
-0.19076532125473022,
-0.07076055556535721,
0.7934874296188354,
-0.5518847703933716,
-0.07418575882911682,
0.09606456011533737,
-0.6101306080818176,
0.3884955644607544,
0.6583439707756042,
-0.8274424076080322,
-0.3453325927257538,
-0.41614246368408203,
0.201499342918396... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
larryvrh/OASST_Top1_2023-08-25-Zh_Only | larryvrh | 2023-09-20T19:33:28Z | 35 | 0 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:zh",
"region:us"
] | 2023-09-20T19:33:28Z | 2023-09-20T19:30:35.000Z | 2023-09-20T19:30:35 | ---
dataset_info:
features:
- name: conversation
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1008722
num_examples: 662
download_size: 603882
dataset_size: 1008722
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- n<1K
---
# Dataset Card for "OASST_Top1_2023-08-25-Zh_Only"
Filtered from [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25). | [
-0.45040929317474365,
-0.48136958479881287,
0.3791508078575134,
0.025158606469631195,
-0.8047683835029602,
-0.1933460533618927,
0.4300689101219177,
-0.20681487023830414,
0.8053441643714905,
0.890507161617279,
-1.299076795578003,
-1.0892879962921143,
-0.6560039520263672,
-0.357437402009964,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emozilla/Long-Data-Collections-Fine-Tune | emozilla | 2023-10-09T15:01:11Z | 35 | 2 | null | [
"region:us"
] | 2023-10-09T15:01:11Z | 2023-10-07T02:17:23.000Z | 2023-10-07T02:17:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 12859272204
num_examples: 98557
download_size: 7118608463
dataset_size: 12859272204
---
# Dataset Card for "Long-Data-Collections-Fine-Tune"
Parquet version of the fine-tune split of [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections)
Statistics (in # of characters): `total_len: 6419025428, average_len: 65130.08135393731` | [
-0.7610816955566406,
-0.5079225897789001,
0.1972825676202774,
0.1326182782649994,
-0.9335904121398926,
0.07736475765705109,
-0.46255362033843994,
-0.40600335597991943,
0.9748000502586365,
0.6827043890953064,
-0.47953522205352783,
-0.7220984697341919,
-0.5024734735488892,
0.0764148011803627... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FinGPT/fingpt-sentiment-cls | FinGPT | 2023-10-10T06:49:38Z | 35 | 3 | null | [
"region:us"
] | 2023-10-10T06:49:38Z | 2023-10-10T06:39:32.000Z | 2023-10-10T06:39:32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 10908696
num_examples: 47557
download_size: 3902114
dataset_size: 10908696
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fingpt-sentiment-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9468989372253418,
-0.12090928852558136,
0.17755632102489471,
0.3427969813346863,
-0.49444469809532166,
-0.0638502985239029,
-0.08838417381048203,
-0.03983832150697708,
0.820315957069397,
0.4080362319946289,
-0.9505518078804016,
-0.8705713152885437,
-0.6502483487129211,
-0.28961262106895... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlexHung29629/stack-exchange-paired-128K | AlexHung29629 | 2023-10-13T05:42:06Z | 35 | 0 | null | [
"region:us"
] | 2023-10-13T05:42:06Z | 2023-10-13T04:07:53.000Z | 2023-10-13T04:07:53 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 243412260
num_examples: 128000
download_size: 82603750
dataset_size: 243412260
---
# Dataset Card for "stack-exchange-paired-128K"
## Token count
llama2: 97868021
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3330790400505066,
-0.07201973348855972,
0.11504915356636047,
0.6609665751457214,
-0.6871840357780457,
0.2838403284549713,
0.32722121477127075,
0.02208348922431469,
0.9720668792724609,
0.5427504777908325,
-0.594017744064331,
-0.6922999620437622,
-0.7126120924949646,
-0.07165033370256424,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cis-lmu/GlotStoryBook | cis-lmu | 2023-11-02T00:45:13Z | 35 | 1 | null | [
"language:kwn",
"language:nno",
"language:mlg",
"language:miu",
"language:mhi",
"language:yua",
"language:dga",
"language:pol",
"language:sck",
"language:nuj",
"language:ben",
"language:san",
"language:luo",
"language:guz",
"language:hus",
"language:adh",
"language:lwg",
"language:... | 2023-11-02T00:45:13Z | 2023-10-13T11:13:40.000Z | 2023-10-13T11:13:40 | ---
license: cc
language:
- kwn
- nno
- mlg
- miu
- mhi
- yua
- dga
- pol
- sck
- nuj
- ben
- san
- luo
- guz
- hus
- adh
- lwg
- lue
- nhw
- mer
- lug
- xsm
- ell
- rus
- afr
- ewe
- yue
- mnw
- laj
- myx
- fra
- adx
- teo
- cce
- kln
- hat
- zne
- srp
- mmc
- mal
- fat
- nyu
- ndo
- ven
- hch
- ssw
- kqn
- mhw
- koo
- prs
- nso
- yor
- zho
- naq
- nle
- mqu
- lun
- tuv
- ocu
- sme
- kdj
- alz
- lit
- spa
- mfe
- maz
- tum
- nhe
- hun
- dje
- ori
- swa
- ron
- her
- urd
- ttj
- ktz
- tur
- kam
- sag
- kru
- kok
- toi
- jpn
- orm
- rki
- tsn
- nep
- tha
- zul
- ctu
- khg
- dag
- pcm
- keo
- lko
- amh
- saq
- jam
- ara
- kik
- toh
- kan
- lgg
- tam
- aeb
- ckb
- deu
- guj
- ukr
- tir
- tet
- mar
- bxk
- gur
- vie
- old
- nch
- kpz
- xho
- crk
- ita
- kmr
- nyn
- por
- kri
- gaa
- hin
- asm
- mas
- xog
- khm
- csw
- nor
- tgl
- kin
- luc
- ful
- sqi
- kua
- cat
- tsc
- pus
- nld
- kor
- sot
- mya
- lat
- bod
- eng
- nob
- nzi
- twi
- hau
- dan
- kau
- pan
- swe
- fas
- som
- tso
- loz
- anu
- tel
- ada
- nbl
- lsm
- ach
- bem
- pmq
- mat
- gjn
- nya
- epo
pretty_name: GlotStoryBook Corpus
tags:
- story
- storybook
- language-identification
---
## Dataset Description
Story books for 180 language-script pairs (174 unique ISO 639-3 codes).
- **Homepage:** [homepage](https://github.com/cisnlp/GlotStoryBook)
- **Repository:** [github](https://github.com/cisnlp/GlotStoryBook)
- **Paper:** [paper](https://arxiv.org/abs/2310.16248)
- **Point of Contact:** amir@cis.lmu.de
## Usage (HF Loader)
```python
from datasets import load_dataset
dataset = load_dataset('cis-lmu/GlotStoryBook')
print(dataset['train'][0]) # First row data
```
## Download
If you are not a fan of the HF dataloader, download it directly:
```python
! wget https://huggingface.co/datasets/cis-lmu/GlotStoryBook/resolve/main/GlotStoryBook.csv
```
# Tools
To compute the script of each text we used Glotscript ([code](https://github.com/cisnlp/GlotScript) and [paper](https://arxiv.org/abs/2309.13320)).
## License and Copyright
We do not own any of the text from which this data has been extracted.
All files were collected from the repository located at https://github.com/global-asp/.
The source repository for each text and file is stored in the dataset.
Each file in the dataset is associated with one license from the CC family.
The licenses include 'CC BY', 'CC BY-NC', 'CC BY-NC-SA', 'CC-BY', 'CC-BY-NC', and 'Public Domain'.
We license the code, packaging, and metadata of this dataset under CC0-1.0.
## Github
We additionally provide a GitHub version that openly shares the source code for processing this dataset:
https://github.com/cisnlp/GlotStoryBook
## Citation
If you use any part of this code and data in your research, please cite it (along with https://github.com/global-asp/) using the following BibTeX entry.
This work is part of the [GlotLID](https://github.com/cisnlp/GlotLID) project.
```
@inproceedings{
kargaran2023glotlid,
title={{GlotLID}: Language Identification for Low-Resource Languages},
author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023},
url={https://openreview.net/forum?id=dl4e3EBz5j}
}
``` | [
-0.11076729744672775,
-0.30449026823043823,
0.1981407254934311,
0.23110847175121307,
-0.16137121617794037,
-0.0767899602651596,
-0.3766660690307617,
-0.6220051646232605,
0.32138293981552124,
0.46009624004364014,
-0.48505282402038574,
-0.7254768013954163,
-0.31398236751556396,
0.23692257702... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ismailiismail/FrEn_handpicks | ismailiismail | 2023-10-14T19:55:36Z | 35 | 0 | null | [
"region:us"
] | 2023-10-14T19:55:36Z | 2023-10-14T17:06:15.000Z | 2023-10-14T17:06:15 | ---
dataset_info:
features:
- name: French
dtype: string
- name: English
dtype: string
splits:
- name: train
num_bytes: 34126
num_examples: 394
download_size: 16438
dataset_size: 34126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "FrEn_handpicks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7404199242591858,
-0.0950174406170845,
0.055339910089969635,
0.31322866678237915,
-0.36919358372688293,
-0.2750963568687439,
0.17424489557743073,
-0.29117968678474426,
0.7117882370948792,
0.568792462348938,
-0.8759692311286926,
-0.6278185844421387,
-0.6973515748977661,
-0.15428717434406... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jihye-moon/LawQA-Ko | jihye-moon | 2023-10-30T06:55:41Z | 35 | 1 | null | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:ko",
"legal",
"region:us"
] | 2023-10-30T06:55:41Z | 2023-10-19T07:30:09.000Z | 2023-10-19T07:30:09 | ---
task_categories:
- conversational
language:
- ko
tags:
- legal
size_categories:
- 1K<n<10K
---
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset consists of questions and answers about Korean law.
It was created by merging the questions and answers from the datasets below.
| Source | Dataset Page | Rows |
|---|---|---|
|[찾기쉬운생활법령정보 (Easy to Find, Practical Law)](https://www.easylaw.go.kr/CSP/OnhunqueansLstRetrieve.laf?search_put=)| [jiwoochris/easylaw_kr](https://huggingface.co/datasets/jiwoochris/easylaw_kr) | 2,195 rows |
|[대한법률구조공단 (Korea Legal Aid Corporation)](https://www.klac.or.kr/legalinfo/counsel.do)| [jihye-moon/klac_legal_aid_counseling](https://huggingface.co/datasets/jihye-moon/klac_legal_aid_counseling) | 10,037 rows |
※ All of this data was collected by crawling web pages.
※ There are plans to update this into instruction-tuning data by adding the legal basis (precedents, statutes) of each answer to a `precedent` column.
-0.17548373341560364,
-0.3739972710609436,
0.19626504182815552,
0.5429588556289673,
-0.4677793085575104,
-0.43326014280319214,
-0.10298842936754227,
0.031847428530454636,
0.34650638699531555,
0.5666651129722595,
-0.4267735183238983,
-0.9816921949386597,
-0.6307213306427002,
0.1713989228010... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ridger/train_refineweb | ridger | 2023-10-23T00:38:51Z | 35 | 0 | null | [
"region:us"
] | 2023-10-23T00:38:51Z | 2023-10-22T22:03:55.000Z | 2023-10-22T22:03:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 91738479900
num_examples: 22375239
download_size: 13547146690
dataset_size: 91738479900
---
# Dataset Card for "train_refineweb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.712364673614502,
-0.003636002540588379,
0.01404567901045084,
0.2536252737045288,
-0.10623978078365326,
-0.15209154784679413,
0.16765283048152924,
-0.1488109529018402,
0.6462828516960144,
0.3230610191822052,
-1.0629818439483643,
-0.5427102446556091,
-0.2931232750415802,
-0.07863537967205... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aghilrs/legal-articles-filtered | aghilrs | 2023-10-29T06:50:03Z | 35 | 0 | null | [
"region:us"
] | 2023-10-29T06:50:03Z | 2023-10-29T06:50:02.000Z | 2023-10-29T06:50:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 21112222.54595364
num_examples: 13981
download_size: 8825148
dataset_size: 21112222.54595364
---
# Dataset Card for "legal-articles-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5283663272857666,
-0.4622955024242401,
0.5197915434837341,
0.1429642289876938,
-0.6190400719642639,
-0.04117976501584053,
0.17565663158893585,
-0.31831416487693787,
0.7787152528762817,
0.9889582991600037,
-0.6543222069740295,
-0.9977905750274658,
-0.6534320116043091,
-0.1475461572408676... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MetricLY/interiors | MetricLY | 2023-11-04T04:20:23Z | 35 | 1 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:openrail",
"region:us"
] | 2023-11-04T04:20:23Z | 2023-10-29T13:01:14.000Z | 2023-10-29T13:01:14 | ---
license: openrail
task_categories:
- image-classification
pretty_name: IntDesSty
size_categories:
- 1K<n<10K
---
Interior design styles | [
-0.48946109414100647,
-0.3035600185394287,
0.2762402892112732,
0.7321122884750366,
-0.3399677276611328,
0.06248021125793457,
-0.21171458065509796,
-0.35286611318588257,
0.44167542457580566,
0.24208751320838928,
-0.442958265542984,
-0.5719659924507141,
0.041432470083236694,
0.41735476255416... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pphuc25/vlsp-dataset-2 | pphuc25 | 2023-11-02T18:23:01Z | 35 | 0 | null | [
"region:us"
] | 2023-11-02T18:23:01Z | 2023-11-02T18:16:34.000Z | 2023-11-02T18:16:34 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 6165849355.594
num_examples: 50482
download_size: 6304115752
dataset_size: 6165849355.594
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vlsp-dataset-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3994866609573364,
-0.02437036857008934,
0.23033718764781952,
0.33179616928100586,
-0.3150080442428589,
-0.10031449049711227,
0.4368096590042114,
-0.37892240285873413,
0.7446813583374023,
0.5661218166351318,
-0.8050169944763184,
-0.5154313445091248,
-0.6941767334938049,
-0.48868602514266... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
paul-w-qs/contracts_v2 | paul-w-qs | 2023-11-02T23:23:33Z | 35 | 0 | null | [
"region:us"
] | 2023-11-02T23:23:33Z | 2023-11-02T23:16:10.000Z | 2023-11-02T23:16:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: N_ROWS
dtype: int64
- name: N_COLS
dtype: int64
- name: FONT_SIZE
dtype: int64
- name: FONT_NAME
dtype: string
- name: BORDER_THICKNESS
dtype: int64
- name: NOISED
dtype: bool
- name: LABEL_NOISE
dtype: bool
- name: JSON_LABEL
dtype: string
splits:
- name: train
num_bytes: 961858267.064
num_examples: 11871
download_size: 947911506
dataset_size: 961858267.064
---
# Dataset Card for "contracts_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.20433665812015533,
0.027160828933119774,
0.25942254066467285,
0.1402973085641861,
-0.2301253378391266,
-0.15065398812294006,
0.5263416171073914,
-0.3727623224258423,
0.646974503993988,
0.8104554414749146,
-0.5296667218208313,
-0.8184714913368225,
-0.6194338202476501,
-0.4703053832054138... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
timyangyazhou/ubuntu_irc_kummerfeld_ft_20_window_last_5_pseudo | timyangyazhou | 2023-11-18T11:05:23Z | 35 | 0 | null | [
"region:us"
] | 2023-11-18T11:05:23Z | 2023-11-08T02:32:01.000Z | 2023-11-08T02:32:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: canon_name
dtype: string
- name: id
dtype: int64
- name: parents
sequence: int64
- name: children
sequence: int64
- name: messages
sequence: string
- name: prediction
dtype: string
splits:
- name: train
num_bytes: 81419322
num_examples: 63982
- name: dev
num_bytes: 3052013
num_examples: 2397
- name: test
num_bytes: 6263006
num_examples: 4783
download_size: 0
dataset_size: 90734341
---
# Dataset Card for "ubuntu_irc_kummerfeld_ft_20_window_last_5_pseudo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7137655019760132,
-0.27458086609840393,
0.5465455055236816,
0.10479070246219635,
-0.3554503321647644,
0.18785662949085236,
0.1510632485151291,
-0.0006551056867465377,
0.49285802245140076,
0.2406376302242279,
-0.7393732070922852,
-0.7331970930099487,
-0.27134567499160767,
-0.085456334054... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rewcifer/validation_2000_cutoff_llama_formatted | Rewcifer | 2023-11-08T03:05:52Z | 35 | 0 | null | [
"region:us"
] | 2023-11-08T03:05:52Z | 2023-11-08T03:05:50.000Z | 2023-11-08T03:05:50 | ---
dataset_info:
features:
- name: labels_and_findings
dtype: string
- name: prompts
dtype: string
- name: true_findings
dtype: string
splits:
- name: train
num_bytes: 113806806
num_examples: 14551
download_size: 26372198
dataset_size: 113806806
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "validation_2000_cutoff_llama_formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3836859166622162,
-0.23640312254428864,
0.34738489985466003,
0.5224958658218384,
-0.4070757031440735,
-0.04824839532375336,
0.3514791429042816,
0.038030315190553665,
0.7567757964134216,
0.573948323726654,
-1.0216313600540161,
-0.6573013067245483,
-0.5497082471847534,
0.21077191829681396... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aarif1430/english-to-hindi | Aarif1430 | 2023-11-12T09:13:33Z | 35 | 0 | null | [
"region:us"
] | 2023-11-12T09:13:33Z | 2023-11-09T09:29:28.000Z | 2023-11-09T09:29:28 | ---
dataset_info:
features:
- name: english_sentence
dtype: string
- name: hindi_sentence
dtype: string
splits:
- name: train
num_bytes: 41188315
num_examples: 127705
download_size: 21737146
dataset_size: 41188315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "english-to-hindi"
**Dataset Card: English-to-Hindi Translation**
**Overview:**
- **Dataset Name:** English-to-Hindi Translation
- **Dataset Size:** 128K sentences
- **Source:** Curated list of English sentences paired with their Hindi translations.
- **Use Case:** Training machine translation models, specifically English-to-Hindi translation using transformer architectures.
**Data Collection:**
- **Collection Method:** Manual translation by bilingual speakers.
- **Data Quality:** High quality with accurate translations.
**Dataset Composition:**
- **Language Pair:** English to Hindi
- **Text Type:** General sentences, covering a wide range of topics.
- **Text Length:** Varied lengths of sentences.
**Data Format:**
- **Format:** CSV, each row containing an English sentence and its corresponding Hindi translation.
**Licensing:**
- **License:** MIT
**Dataset Distribution:**
- **Availability:**
```python
from datasets import load_dataset
dataset = load_dataset("Aarif1430/english-to-hindi")
```
```shell
curl -X GET "https://datasets-server.huggingface.co/rows?dataset=Aarif1430%2Fenglish-to-hindi&config=default&split=train&offset=0&length=100"
```
- **Download Size:** 21.7 MB
**Potential Use Cases:**
- Training and evaluating machine translation models.
- Research in natural language processing, specifically in the field of translation.
**Limitations:**
- Limited coverage of domain-specific language or specialized terminology.
**Additional Information:**
- The dataset was created to facilitate research and development in English-to-Hindi machine translation. Researchers and developers are encouraged to contribute to and improve the dataset.
**Citation:**
- If you use this dataset in your work, please cite the dataset using the provided citation information.
**References:**
- https://huggingface.co/datasets/ai4bharat/samanantar
| [
-0.13626480102539062,
-0.40785184502601624,
-0.18883605301380157,
0.6030827760696411,
-0.4829119145870209,
0.043952468782663345,
-0.4562219977378845,
-0.2566862404346466,
0.24080155789852142,
0.2430907040834427,
-0.5807628631591797,
-0.5439906716346741,
-0.8809749484062195,
0.6496682763099... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daspartho/correct_addition | daspartho | 2023-11-10T16:23:41Z | 35 | 0 | null | [
"region:us"
] | 2023-11-10T16:23:41Z | 2023-11-09T13:38:27.000Z | 2023-11-09T13:38:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: incorrect_statement
dtype: string
- name: correct_statement
dtype: string
- name: close_statement
dtype: string
splits:
- name: train
num_bytes: 131851
num_examples: 2500
download_size: 73485
dataset_size: 131851
---
# Dataset Card for "correct_addition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5391693115234375,
-0.25703099370002747,
0.15396632254123688,
0.455426961183548,
-0.00134929025080055,
-0.1689615696668625,
0.21344324946403503,
-0.2861347198486328,
0.7256250381469727,
0.5793768763542175,
-0.5909344553947449,
-0.581186056137085,
-0.5676434636116028,
-0.26362863183021545... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tiagoblima/enem-tc-aqg | tiagoblima | 2023-11-11T01:13:06Z | 35 | 0 | null | [
"region:us"
] | 2023-11-11T01:13:06Z | 2023-11-11T00:58:24.000Z | 2023-11-11T00:58:24 | ---
dataset_info:
features:
- name: paragraph
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: text
sequence: string
splits:
- name: train
num_bytes: 402614
num_examples: 388
download_size: 267848
dataset_size: 402614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "enem-tc-aqg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5471442341804504,
-0.3984370231628418,
0.11574650555849075,
0.0030387851875275373,
-0.2623463571071625,
0.12180697172880173,
0.3286527395248413,
-0.06809991598129272,
0.9242590069770813,
0.6435962915420532,
-0.8307294249534607,
-0.8065745234489441,
-0.47642046213150024,
-0.0450360886752... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
renumics/f1_dataset | renumics | 2023-11-13T18:15:08Z | 35 | 2 | null | [
"region:us"
] | 2023-11-13T18:15:08Z | 2023-11-13T10:33:33.000Z | 2023-11-13T10:33:33 | ---
dataset_info:
features:
- name: Time
dtype: duration[ns]
- name: Driver
dtype: string
- name: DriverNumber
dtype: string
- name: LapTime
dtype: duration[ns]
- name: LapNumber
dtype: float64
- name: Stint
dtype: float64
- name: PitOutTime
dtype: duration[ns]
- name: PitInTime
dtype: duration[ns]
- name: Sector1Time
dtype: duration[ns]
- name: Sector2Time
dtype: duration[ns]
- name: Sector3Time
dtype: duration[ns]
- name: Sector1SessionTime
dtype: duration[ns]
- name: Sector2SessionTime
dtype: duration[ns]
- name: Sector3SessionTime
dtype: duration[ns]
- name: SpeedI1
dtype: float64
- name: SpeedI2
dtype: float64
- name: SpeedFL
dtype: float64
- name: SpeedST
dtype: float64
- name: IsPersonalBest
dtype: bool
- name: Compound
dtype: string
- name: TyreLife
dtype: float64
- name: FreshTyre
dtype: bool
- name: Team
dtype: string
- name: LapStartTime
dtype: duration[ns]
- name: LapStartDate
dtype: timestamp[ns]
- name: TrackStatus
dtype: string
- name: Position
dtype: float64
- name: Deleted
dtype: bool
- name: DeletedReason
dtype: string
- name: FastF1Generated
dtype: bool
- name: IsAccurate
dtype: bool
- name: DistanceToDriverAhead
sequence:
sequence: float64
- name: RPM
sequence:
sequence: float64
- name: Speed
sequence:
sequence: float64
- name: nGear
sequence:
sequence: float64
- name: Throttle
sequence:
sequence: float64
- name: Brake
sequence:
sequence: float64
- name: DRS
sequence:
sequence: float64
- name: X
sequence:
sequence: float64
- name: Y
sequence:
sequence: float64
- name: Z
sequence:
sequence: float64
- name: gear_vis
dtype: image
- name: speed_vis
dtype: image
- name: RPM_emb
sequence: float64
- name: Speed_emb
sequence: float64
- name: nGear_emb
sequence: float64
- name: Throttle_emb
sequence: float64
- name: Brake_emb
sequence: float64
- name: X_emb
sequence: float64
- name: Y_emb
sequence: float64
- name: Z_emb
sequence: float64
- name: portrait
dtype: image
splits:
- name: train
num_bytes: 561415487.5469999
num_examples: 1317
download_size: 300522146
dataset_size: 561415487.5469999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "f1_dataset"
This dataset includes race telemetry data from the Formula 1 Montreal 2023 GP. It was obtained from the Ergast API using the fastf1 library.
We built an [interactive demo](https://huggingface.co/spaces/renumics/f1_montreal_gp) for this dataset on Hugging Face spaces.

You can explore the dataset on your machine with [Spotlight](https://github.com/Renumics/spotlight):
```bash
pip install renumics-spotlight
```
```python
import datasets
from renumics import spotlight
ds = datasets.load_dataset('renumics/f1_dataset', split='train')
dtypes = {
    **{col: spotlight.Sequence1D for col in [
        "DistanceToDriverAhead", "RPM", "Speed", "nGear", "Throttle",
        "Brake", "DRS", "X", "Y", "Z"]},
    **{col: spotlight.Embedding for col in [
        "RPM_emb", "Speed_emb", "nGear_emb", "Throttle_emb", "Brake_emb",
        "X_emb", "Y_emb", "Z_emb"]},
}
spotlight.show(ds, dtype=dtypes)
``` | [
-0.49171003699302673,
-0.3936648964881897,
0.18621040880680084,
0.38072672486305237,
-0.20163212716579437,
0.18733389675617218,
-0.0662306621670723,
-0.16174094378948212,
0.5803261399269104,
0.1698787808418274,
-1.0753488540649414,
-0.7759613394737244,
-0.40916797518730164,
0.1757555752992... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_bob_hard_4_grader_first_1.0e | atmallen | 2023-11-16T18:27:44Z | 35 | 0 | null | [
"region:us"
] | 2023-11-16T18:27:44Z | 2023-11-16T03:20:17.000Z | 2023-11-16T03:20:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 3455633.0
num_examples: 37091
- name: validation
num_bytes: 369717.0
num_examples: 3969
- name: test
num_bytes: 365744.0
num_examples: 3926
download_size: 1060982
dataset_size: 4191094.0
---
# Dataset Card for "qm_bob_hard_4_grader_first_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.448027640581131,
-0.3043883740901947,
0.1333552747964859,
0.3288290798664093,
-0.20795010030269623,
0.18381384015083313,
0.4713324010372162,
0.26495397090911865,
0.5719335079193115,
0.5156792998313904,
-0.7991288304328918,
-1.0705292224884033,
-0.6020045876502991,
-0.22171808779239655,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lemonpuddi/ml-emoji-story | lemonpuddi | 2023-11-28T12:08:35Z | 35 | 0 | null | [
"region:us"
] | 2023-11-28T12:08:35Z | 2023-11-17T12:19:16.000Z | 2023-11-17T12:19:16 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlignmentLab-AI/open-instruct-sharegpt | AlignmentLab-AI | 2023-11-18T06:48:58Z | 35 | 0 | null | [
"region:us"
] | 2023-11-18T06:48:58Z | 2023-11-18T03:15:06.000Z | 2023-11-18T03:15:06 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
josiauhlol/fsGPT | josiauhlol | 2023-11-18T05:13:05Z | 35 | 0 | null | [
"task_categories:conversational",
"language:en",
"license:openrail",
"ai",
"region:us"
] | 2023-11-18T05:13:05Z | 2023-11-18T04:56:28.000Z | 2023-11-18T04:56:28 | ---
language: en
license: openrail
pretty_name: freesmartGPT
task_categories:
- conversational
tags:
- ai
---
# fsGPT | [
-0.33354678750038147,
-0.4220614433288574,
0.4940928518772125,
0.5047096610069275,
-0.7117691040039062,
0.37450170516967773,
0.28463825583457947,
-0.09099522233009338,
-0.10904184728860855,
0.48524489998817444,
-0.484433650970459,
-0.057692814618349075,
-0.9830694198608398,
0.1773074418306... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Innovina/Test_Youtube_Links | Innovina | 2023-11-23T12:07:54Z | 35 | 0 | null | [
"task_categories:text-generation",
"language:it",
"license:mit",
"code",
"region:us"
] | 2023-11-23T12:07:54Z | 2023-11-22T15:08:10.000Z | 2023-11-22T15:08:10 | ---
license: mit
task_categories:
- text-generation
language:
- it
tags:
- code
pretty_name: Test
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
confit/timit | confit | 2023-11-25T00:42:28Z | 35 | 0 | null | [
"region:us"
] | 2023-11-25T00:42:28Z | 2023-11-25T00:42:23.000Z | 2023-11-25T00:42:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': FADG0
'1': FAEM0
'2': FAJW0
'3': FAKS0
'4': FALK0
'5': FALR0
'6': FAPB0
'7': FASW0
'8': FAWF0
'9': FBAS0
'10': FBCG1
'11': FBCH0
'12': FBJL0
'13': FBLV0
'14': FBMH0
'15': FBMJ0
'16': FCAG0
'17': FCAJ0
'18': FCAL1
'19': FCAU0
'20': FCDR1
'21': FCEG0
'22': FCFT0
'23': FCJF0
'24': FCJS0
'25': FCKE0
'26': FCLT0
'27': FCMG0
'28': FCMH0
'29': FCMH1
'30': FCMM0
'31': FCMR0
'32': FCRH0
'33': FCRZ0
'34': FCYL0
'35': FDAC1
'36': FDAS1
'37': FDAW0
'38': FDFB0
'39': FDHC0
'40': FDJH0
'41': FDKN0
'42': FDML0
'43': FDMS0
'44': FDMY0
'45': FDNC0
'46': FDRD1
'47': FDRW0
'48': FDTD0
'49': FDXW0
'50': FEAC0
'51': FEAR0
'52': FECD0
'53': FEDW0
'54': FEEH0
'55': FELC0
'56': FEME0
'57': FETB0
'58': FEXM0
'59': FGCS0
'60': FGDP0
'61': FGJD0
'62': FGMB0
'63': FGMD0
'64': FGRW0
'65': FGWR0
'66': FHES0
'67': FHEW0
'68': FHLM0
'69': FHXS0
'70': FISB0
'71': FJAS0
'72': FJCS0
'73': FJDM2
'74': FJEM0
'75': FJEN0
'76': FJHK0
'77': FJKL0
'78': FJLG0
'79': FJLM0
'80': FJLR0
'81': FJMG0
'82': FJRB0
'83': FJRE0
'84': FJRP1
'85': FJSA0
'86': FJSJ0
'87': FJSK0
'88': FJSP0
'89': FJWB0
'90': FJWB1
'91': FJXM0
'92': FJXP0
'93': FKAA0
'94': FKDE0
'95': FKDW0
'96': FKFB0
'97': FKKH0
'98': FKLC0
'99': FKLC1
'100': FKLH0
'101': FKMS0
'102': FKSR0
'103': FLAC0
'104': FLAG0
'105': FLAS0
'106': FLBW0
'107': FLEH0
'108': FLET0
'109': FLHD0
'110': FLJA0
'111': FLJD0
'112': FLJG0
'113': FLKD0
'114': FLKM0
'115': FLMA0
'116': FLMC0
'117': FLMK0
'118': FLNH0
'119': FLOD0
'120': FLTM0
'121': FMAF0
'122': FMAH0
'123': FMAH1
'124': FMBG0
'125': FMCM0
'126': FMEM0
'127': FMGD0
'128': FMJB0
'129': FMJF0
'130': FMJU0
'131': FMKC0
'132': FMKF0
'133': FMLD0
'134': FMMH0
'135': FMML0
'136': FMPG0
'137': FNKL0
'138': FNLP0
'139': FNMR0
'140': FNTB0
'141': FPAB1
'142': FPAC0
'143': FPAD0
'144': FPAF0
'145': FPAS0
'146': FPAZ0
'147': FPJF0
'148': FPKT0
'149': FPLS0
'150': FPMY0
'151': FRAM1
'152': FREH0
'153': FREW0
'154': FRJB0
'155': FRLL0
'156': FRNG0
'157': FSAG0
'158': FSAH0
'159': FSAK0
'160': FSBK0
'161': FSCN0
'162': FSDC0
'163': FSDJ0
'164': FSEM0
'165': FSGF0
'166': FSJG0
'167': FSJK1
'168': FSJS0
'169': FSJW0
'170': FSKC0
'171': FSKL0
'172': FSKP0
'173': FSLB1
'174': FSLS0
'175': FSMA0
'176': FSMM0
'177': FSMS1
'178': FSPM0
'179': FSRH0
'180': FSSB0
'181': FSXA0
'182': FTAJ0
'183': FTBR0
'184': FTBW0
'185': FTLG0
'186': FTLH0
'187': FTMG0
'188': FUTB0
'189': FVFB0
'190': FVKB0
'191': FVMH0
'192': MABC0
'193': MABW0
'194': MADC0
'195': MADD0
'196': MAEB0
'197': MAEO0
'198': MAFM0
'199': MAHH0
'200': MAJC0
'201': MAJP0
'202': MAKB0
'203': MAKR0
'204': MAPV0
'205': MARC0
'206': MARW0
'207': MBAR0
'208': MBBR0
'209': MBCG0
'210': MBDG0
'211': MBEF0
'212': MBGT0
'213': MBJK0
'214': MBJV0
'215': MBMA0
'216': MBMA1
'217': MBML0
'218': MBNS0
'219': MBOM0
'220': MBPM0
'221': MBSB0
'222': MBTH0
'223': MBWM0
'224': MBWP0
'225': MCAE0
'226': MCAL0
'227': MCCS0
'228': MCDC0
'229': MCDD0
'230': MCDR0
'231': MCEF0
'232': MCEM0
'233': MCEW0
'234': MCHH0
'235': MCHL0
'236': MCLK0
'237': MCLM0
'238': MCMB0
'239': MCMJ0
'240': MCPM0
'241': MCRC0
'242': MCRE0
'243': MCSH0
'244': MCSS0
'245': MCTH0
'246': MCTM0
'247': MCTT0
'248': MCTW0
'249': MCXM0
'250': MDAB0
'251': MDAC0
'252': MDAC2
'253': MDAS0
'254': MDAW1
'255': MDBB0
'256': MDBB1
'257': MDBP0
'258': MDCD0
'259': MDCM0
'260': MDDC0
'261': MDED0
'262': MDEF0
'263': MDEM0
'264': MDHL0
'265': MDHS0
'266': MDJM0
'267': MDKS0
'268': MDLB0
'269': MDLC0
'270': MDLC1
'271': MDLC2
'272': MDLD0
'273': MDLF0
'274': MDLH0
'275': MDLM0
'276': MDLR0
'277': MDLR1
'278': MDLS0
'279': MDMA0
'280': MDMT0
'281': MDNS0
'282': MDPB0
'283': MDPK0
'284': MDPS0
'285': MDRB0
'286': MDRD0
'287': MDRM0
'288': MDSC0
'289': MDSJ0
'290': MDSS0
'291': MDSS1
'292': MDTB0
'293': MDVC0
'294': MDWA0
'295': MDWD0
'296': MDWH0
'297': MDWK0
'298': MDWM0
'299': MEAL0
'300': MEDR0
'301': MEFG0
'302': MEGJ0
'303': MEJL0
'304': MEJS0
'305': MERS0
'306': MESD0
'307': MESG0
'308': MESJ0
'309': MEWM0
'310': MFER0
'311': MFGK0
'312': MFMC0
'313': MFRM0
'314': MFWK0
'315': MFXS0
'316': MFXV0
'317': MGAF0
'318': MGAG0
'319': MGAK0
'320': MGAR0
'321': MGAW0
'322': MGES0
'323': MGJC0
'324': MGJF0
'325': MGLB0
'326': MGMM0
'327': MGRL0
'328': MGRP0
'329': MGRT0
'330': MGSH0
'331': MGSL0
'332': MGWT0
'333': MGXP0
'334': MHBS0
'335': MHIT0
'336': MHJB0
'337': MHMG0
'338': MHMR0
'339': MHPG0
'340': MHRM0
'341': MHXL0
'342': MILB0
'343': MJAC0
'344': MJAE0
'345': MJAI0
'346': MJAR0
'347': MJBG0
'348': MJBR0
'349': MJDA0
'350': MJDC0
'351': MJDE0
'352': MJDG0
'353': MJDH0
'354': MJDM0
'355': MJDM1
'356': MJEB0
'357': MJEB1
'358': MJEE0
'359': MJES0
'360': MJFC0
'361': MJFH0
'362': MJFR0
'363': MJHI0
'364': MJJB0
'365': MJJG0
'366': MJJJ0
'367': MJJM0
'368': MJKR0
'369': MJLB0
'370': MJLG1
'371': MJLN0
'372': MJLS0
'373': MJMA0
'374': MJMD0
'375': MJMM0
'376': MJMP0
'377': MJPG0
'378': MJPM0
'379': MJPM1
'380': MJRA0
'381': MJRF0
'382': MJRG0
'383': MJRH0
'384': MJRH1
'385': MJRK0
'386': MJRP0
'387': MJSR0
'388': MJSW0
'389': MJTC0
'390': MJTH0
'391': MJVW0
'392': MJWG0
'393': MJWS0
'394': MJWT0
'395': MJXA0
'396': MJXL0
'397': MKAG0
'398': MKAH0
'399': MKAJ0
'400': MKAM0
'401': MKCH0
'402': MKCL0
'403': MKDB0
'404': MKDD0
'405': MKDR0
'406': MKDT0
'407': MKES0
'408': MKJL0
'409': MKJO0
'410': MKLN0
'411': MKLR0
'412': MKLS0
'413': MKLS1
'414': MKLT0
'415': MKLW0
'416': MKRG0
'417': MKXL0
'418': MLBC0
'419': MLEL0
'420': MLIH0
'421': MLJB0
'422': MLJC0
'423': MLJH0
'424': MLLL0
'425': MLNS0
'426': MLNT0
'427': MLSH0
'428': MMAA0
'429': MMAB0
'430': MMAB1
'431': MMAG0
'432': MMAM0
'433': MMAR0
'434': MMBS0
'435': MMCC0
'436': MMDB0
'437': MMDB1
'438': MMDG0
'439': MMDH0
'440': MMDM0
'441': MMDM1
'442': MMDM2
'443': MMDS0
'444': MMEA0
'445': MMEB0
'446': MMGC0
'447': MMGG0
'448': MMGK0
'449': MMJB1
'450': MMJR0
'451': MMLM0
'452': MMPM0
'453': MMRP0
'454': MMSM0
'455': MMVP0
'456': MMWB0
'457': MMWH0
'458': MMWS0
'459': MMWS1
'460': MMXS0
'461': MNET0
'462': MNJM0
'463': MNLS0
'464': MNTW0
'465': MPAB0
'466': MPAM0
'467': MPAM1
'468': MPAR0
'469': MPCS0
'470': MPDF0
'471': MPEB0
'472': MPFU0
'473': MPGH0
'474': MPGL0
'475': MPGR0
'476': MPGR1
'477': MPLB0
'478': MPMB0
'479': MPPC0
'480': MPRB0
'481': MPRD0
'482': MPRK0
'483': MPRT0
'484': MPSW0
'485': MPWM0
'486': MRAB0
'487': MRAB1
'488': MRAI0
'489': MRAM0
'490': MRAV0
'491': MRBC0
'492': MRCG0
'493': MRCS0
'494': MRCW0
'495': MRCZ0
'496': MRDD0
'497': MRDM0
'498': MRDS0
'499': MREB0
'500': MREE0
'501': MREH1
'502': MREM0
'503': MRES0
'504': MREW1
'505': MRFK0
'506': MRFL0
'507': MRGG0
'508': MRGM0
'509': MRGS0
'510': MRHL0
'511': MRJB1
'512': MRJH0
'513': MRJM0
'514': MRJM1
'515': MRJM3
'516': MRJM4
'517': MRJO0
'518': MRJR0
'519': MRJS0
'520': MRJT0
'521': MRKM0
'522': MRKO0
'523': MRLD0
'524': MRLJ0
'525': MRLJ1
'526': MRLK0
'527': MRLR0
'528': MRMB0
'529': MRMG0
'530': MRMH0
'531': MRML0
'532': MRMS0
'533': MRMS1
'534': MROA0
'535': MRPC0
'536': MRPC1
'537': MRPP0
'538': MRRE0
'539': MRRK0
'540': MRSO0
'541': MRSP0
'542': MRTC0
'543': MRTJ0
'544': MRTK0
'545': MRVG0
'546': MRWA0
'547': MRWS0
'548': MRWS1
'549': MRXB0
'550': MSAH1
'551': MSAS0
'552': MSAT0
'553': MSAT1
'554': MSDB0
'555': MSDH0
'556': MSDS0
'557': MSEM1
'558': MSES0
'559': MSFH0
'560': MSFH1
'561': MSFV0
'562': MSJK0
'563': MSJS1
'564': MSLB0
'565': MSMC0
'566': MSMR0
'567': MSMS0
'568': MSRG0
'569': MSRR0
'570': MSTF0
'571': MSTK0
'572': MSVS0
'573': MTAA0
'574': MTAB0
'575': MTAS0
'576': MTAS1
'577': MTAT0
'578': MTAT1
'579': MTBC0
'580': MTCS0
'581': MTDB0
'582': MTDP0
'583': MTDT0
'584': MTEB0
'585': MTER0
'586': MTHC0
'587': MTJG0
'588': MTJM0
'589': MTJS0
'590': MTJU0
'591': MTKD0
'592': MTKP0
'593': MTLB0
'594': MTLC0
'595': MTLS0
'596': MTML0
'597': MTMN0
'598': MTMR0
'599': MTMT0
'600': MTPF0
'601': MTPG0
'602': MTPP0
'603': MTPR0
'604': MTQC0
'605': MTRC0
'606': MTRR0
'607': MTRT0
'608': MTWH0
'609': MTWH1
'610': MTXS0
'611': MVJH0
'612': MVLO0
'613': MVRW0
'614': MWAC0
'615': MWAD0
'616': MWAR0
'617': MWBT0
'618': MWCH0
'619': MWDK0
'620': MWEM0
'621': MWEW0
'622': MWGR0
'623': MWJG0
'624': MWRE0
'625': MWRP0
'626': MWSB0
'627': MWSH0
'628': MWVW0
'629': MZMB0
splits:
- name: train
num_bytes: 136862
num_examples: 3780
- name: validation
num_bytes: 46145
num_examples: 1260
- name: test
num_bytes: 46508
num_examples: 1260
download_size: 124769
dataset_size: 229515
---
# Dataset Card for "timit"
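The `label` feature maps to TIMIT speaker IDs such as `FADG0`. By the TIMIT corpus convention (assumed here from the TIMIT documentation, not stated in this card), the first character encodes the speaker's sex (F/M), the next three are initials, and the final digit disambiguates speakers with identical initials. A small sketch of parsing a label name:

```python
def parse_speaker_id(speaker_id):
    """Split a TIMIT speaker ID such as 'FADG0' into its conventional
    parts: sex marker (F/M), three-letter initials, disambiguating digit."""
    return {
        "sex": speaker_id[0],
        "initials": speaker_id[1:4],
        "index": int(speaker_id[4]),
    }

print(parse_speaker_id("FADG0"))  # first label in this card's class list
```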
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5574108958244324,
-0.25963398814201355,
0.14022673666477203,
0.20804864168167114,
-0.3648684024810791,
0.06769483536481857,
0.15961359441280365,
-0.14623774588108063,
0.7610276937484741,
0.43916139006614685,
-0.8944876194000244,
-0.6971009373664856,
-0.65531986951828,
-0.248700797557830... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
it5/datasets | it5 | 2022-04-26T09:21:47Z | 34 | 0 | null | [
"region:us"
] | 2022-04-26T09:21:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Azu/Handwritten-Mathematical-Expression-Convert-LaTeX | Azu | 2022-03-10T18:25:17Z | 34 | 9 | null | [
"region:us"
] | 2022-03-10T18:25:17Z | 2022-03-10T18:23:05.000Z | 2022-03-10T18:23:05 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-pln-es/readability-es-caes | hackathon-pln-es | 2023-04-13T08:51:40Z | 34 | 1 | null | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"readability",
"region:us"
] | 2023-04-13T08:51:40Z | 2022-04-03T21:42:19.000Z | 2022-04-03T21:42:19 | ---
annotations_creators:
- other
language_creators:
- other
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: readability-es-caes
tags:
- readability
---
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- [CAES corpus](http://galvan.usc.es/caes/) (Martínez et al., 2019): the "Corpus de Aprendices del Español" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These texts are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.
### Languages
Spanish
## Dataset Structure
Texts are tokenized to create a paragraph-based dataset.
### Data Fields
The dataset is formatted as JSON Lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: simple or complex.
- **Level-3:** standardized readability level: basic, intermediate or advanced.
- **Text:** original text formatted into sentences.
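Since the dataset is stored as JSON Lines, each line is one record with the four fields above. A minimal sketch of reading such a record with the standard `json` module (the sample line below is illustrative, not taken from the dataset):

```python
import json

# Illustrative JSON Lines record with the four documented fields
# (the values are made up, not taken from the dataset).
line = ('{"Category": "A1", "Level": "simple", '
        '"Level-3": "basic", "Text": "Hola. Me llamo Ana."}')

record = json.loads(line)
print(record["Level"], record["Level-3"])
```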
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
| [
-0.30984967947006226,
-0.3461446762084961,
0.1794770509004593,
0.44683125615119934,
-0.30243608355522156,
0.3028814196586609,
-0.23460862040519714,
-0.5773627161979675,
0.3494524359703064,
0.5278680920600891,
-0.702766478061676,
-1.0295237302780151,
-0.46853888034820557,
0.3039097189903259... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Goud/Goud-sum | Goud | 2022-07-04T16:02:36Z | 34 | 2 | null | [
"task_categories:summarization",
"task_ids:news-articles-headline-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:100K<n<1M",
"source_datasets:original",
"region:us"
] | 2022-07-04T16:02:36Z | 2022-04-21T15:25:00.000Z | 2022-04-21T15:25:00 | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language: []
license: []
multilinguality: []
pretty_name: Goud-sum
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-headline-generation
---
# Dataset Card for Goud summarization dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Needs More Information]
- **Repository:**[Needs More Information]
- **Paper:**[Goud.ma: a News Article Dataset for Summarization in Moroccan Darija](https://openreview.net/forum?id=BMVq5MELb9)
- **Leaderboard:**[Needs More Information]
- **Point of Contact:**[Needs More Information]
### Dataset Summary
Goud-sum contains 158k articles and their headlines extracted from the [Goud.ma](https://www.goud.ma/) news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija).
### Supported Tasks and Leaderboards
Text Summarization
### Languages
* Moroccan Arabic (Darija)
* Modern Standard Arabic
## Dataset Structure
### Data Instances
The dataset consists of article-headline pairs in string format.
### Data Fields
* article: a string containing the body of the news article
* headline: a string containing the article's headline
* categories: a list of string of article categories
### Data Splits
Goud-sum dataset has 3 splits: _train_, _validation_, and _test_. Below are the number of instances in each split.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 139,288 |
| Validation | 9,497 |
| Test | 9,497 |
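The split sizes above correspond to roughly an 88/6/6 train/validation/test split. A quick sanity check, with the counts copied from the table:

```python
# Split sizes copied from the table above.
splits = {"train": 139_288, "validation": 9_497, "test": 9_497}
total = sum(splits.values())

for name, n in splits.items():
    print(f"{name}: {n / total:.1%}")
print("total:", total)  # ~158k articles, matching the summary above
```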
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The text was written by journalists at [Goud](https://www.goud.ma/).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
```
### Contributions
Thanks to [@issam9](https://github.com/issam9) and [@KhalilMrini](https://github.com/KhalilMrini) for adding this dataset.
| [
-0.5755229592323303,
-0.5275247693061829,
-0.04776925593614578,
0.19488820433616638,
-0.5803873538970947,
0.08438431471586227,
-0.2683342695236206,
-0.21797695755958557,
0.6753405332565308,
0.5539458990097046,
-0.5609450340270996,
-0.9622420072555542,
-0.7886996865272522,
0.105071499943733... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taln-ls2n/termith-eval | taln-ls2n | 2022-09-23T07:49:04Z | 34 | 1 | null | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:multilingual",
"size_categories:n<1K",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2022-09-23T07:49:04Z | 2022-04-22T09:09:23.000Z | 2022-04-22T09:09:23 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
license: cc-by-4.0
multilinguality:
- multilingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- n<1K
pretty_name: TermITH-Eval
---
# TermITH-Eval Benchmark Dataset for Keyphrase Generation
## About
TermITH-Eval is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 400 abstracts of scientific papers in French collected from the FRANCIS and PASCAL databases of the French [Institute for Scientific and Technical Information (Inist)](https://www.inist.fr/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2016)][bougouin-2016].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of apparition in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (the Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
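A keyphrase counts as <u>P</u>resent when its stemmed tokens occur contiguously in the stemmed source text. The sketch below illustrates that matching step; the actual `prmu.py` uses spaCy tokenization and NLTK's French Snowball stemmer, for which `str.lower` stands in here:

```python
def is_present(keyphrase_tokens, text_tokens, stem=str.lower):
    """Check whether the stemmed keyphrase occurs as a contiguous token
    sequence in the stemmed source text (title + abstract)."""
    kp = [stem(t) for t in keyphrase_tokens]
    txt = [stem(t) for t in text_tokens]
    n = len(kp)
    return any(txt[i:i + n] == kp for i in range(len(txt) - n + 1))

print(is_present(["Graphe"], ["Un", "graphe", "est", "une", "structure"]))
```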
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-----------:|-------------:|----------:|------------:|--------:|---------:|
| Test | 399 | 156.9 | 11.81 | 40.60 | 7.32 | 19.28 | 32.80 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **category**: category of the document, i.e. chimie (chemistry), archeologie (archeology), linguistique (linguistics) and scienceInfo (information sciences).
## References
- (Bougouin et al., 2016) Adrien Bougouin, Sabine Barreaux, Laurent Romary, Florian Boudin, and Béatrice Daille. 2016.
[TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation][bougouin-2016].
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1924–1927, Portorož, Slovenia. European Language Resources Association (ELRA).
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[bougouin-2016]: https://aclanthology.org/L16-1304/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | [
-0.23737914860248566,
-0.44983014464378357,
0.37704741954803467,
0.2118675857782364,
-0.36953192949295044,
0.28676751255989075,
-0.11631550639867783,
0.023118678480386734,
0.12079139798879623,
0.41283348202705383,
-0.4350113868713379,
-0.7737662196159363,
-0.43337324261665344,
0.6190874576... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/clmet_3_1 | biglam | 2022-07-18T02:14:38Z | 34 | 0 | null | [
"task_categories:text-classification",
"task_categories:fill-mask",
"task_ids:multi-label-classification",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categorie... | 2022-07-18T02:14:38Z | 2022-07-17T23:27:04.000Z | 2022-07-17T23:27:04 | ---
annotations_creators:
- expert-generated
- machine-generated
language:
- 'en'
language_creators:
- found
paperswithcode_id: null
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Corpus of Late Modern English Texts v3.1'
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
- fill-mask
task_ids:
- multi-label-classification
- masked-language-modeling
---
# Dataset Card for clmet_3_1
**NOTES**:
- Some of the annotations in the `class` and `pos` configs are not properly formed. These are indicated with warning messages when the dataset is loaded.
- In addition to the classes mentioned in the README for the dataset, there is an additional class in the `class` dataset called `QUOT`. As far as I can tell, this is used for tagging all quotation marks.
- When the `class` and `pos` configs are loaded, the available class/pos tags are shown at the top
## Dataset Statistics:
The following table summarises the corpus make-up:
|PERIOD | #authors | #texts |CQP3.1 | non-PUNC |
|-----------|----------|---------------------|--------|---------|
|1710-1780 | 51 | 88 | 12,182,064 | 10,415,721|
|1780-1850 | 70 | 99 | 13,300,457 | 11,269,977|
|1850-1920 | 91 | 146 | 14,858,239 | 12,657,159|
|TOTAL | 212 | 333 | 40,340,760 | 34,342,857|
|GENRE (all tokens) | 1710-1780 | 1780-1850 | 1850-1920 |
|---|---|---|---|
|Narrative fiction | 5,405,645 | 5,780,352 | 7,561,339 |
|Narrative non-fiction | 2,145,946 | 2,261,485 | 1,097,487 |
|Drama | 523,318 | 441,040 | 763,352 |
|Letters | 1,208,219 | 842,795 | 554,046 |
|Treatise | 1,263,090 | 1,927,272 | 2,030,210 |
|Other | 1,635,846 | 2,047,513 | 2,851,805 |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** http://fedora.clarin-d.uni-saarland.de/clmet/clmet.html
- **Repository:** [Needs More Information]
- **Paper:** https://icame.info/icame_static/ij29/ij29-page69-82.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Henrik De Smet](https://www.arts.kuleuven.be/ling/func/members/hendrik-desmet/func)
### Dataset Summary
The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-Jürgen Diller and Jukka Tyrkkö, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkkö 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is:
- The corpus covers the period 1710–1920, divided into three 70-year sub-periods.
- The texts making up the corpus have all been written by British and Irish authors who are native speakers of English.
- The corpus never contains more than three texts by the same author.
- The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period.
### Supported Tasks and Leaderboards
- `named-entity-recognition`: Since this dataset is tagged, it can be used for performing NER
- `text-classification`: Each text comes with the date of the text and can be used to perform stylistic classification of texts
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`
## Dataset Structure
### Data Instances
A `plain` sample looks as follows:
```
{'text': "\nFAME AND THE POET\n \nDRAMATIS PERSONAE�\n \nHarry de Reves , a Poet .\n \n( This name , though of course of French origin , has become anglicised and is pronounced de Reevs . )\n \nDick Prattle , a Lieutenant-Major of the Royal Horse Marines .\n \nFame .\n \nScene\n \nThe Poet 's rooms in London .\nWindows in back .\nA high screen in a corner .\n \nTime : February 30th .\n \nThe Poet is sitting at a table writing .\n \n[ Enter Dick Prattle .\n \nPrattle : Hullo , Harry .\n \nde
Reves : Hullo , Dick .\nGood Lord , where are you from ?\n \nPrattle ( casually ) : The ends of the earth .\n \nde Reves : Well , I 'm damned !\n \nPrattle : Thought I 'd drop in and see how you were getting on .\n \nde Reves : Well , that 's splendid .\nWhat are you doing in London ?\n \nPrattle : Well , I wanted to see if I could get one or two decent ties to wear - you can get nothing out there - then I thought I 'd have a look and see how London was getting on .\n \nde Reves : Splendid !\nHow 's everybody ?\n \nPrattle : All going strong .\n \nde Reves : That 's good .\n \nPrattle ( seeing paper and ink ) : But what are you doing ?\n \nde Reves : Writing .\n \nPrattle : Writing ?\nI did n't know you wrote .\n \nde Reves : Yes , I
've taken to it rather .\n \nPrattle : I say - writing 's no good .\nWhat do you write ?\n \nde Reves : Oh , poetry .\n \nPrattle : Poetry !\nGood Lord !\n \nde Reves : Yes , that sort of thing , you know .\n \nPrattle : Good Lord !\nDo you make any money by it ?\n \nde Reves : No .\nHardly any .\n \nPrattle : I say - why do n't you chuck it ?\n \nde Reves : Oh , I do n't know .\nSome people seem to like my stuff , rather .\nThat 's why I go on .\n \nPrattle : I 'd chuck it if there 's no money in it .\n \nde Reves : Ah , but then it 's hardly in your line , is it ?\nYou 'd hardly approve of poetry if there was money in it .\n \nPrattle : Oh , I do n't say that .\nIf I could make as much by poetry as I can by betting I do n't say I would n't try the poetry touch , only - -\n \nde Reves : Only what ?\n \nPrattle : Oh , I do n't know .\nOnly there seems more sense in betting , somehow .\n \nde Reves : Well , yes .\nI suppose it 's easier to tell what an earthly horse is going to
do , than to tell what Pegasus - -\n \nPrattle : What 's Pegasus ?\n \nde Reves : Oh , the winged horse of poets .\n \nPrattle : I say !\nYou do n't believe in a winged horse , do you ?\n \nde Reves : In our trade we believe in all fabulous things
.\nThey all represent some large truth to us .\nAn emblem like Pegasus is as real a thing to a poet as a Derby winner would be to you .\n \nPrattle : I say .\n( Give me a cigarette .\nThanks . )\nWhat ?\nThen you 'd believe in nymphs and fauns , and Pan , and all those kind of birds ?\n \nde Reves : Yes .\nYes .\nIn all of them .\n \nPrattle : Good Lord !\n \nde Reves : You believe in the Lord Mayor of London , do n't you ?\n \nPrattle : Yes , of course ; but what has - -\n \nde Reves : Four million people or so made him Lord Mayor , did n't they ?\nAnd he represents to them the wealth and dignity and tradition of - -\n \nPrattle : Yes ; but , I say , what has all this - -\n \nde Reves : Well , he stands for an idea to them , and they made him Lord Mayor , and so he is one ...\n \nPrattle : Well , of course he is .\n \nde Reves : In the same way Pan has been made what he is by millions ; by millions to whom he represents world-old traditions .\n \nPrattle ( rising from his chair and stepping backwards , laughing and looking at the Poet in a kind of assumed wonder ) : I say ... I say ... You old heathen ... but Good Lord ...\n \n[ He bumps into the high screen behind , pushing it back a little .\n \nde Reves : Look out !\nLook out !\n \nPrattle : What ?\nWhat 's the matter ?\n \nde Reves : The screen !\n \nPrattle : Oh , sorry , yes .\nI 'll put it right .\n \n[ He is about to go round behind it .\n \nde Reves : No , do n't go round there .\n \nPrattle : What ?\nWhy not ?\n \nde Reves : Oh , you would n't understand .\n \nPrattle : Would n't understand ?\nWhy , what have you got ?\n \nde Reves : Oh , one of those things ... You would n't understand .\n \nPrattle : Of course I 'd understand .\nLet 's have a look .\n \n[ The Poet walks towards Prattle and the screen .\nHe protests no further .\nPrattle looks round the corner of the screen .\n \nAn altar .\n \nde Reves ( removing the screen altogether ) : That is all .\nWhat do you make of it ?\n \n[ An
altar of Greek design , shaped like a pedestal , is revealed .\nPapers litter the floor all about it .\n \nPrattle : I say - you always were an untidy devil .\n \nde Reves : Well , what do you make of it ?\n \nPrattle : It reminds me of your room at Eton .\n \nde Reves : My room at Eton ?\n \nPrattle : Yes , you always had papers all over your floor .\n \nde Reves : Oh , yes - -\n \nPrattle : And what are these ?\n \nde Reves : All these are poems ; and this is my altar to Fame .\n \nPrattle : To Fame ?\n \nde Reves : The same that Homer knew .\n \nPrattle : Good Lord !\n \nde Reves : Keats never saw her .\nShelley died too young .\nShe came late at the best of times , now scarcely ever .\n \nPrattle : But , my dear fellow , you do n't mean that you think there really is such a person ?\n \nde Reves : I offer all my songs to her .\n \nPrattle : But you do n't mean you think you could actually see Fame ?\n \nde Reves : We poets personify abstract things , and not poets only but
sculptors7 and painters too .\nAll the great things of the world are those abstract things .\n \nPrattle : But what I mean is , they 're not really there , like you or me .\n \nde Reves : To us these things are more real than men , they outlive generations , they watch the passing of kingdoms : we go by them like dust ; they are still there , unmoved , unsmiling .\n \nPrattle : But , but , you ca n't think that you could see Fame , you do n't expect to see it ?\n \nde Reves : Not to me .\nNever to me .\nShe of the golden trumpet and Greek dress will never appear to me ... We all have our dreams .\n \nPrattle : I say - what have you been doing all day ?\n \nde Reves : I ?\nOh , only writing a sonnet .\n \nPrattle : Is it a long one ?\n \nde Reves : Not very .\n \nPrattle : About how long is it ?\n \nde Reves : About fourteen lines .\n \nPrattle ( impressively ) : I tell you what it is .\n \nde Reves : Yes ?\n \nPrattle : I tell you what .\nYou 've been overworking yourself .\nI
once got like that on board the Sandhurst , working for the passing-out exam .\nI got so bad that I could have seen anything .\n \nde Reves : Seen anything ?\n \nPrattle : Lord , yes ; horned pigs , snakes with wings ; anything ; one of your winged horses even .\nThey gave me some stuff called bromide for it .\nYou take a rest .\n \nde Reves : But my dear fellow , you do n't understand at all .\nI merely said that abstract things are to a poet as near and real and visible as one of your bookmakers or barmaids .\n \nPrattle : I know .\nYou take a rest .\n \nde Reves : Well , perhaps I will .\nI 'd come with you to that musical comedy you 're going to see , only I 'm a bit tired after writing this ; it 's a tedious job .\nI 'll come another night .\n \nPrattle : How do you know I 'm going to see a musical comedy ?\n \nde Reves : Well , where would you go ?\nHamlet 's 8 on at the Lord Chamberlain 's .\nYou 're not going there .\n \nPrattle : Do I look like it ?\n \nde Reves : No .\n \nPrattle : Well , you 're quite right .\nI 'm going to see `` The Girl from Bedlam . ''\nSo long .\nI must push off now .\nIt 's getting late .\nYou take a rest .\nDo n't add another line to that sonnet ; fourteen 's quite enough .\nYou take a
rest .\nDo n't have any dinner to-night , just rest .\nI was like that once myself .\nSo long .\n \nde Reves : So long .\n \n[ Exit Prattle .\nde Reves returns to his table and sits down .\n \nGood old Dick !\nHe 's the same as ever .\nLord , how time passes .\n \nHe takes his pen and his sonnet and makes a few alterations .\n \nWell , that 's finished .\nI ca n't do any more to it .\n \n[ He rises and goes to the screen ; he draws back part of it and goes up to the altar .\nHe is about to place his sonnet reverently at the foot of the altar amongst his other verses .\n \nNo , I will not put it there .\nThis one is worthy of the altar .\n \n[ He places the sonnet upon the altar itself .\n \nIf that sonnet does not give me fame , nothing that I have done before will give it to me , nothing that I ever will do .\n \n[ He replaces the screen and returns to his chair at the table .\nTwilight is coming on .\nHe sits with his elbow on the table , his head on his hand , or however the actor pleases .\n \nWell , well .\nFancy seeing Dick again .\nWell , Dick enjoys his life , so he 's no fool .\nWhat was that he said ?\n`` There 's no money in poetry .\nYou 'd better chuck it . 
''\nTen years ' work and what have I to show for it ?\nThe admiration of men who care for poetry , and how many of them are there ?\nThere 's a bigger demand for smoked glasses to look at eclipses of the sun .\nWhy should Fame come to me ?\nHave n't I given up my days for her ?\nThat is enough to keep her away .\nI am a poet ; that is enough reason for her to slight me .\nProud and aloof and cold as marble , what does Fame care for us ?\nYes , Dick is right .\nIt 's a poor game chasing illusions , hunting the intangible , pursuing dreams .\nDreams ?\nWhy , we are ourselves dreams .\n \n[ He leans back in his chair .\n \nWe are such stuff As dreams are made on , and our little life Is rounded with a sleep .\n[ He is silent for a while .\nSuddenly he lifts his head .\n \nMy room at Eton , Dick said .\nAn untidy mess .\n \n[ As he lifts his head and says these words , twilight gives place to broad daylight , merely as a hint that the author of the play may have been mistaken , and the whole thing may have been no more than a poet
's dream .\n \nSo it was , and it 's an untidy mess there ( looking at screen ) too .\nDick 's right .\nI 'll tidy it up .\nI 'll burn the whole damned heap ,\n \n[ He advances impetuously towards the screen .\n \nevery damned poem that I was ever
fool enough to waste my time on .\n \n[ He pushes back the screen .\nFame in a Greek dress with a long golden trumpet in her hand is seen standing motionless on the altar like a marble goddess .\n \nSo ... you have come !\n \n[ For a while he stands thunderstruck .\nThen he approaches the altar .\n \nDivine fair lady , you have come .\n \n[ He holds up his hand to her and leads her down from the altar and into the centre of the stage .\nAt whatever moment the actor finds it most convenient , he repossesses himself of the sonnet that he had placed on the altar .\nHe now offers it to Fame .\n \nThis is my sonnet .\nIs it well done ?\n \n[ Fame takes it and reads it in silence , while the Poet watches her rapturously .\n \nFame : You 're a bit of all right .\n \nde Reves : What ?\n \nFame : Some poet .\n \nde Reves : I - I - scarcely ... understand .\n \nFame : You 're IT .\n \nde Reves : But ... it is not possible ... are you she that knew Homer ?\n \nFame : Homer ?\nLord , yes .\nBlind old bat , ' e could n't see a yard .\n \nde Reves : O Heavens !\n \n[ Fame walks beautifully to the window .\nShe opens it and puts her head out .\n \nFame ( in a voice with which a woman in an upper storey would cry for help if the house was well alight ) : Hi !\nHi !\nBoys !\nHi !\nSay , folks !\nHi !\n \n[ The murmur of a gathering crowd is heard .\nFame blows her trumpet .\n \nFame : Hi , he 's a poet !\n( Quickly , over her shoulder . )\nWhat 's your name ?\n \nde Reves : De Reves .\n \nFame : His name 's de Reves .\n \nde Reves : Harry de Reves .\n \nFame : His pals call him Harry .\n \nThe Crowd : Hooray !\nHooray !\nHooray !\n \nFame : Say , what 's your favourite colour ?\n \nde Reves : I ... I ... 
I do n't quite understand .\n \nFame : Well , which do you like best , green or blue ?\n \nde Reves : Oh - er - blue .\n \n[ She blows her trumpet out of the window .\n \nNo - er - I think green .\n \nFame : Green is his favourite colour .\n \nThe Crowd : Hooray !\nHooray !\nHooray !\n \nFame : ` Ere , tell us something .\nThey want to know all about yer .\n \nde Reves : Would n't 9 you perhaps ... would they care to hear my sonnet , if you would - er ...\n \nFame ( picking up quill ) : Here , what 's this ?\n \nde Reves : Oh , that 's my pen .\n \nFame ( after another blast on her trumpet ) : He writes with a quill .\n \n[ Cheers from the Crowd .\n \nFame ( going to a cupboard ) : Here , what have you got in here ?\n \nde Reves : Oh ... er ... those are my breakfast things .\n \nFame ( finding a dirty plate ) : What have yer had on this one ?\n \nde Reves ( mournfully ) : Oh , eggs and bacon .\n \nFame ( at the window ) : He has eggs and bacon for breakfast .\n \nThe Crowd : Hip hip hip , hooray !\nHip hip hip , hooray !\nHip hip hip , hooray !\nFame : Hi , and what 's this ?\n \nde Reves ( miserably ) : Oh , a golf stick .\n \nFame : He 's a man 's man !\nHe 's a virile man !\nHe 's a manly man !\n \n[ Wild cheers from the Crowd , this time only from women 's voices .\n \nde Reves : Oh , this is terrible .\nThis is terrible .\nThis is terrible .\n \n[ Fame gives another peal on her horn .\nShe is about to speak .\n \nde Reves ( solemnly and mournfully ) : One moment , one moment ...\n \nFame : Well , out with it .\n \nde Reves : For ten years , divine lady , I have worshipped you , offering all my songs ... I find ... 
I find I am not worthy ...\n \nFame : Oh , you 're all right .\n \nde Reves : No , no , I am not worthy .\nIt can not be .\nIt can not possibly be .\nOthers deserve you more .\nI must say it !\nI can not possibly love you .\nOthers are worthy .\nYou will find others .\nBut I , no , no , no .\nIt can not be .\nIt can not be .\nOh , pardon me , but it must not .\n \n[ Meanwhile Fame has been lighting one of his cigarettes .\nShe sits in a comfortable chair , leans right back , and puts her feet right up on the table amongst the poet 's papers .\n \nOh , I fear I offend you .\nBut - it can not be .\n \nFame : Oh , that 's all right , old bird ; no offence .\nI ai n't going to leave you .\n \nde Reves : But - but - but - I do not understand .\n \nFame : I 've come to stay , I have .\n \n[ She blows a puff of smoke through her trumpet .\n \nCURTAIN .\n", 'genre': 'Drama', 'subgenre': 'drama', 'year': '1919', 'quarter_cent': '1900-1924', 'decade': '1910s', 'title': 'Fame and the poet', 'author': 'Dunsany [Edward John Moreton Drax Plunkett]', 'notes': '', 'comments': 'selected from larger
file', 'period': '1850-1920', 'id': '317'}
```
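Each `plain` instance pairs the raw running text with bibliographic metadata (`genre`, `year`, `decade`, `period`, `title`, `author`, ...). A small sketch of grouping texts by decade using those fields; the records here are trimmed, hypothetical stand-ins for real instances:

```python
from collections import Counter

# Trimmed stand-in records carrying the metadata fields shown above.
records = [
    {"title": "Fame and the poet", "year": "1919", "decade": "1910s", "genre": "Drama"},
    {"title": "A novel", "year": "1843", "decade": "1840s", "genre": "Narrative fiction"},
    {"title": "Another play", "year": "1912", "decade": "1910s", "genre": "Drama"},
]

texts_per_decade = Counter(r["decade"] for r in records)
print(texts_per_decade.most_common())  # [('1910s', 2), ('1840s', 1)]
```

The same pattern applies unchanged when iterating over the actual dataset split instead of the stand-in list.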
A `pos` sample looks as follows:
```
{'text': ['FAME', 'AND', 'THE', 'POET', 'DRAMATIS', 'PERSONAE�', 'Harry', 'de', 'Reves', ',', 'a', 'Poet', '.', '(', 'This', 'name', ',', 'though', 'of', 'course', 'of', 'French', 'origin', ',', 'has', 'become', 'anglicised', 'and', 'is', 'pronounced', 'de', 'Reevs', '.', ')', 'Dick', 'Prattle', ',', 'a', 'Lieutenant-Major', 'of', 'the', 'Royal', 'Horse', 'Marines', '.', 'Fame', '.', 'Scene', 'The', 'Poet', "'s", 'rooms', 'in', 'London', '.', 'Windows', 'in', 'back', '.', 'A', 'high', 'screen', 'in', 'a', 'corner', '.', 'Time', ':', 'February', '30th', '.', 'The', 'Poet', 'is', 'sitting', 'at', 'a', 'table', 'writing', '.', '[', 'Enter', 'Dick', 'Prattle', '.', 'Prattle', ':', 'Hullo', ',', 'Harry', '.', 'de', 'Reves', ':', 'Hullo', ',', 'Dick', '.', 'Good', 'Lord', ',', 'where', 'are', 'you', 'from', '?', 'Prattle', '(', 'casually', ')', ':', 'The', 'ends', 'of', 'the', 'earth', '.', 'de', 'Reves', ':', 'Well', ',', 'I', "'m", 'damned', '!', 'Prattle', ':', 'Thought', 'I', "'d", 'drop', 'in', 'and', 'see', 'how', 'you', 'were', 'getting', 'on', '.', 'de', 'Reves', ':', 'Well', ',', 'that', "'s", 'splendid', '.', 'What', 'are', 'you', 'doing', 'in', 'London', '?', 'Prattle', ':', 'Well', ',', 'I', 'wanted', 'to', 'see',
'if', 'I', 'could', 'get', 'one', 'or', 'two', 'decent', 'ties', 'to', 'wear', '-', 'you', 'can', 'get', 'nothing', 'out', 'there', '-', 'then', 'I', 'thought', 'I', "'d", 'have', 'a', 'look', 'and', 'see', 'how', 'London', 'was', 'getting', 'on',
'.', 'de', 'Reves', ':', 'Splendid', '!', 'How', "'s", 'everybody', '?', 'Prattle', ':', 'All', 'going', 'strong', '.', 'de', 'Reves', ':', 'That', "'s", 'good', '.', 'Prattle', '(', 'seeing', 'paper', 'and', 'ink', ')', ':', 'But', 'what', 'are',
'you', 'doing', '?', 'de', 'Reves', ':', 'Writing', '.', 'Prattle', ':', 'Writing', '?', 'I', 'did', "n't", 'know', 'you', 'wrote', '.', 'de', 'Reves', ':', 'Yes', ',', 'I', "'ve", 'taken', 'to', 'it', 'rather', '.', 'Prattle', ':', 'I', 'say', '-', 'writing', "'s", 'no', 'good', '.', 'What', 'do', 'you', 'write', '?', 'de', 'Reves', ':', 'Oh', ',', 'poetry', '.', 'Prattle', ':', 'Poetry', '!', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'Yes', ',', 'that', 'sort', 'of', 'thing', ',', 'you', 'know', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'Do', 'you', 'make', 'any', 'money', 'by', 'it', '?', 'de', 'Reves', ':', 'No', '.', 'Hardly', 'any', '.', 'Prattle', ':', 'I', 'say', '-', 'why', 'do', "n't", 'you', 'chuck', 'it', '?', 'de', 'Reves', ':', 'Oh', ',', 'I', 'do', "n't", 'know', '.', 'Some', 'people', 'seem', 'to', 'like', 'my', 'stuff', ',', 'rather', '.', 'That', "'s", 'why', 'I', 'go', 'on', '.', 'Prattle', ':', 'I', "'d", 'chuck', 'it', 'if', 'there', "'s", 'no', 'money', 'in', 'it', '.', 'de', 'Reves', ':', 'Ah', ',', 'but', 'then', 'it', "'s", 'hardly', 'in', 'your', 'line', ',', 'is', 'it', '?', 'You', "'d", 'hardly', 'approve', 'of', 'poetry', 'if', 'there', 'was', 'money', 'in', 'it', '.', 'Prattle', ':', 'Oh', ',', 'I', 'do', "n't", 'say', 'that', '.', 'If', 'I', 'could', 'make', 'as', 'much', 'by', 'poetry', 'as', 'I', 'can', 'by', 'betting', 'I', 'do', "n't", 'say', 'I', 'would', "n't", 'try', 'the', 'poetry', 'touch', ',', 'only', '-', '-', 'de', 'Reves', ':', 'Only', 'what', '?', 'Prattle', ':', 'Oh', ',', 'I', 'do', "n't", 'know', '.', 'Only', 'there', 'seems', 'more', 'sense', 'in', 'betting', ',', 'somehow', '.', 'de', 'Reves', ':', 'Well', ',', 'yes', '.', 'I', 'suppose', 'it', "'s", 'easier', 'to', 'tell', 'what', 'an', 'earthly', 'horse', 'is', 'going', 'to', 'do', ',', 'than', 'to', 'tell', 'what', 'Pegasus', '-', '-', 'Prattle', ':', 'What', "'s", 'Pegasus', '?', 'de', 'Reves', ':', 'Oh', ',', 'the', 'winged', 'horse', 'of', 'poets', '.', 
'Prattle', ':', 'I', 'say', '!', 'You', 'do', "n't", 'believe', 'in', 'a', 'winged', 'horse', ',', 'do', 'you', '?', 'de', 'Reves', ':', 'In', 'our', 'trade', 'we', 'believe', 'in', 'all', 'fabulous', 'things', '.', 'They', 'all', 'represent', 'some', 'large', 'truth', 'to', 'us', '.', 'An', 'emblem', 'like', 'Pegasus', 'is', 'as', 'real', 'a', 'thing', 'to', 'a', 'poet', 'as', 'a', 'Derby', 'winner', 'would', 'be', 'to', 'you', '.', 'Prattle', ':', 'I', 'say', '.', '(', 'Give', 'me', 'a', 'cigarette', '.', 'Thanks', '.', ')', 'What', '?', 'Then', 'you', "'d", 'believe', 'in', 'nymphs', 'and', 'fauns', ',', 'and', 'Pan', ',', 'and', 'all', 'those', 'kind', 'of', 'birds', '?', 'de', 'Reves', ':', 'Yes', '.', 'Yes', '.', 'In',
'all', 'of', 'them', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'You', 'believe', 'in', 'the', 'Lord', 'Mayor', 'of', 'London', ',', 'do', "n't", 'you', '?', 'Prattle', ':', 'Yes', ',', 'of', 'course', ';', 'but', 'what', 'has',
'-', '-', 'de', 'Reves', ':', 'Four', 'million', 'people', 'or', 'so', 'made', 'him', 'Lord', 'Mayor', ',', 'did', "n't", 'they', '?', 'And', 'he', 'represents', 'to', 'them', 'the', 'wealth', 'and', 'dignity', 'and', 'tradition', 'of', '-', '-', 'Prattle', ':', 'Yes', ';', 'but', ',', 'I', 'say', ',', 'what', 'has', 'all', 'this', '-', '-', 'de', 'Reves', ':', 'Well', ',', 'he', 'stands', 'for', 'an', 'idea', 'to', 'them', ',', 'and', 'they', 'made', 'him', 'Lord', 'Mayor', ',', 'and', 'so', 'he', 'is', 'one', '...', 'Prattle', ':', 'Well', ',', 'of', 'course', 'he', 'is', '.', 'de', 'Reves', ':', 'In', 'the', 'same', 'way', 'Pan', 'has', 'been', 'made', 'what', 'he', 'is', 'by', 'millions', ';', 'by', 'millions', 'to', 'whom', 'he', 'represents', 'world-old', 'traditions', '.', 'Prattle', '(', 'rising', 'from', 'his', 'chair', 'and', 'stepping', 'backwards', ',', 'laughing', 'and', 'looking', 'at', 'the', 'Poet', 'in', 'a', 'kind', 'of', 'assumed', 'wonder', ')', ':', 'I', 'say', '...', 'I', 'say', '...', 'You', 'old', 'heathen', '...', 'but', 'Good', 'Lord', '...', '[', 'He', 'bumps', 'into', 'the', 'high', 'screen', 'behind', ',', 'pushing', 'it', 'back', 'a', 'little', '.', 'de', 'Reves', ':', 'Look', 'out', '!', 'Look', 'out', '!', 'Prattle', ':', 'What', '?', 'What', "'s", 'the', 'matter', '?', 'de', 'Reves', ':', 'The', 'screen', '!', 'Prattle', ':', 'Oh', ',', 'sorry', ',', 'yes', '.', 'I', "'ll", 'put', 'it', 'right', '.', '[', 'He', 'is', 'about', 'to', 'go', 'round', 'behind', 'it', '.', 'de', 'Reves', ':', 'No', ',', 'do', "n't", 'go', 'round', 'there', '.', 'Prattle', ':', 'What', '?', 'Why', 'not', '?', 'de', 'Reves', ':', 'Oh', ',', 'you', 'would', "n't", 'understand', '.', 'Prattle', ':', 'Would', "n't", 'understand', '?', 'Why', ',', 'what', 'have', 'you', 'got', '?', 'de', 'Reves', ':', 'Oh', ',', 'one', 'of', 'those', 'things', '...', 'You', 'would', "n't", 'understand', '.', 'Prattle', ':', 'Of', 'course', 'I', "'d", 'understand', '.', 'Let', 
"'s", 'have', 'a', 'look', '.', '[', 'The', 'Poet', 'walks', 'towards', 'Prattle', 'and', 'the', 'screen', '.', 'He', 'protests', 'no', 'further', '.', 'Prattle', 'looks', 'round', 'the', 'corner', 'of', 'the', 'screen', '.', 'An', 'altar', '.', 'de', 'Reves', '(', 'removing', 'the', 'screen', 'altogether', ')', ':', 'That', 'is', 'all', '.', 'What', 'do', 'you', 'make', 'of', 'it', '?', '[', 'An', 'altar', 'of', 'Greek', 'design', ',', 'shaped', 'like', 'a', 'pedestal', ',', 'is', 'revealed', '.', 'Papers', 'litter', 'the', 'floor', 'all', 'about', 'it', '.', 'Prattle', ':', 'I', 'say', '-', 'you', 'always', 'were', 'an', 'untidy', 'devil', '.', 'de', 'Reves', ':', 'Well', ',', 'what', 'do', 'you', 'make', 'of', 'it', '?', 'Prattle', ':', 'It', 'reminds', 'me', 'of', 'your', 'room', 'at', 'Eton', '.', 'de', 'Reves', ':', 'My', 'room', 'at', 'Eton', '?', 'Prattle', ':', 'Yes', ',', 'you', 'always', 'had', 'papers', 'all', 'over', 'your', 'floor', '.', 'de', 'Reves', ':', 'Oh', ',', 'yes', '-', '-', 'Prattle', ':', 'And', 'what', 'are', 'these', '?', 'de', 'Reves', ':', 'All', 'these', 'are', 'poems', ';', 'and', 'this', 'is', 'my', 'altar', 'to', 'Fame', '.', 'Prattle', ':', 'To', 'Fame', '?', 'de', 'Reves', ':', 'The', 'same', 'that', 'Homer', 'knew', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'Keats', 'never', 'saw', 'her', '.', 'Shelley', 'died', 'too', 'young', '.', 'She', 'came', 'late', 'at', 'the', 'best', 'of', 'times', ',', 'now', 'scarcely', 'ever', '.', 'Prattle', ':', 'But', ',', 'my', 'dear', 'fellow', ',', 'you', 'do', "n't", 'mean', 'that', 'you', 'think', 'there', 'really', 'is', 'such', 'a', 'person', '?', 'de', 'Reves', ':', 'I', 'offer', 'all', 'my', 'songs', 'to', 'her', '.', 'Prattle', ':', 'But', 'you', 'do', "n't", 'mean', 'you', 'think', 'you', 'could', 'actually', 'see', 'Fame', '?', 'de', 'Reves', ':', 'We', 'poets', 'personify', 'abstract', 'things', ',', 'and', 'not', 'poets', 'only', 'but', 'sculptors7', 'and', 
'painters', 'too', '.', 'All', 'the', 'great', 'things', 'of', 'the', 'world', 'are', 'those', 'abstract', 'things', '.', 'Prattle', ':', 'But', 'what', 'I', 'mean', 'is', ',', 'they', "'re", 'not', 'really', 'there', ',', 'like', 'you', 'or', 'me', '.', 'de', 'Reves', ':', 'To', 'us', 'these', 'things', 'are', 'more', 'real', 'than', 'men', ',', 'they', 'outlive', 'generations', ',', 'they', 'watch', 'the', 'passing', 'of', 'kingdoms', ':', 'we', 'go', 'by', 'them', 'like', 'dust', ';', 'they', 'are', 'still', 'there', ',', 'unmoved', ',', 'unsmiling', '.', 'Prattle', ':', 'But', ',', 'but', ',', 'you', 'ca', "n't", 'think', 'that', 'you', 'could', 'see', 'Fame', ',', 'you', 'do', "n't", 'expect', 'to', 'see',
'it', '?', 'de', 'Reves', ':', 'Not', 'to', 'me', '.', 'Never', 'to', 'me', '.', 'She', 'of', 'the', 'golden', 'trumpet', 'and', 'Greek', 'dress', 'will', 'never', 'appear', 'to', 'me', '...', 'We', 'all', 'have', 'our', 'dreams', '.', 'Prattle', ':', 'I', 'say', '-', 'what', 'have', 'you', 'been', 'doing', 'all', 'day', '?', 'de', 'Reves', ':', 'I', '?', 'Oh', ',', 'only', 'writing', 'a', 'sonnet', '.', 'Prattle', ':', 'Is', 'it', 'a', 'long', 'one', '?', 'de', 'Reves', ':', 'Not', 'very',
'.', 'Prattle', ':', 'About', 'how', 'long', 'is', 'it', '?', 'de', 'Reves', ':', 'About', 'fourteen', 'lines', '.', 'Prattle', '(', 'impressively', ')', ':', 'I', 'tell', 'you', 'what', 'it', 'is', '.', 'de', 'Reves', ':', 'Yes', '?', 'Prattle', ':', 'I', 'tell', 'you', 'what', '.', 'You', "'ve", 'been', 'overworking', 'yourself', '.', 'I', 'once', 'got', 'like', 'that', 'on', 'board', 'the', 'Sandhurst', ',', 'working', 'for', 'the', 'passing-out', 'exam', '.', 'I', 'got', 'so', 'bad', 'that', 'I', 'could', 'have', 'seen', 'anything', '.', 'de', 'Reves', ':', 'Seen', 'anything', '?', 'Prattle', ':', 'Lord', ',', 'yes', ';', 'horned', 'pigs', ',', 'snakes', 'with', 'wings', ';', 'anything', ';', 'one', 'of', 'your', 'winged', 'horses', 'even', '.', 'They', 'gave', 'me', 'some', 'stuff', 'called', 'bromide', 'for', 'it', '.', 'You', 'take', 'a', 'rest', '.', 'de', 'Reves', ':', 'But', 'my', 'dear', 'fellow', ',', 'you', 'do', "n't", 'understand', 'at', 'all', '.', 'I', 'merely', 'said', 'that', 'abstract', 'things', 'are', 'to', 'a', 'poet', 'as', 'near', 'and', 'real', 'and', 'visible', 'as', 'one', 'of', 'your', 'bookmakers', 'or', 'barmaids', '.', 'Prattle', ':', 'I', 'know', '.', 'You', 'take', 'a', 'rest', '.', 'de', 'Reves', ':', 'Well', ',', 'perhaps', 'I', 'will', '.', 'I', "'d", 'come', 'with', 'you', 'to', 'that', 'musical', 'comedy', 'you', "'re", 'going', 'to', 'see', ',', 'only', 'I', "'m", 'a', 'bit', 'tired', 'after', 'writing', 'this', ';', 'it', "'s", 'a', 'tedious', 'job', '.', 'I', "'ll", 'come', 'another', 'night', '.', 'Prattle', ':', 'How', 'do', 'you', 'know', 'I', "'m", 'going', 'to', 'see', 'a', 'musical', 'comedy', '?', 'de', 'Reves', ':', 'Well', ',', 'where', 'would', 'you', 'go', '?', 'Hamlet', "'s", '8', 'on', 'at', 'the', 'Lord', 'Chamberlain', "'s", '.', 'You', "'re", 'not', 'going', 'there', '.', 'Prattle', ':', 'Do', 'I', 'look', 'like', 'it', '?', 'de', 'Reves', ':', 'No', '.', 'Prattle', ':', 'Well', ',', 'you', "'re", 'quite', 
'right', '.', 'I', "'m", 'going', 'to', 'see', '``', 'The', 'Girl', 'from', 'Bedlam', '.', "''", 'So', 'long', '.', 'I', 'must', 'push', 'off', 'now', '.', 'It', "'s", 'getting', 'late', '.', 'You', 'take', 'a', 'rest', '.', 'Do', "n't", 'add', 'another', 'line', 'to', 'that', 'sonnet', ';', 'fourteen', "'s", 'quite', 'enough', '.', 'You', 'take', 'a', 'rest', '.', 'Do', "n't", 'have', 'any', 'dinner', 'to-night', ',', 'just', 'rest', '.', 'I', 'was', 'like', 'that', 'once', 'myself', '.', 'So', 'long', '.', 'de', 'Reves', ':', 'So', 'long', '.', '[', 'Exit', 'Prattle', '.', 'de', 'Reves', 'returns', 'to', 'his', 'table', 'and', 'sits', 'down', '.', 'Good', 'old', 'Dick', '!', 'He', "'s", 'the', 'same', 'as', 'ever', '.', 'Lord', ',', 'how', 'time', 'passes', '.', 'He', 'takes', 'his', 'pen', 'and', 'his', 'sonnet', 'and', 'makes', 'a', 'few', 'alterations', '.', 'Well', ',', 'that', "'s", 'finished', '.', 'I', 'ca', "n't", 'do', 'any', 'more', 'to', 'it', '.', '[', 'He', 'rises', 'and', 'goes', 'to', 'the', 'screen', ';', 'he', 'draws', 'back', 'part', 'of', 'it', 'and', 'goes', 'up', 'to', 'the', 'altar', '.', 'He', 'is', 'about', 'to', 'place', 'his', 'sonnet', 'reverently', 'at', 'the', 'foot', 'of', 'the', 'altar', 'amongst', 'his', 'other', 'verses', '.', 'No', ',', 'I', 'will', 'not', 'put', 'it', 'there', '.', 'This', 'one', 'is', 'worthy', 'of', 'the', 'altar', '.', '[', 'He', 'places', 'the', 'sonnet', 'upon', 'the', 'altar', 'itself', '.', 'If', 'that', 'sonnet', 'does', 'not', 'give', 'me', 'fame', ',', 'nothing', 'that', 'I', 'have', 'done', 'before', 'will', 'give', 'it', 'to', 'me', ',', 'nothing', 'that', 'I', 'ever', 'will', 'do', '.', '[', 'He', 'replaces', 'the', 'screen', 'and', 'returns', 'to', 'his', 'chair', 'at', 'the', 'table', '.', 'Twilight', 'is', 'coming', 'on', '.', 'He', 'sits', 'with', 'his', 'elbow', 'on', 'the', 'table', ',', 'his', 'head', 'on', 'his', 'hand', ',', 'or', 'however', 'the', 'actor', 'pleases', '.', 'Well', ',', 
'well', '.', 'Fancy', 'seeing', 'Dick', 'again', '.', 'Well', ',', 'Dick', 'enjoys', 'his', 'life', ',', 'so', 'he', "'s", 'no', 'fool', '.', 'What', 'was', 'that', 'he', 'said', '?', '``', 'There', "'s", 'no', 'money', 'in', 'poetry', '.', 'You', "'d", 'better', 'chuck', 'it', '.', "''", 'Ten', 'years', "'", 'work', 'and', 'what', 'have', 'I', 'to', 'show', 'for', 'it', '?', 'The', 'admiration', 'of', 'men', 'who', 'care', 'for', 'poetry', ',', 'and', 'how', 'many', 'of', 'them', 'are', 'there', '?', 'There', "'s", 'a', 'bigger', 'demand', 'for', 'smoked', 'glasses', 'to', 'look', 'at', 'eclipses', 'of', 'the', 'sun', '.', 'Why', 'should', 'Fame', 'come', 'to', 'me', '?', 'Have', "n't", 'I', 'given', 'up', 'my', 'days', 'for', 'her', '?', 'That', 'is', 'enough', 'to', 'keep', 'her', 'away', '.', 'I', 'am', 'a', 'poet', ';', 'that', 'is', 'enough', 'reason', 'for', 'her', 'to', 'slight', 'me', '.', 'Proud', 'and', 'aloof', 'and', 'cold', 'as', 'marble', ',', 'what', 'does', 'Fame', 'care', 'for', 'us', '?', 'Yes', ',', 'Dick', 'is', 'right', '.', 'It', "'s", 'a', 'poor', 'game', 'chasing', 'illusions', ',', 'hunting', 'the', 'intangible', ',', 'pursuing', 'dreams', '.', 'Dreams', '?', 'Why', ',', 'we', 'are', 'ourselves', 'dreams', '.', '[', 'He', 'leans', 'back', 'in', 'his', 'chair', '.', 'We', 'are', 'such', 'stuff', 'As', 'dreams', 'are', 'made', 'on', ',', 'and', 'our', 'little', 'life', 'Is', 'rounded', 'with', 'a', 'sleep', '.', '[', 'He', 'is', 'silent', 'for', 'a', 'while', '.', 'Suddenly', 'he', 'lifts', 'his', 'head', '.', 'My', 'room', 'at', 'Eton', ',', 'Dick', 'said', '.', 'An', 'untidy', 'mess', '.', '[', 'As', 'he', 'lifts', 'his', 'head', 'and', 'says', 'these', 'words', ',', 'twilight', 'gives', 'place', 'to', 'broad', 'daylight', ',', 'merely', 'as', 'a', 'hint', 'that', 'the', 'author', 'of', 'the', 'play', 'may', 'have', 'been', 'mistaken', ',', 'and', 'the', 'whole', 'thing', 'may', 'have', 'been', 'no', 'more', 'than', 'a', 'poet', "'s", 
'dream', '.', 'So', 'it', 'was', ',', 'and', 'it', "'s", 'an', 'untidy', 'mess', 'there', '(', 'looking', 'at', 'screen', ')', 'too', '.', 'Dick', "'s", 'right', '.', 'I', "'ll", 'tidy', 'it', 'up', '.', 'I', "'ll", 'burn', 'the', 'whole', 'damned', 'heap', ',', '[', 'He', 'advances', 'impetuously', 'towards', 'the', 'screen', '.', 'every', 'damned', 'poem', 'that', 'I', 'was', 'ever', 'fool', 'enough', 'to', 'waste', 'my', 'time', 'on', '.', '[', 'He', 'pushes', 'back', 'the', 'screen', '.', 'Fame', 'in', 'a', 'Greek', 'dress', 'with', 'a', 'long', 'golden', 'trumpet', 'in', 'her', 'hand', 'is', 'seen', 'standing', 'motionless', 'on', 'the', 'altar', 'like', 'a', 'marble', 'goddess', '.', 'So', '...', 'you', 'have', 'come', '!', '[', 'For', 'a', 'while', 'he', 'stands', 'thunderstruck', '.', 'Then', 'he', 'approaches', 'the', 'altar', '.', 'Divine', 'fair', 'lady', ',', 'you', 'have', 'come', '.', '[', 'He', 'holds', 'up', 'his', 'hand', 'to', 'her', 'and', 'leads', 'her', 'down', 'from', 'the', 'altar', 'and', 'into', 'the', 'centre', 'of', 'the', 'stage', '.', 'At', 'whatever', 'moment', 'the', 'actor', 'finds', 'it', 'most', 'convenient', ',', 'he', 'repossesses', 'himself', 'of',
'the', 'sonnet', 'that', 'he', 'had', 'placed', 'on', 'the', 'altar', '.', 'He', 'now', 'offers', 'it', 'to', 'Fame', '.', 'This', 'is', 'my', 'sonnet', '.', 'Is', 'it', 'well', 'done', '?', '[', 'Fame', 'takes', 'it', 'and', 'reads', 'it', 'in', 'silence', ',', 'while', 'the', 'Poet', 'watches', 'her', 'rapturously', '.', 'Fame', ':', 'You', "'re", 'a', 'bit', 'of', 'all', 'right', '.', 'de', 'Reves', ':', 'What', '?', 'Fame', ':', 'Some', 'poet', '.', 'de', 'Reves', ':', 'I', '-', 'I', '-', 'scarcely', '...', 'understand', '.', 'Fame', ':', 'You', "'re", 'IT', '.', 'de', 'Reves', ':', 'But', '...', 'it', 'is', 'not', 'possible', '...', 'are', 'you', 'she', 'that', 'knew', 'Homer', '?', 'Fame', ':', 'Homer', '?', 'Lord', ',', 'yes',
'.', 'Blind', 'old', 'bat', ',', "'", 'e', 'could', "n't", 'see', 'a', 'yard', '.', 'de', 'Reves', ':', 'O', 'Heavens', '!', '[', 'Fame', 'walks', 'beautifully', 'to', 'the', 'window', '.', 'She', 'opens', 'it', 'and', 'puts', 'her', 'head', 'out', '.', 'Fame', '(', 'in', 'a', 'voice', 'with', 'which', 'a', 'woman', 'in', 'an', 'upper', 'storey', 'would', 'cry', 'for', 'help', 'if', 'the', 'house', 'was', 'well', 'alight', ')', ':', 'Hi', '!', 'Hi', '!', 'Boys', '!', 'Hi', '!', 'Say', ',', 'folks', '!', 'Hi', '!', '[', 'The', 'murmur', 'of', 'a', 'gathering', 'crowd', 'is', 'heard', '.', 'Fame', 'blows', 'her', 'trumpet', '.', 'Fame', ':', 'Hi', ',', 'he', "'s", 'a', 'poet', '!', '(', 'Quickly', ',', 'over', 'her', 'shoulder', '.', ')', 'What', "'s", 'your', 'name', '?', 'de', 'Reves', ':', 'De', 'Reves', '.', 'Fame', ':', 'His', 'name', "'s", 'de', 'Reves', '.', 'de', 'Reves', ':', 'Harry', 'de', 'Reves', '.', 'Fame', ':', 'His', 'pals', 'call', 'him', 'Harry', '.', 'The', 'Crowd', ':', 'Hooray', '!', 'Hooray', '!', 'Hooray', '!', 'Fame', ':', 'Say', ',', 'what', "'s", 'your', 'favourite', 'colour', '?', 'de', 'Reves', ':', 'I', '...', 'I', '...', 'I', 'do', "n't", 'quite', 'understand', '.', 'Fame', ':', 'Well', ',', 'which', 'do', 'you', 'like', 'best', ',', 'green', 'or', 'blue', '?', 'de', 'Reves', ':', 'Oh', '-', 'er', '-', 'blue', '.', '[', 'She', 'blows', 'her', 'trumpet', 'out', 'of', 'the', 'window', '.', 'No', '-', 'er', '-', 'I', 'think', 'green', '.', 'Fame', ':', 'Green', 'is', 'his', 'favourite', 'colour', '.', 'The', 'Crowd', ':', 'Hooray', '!', 'Hooray', '!', 'Hooray', '!', 'Fame', ':', '`', 'Ere', ',', 'tell', 'us', 'something', '.', 'They', 'want', 'to', 'know', 'all', 'about', 'yer', '.', 'de', 'Reves', ':', 'Would', "n't", '9', 'you', 'perhaps', '...', 'would', 'they', 'care', 'to', 'hear', 'my', 'sonnet', ',', 'if', 'you', 'would', '-', 'er', '...', 'Fame', '(', 'picking', 'up', 'quill', ')', ':', 'Here', ',', 'what', "'s", 'this', '?', 'de', 
'Reves', ':', 'Oh', ',', 'that', "'s", 'my', 'pen', '.', 'Fame', '(', 'after', 'another', 'blast', 'on', 'her', 'trumpet', ')', ':', 'He', 'writes', 'with', 'a', 'quill', '.', '[', 'Cheers', 'from', 'the', 'Crowd', '.', 'Fame', '(',
'going', 'to', 'a', 'cupboard', ')', ':', 'Here', ',', 'what', 'have', 'you', 'got', 'in', 'here', '?', 'de', 'Reves', ':', 'Oh', '...', 'er', '...', 'those', 'are', 'my', 'breakfast', 'things', '.', 'Fame', '(', 'finding', 'a', 'dirty', 'plate', ')', ':', 'What', 'have', 'yer', 'had', 'on', 'this', 'one', '?', 'de', 'Reves', '(', 'mournfully', ')', ':', 'Oh', ',', 'eggs', 'and', 'bacon', '.', 'Fame', '(', 'at', 'the', 'window', ')', ':', 'He', 'has', 'eggs', 'and', 'bacon', 'for', 'breakfast', '.', 'The', 'Crowd', ':', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Fame', ':', 'Hi', ',', 'and', 'what', "'s", 'this', '?', 'de', 'Reves', '(', 'miserably', ')', ':', 'Oh', ',', 'a', 'golf', 'stick', '.', 'Fame', ':', 'He', "'s", 'a', 'man', "'s", 'man', '!', 'He', "'s", 'a', 'virile', 'man', '!', 'He', "'s", 'a', 'manly', 'man', '!', '[', 'Wild', 'cheers', 'from', 'the', 'Crowd', ',', 'this', 'time', 'only', 'from', 'women', "'s", 'voices', '.', 'de', 'Reves', ':', 'Oh', ',', 'this', 'is', 'terrible', '.', 'This', 'is', 'terrible', '.', 'This', 'is', 'terrible', '.', '[', 'Fame', 'gives', 'another', 'peal', 'on', 'her', 'horn', '.', 'She', 'is', 'about', 'to', 'speak', '.', 'de', 'Reves', '(', 'solemnly', 'and', 'mournfully', ')', ':', 'One', 'moment', ',', 'one', 'moment', '...', 'Fame', ':', 'Well', ',', 'out', 'with', 'it', '.', 'de', 'Reves', ':', 'For', 'ten', 'years', ',', 'divine', 'lady', ',', 'I', 'have', 'worshipped', 'you', ',', 'offering', 'all', 'my', 'songs', '...', 'I', 'find', '...', 'I', 'find', 'I', 'am', 'not', 'worthy', '...', 'Fame', ':', 'Oh', ',', 'you', "'re", 'all', 'right', '.', 'de', 'Reves', ':', 'No', ',', 'no', ',', 'I', 'am', 'not', 'worthy', '.', 'It', 'can', 'not', 'be', '.', 'It', 'can', 'not', 'possibly', 'be', '.', 'Others', 'deserve', 'you', 'more', '.', 'I', 'must', 'say', 'it', '!', 'I', 'can', 'not', 'possibly', 'love', 'you', '.', 'Others', 'are', 
'worthy', '.', 'You', 'will', 'find', 'others', '.', 'But', 'I', ',', 'no', ',', 'no', ',', 'no', '.', 'It', 'can', 'not', 'be', '.', 'It', 'can', 'not', 'be', '.', 'Oh', ',', 'pardon', 'me', ',', 'but', 'it', 'must', 'not', '.', '[', 'Meanwhile', 'Fame', 'has', 'been', 'lighting', 'one', 'of', 'his', 'cigarettes', '.', 'She', 'sits', 'in', 'a', 'comfortable', 'chair', ',', 'leans', 'right', 'back', ',', 'and', 'puts', 'her', 'feet', 'right', 'up', 'on', 'the', 'table', 'amongst', 'the', 'poet', "'s", 'papers', '.', 'Oh', ',', 'I', 'fear', 'I', 'offend', 'you', '.', 'But', '-', 'it', 'can', 'not', 'be', '.', 'Fame', ':', 'Oh', ',', 'that', "'s", 'all', 'right', ',', 'old', 'bird', ';', 'no', 'offence', '.', 'I', 'ai', "n't", 'going', 'to', 'leave', 'you', '.', 'de', 'Reves', ':', 'But', '-', 'but', '-', 'but', '-', 'I', 'do', 'not', 'understand', '.', 'Fame', ':', 'I', "'ve", 'come', 'to', 'stay', ',', 'I', 'have', '.', '[', 'She', 'blows', 'a', 'puff', 'of', 'smoke', 'through', 'her', 'trumpet', '.', 'CURTAIN', '.'], 'pos_tags': [10, 0, 2, 12, 12, 12, 12, 12, 12, 38, 2, 12, 38, 41, 2, 10, 38, 18, 5, 10, 5, 6, 10, 38, 30, 29, 29, 0, 30, 6, 12, 12, 38, 42, 12, 12, 38, 2, 12, 5, 2, 12, 12, 13, 38, 12, 38, 10, 2, 12, 15, 11, 5, 12, 38, 11, 5, 18, 38, 2, 6, 10, 5, 2, 10, 38, 10, 38, 12, 6, 38, 2, 12, 30, 28, 5, 2, 10, 10, 38, 41, 12, 12, 12, 38, 10, 38, 12, 38, 12, 38, 12, 12, 38, 12, 38, 12, 38, 6, 12, 38, 35, 31, 16, 5, 22, 10, 41, 18, 42, 38, 2, 11, 5, 2, 10, 38, 12, 12, 38, 25, 38, 16, 31, 29, 22, 10, 38, 27, 16, 9, 26, 21, 0, 26, 35, 16, 27, 28, 5, 38, 12, 12, 38, 25, 38, 32, 30, 6, 38, 33, 31, 16, 28, 5, 12, 22, 10, 38, 18, 38, 16, 27, 24, 26, 5, 16, 9, 26, 1, 0, 1, 6, 11, 24, 26, 38, 16, 9, 26, 10, 21, 18, 38, 18, 16, 27, 16, 9, 26, 2, 10, 0, 26, 35, 12, 27, 28, 5, 38, 12, 12, 38, 6, 22, 35, 30, 10, 22, 10, 38, 2, 28, 6, 38, 12, 12, 38, 32, 30, 6, 38, 10, 41, 28, 10, 0, 10, 42, 38, 0, 33, 31, 16, 28, 22, 12, 12, 38, 28, 38, 10, 38, 28,
22, 16, 27, 36, 26, 16, 27, 38, 12, 12, 38, 25, 38, 16, 31, 29, 24, 16, 18, 38, 10, 38, 16, 31, 38, 28, 30, 18, 6, 38, 33, 31, 16, 26, 22, 12, 12, 38, 25, 38, 10, 38, 10, 38, 10, 22, 6, 12, 22, 12, 12, 38, 25, 38, 2, 10, 5, 10, 38, 16, 31, 38, 10,
38, 6, 12, 22, 26, 16, 26, 2, 10, 5, 16, 22, 12, 12, 38, 25, 38, 18, 18, 38, 10, 38, 16, 31, 38, 35, 31, 36, 16, 31, 16, 22, 12, 12, 38, 25, 38, 16, 31, 36, 26, 38, 2, 11, 31, 24, 26, 17, 10, 38, 18, 38, 2, 30, 35, 16, 31, 5, 38, 10, 38, 16, 9, 26, 16, 5, 3, 30, 2, 10, 5, 16, 38, 12, 12, 38, 25, 38, 0, 18, 16, 30, 18, 5, 17, 10, 38, 30, 16, 22, 16, 9, 18, 26, 5, 10, 5, 3, 27, 10, 5, 16, 38, 10, 38, 25, 38, 16, 31, 36, 26, 2, 38, 5, 16, 9, 26, 18, 18, 5, 10, 5, 16, 31, 5, 28, 16, 31, 36, 26,
16, 9, 36, 26, 2, 10, 10, 38, 18, 38, 38, 12, 12, 38, 18, 33, 22, 10, 38, 25, 38, 16, 31, 36, 26, 38, 18, 3, 30, 7, 10, 5, 28, 38, 18, 38, 12, 12, 38, 25, 38, 25, 38, 16, 31, 16, 30, 7, 24, 26, 33, 2, 6, 10, 30, 28, 24, 26, 38, 5, 24, 26, 33, 12, 38, 38, 10, 38, 33, 30, 12, 22, 12, 12, 38, 25, 38, 2, 29, 10, 5, 11, 38, 10, 38, 16, 31, 22, 16, 31, 36, 26, 5, 2, 29, 10, 38, 31, 16, 22, 12, 12, 38, 5, 17, 10, 16, 31, 5, 2, 6, 11, 38, 16, 18, 31, 2, 6, 10, 24, 16, 38, 2, 10, 5, 12, 30, 18, 6, 2, 10, 24, 2, 10, 5, 2, 12, 10, 9, 26, 24, 16, 38, 10, 38, 16, 31, 38, 41, 26, 16, 2, 10, 38, 11, 38, 42, 33, 22, 18, 16, 9, 26, 5, 11, 0, 11, 38, 0, 12, 38, 0, 14, 2, 10, 5, 11, 22, 12, 12, 38, 25, 38, 25, 38, 5, 2, 5, 16, 38, 10, 38, 6, 12, 22, 12, 12, 38, 16, 31, 5, 2, 12, 12, 5, 12, 38, 31, 36, 16, 22, 10, 38, 25, 38, 5, 10, 38, 0, 33, 30, 38, 38, 12, 12, 38, 1, 1, 11, 0, 18, 27, 16, 12, 12, 38, 27, 36, 16, 22, 0, 16, 30, 24, 16, 2, 10, 0, 10, 0, 10, 5, 38, 38, 10, 38, 25, 38, 0, 38, 16, 31, 38, 33, 30, 14, 2, 38, 38, 12, 12, 38, 25, 38, 16, 30, 5, 2, 10, 24, 16, 38, 0, 16, 27, 16, 12, 12, 38, 0, 18, 16, 30, 1, -1, 10, 38, 18, 38, 5, 10, 16, 30, 38, 12, 12, 38, 5, 2, 6, 10, 12, 30, 29, 29, 33, 16, 30, 5, 11, 38, 5, 11, 24, 33, 16, 30, 6, 11, 38, 10, 41, 28, 5, 17, 10, 0, 28, 18, 38, 28, 0, 28, 5, 2, 12, 5, 2, 10, 5, 6, 10, 42, 38, 16, 31, -1, 16, 31, -1, 16, 6, 11, -1, 0, 12, 12, -1, 41, 16, 30, 5, 2, 6, 10, 18, 38, 28, 16, 18, 2, 6, 38, 12, 12, 38, 31, 21, 22, 26, 21, 22, 10, 38, 33, 22, 33, 30, 2, 10, 22, 12, 12, 38, 2, 10, 22, 10, 38, 25, 38, 18, 38, 25, 38, 16, 9, 26, 16, 18, 38, 41, 16, 30, 18, 24, 26, 10, 5, 16, 38, 12, 12, 38, 25, 38, 31, 36, 26, 10, 18, 38, 10, 38, 33, 22, 35, 36, 22, 12, 12, 38, 25, 38, 16, 9, 36, 26, 38, 10, 38, 9, 36, 26, 22, 35, 38, 33, 31, 16, 27, 22, 12, 12, 38, 25, 38, 1, 5, 2, 11, -1, 16, 9, 36, 26, 38, 10, 38, 5, 10, 16, 9, 26, 38, 26, 30, 26, 2, 10, 38, 41, 12, 12, 30, 5, 12, 0, 2, 10, 38, 16, 30, 18, 7, 38, 10, 11, 31, 2, 10,
5, 2, 10, 38, 2, 10, 38, 12, 12, 41, 28, 2, 10, 18, 42, 38, 32, 30, 18, 38, 33, 31, 16, 26, 5, 16, 22, 41, 2, 10, 5, 6, 10, 38, 29, 5, 2, 10, 38, 30, 29, 38, 11, 31, 2, 10, 18, 5, 16, 38, 10, 38, 16, 31, 38, 16, 18, 27, 2, 6, 10, 38, 12, 12, 38, 25, 38, 33, 31, 16, 26, 5, 16, 22, 10, 38, 16, 30, 16, 5, 17, 10, 5, 12, 38, 12, 12, 38, 17, 10, 5, 12, 22, 10, 38, 25, 38, 16, 18, 27, 11, 18, 5, 17, 10, 38, 12, 12, 38, 25, 38, 25, 38, 38, 10, 38, 0, 33, 31, 2, 22, 12, 12, 38, 14, 2, 31, 11, 38, 0, 2, 30, 17, 10, 24, 12, 38, 10, 38, 24, 12, 22, 12, 12, 38, 2, 6, 5, 12, 27, 38, 10, 38, 6, 12, 22, 12, 12, 38, 12, 18, 27, 16, 38, 12, 27, 18, 6, 38, 16, 27, 18, 5, 2, 8, 5, 11, 38, 18, 18, 18, 38, 10, 38, 0, 38, 17, 6, 10, 38, 16, 31, 36, 26, 5,
16, 31, 3, 18, 30, 14, 2, 10, 22, 12, 12, 38, 16, 31, 14, 17, 11, 24, 16, 38, 10, 38, 0, 16, 31, 36, 26, 16, 31, 16, 9, 18, 26, 12, 22, 12, 12, 38, 16, 11, 31, 6, 11, 38, 0, 36, 11, 6, 0, 6, 0, 11, 18, 38, 14, 2, 6, 11, 5, 2, 10, 31, 2, 6, 11, 38,
10, 38, 0, 33, 16, 31, 30, 38, 16, 31, 36, 18, 18, 38, 5, 16, 0, 16, 38, 12, 12, 38, 24, 16, 2, 11, 31, 19, 6, 5, 11, 38, 16, 31, 11, 38, 16, 31, 2, 10, 5, 11, 38, 16, 31, 5, 16, 31, 10, 38, 16, 31, 18, 18, 38, 6, 38, 12, 38, 10, 38, 0, 38, 18, 38, 16, 9, 36, 26, 5, 16, 9, 26, 12, 38, 16, 31, 36, 26, 24, 26, 16, 22, 12, 12, 38, 36, 24, 16, 38, 18, 24, 16, 38, 16, 5, 2, 6, 10, 0, 6, 10, 9, 18, 26, 24, 16, -1, 16, 18, 31, 17, 11, 38, 10, 38, 16, 31, 38, 33, 31, 16, 29, 28, 2, 10, 22, 12, 12, 38, 16, 22, 25, 38, 18, 28, 2, 10, 38, 10, 38, 30, 16, 2, 6, 1, 22, 12, 12, 38, 36, 18, 38, 10, 38, 18, 35, 18, 30, 16, 22, 12, 12, 38, 5, 10, 11, 38, 10, 41, 18, 42, 38, 16, 26, 16, 33, 16, 30, 38, 12, 12, 38, 25, 22, 10, 38, 16, 26, 16, 33, 38, 16, 31, 29, 28, 16, 38, 16, 18, 27, 5, 5, 5, 10, 2, 12, 38, 28, 5, 2, 6, 10, 38, 16, 27, 18, 6, 5, 16, 9, 26, 29, 10, 38, 12, 12, 38, 29, 10, 22, 10, 38, 12, 38, 25, 38, 29, 11, 38, 11, 5, 11, 38, 10, 38, 1, 5, 17, 29, 11, 18, 38, 16, 27, 16, 2, 10,
27, 10, 5, 16, 38, 16, 31, 2, 10, 38, 12, 12, 38, 0, 17, 6, 10, 38, 16, 31, 36, 26, 5, 2, 38, 16, 18, 27, 5, 6, 11, 31, 24, 2, 10, 5, 6, 0, 6, 0, 6, 5, 1, 5, 17, 11, 0, 11, 38, 10, 38, 16, 31, 38, 16, 31, 2, 10, 38, 12, 12, 38, 25, 38, 18, 16, 9, 38, 16, 9, 26, 5, 16, 24, 2, 6, 10, 16, 31, 28, 24, 26, 38, 18, 16, 31, 2, 10, 29, 5, 28, 2, 38, 16, 30, 2, 6, 10, 38, 16, 9, 26, 2, 10, 38, 10, 38, 35, 31, 16, 31, 16, 31, 28, 24, 26, 2, 6, 10, 22, 12, 12, 38, 25, 38, 35, 9, 16, 26, 22, 12, 30, 1,
5, 5, 2, 12, 12, 15, 38, 16, 31, 36, 28, 18, 38, 10, 38, 31, 16, 31, 5, 16, 22, 12, 12, 38, 25, 38, 10, 38, 18, 38, 16, 31, 18, 6, 38, 16, 31, 28, 24, 26, 39, 2, 12, 5, 12, 38, 40, 18, 18, 38, 16, 9, 26, 21, 18, 38, 16, 30, 28, 18, 38, 16, 31, 2, 10, 38, 31, 36, 26, 2, 10, 24, 2, 10, 38, 10, 30, 18, 6, 38, 16, 31, 2, 10, 38, 31, 36, 26, 2, 10, 10, 38, 18, 10, 38, 16, 27, 6, 5, 5, 16, 38, 18, 18, 38, 12, 12, 38, 18, 18, 38, 41, 10, 12, 38, 12, 12, 30, 24, 17, 10, 0, 30, 21, 38, 6, 6, 12, 22,
16, 30, 2, 6, 18, 18, 38, 12, 38, 35, 10, 30, 38, 16, 30, 17, 10, 0, 17, 10, 0, 30, 2, 6, 11, 38, 18, 38, 32, 30, 29, 38, 16, 9, 36, 26, 2, 19, 24, 16, 38, 41, 16, 30, 0, 30, 24, 2, 10, 38, 16, 30, 18, 10, 5, 16, 0, 30, 21, 24, 2, 10, 38, 16, 30, 18, 24, 26, 17, 10, 18, 5, 2, 10, 5, 2, 10, 5, 17, 6, 11, 38, 25, 38, 16, 9, 36, 26, 16, 18, 38, 2, 1, 30, 6, 5, 2, 10, 38, 41, 16, 30, 2, 10, 5, 2, 10, 16, 38, 5, 2, 10, 30, 36, 26, 16, 10, 38, 10, 5, 16, 31, 29, 18, 9, 26, 16, 24, 16, 38, 10, 5, 16, 18, 9, 26, 38, 41, 16, 30, 2, 10, 0, 11, 24, 17, 10, 5, 2, 10, 38, 10, 30, 28, 21, 38, 16, 30, 5, 17, 10, 5, 2, 10, 38, 17, 10, 5, 17, 10, 38, 0, 18, 2, 10, 30, 38, 25, 38, 25, 38, 6, 28, 12, 18, 38, 18, 38, 12, 30, 17, 10, 38, 18, 16, 30, 2, 10, 38, 33, 27, 5, 16, 27, 22, 39, 3, 30, 2, 10, 5, 10, 38, 16, 9, 19, 26, 16, 38, 40, 1, 11, 15, 10, 0, 33, 31, 16, 24, 26, 5, 16, 22, 2, 10, 5, 11, 33, 31, 5, 10, 38, 0, 35, 6, 5, 16, 31, 18, 22, 3, 30, 2, 7, 10, 5, 29, 11, 24, 26, 5, 11, 5, 2, 10, 38, 35, 9, 12, 26, 24, 16, 22, 31, 36, 16, 29, 21, 17, 11, 5, 16, 22, 2, 30, 6, 24, 26, 16, 21, 38, 16, 31, 2, 10, 38, 32, 30, 18, 10, 5, 16, 24, 26, 16, 38, 6, 0, 6, 0, 6, 5, 10, 38, 33, 30, 12, 10, 5, 16, 22, 25, 38, 12, 30, 6, 38, 16, 30, 2, 6, 10, 28, 11, 38, 28, 2, 10, 38, 28, 11, 38, 11, 22, 35, 38, 16, 31, 16, 30, 38, 41, 16, 30, 18, 5, 17, 10, 38, 16, 31, 6, 10, 5, 11, 31, 29, 5, 38, 0, 17, 6, 10, 30, 29, 5, 2, 10, 38, 41, 16, 30, 6, 5, 2, 10, 38, 18, 16, 30, 17, 10, 38, 17, 10, 5,
12, 38, 12, 27, 38, 2, 6, 10, 38, 41, 5, 16, 30, 17, 10, 0, 30, 2, 11, 38, 10, 30, 10, 24, 6, 10, 38, 18, 5, 2, 10, 5, 2, 10, 5, 2, 10, 9, 26, 29, 29, 38, 0, 2, 6, 10, 9, 26, 29, 18, 7, 5, 2, 10, 15, 10, 38, 18, 16, 27, 38, 0, 16, 30, 2, 6, 10, 18, 41, 28, 5, 10, 42, 18, 38, 12, 15, 10, 38, 16, 9, 26, 16, 21, 38, 16, 9, 26, 2, 6, 6, 10, 38, 41, 16, 30, 18, 5, 2, 10, 38, 2, 6, 10, 5, 16, 27, 18, 6, 18, 24, 26, 17, 10, 21, 38, 41, 16, 30, 18, 2, 10, 38, 10, 5, 2, 6, 10, 5, 2, 6, 6, 10, 5, 17,
10, 30, 29, 28, 6, 5, 2, 10, 5, 2, 10, 10, 38, 18, -1, 16, 31, 29, 22, 41, 5, 2, 5, 16, 30, 6, 38, 18, 16, 30, 2, 10, 38, 12, 6, 10, 38, 16, 31, 29, 38, 41, 16, 30, 21, 17, 10, 24, 16, 0, 30, 16, 21, 5, 2, 10, 0, 5, 2, 10, 5, 2, 10, 38, 5, 32, 10,
2, 10, 30, 16, 20, 6, 38, 16, 30, 16, 5, 2, 10, 5, 16, 27, 29, 5, 2, 10, 38, 16, 18, 30, 16, 24, 12, 38, 2, 30, 17, 10, 38, 30, 16, 18, 29, 22, 41, 12, 30, 16, 0, 30, 16, 5, 10, 38, 5, 2, 12, 30, 16, 18, 38, 10, 38, 16, 31, 2, 10, 5, 2, 10, 38, 12, 12, 38, 33, 22, 10, 38, 2, 10, 38, 12, 12, 38, 16, 38, 16, 38, 18, -1, 26, 38, 10, 38, 16, 31, 16, 38, 12, 12, 38, 0, -1, 16, 30, 36, 6, -1, 31, 16, 16, 32, 27, 12, 22, 10, 38, 10, 22, 12, 38, 25, 38, 6, 6, 10, 38, 40, 12, 9, 36, 26, 2, 10, 38, 12, 12, 38, 12, 12, 22, 41, 12, 30, 18, 24, 2, 10, 38, 16, 30, 16, 0, 30, 17, 10, 21, 38, 12, 41, 5, 2, 10, 5, 32, 2, 10, 5, 2, 6, 10, 9, 26, 5, 10, 5, 2, 10, 27, 18, 6, 42, 38, 25, 22, 25, 22, 13, 22, 25, 22, 26, 38, 11, 22, 25, 22, 41, 2, 10, 5, 2, 10, 10, 30, 29, 38, 12, 30, 17, 10, 38, 12, 38, 25, 38, 16, 30, 2, 10, 22, 41, 18, 38, 5, 17, 10, 38, 42, 33, 30, 17, 10, 22, 12, 12, 38, 12, 12, 38, 10, 38, 16, 31, 30, 12, 12, 38, 12, 12, 38, 12, 12, 12, 38, 10, 38, 16, 30, 26, 16, 12, 38, 2, 10, 38, 11, 22, 11, 22, 11, 22, 10, 38, 26, 38, 33, 30, 17, 6, 10, 22, 12, 12, 38, 16, -1, 16, -1, 16, 31, 36, 18, 26, 38, 10, 38, 18, 38, 32, 31, 16, 5, 8, 38, 6, 0, 6, 22, 12, 12, 38, 25, 38, 25, 38, 6, 38, 41, 16, 30, 17, 10, 21, 5, 2, 10, 38, 25, 38, 25, 38, 16, 31, 6, 38, 10, 38, 12, 30, 17, 6, 10, 38, 2, 10, 38, 11, 22, 11, 22, 11, 22, 12, 38, 39, 6, 38, 26, 16, 10, 38, 16, 31, 24, 26, 2, 18, 6, 38, 12, 12, 38, 9, 36, 1, 16, 18, -1, 9, 16, 26, 24, 26, 17, 10, 38, 5, 16, 9, 38, 25, -1, 12, 41, 28, 21, 10, 42, 38, 18, 38, 33, 30, 2, 22, 12, 12, 38, 25, 38, 32, 30, 17, 10, 38, 12, 41, 5, 2, 10, 5, 16, 31, 42, 38, 16, 30, 5, 2, 10, 38, 41, 12, 5, 2, 10, 38, 12, 41, 28, 24, 2, 10, 42, 38, 18, 38, 33, 31, 16, 29, 5, 18, 22, 12, 12, 38,
25, -1, 25, -1, 2, 31, 17, 10, 11, 38, 12, 41, 28, 2, 6, 10, 42, 38, 33, 31, 18, 29, 5, 2, 1, 22, 12, 12, 41, 18, 42, 38, 25, 38, 11, 0, 10, 38, 12, 41, 5, 2, 10, 42, 38, 16, 30, 11, 0, 10, 5, 10, 38, 2, 10, 38, 6, 10, 10, 38, 11, 22, 6, 6, 10, 38, 11, 22, 6, 6, 10, 38, 11, 22, 12, 38, 25, 38, 0, 33, 30, 2, 22, 12, 12, 41, 18, 42, 38, 25, 38, 2, 10, 10, 38, 10, 38, 16, 30, 2, 10, 15, 10, 22, 16, 30, 2, 6, 10, 22, 16, 30, 2, 6, 10, 22, 41, 12, 11, 5, 2, 12, 38, 2, 10, 18, 5, 11, 15, 11, 38, 12, 12, 38, 25, 38, 2, 30, 6, 38, 2, 30, 6, 38, 2, 30, 6, 38, 41, 12, 30, 2, 10, 5, 17, 10, 38, 16, 30, 18, 24, 26, 38, 12, 12, 41, 18, 0, 18, 42, 38, 1, 10, 38, 1, 10, -1, 10, 38, 18, 38, 18, 5, 16, 38, 12, 12, 38, 5, 1, 11, 38, 6, 10, 38, 16, 31,
29, 16, 38, 28, 14, 17, 11, -1, 16, 31, -1, 16, 31, 16, 31, 36, 6, -1, 12, 38, 25, 38, 16, 31, 2, 10, 38, 12, 12, 38, 25, 38, 25, 38, 16, 31, 36, 6, 38, 16, 9, 36, 26, 38, 16, 31, 36, 18, 26, 38, 11, 31, 16, 7, 38, 16, 9, 26, 16, 22, 16, 31, 36, 18, 26, 16, 38, 11, 31, 6, 38, 16, 9, 26, 11, 38, 0, 16, 38, 25, 38, 25, 38, 25, 38, 16, 9, 36, 26, 38, 16, 9, 36, 26, 38, 25, 38, 26, 16, 38, 0, 16, 9, 36, 38, 41, 18, 12, 30, 29, 28, 1, 5, 17, 11, 38, 16, 30, 5, 2, 6, 10, 38, 30, 18, 18, 38, 0, 30, 17, 11, 18, 18, 5, 2, 10, 5, 2, 10, 15, 11, 38, 25, 38, 16, 31, 16, 26, 16, 38, 0, 38, 16, 9, 36, 26, 38, 12, 38, 25, 38, 32, 30, 18, 6, 38, 6, 10, 38, 2, 10, 38, 16, 31, 36, 28, 24, 26, 16, 38, 12, 12, 38, 0, 38, 18, 38, 18, 38, 16, 31, 36, 26, 38, 10, 38, 16, 31, 29, 24, 26, 38, 16, 31, 38, 41, 16, 30, 2, 10, 5, 10, 5, 17, 10, 38, 10, 38], 'genre': 'Drama', 'subgenre': 'drama', 'year': '1919', 'quarter_cent': '1900-1924', 'decade': '1910s', 'title': 'Fame and the poet', 'author': 'Dunsany [Edward John Moreton Drax Plunkett]', 'notes': '', 'comments': 'selected from larger file', 'period': '1850-1920', 'id': '317'}
```
### Data Fields
There are three configs in this dataset: `plain`, `class`, and `pos`. `plain` is a simple text dataset, whereas `pos` and `class` are both annotated datasets containing POS tags. A `plain` data point has the following fields:
```
{
"text": The text in the sample("string"),
"genre": The genre of the text("string"),
"subgenre": The subgenre of the text("string"),
"year": The year the text was produced("string"),
"quarter_cent": The quarter century in which the text was produced("string"),
"decade": The decade the text was produced("string"),
"title": The title of the text("string"),
"author": The author of the text("string"),
"notes": Notes about the text, if any("string"),
"comments": Commentsabout the text, if any("string"),
"period": 70-year period during which the text was produced("string"),
"id": Unqiue identifier("string"),
}
```
A typical `pos`/`class` data point has the following fields:
```
{
"text": The tokens in the sample(list("string")),
"pos_tags": Corresponding POS tags for the tokens (list("string"))
"genre": The genre of the text("string"),
"subgenre": The subgenre of the text("string"),
"year": The year the text was produced("string"),
"quarter_cent": The quarter century in which the text was produced("string"),
"decade": The decade the text was produced("string"),
"title": The title of the text("string"),
"author": The author of the text("string"),
"notes": Notes about the text, if any("string"),
"comments": Commentsabout the text, if any("string"),
"period": 70-year period during which the text was produced("string"),
"id": Unqiue identifier("string"),
}
```
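To work with the annotated configs, the parallel `text` and `pos_tags` lists can be zipped into (token, tag) pairs. Below is a minimal sketch on a hand-made stand-in record shaped like the fields above; the Penn-style string tags are illustrative only (depending on the config, tags may instead be integer class ids, as in the example instance shown earlier):

```python
# Pair each token with its POS tag in a `pos`/`class` record.
# The record is a small stand-in shaped like the fields listed above.
record = {
    "text": ["Fame", "walks", "beautifully", "to", "the", "window", "."],
    "pos_tags": ["NN", "VBZ", "RB", "TO", "DT", "NN", "."],
    "title": "Fame and the poet",
    "year": "1919",
}

# Zip the parallel lists into (token, tag) pairs.
tagged = list(zip(record["text"], record["pos_tags"]))
print(tagged[:3])  # [('Fame', 'NN'), ('walks', 'VBZ'), ('beautifully', 'RB')]
```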
### Data Splits
Train: 333
## Dataset Creation
### Curation Rationale
The Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of
British English from 1710 to 1920, grouped into three 70-year periods (De Smet 2005; Diller et
al. 2011). The history, versions and specifics of corpus composition can be followed up by
referring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i)
plain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence
per line).
Version CLMET3.1 is the result of making CLMET available in a CQP format for use in
CWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While
there is no change to the selection of texts, CLMET3.1 includes additions and changes in
linguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization
and retagging, (b) fixing of some systematic issues that come with historical data, and (c)
enhancing annotation by adding lemmas and simplified part-of-speech class tags.
### Source Data
#### Initial Data Collection and Normalization
The initial data comes from OCR of texts in English from 1710-1920.
#### Who are the source language producers?
The text was produced by the authors of the original works and then digitized via OCR.
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
This dataset does not contain any personal information, as these are historic texts. Some content might be sensitive.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
As with any historical data, tagging remains problematic in all areas and should be treated
with caution (especially with noun recognition) and/or combined with more coarse-grained
class queries. Also bear in mind that the lemmas for unknown items are in lower
case, while proper names that the tagger did recognize are not necessarily all lower case. In
addition, lemmatization may not be consistent, e.g. in the area of -ize/ise spellings; these were
not homogenized to preserve as much of the original orthography as possible.
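Because `-ize`/`-ise` spellings were not homogenized, a query for a given lemma may need to match both variants. The following regex-based sketch is purely illustrative and not part of the CLMET tooling; the `ise_ize_pattern` helper is a hypothetical name:

```python
import re

# Build a regex matching both the -ise and -ize spelling of a verb stem,
# e.g. "realise"/"realize", including inflected forms like "realised"/"realizing".
def ise_ize_pattern(stem: str) -> re.Pattern:
    # `stem` is the part before the -ise/-ize suffix, e.g. "real" for realise/realize.
    return re.compile(rf"\b{re.escape(stem)}i[sz](?:e|es|ed|ing)\b", re.IGNORECASE)

pattern = ise_ize_pattern("real")
text = "He realised too late what others realize at once."
print(pattern.findall(text))  # ['realised', 'realize']
```

A class-tag query (rather than exact lemma matching) sidesteps the issue entirely, at the cost of coarser results.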
## Additional Information
### Dataset Curators
The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-Jürgen Diller and Jukka Tyrkkö.
### Licensing Information
Creative Commons Attribution Non Commercial Share Alike 4.0 International
### Citation Information
[Needs More Information]
Muennighoff/xstory_cloze | Muennighoff | 2022-10-20T19:44:18Z | 34 | 0 | null | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"language:es",
"language:eu",
"language:hi",
"language:id",
"language:zh",
"language:ru",
"language:my",
"license:unknown",
"oth... | 2022-10-20T19:44:18Z | 2022-07-22T11:52:19.000Z | 2022-07-22T11:52:19 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- es
- eu
- hi
- id
- zh
- ru
- my
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_ids: []
tags:
- other-story-completion
---
# Dataset Card for "story_cloze"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
The 'Story Cloze Test' is a commonsense reasoning framework for evaluating story understanding,
story generation, and script learning. The test requires a system to choose the correct ending
to a four-sentence story.
### Data Instances
- **Size of downloaded dataset files:** 2.03 MB
- **Size of the generated dataset:** 2.03 MB
- **Total amount of disk used:** 2.05 MB
An example of 'train' looks as follows.
```
{'answer_right_ending': 1,
'input_sentence_1': 'Rick grew up in a troubled household.',
'input_sentence_2': 'He never found good support in family, and turned to gangs.',
'input_sentence_3': "It wasn't long before Rick got shot in a robbery.",
'input_sentence_4': 'The incident caused him to turn a new leaf.',
'sentence_quiz1': 'He is happy now.',
'sentence_quiz2': 'He joined a gang.',
'story_id': '138d5bfb-05cc-41e3-bf2c-fa85ebad14e2'}
```
### Data Fields
The data fields are the same among all splits.
- `input_sentence_1`: The first statement in the story.
- `input_sentence_2`: The second statement in the story.
- `input_sentence_3`: The third statement in the story.
- `input_sentence_4`: The fourth statement in the story.
- `sentence_quiz1`: The first possible continuation of the story.
- `sentence_quiz2`: The second possible continuation of the story.
- `answer_right_ending`: The correct ending; either 1 or 2.
- `story_id`: A unique story identifier.
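Since `answer_right_ending` is 1-indexed, the correct continuation can be recovered by indexing into the two quiz sentences. A minimal sketch using the example instance shown above:

```python
# Recover the correct continuation from a Story Cloze record.
# `answer_right_ending` is 1-indexed, so subtract 1 before indexing.
example = {
    "answer_right_ending": 1,
    "sentence_quiz1": "He is happy now.",
    "sentence_quiz2": "He joined a gang.",
}

endings = [example["sentence_quiz1"], example["sentence_quiz2"]]
correct = endings[example["answer_right_ending"] - 1]  # 1-indexed -> 0-indexed
print(correct)  # He is happy now.
```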
### Data Splits
| name |validation |test|
|-------|-----:|---:|
|lang|1871|1871|
jakartaresearch/indo-movie-subtitle | jakartaresearch | 2022-08-16T13:20:23Z | 34 | 1 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"movie",
"subtitle",
"indonesian",
"r... | 2022-08-16T13:20:23Z | 2022-08-16T13:10:05.000Z | 2022-08-16T13:10:05 | ---
annotations_creators:
- no-annotation
language:
- id
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Indonesian Movie Subtitle
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- movie
- subtitle
- indonesian
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Indonesian Movie Subtitle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
jonathanli/law-stack-exchange | jonathanli | 2023-02-23T16:37:19Z | 34 | 6 | null | [
"task_categories:text-classification",
"language:en",
"stackexchange",
"law",
"region:us"
] | 2023-02-23T16:37:19Z | 2022-09-07T19:49:21.000Z | 2022-09-07T19:49:21 | ---
task_categories:
- text-classification
language:
- en
tags:
- stackexchange
- law
pretty_name: Law Stack Exchange
---
# Dataset Card for Law Stack Exchange Dataset
## Dataset Description
- **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)**
- **Point of Contact: jxl@queensu.ca**
### Dataset Summary
Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation".
### Citation Information
```
@inproceedings{li-etal-2022-parameter,
title = "Parameter-Efficient Legal Domain Adaptation",
author = "Li, Jonathan and
Bhambhoria, Rohan and
Zhu, Xiaodan",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nllp-1.10",
pages = "119--129",
}
```
cannlytics/cannabis_tests | cannlytics | 2023-02-22T15:48:43Z | 34 | 6 | null | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"cannabis",
"lab results",
"tests",
"region:us"
] | 2023-02-22T15:48:43Z | 2022-09-10T16:54:44.000Z | 2022-09-10T16:54:44 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc-by-4.0
pretty_name: cannabis_tests
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- cannabis
- lab results
- tests
---
# Cannabis Tests, Curated by Cannlytics
<div style="margin-top:1rem; margin-bottom: 1rem;">
<img width="240px" alt="" src="https://firebasestorage.googleapis.com/v0/b/cannlytics.appspot.com/o/public%2Fimages%2Fdatasets%2Fcannabis_tests%2Fcannabis_tests_curated_by_cannlytics.png?alt=media&token=22e4d1da-6b30-4c3f-9ff7-1954ac2739b2">
</div>
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Normalization](#data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [License](#license)
- [Citation](#citation)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/cannlytics/cannlytics>
- **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_tests>
- **Point of Contact:** <dev@cannlytics.com>
### Dataset Summary
This dataset is a collection of public cannabis lab test results parsed by [`CoADoc`](https://github.com/cannlytics/cannlytics/tree/main/cannlytics/data/coas), a certificate of analysis (COA) parsing tool.
## Dataset Structure
The dataset is partitioned into the various sources of lab results.
| Subset | Source | Observations |
|--------|--------|--------------|
| `rawgarden` | Raw Gardens | 2,667 |
| `mcrlabs` | MCR Labs | Coming soon! |
| `psilabs` | PSI Labs | Coming soon! |
| `sclabs` | SC Labs | Coming soon! |
| `washington` | Washington State | Coming soon! |
### Data Instances
You can load the `details` for each of the dataset files. For example:
```py
from datasets import load_dataset
# Download Raw Garden lab result details.
dataset = load_dataset('cannlytics/cannabis_tests', 'rawgarden')
details = dataset['details']
assert len(details) > 0
print('Downloaded %i observations.' % len(details))
```
> Note: Configurations for `results` and `values` are planned. For now, you can create these data with `CoADoc().save(details, out_file)`.
### Data Fields
Below is a non-exhaustive list of the fields, used to standardize the various data encountered, that you may expect to encounter in the parsed COA data.
| Field | Example| Description |
|-------|-----|-------------|
| `analyses` | ["cannabinoids"] | A list of analyses performed on a given sample. |
| `{analysis}_method` | "HPLC" | The method used for each analysis. |
| `{analysis}_status` | "pass" | The pass, fail, or N/A status for pass / fail analyses. |
| `coa_urls` | [{"url": "", "filename": ""}] | A list of certificate of analysis (CoA) URLs. |
| `date_collected` | 2022-04-20T04:20 | An ISO-formatted time when the sample was collected. |
| `date_tested` | 2022-04-20T16:20 | An ISO-formatted time when the sample was tested. |
| `date_received` | 2022-04-20T12:20 | An ISO-formatted time when the sample was received. |
| `distributor` | "Your Favorite Dispo" | The name of the product distributor, if applicable. |
| `distributor_address` | "Under the Bridge, SF, CA 55555" | The distributor address, if applicable. |
| `distributor_street` | "Under the Bridge" | The distributor street, if applicable. |
| `distributor_city` | "SF" | The distributor city, if applicable. |
| `distributor_state` | "CA" | The distributor state, if applicable. |
| `distributor_zipcode` | "55555" | The distributor zip code, if applicable. |
| `distributor_license_number` | "L2Stat" | The distributor license number, if applicable. |
| `images` | [{"url": "", "filename": ""}] | A list of image URLs for the sample. |
| `lab_results_url` | "https://cannlytics.com/results" | A URL to the sample results online. |
| `producer` | "Grow Tent" | The producer of the sampled product. |
| `producer_address` | "3rd & Army, SF, CA 55555" | The producer's address. |
| `producer_street` | "3rd & Army" | The producer's street. |
| `producer_city` | "SF" | The producer's city. |
| `producer_state` | "CA" | The producer's state. |
| `producer_zipcode` | "55555" | The producer's zipcode. |
| `producer_license_number` | "L2Calc" | The producer's license number. |
| `product_name` | "Blue Rhino Pre-Roll" | The name of the product. |
| `lab_id` | "Sample-0001" | A lab-specific ID for the sample. |
| `product_type` | "flower" | The type of product. |
| `batch_number` | "Order-0001" | A batch number for the sample or product. |
| `metrc_ids` | ["1A4060300002199000003445"] | A list of relevant Metrc IDs. |
| `metrc_lab_id` | "1A4060300002199000003445" | The Metrc ID associated with the lab sample. |
| `metrc_source_id` | "1A4060300002199000003445" | The Metrc ID associated with the sampled product. |
| `product_size` | 2000 | The size of the product in milligrams. |
| `serving_size` | 1000 | An estimated serving size in milligrams. |
| `servings_per_package` | 2 | The number of servings per package. |
| `sample_weight` | 1 | The weight of the product sample in grams. |
| `results` | [{...},...] | A list of results, see below for result-specific fields. |
| `status` | "pass" | The overall pass / fail status for all contaminant screening analyses. |
| `total_cannabinoids` | 14.20 | The analytical total of all cannabinoids measured. |
| `total_thc` | 14.00 | The analytical total of THC and THCA. |
| `total_cbd` | 0.20 | The analytical total of CBD and CBDA. |
| `total_terpenes` | 0.42 | The sum of all terpenes measured. |
| `results_hash` | "{sha256-hash}" | An HMAC of the sample's `results` JSON signed with Cannlytics' public key, `"cannlytics.eth"`. |
| `sample_id` | "{sha256-hash}" | A generated ID to uniquely identify the `producer`, `product_name`, and `results`. |
| `sample_hash` | "{sha256-hash}" | An HMAC of the entire sample JSON signed with Cannlytics' public key, `"cannlytics.eth"`. |
<!-- | `strain_name` | "Blue Rhino" | A strain name, if specified. Otherwise, can be attempted to be parsed from the `product_name`. | -->
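As an illustration of the `results_hash` and `sample_hash` fields, a signed digest of a results payload might be computed as follows. This is only a sketch: the HMAC key and the sorted-key JSON canonicalization below are assumptions, not necessarily Cannlytics' exact scheme.

```python
import hashlib
import hmac
import json

def hash_results(results, key=b'cannlytics.eth'):
    """Return an HMAC-SHA256 hex digest of a results payload.
    The key and the sorted-key JSON canonicalization are assumptions."""
    payload = json.dumps(results, sort_keys=True).encode('utf-8')
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# A toy results payload for illustration.
results = [{'key': 'delta_9_thc', 'value': 14.0, 'units': 'percent'}]
digest = hash_results(results)
print(digest)  # a 64-character hexadecimal string
```

Verifying a published hash would require the exact serialization that Cannlytics uses.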
Each result can contain the following fields.
| Field | Example| Description |
|-------|--------|-------------|
| `analysis` | "pesticides" | The analysis used to obtain the result. |
| `key` | "pyrethrins" | A standardized key for the result analyte. |
| `name` | "Pyrethrins" | The lab's internal name for the result analyte. |
| `value` | 0.42 | The value of the result. |
| `mg_g` | 0.00000042 | The value of the result in milligrams per gram. |
| `units` | "ug/g" | The units for the result `value`, `limit`, `lod`, and `loq`. |
| `limit` | 0.5 | A pass / fail threshold for contaminant screening analyses. |
| `lod` | 0.01 | The limit of detection for the result analyte. Values below the `lod` are typically reported as `ND`. |
| `loq` | 0.1 | The limit of quantification for the result analyte. Values above the `lod` but below the `loq` are typically reported as `<LOQ`. |
| `status` | "pass" | The pass / fail status for contaminant screening analyses. |
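To analyze these per-result fields in tabular form, the nested `results` lists can be flattened into one row per analyte. A minimal sketch (field names follow the tables above; the example sample is made up):

```python
def flatten_results(samples):
    """Flatten nested `results` lists into one flat row (dict) per analyte,
    carrying the parent `sample_id` alongside each result."""
    rows = []
    for sample in samples:
        for result in sample.get('results', []):
            row = dict(result)
            row['sample_id'] = sample.get('sample_id')
            rows.append(row)
    return rows

# A toy sample for illustration.
samples = [{
    'sample_id': 'abc123',
    'results': [
        {'analysis': 'cannabinoids', 'key': 'delta_9_thc', 'value': 14.0, 'units': 'percent'},
        {'analysis': 'pesticides', 'key': 'pyrethrins', 'value': 0.42, 'units': 'ug/g', 'status': 'pass'},
    ],
}]
rows = flatten_results(samples)
print(len(rows))  # 2
```

The flat rows can then be loaded into a `pandas.DataFrame` for further analysis.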
### Data Splits
The data is split into `details`, `results`, and `values` data. Configurations for `results` and `values` are planned. For now, you can create these data with:
```py
from cannlytics.data.coas import CoADoc
from datasets import load_dataset
import pandas as pd
# Download Raw Garden lab result details.
repo = 'cannlytics/cannabis_tests'
dataset = load_dataset(repo, 'rawgarden')
details = dataset['details']
# Save the data locally with "Details", "Results", and "Values" worksheets.
outfile = 'details.xlsx'
parser = CoADoc()
parser.save(details.to_pandas(), outfile)
# Read the values.
values = pd.read_excel(outfile, sheet_name='Values')
# Read the results.
results = pd.read_excel(outfile, sheet_name='Results')
```
## Dataset Creation
### Curation Rationale
Certificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you.
### Source Data
| Data Source | URL |
|-------------|-----|
| MCR Labs Test Results | <https://reports.mcrlabs.com> |
| PSI Labs Test Results | <https://results.psilabs.org/test-results/> |
| Raw Garden Test Results | <https://rawgarden.farm/lab-results/> |
| SC Labs Test Results | <https://client.sclabs.com/> |
| Washington State Lab Test Results | <https://lcb.app.box.com/s/e89t59s0yb558tjoncjsid710oirqbgd> |
#### Data Collection and Normalization
You can recreate the dataset using the open source algorithms in the repository. First clone the repository:
```
git clone https://huggingface.co/datasets/cannlytics/cannabis_tests
```
You can then install the algorithm Python (3.9+) requirements:
```
cd cannabis_tests
pip install -r requirements.txt
```
Then you can run all of the data-collection algorithms:
```
python algorithms/main.py
```
Or you can run each algorithm individually. For example:
```
python algorithms/get_results_mcrlabs.py
```
In the `algorithms` directory, you can find the data collection scripts described in the table below.
| Algorithm | Organization | Description |
|-----------|---------------|-------------|
| `get_results_mcrlabs.py` | MCR Labs | Get lab results published by MCR Labs. |
| `get_results_psilabs.py` | PSI Labs | Get historic lab results published by PSI Labs. |
| `get_results_rawgarden.py` | Raw Garden | Get lab results Raw Garden publishes for their products. |
| `get_results_sclabs.py` | SC Labs | Get lab results published by SC Labs. |
| `get_results_washington.py` | Washington State | Get historic lab results obtained through a FOIA request in Washington State. |
### Personal and Sensitive Information
The dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner.
## Considerations for Using the Data
### Social Impact of Dataset
Arguably, there is substantial social impact that could result from the study of cannabis; therefore, researchers and data consumers alike should take the utmost care in the use of this dataset.
### Discussion of Biases
Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.
### Other Known Limitations
The data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows.
| Actual | Coding |
|--------|--------|
| `'ND'` | `0.000000001` |
| `'No detection in 1 gram'` | `0.000000001` |
| `'Negative/1g'` | `0.000000001` |
| `'PASS'` | `0.000000001` |
| `'<LOD'` | `0.00000001` |
| `'< LOD'` | `0.00000001` |
| `'<LOQ'` | `0.0000001` |
| `'< LOQ'` | `0.0000001` |
| `'<LLOQ'` | `0.0000001` |
| `'≥ LOD'` | `10001` |
| `'NR'` | `None` |
| `'N/A'` | `None` |
| `'na'` | `None` |
| `'NT'` | `None` |
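When cleaning raw result values, the coding above can be applied programmatically. A minimal sketch (the fallback of treating unrecognized strings as missing is an assumption beyond the table):

```python
# Mapping of non-standard result strings, taken from the table above.
NON_STANDARD_CODING = {
    'ND': 0.000000001,
    'No detection in 1 gram': 0.000000001,
    'Negative/1g': 0.000000001,
    'PASS': 0.000000001,
    '<LOD': 0.00000001,
    '< LOD': 0.00000001,
    '<LOQ': 0.0000001,
    '< LOQ': 0.0000001,
    '<LLOQ': 0.0000001,
    '≥ LOD': 10001,
    'NR': None,
    'N/A': None,
    'na': None,
    'NT': None,
}

def clean_value(value):
    """Map non-standard result strings to their numeric codes and
    pass numeric values through; unrecognized strings become None
    (an assumption, not part of the published coding)."""
    if isinstance(value, str):
        stripped = value.strip()
        if stripped in NON_STANDARD_CODING:
            return NON_STANDARD_CODING[stripped]
        try:
            return float(stripped)
        except ValueError:
            return None
    return value

print(clean_value('ND'))    # 1e-09
print(clean_value('0.42'))  # 0.42
```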
## Additional Information
### Dataset Curators
Curated by [🔥Cannlytics](https://cannlytics.com)<br>
<dev@cannlytics.com>
### License
```
Copyright (c) 2022 Cannlytics and the Cannabis Data Science Team
The files associated with this dataset are licensed under a
Creative Commons Attribution 4.0 International license.
You can share, copy and modify this dataset so long as you give
appropriate credit, provide a link to the CC BY license, and
indicate if changes were made, but you may not do so in a way
that suggests the rights holder has endorsed you or your use of
the dataset. Note that further permission may be required for
any content within the dataset that is identified as belonging
to a third party.
```
### Citation
Please cite the following if you use the code examples in your research:
```bibtex
@misc{cannlytics2022,
title={Cannabis Data Science},
author={Skeate, Keegan and O'Sullivan-Sutherland, Candace},
journal={https://github.com/cannlytics/cannabis-data-science},
year={2022}
}
```
### Contributions
Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), [The CESC](https://thecesc.org), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
| [
-0.39833879470825195,
-0.5591157674789429,
0.3117765486240387,
0.4407075345516205,
-0.2899966239929199,
-0.02769167721271515,
-0.05074477568268776,
-0.2938219904899597,
0.8351712822914124,
0.6011350750923157,
-0.6366952061653137,
-1.2394413948059082,
-0.5713291764259338,
0.2796335816383362... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/wikineural | tner | 2022-09-27T19:46:37Z | 34 | 4 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:multilingual",
"size_categories:10K<100k",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"region:us"
] | 2022-09-27T19:46:37Z | 2022-09-27T17:56:40.000Z | 2022-09-27T17:56:40 | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
multilinguality:
- multilingual
size_categories:
- 10K<100k
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: WikiNeural
---
# Dataset Card for "tner/wikineural"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2021.findings-emnlp.215/](https://aclanthology.org/2021.findings-emnlp.215/)
- **Dataset:** WikiNeural
- **Domain:** Wikipedia
- **Number of Entity Types:** 16
### Dataset Summary
WikiNeural NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`
## Dataset Structure
### Data Instances
An example from the `train` split of the `de` subset looks as follows.
```
{
'tokens': [ "Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen", "Roman", "von", "Noël", "Calef", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0 ]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-MISC": 31,
"I-MISC": 32
}
```
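Assuming the label map above, tag IDs in an instance can be decoded back to IOB2 label strings by inverting the dictionary. A minimal sketch using the German example above (only the first few labels are shown in the map here):

```python
# Abbreviated label map; the full dictionary is linked above.
label2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4}
id2label = {v: k for k, v in label2id.items()}

tokens = ["Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen",
          "Roman", "von", "Noël", "Calef", "."]
tags = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0]

# Decode tag IDs to IOB2 strings and print the entity tokens.
labels = [id2label[t] for t in tags]
for token, label in zip(tokens, labels):
    if label != "O":
        print(token, label)  # Noël B-PER / Calef I-PER
```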
### Data Splits
| language | train | validation | test |
|:-----------|--------:|-------------:|-------:|
| de | 98640 | 12330 | 12372 |
| en | 92720 | 11590 | 11597 |
| es | 76320 | 9540 | 9618 |
| fr | 100800 | 12600 | 12678 |
| it | 88400 | 11050 | 11069 |
| nl | 83680 | 10460 | 10547 |
| pl | 108160 | 13520 | 13585 |
| pt | 80560 | 10070 | 10160 |
| ru | 92320 | 11540 | 11580 |
### Citation Information
```
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
doi = "10.18653/v1/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
``` | [
-0.6084688901901245,
-0.5677262544631958,
0.029287580400705338,
-0.0026638582348823547,
-0.015534180216491222,
-0.05594686418771744,
-0.34211429953575134,
-0.32062041759490967,
0.7515481114387512,
0.15126420557498932,
-0.4771876931190491,
-0.7664202451705933,
-0.641525387763977,
0.39635789... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kavindu99/celeb-identities | Kavindu99 | 2022-10-13T20:27:44Z | 34 | 0 | null | [
"region:us"
] | 2022-10-13T20:27:44Z | 2022-10-13T20:27:31.000Z | 2022-10-13T20:27:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: Emilia_Clarke
1: Henry_Cavil
2: Jason_Mamoa
3: Sadie_Sink
4: Sangakkara
5: Zendaya
splits:
- name: train
num_bytes: 160371.0
num_examples: 18
download_size: 160832
dataset_size: 160371.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46353304386138916,
-0.25037872791290283,
0.00394661957398057,
0.09495989233255386,
-0.06635984778404236,
0.3351334035396576,
0.2677997946739197,
-0.30673035979270935,
0.9174419045448303,
0.39497241377830505,
-0.8485506176948547,
-0.641608715057373,
-0.6570073962211609,
-0.26624107360839... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/genia_relation_corpus | bigbio | 2022-12-22T15:44:40Z | 34 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:44:40Z | 2022-11-13T22:08:39.000Z | 2022-11-13T22:08:39 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: GENIA Relation Corpus
homepage: http://www.geniaproject.org/genia-corpus/relation-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for GENIA Relation Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/relation-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change.
## Citation Information
```
@inproceedings{pyysalo-etal-2009-static,
title = "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1301",
pages = "1--9",
}
@article{article,
author = {Ohta, Tomoko and Pyysalo, Sampo and Kim, Jin-Dong and Tsujii, Jun'ichi},
year = {2010},
month = {10},
pages = {917-28},
title = {A reevaluation of biomedical named entity - term relations},
volume = {8},
journal = {Journal of bioinformatics and computational biology},
doi = {10.1142/S0219720010005014}
}
@MISC{Hoehndorf_applyingontology,
author = {Robert Hoehndorf and Axel-cyrille Ngonga Ngomo and Sampo Pyysalo and Tomoko Ohta and Anika Oellrich and
Dietrich Rebholz-schuhmann},
title = {Applying ontology design patterns to the implementation of relations in GENIA},
year = {}
}
```
| [
-0.24316264688968658,
-0.530697762966156,
0.3891853392124176,
0.0881972536444664,
-0.36019060015678406,
-0.1483861804008484,
-0.13822709023952484,
-0.5777972936630249,
0.5243141055107117,
0.19019901752471924,
-0.577562689781189,
-0.7013848423957825,
-0.43470966815948486,
0.4062101244926452... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/pharmaconer | bigbio | 2022-12-22T15:46:15Z | 34 | 1 | null | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:46:15Z | 2022-11-13T22:11:24.000Z | 2022-11-13T22:11:24 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: PharmaCoNER
homepage: https://temu.bsc.es/pharmaconer/index.php/datasets/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- TEXT_CLASSIFICATION
---
# Dataset Card for PharmaCoNER
## Dataset Description
- **Homepage:** https://temu.bsc.es/pharmaconer/index.php/datasets/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,TXTCLASS
### Subtrack 1
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation, which requires that system outputs exactly match the beginning and end locations of each entity tag, as well as the entity annotation type of the gold standard annotations.
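This strict criterion can be sketched as a set comparison over `(start, end, type)` triples: a prediction counts as correct only if all three components match a gold annotation exactly. A minimal illustration (not the official evaluation script):

```python
def exact_match_f1(gold, predicted):
    """Strict entity-level precision/recall/F1: a predicted entity is
    correct only if its (start, end, type) triple exactly matches gold."""
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy annotations: the second predicted span is off by one character.
gold = [(0, 11, 'NORMALIZABLES'), (25, 33, 'PROTEINAS')]
pred = [(0, 11, 'NORMALIZABLES'), (25, 34, 'PROTEINAS')]
p, r, f1 = exact_match_f1(gold, pred)
print(p, r, f1)  # 0.5 0.5 0.5
```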
### Subtrack 2
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept IDs corresponding to chemical compounds and pharmacological substances.
### Full Task
The full task combines the two subtracks described above: NER offset and entity type classification (Subtrack 1) and SNOMED concept indexing (Subtrack 2).
## Citation Information
```
@inproceedings{gonzalez2019pharmaconer,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
| [
-0.24766670167446136,
-0.5284105539321899,
0.4311704635620117,
0.03148394823074341,
-0.37239816784858704,
0.061979591846466064,
-0.04625728353857994,
-0.5932647585868835,
0.6864415407180786,
0.42697176337242126,
-0.46210527420043945,
-0.6864395141601562,
-0.8418335318565369,
0.374931961297... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heegyu/korean-petitions | heegyu | 2023-01-15T09:46:48Z | 34 | 2 | null | [
"license:mit",
"region:us"
] | 2023-01-15T09:46:48Z | 2022-11-22T07:56:58.000Z | 2022-11-22T07:56:58 | ---
license: mit
---
# Blue House National Petitions (청와대 국민청원)
Data source: https://github.com/lovit/petitions_archive<br/>
Size: 651.8MB
Sample:
```
{
"category": "반려동물",
"begin": "2017-08-25",
"end": "2017-11-23",
"content": "길고양이들 밥주고있는 사람입니다. 최근에 동네주민과 트러블이 생겨 싸움이 일어났습니다. 길고양이들이 모여든다고 밥주지마라고 윽박지르셨습니다. 쓰레기봉투를 뜯는다거나 사람에게 해끼치거나 하지 않았습니다. 단순히 고양이가 모여드는게 싫답니다. 그럼 애들은 굶어죽어야하나요? 길고양이들이 맘놓고 쉬고 밥먹을 수 있는 환경이 전혀 없는데 무작정 밥안주고 물 안주면 얘네는 어떻게 하나요? 안그래도 수명도 짧은데다가 길고양이를 상대로 학대하는 사람들도 많은데 너무 가엾습니다. 강동구청은 고양이 급식소라고 만들어주셨던데 동네마다 한개씩이라도 만들어 주셨으면좋겠어요.. 밥에다가 이상한짓 하는 사람 있을 수 있으니까 cctv도 설치도 해주셨으면 합니다.. (급식소에 쥐약을 뿌려 고양이가 죽은 사례가 있습니다) 지구가 사람껀 아니잖아요 동물과도 더불어 살줄 알아야죠 문대통령님께서 동물복지 관련 공략을 내셨지만 나아진게 전혀 없는거같아요. 공략 꼭 지켜주세요.. 믿고 뽑았는데 전혀 나아지고 바뀐게 없으면 너무 실망스럽잖아요.. 그리고 길고양이뿐만 아니라 다른 동물 학대하는 부분도 처벌 강화 부탁드립니다",
"num_agree": 5,
"petition_idx": "513",
"status": "청원종료",
"title": "길고양이를 도와주세요"
}
``` | [
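Records with this shape can be filtered and aggregated directly. A minimal sketch that counts petitions per category above an agreement threshold (field names follow the sample above; the toy records are made up):

```python
from collections import Counter

def summarize(petitions, min_agree=0):
    """Count petitions per category, keeping only records with at
    least `min_agree` agreements."""
    kept = [p for p in petitions if p['num_agree'] >= min_agree]
    return Counter(p['category'] for p in kept)

# Toy records in the shape of the sample above.
petitions = [
    {'category': '반려동물', 'num_agree': 5, 'title': '길고양이를 도와주세요'},
    {'category': '반려동물', 'num_agree': 120, 'title': 'another petition'},
    {'category': '교통', 'num_agree': 2, 'title': 'a third petition'},
]
print(summarize(petitions, min_agree=5))  # Counter({'반려동물': 2})
```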
-0.5742012858390808,
-0.6575832366943359,
0.49678710103034973,
0.31950563192367554,
-0.8946977257728577,
-0.013286483474075794,
0.1758848875761032,
-0.02635001204907894,
0.7969637513160706,
0.783451497554779,
-0.11449235677719116,
-0.8948571681976318,
-0.7648964524269104,
-0.01004427485167... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/kathbath | ai4bharat | 2022-12-09T09:59:48Z | 34 | 2 | null | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:mit",
"arxiv:2208.11761",
"region:us"
] | 2022-12-09T09:59:48Z | 2022-12-04T13:28:53.000Z | 2022-12-04T13:28:53 | ---
annotations_creators:
- expert-generated
language_bcp47:
- bn,gu,kn,hi,ml,mr,or,pa,sa,ta,te,ur
language_creators:
- machine-generated
license:
- mit
multilinguality:
- multilingual
pretty_name: Kathbath
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for Kathbath
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai4bharat.org/indic-superb
- **Repository:** https://github.com/AI4Bharat/IndicSUPERB
- **Paper:** https://arxiv.org/pdf/2208.11761.pdf
- **Point of Contact:** tahirjmakhdoomi@gmail.com
### Dataset Summary
Kathbath is a human-labelled ASR dataset containing 1,684 hours of labelled speech data across 12 Indian languages from 1,218 contributors located in 203 districts of India.
### Languages
- Bengali
- Gujarati
- Kannada
- Hindi
- Malayalam
- Marathi
- Odia
- Punjabi
- Sanskrit
- Tamil
- Telugu
- Urdu
## Dataset Structure
```
Audio Data
data
├── bengali
│ ├── <split_name>
│ │ ├── 844424931537866-594-f.m4a
│ │ ├── 844424931029859-973-f.m4a
│ │ ├── ...
├── gujarati
├── ...
Transcripts
data
├── bengali
│ ├── <split_name>
│ │ ├── transcription_n2w.txt
├── gujarati
├── ...
```
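Given this layout, transcription files can be parsed and paired with audio by utterance ID. The exact line format of `transcription_n2w.txt` is not shown here, so the sketch below assumes one tab-separated `utterance_id<TAB>transcript` pair per line, and the sample transcripts are placeholders:

```python
def parse_transcripts(lines):
    """Parse transcription lines into {utterance_id: transcript}.
    Assumes one tab-separated pair per line (an assumption; check
    the actual format of transcription_n2w.txt)."""
    transcripts = {}
    for line in lines:
        line = line.rstrip('\n')
        if not line:
            continue
        utt_id, text = line.split('\t', 1)
        transcripts[utt_id] = text
    return transcripts

# Placeholder lines using utterance IDs from the tree above.
sample_lines = [
    '844424931537866-594-f\tনমস্কার\n',
    '844424931029859-973-f\tধন্যবাদ\n',
]
transcripts = parse_transcripts(sample_lines)
print(len(transcripts))  # 2
```

Audio paths such as `data/bengali/<split_name>/844424931537866-594-f.m4a` can then be matched to transcripts by file stem.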
### Licensing Information
The IndicSUPERB dataset is released under this licensing scheme:
- We do not own any of the raw text used in creating this dataset.
- The text data comes from the IndicCorp dataset which is a crawl of publicly available websites.
- The audio transcriptions of the raw text and labelled annotations of the datasets have been created by us.
- We license the actual packaging of all this data under the Creative Commons CC0 license (“no rights reserved”).
- To the extent possible under law, AI4Bharat has waived all copyright and related or neighboring rights to the IndicSUPERB dataset.
- This work is published from: India.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2208.11761,
doi = {10.48550/ARXIV.2208.11761},
url = {https://arxiv.org/abs/2208.11761},
author = {Javed, Tahir and Bhogale, Kaushal Santosh and Raman, Abhigyan and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh M.},
title = {IndicSUPERB: A Speech Processing Universal Performance Benchmark for Indian languages},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
We would like to thank the Ministry of Electronics and Information Technology (MeitY) of the Government of India and the Centre for Development of Advanced Computing (C-DAC), Pune for generously supporting this work and providing us access to multiple GPU nodes on the Param Siddhi Supercomputer. We would like to thank the EkStep Foundation and Nilekani Philanthropies for their generous grant which went into hiring human resources as well as cloud resources needed for this work. We would like to thank DesiCrew for connecting us to native speakers for collecting data. We would like to thank Vivek Seshadri from Karya Inc. for helping set up the data collection infrastructure on the Karya platform. We would like to thank all the members of the AI4Bharat team for helping create the Query by Example dataset. | [
-0.3140782117843628,
-0.42582789063453674,
-0.04882282018661499,
0.43422171473503113,
-0.44693753123283386,
0.37441885471343994,
-0.12745055556297302,
-0.3609294295310974,
0.2814835011959076,
0.3587455451488495,
-0.4525000751018524,
-0.6270992755889893,
-0.5920942425727844,
0.2292256355285... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dahoas/first-instruct-human-assistant-prompt | Dahoas | 2023-01-11T19:15:52Z | 34 | 1 | null | [
"region:us"
] | 2023-01-11T19:15:52Z | 2023-01-11T19:15:48.000Z | 2023-01-11T19:15:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
silatus/1k_Website_Screenshots_and_Metadata | silatus | 2023-01-19T05:20:33Z | 34 | 12 | null | [
"task_categories:text-to-image",
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-4.0",
"screenshots",
"metadata",
"websites",
"webpages",
"region:us"
] | 2023-01-19T05:20:33Z | 2023-01-19T04:33:07.000Z | 2023-01-19T04:33:07 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-to-image
- image-classification
- image-segmentation
language:
- en
tags:
- screenshots
- metadata
- websites
- webpages
pretty_name: 1000 Website Screenshots with Metadata
size_categories:
- 1K<n<10K
---
# Dataset Card for 1000 Website Screenshots with Metadata
## Dataset Description
- **Homepage:** [silatus.com](https://silatus.com/datasets)
- **Point of Contact:** [datasets@silatus.com](mailto:datasets@silatus.com)
### Dataset Summary
Silatus is sharing, for free, a segment of a dataset that we are using to train a generative AI model for text-to-mockup conversions. This dataset was collected in December 2022 and early January 2023, so it contains very recent data from 1,000 of the world's most popular websites. You can get our larger 10,000 website dataset for free at: [https://silatus.com/datasets](https://silatus.com/datasets)
This dataset includes:
**High-res screenshots**
- 1024x1024px
- Loaded Javascript
- Loaded Images
**Text metadata**
- Site title
- Navbar content
- Full page text data
- Page description
**Visual metadata**
- Content (images, videos, inputs, buttons) absolute & relative positions
- Color profile
- Base font | [
-0.49092525243759155,
-0.178090438246727,
0.15804094076156616,
0.4244827330112457,
-0.32095471024513245,
-0.1514693796634674,
-0.05567467212677002,
-0.13337376713752747,
0.19343312084674835,
0.6926007270812988,
-0.6323643326759338,
-0.7972561120986938,
-0.26368144154548645,
0.1187341287732... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jbrazzy/baby_names | jbrazzy | 2023-03-06T00:45:44Z | 34 | 1 | null | [
"region:us"
] | 2023-03-06T00:45:44Z | 2023-03-06T00:45:34.000Z | 2023-03-06T00:45:34 | ---
dataset_info:
features:
- name: Names
dtype: string
- name: Sex
dtype: string
- name: Count
dtype: int64
- name: Year
dtype: int64
splits:
- name: train
num_bytes: 33860482
num_examples: 1084385
- name: test
num_bytes: 8482889
num_examples: 271663
download_size: 13301020
dataset_size: 42343371
---
# Dataset Card for "baby_names"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49682924151420593,
-0.11351995915174484,
0.01424409355968237,
0.3633425533771515,
-0.3394363820552826,
-0.08427944779396057,
0.2545880675315857,
-0.14483553171157837,
0.7312601208686829,
0.3403832018375397,
-0.9449804425239563,
-0.6716633439064026,
-0.7709187865257263,
-0.29528802633285... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bowphs/multilingual_pretraining | bowphs | 2023-03-12T17:28:33Z | 34 | 0 | null | [
"region:us"
] | 2023-03-12T17:28:33Z | 2023-03-12T15:07:34.000Z | 2023-03-12T15:07:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
s-nlp/en_paradetox_content | s-nlp | 2023-09-08T08:38:03Z | 34 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:openrail++",
"region:us"
] | 2023-09-08T08:38:03Z | 2023-03-24T11:07:04.000Z | 2023-03-24T11:07:04 | ---
license: openrail++
task_categories:
- text-classification
language:
- en
---
# ParaDetox: Detoxification with Parallel Data (English). Content Task Results
This repository contains information about **Content Task** markup from [English Paradetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline.
The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at ACL 2022 main conference.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
Specifically, this repo contains the results of **Task 2: Content Preservation Check**. Only samples with markup confidence >= 90 are present. One text in each pair is toxic; the other is intended to be its non-toxic paraphrase.
In total, the dataset contains 32,317 pairs. A minority of them (4,562 pairs) are negative examples.
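As an illustration, the confidence filtering could be sketched like this; the record field names are hypothetical, and the actual column names should be taken from the dataset itself:

```python
def filter_by_confidence(pairs, threshold=90):
    """Keep only pairs whose annotation confidence meets the threshold."""
    return [p for p in pairs if p["confidence"] >= threshold]

# Toy records; real column names should be checked against the dataset.
pairs = [
    {"toxic": "t1", "neutral": "n1", "confidence": 95, "content_preserved": True},
    {"toxic": "t2", "neutral": "n2", "confidence": 70, "content_preserved": True},
    {"toxic": "t3", "neutral": "n3", "confidence": 92, "content_preserved": False},
]
kept = filter_by_confidence(pairs)
negatives = [p for p in kept if not p["content_preserved"]]
print(len(kept), len(negatives))  # 2 1
```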
## Citation
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
## Contacts
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) | [
-0.09239424020051956,
-0.38008490204811096,
0.6730378270149231,
0.2032424956560135,
-0.2886861264705658,
-0.0492149256169796,
-0.07116740196943283,
0.0198446586728096,
0.2023858278989792,
0.8056582808494568,
-0.3558454215526581,
-0.9368711113929749,
-0.528902530670166,
0.5117357969284058,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dembandoye/langchain-docs | dembandoye | 2023-03-29T00:17:05Z | 34 | 1 | null | [
"region:us"
] | 2023-03-29T00:17:05Z | 2023-03-29T00:15:33.000Z | 2023-03-29T00:15:33 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/supernatural-instructions-2m | andersonbcdefg | 2023-03-30T20:45:33Z | 34 | 10 | null | [
"region:us"
] | 2023-03-30T20:45:33Z | 2023-03-30T20:43:52.000Z | 2023-03-30T20:43:52 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1859403487.079275
num_examples: 1990915
download_size: 521457643
dataset_size: 1859403487.079275
---
# Dataset Card for "supernatural-instructions-2m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39658045768737793,
-0.310884565114975,
0.32462355494499207,
0.5161900520324707,
-0.28147295117378235,
-0.21654468774795532,
0.2584516108036041,
-0.25173327326774597,
0.6687657237052917,
0.7510517239570618,
-1.098577857017517,
-0.5733677744865417,
-0.4836626648902893,
-0.1620500236749649... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/balance_scale | mstz | 2023-04-15T11:14:55Z | 34 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"balance_scale",
"tabular_classification",
"multiclass_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-15T11:14:55Z | 2023-04-05T13:38:46.000Z | 2023-04-05T13:38:46 | ---
language:
- en
tags:
- balance_scale
- tabular_classification
- multiclass_classification
- binary_classification
- UCI
pretty_name: Balance
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- balance
- is_balanced
---
# Balance scale
The [Balance scale dataset](https://archive-beta.ics.uci.edu/dataset/12/balance+scale) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Two weights are put on the arms of a scale. Where does the scale tilt?
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| balance | Multiclass classification | Where does the scale tilt? |
| is_balanced | Binary classification | Does the scale tilt? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/balance_scale", "balance")["train"]
```
# Features
Target feature changes according to the selected configuration and is always in last position in the dataset. | [
-0.6501818299293518,
-0.00852824654430151,
0.06413981318473816,
0.2767573297023773,
0.005387869663536549,
-0.27599892020225525,
0.05732165277004242,
-0.3340614140033722,
0.46738114953041077,
0.5882124900817871,
-0.7341210842132568,
-0.2906406819820404,
-0.7765665650367737,
-0.0770168229937... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
isarth/chatgpt-news-articles | isarth | 2023-04-13T14:08:02Z | 34 | 1 | null | [
"region:us"
] | 2023-04-13T14:08:02Z | 2023-04-12T12:27:52.000Z | 2023-04-12T12:27:52 | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: chatgpt
dtype: string
splits:
- name: train
num_bytes: 91883734
num_examples: 20000
- name: test
num_bytes: 22989445
num_examples: 5000
download_size: 69781166
dataset_size: 114873179
---
# Dataset Card for "chatgpt-news-articles"
## Dataset Description
- **Homepage:**
- **Repository:** [ChatGPT CNN / DailyMail Dataset repository]()
- **Original Dataset Paper:** [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/K16-1028.pdf)
- **Point of Contact:** [Sarthak Anand](mailto:isarth23@sgmail.com)
### Dataset Summary
The ChatGPT CNN / DailyMail Dataset is a small sample of the original CNN / DailyMail English-language dataset containing 25k unique news articles. For each article written by journalists at CNN and the Daily Mail, there is a corresponding article written by ChatGPT from the highlights provided by human annotators. The current version can be used to study the differences between human and ChatGPT news writing.
### Languages
The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string for the article, a string for the highlights, a string for the id, and a string for the article written by ChatGPT.
```
{'article': "Michael Phelps has been crowned Male Athlete of the Year for a fifth time at the 2014 USA Swimming Golden Goggle Awards despite being suspended from competition for six months after a drunken driving arrest in September. Phelps was not at the New York ceremony where Keenan Robinson, an official from his training base, accepted the award on his behalf and confirmed Phelps had returned to the pool. The 18-time Olympic gold medallist stepped away from training in early October. Michael Phelps has been crowned Male Athlete of the Year at the 2014 USA Swimming Golden Goggle Awards . Phelps is the most decorated Olympian in sports history, winning 18 Olympic golds during his career . Olympic gold medallist and world record-holder Katie Ledecky capped her memorable 2014 season by claiming three awards, including USA Swimming's Female Athlete of the Year.",
'highlights': 'Michael Phelps was not present at the New York ceremony . Phelps was handed a six-month suspension by USA Swimming following his arrest for allegedly drink driving last month . Phelps confirmed in October that he would be taking a break from\xa0swimming\xa0to focus on his personal issues . Phelps is the most successful Olympic athlete in history, with 22 medals in total including 18 golds .',
'id': '95ef5b45d999dc9a78c5efa2de87e84f21912086',
'chatgpt': 'Michael Phelps, the most successful Olympic athlete in history, was noticeably absent from a ceremony held in New York City yesterday. The reason for the absence is due to a recent six-month suspension handed to Phelps by USA Swimming following his arrest for allegedly drink driving last month. In October, Phelps confirmed that he would be taking a break from swimming in order to focus on his personal issues. The suspension now means that Phelps will not be able to compete in the upcoming World Championships in Kazan, Russia in August. This will be a disappointing blow to his fans across the world as Phelps holds the record for the most Olympic gold medals, with a total of 18. However, Phelps can take this time to focus on his health and address his personal concerns.'}
```
The average token count for the articles and the highlights are provided below:
| Feature | Mean Word Count |
| ---------- | ---------------- |
| Article | 358 |
| ChatGPT | 352 |
| Highlights | 42 |
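One plausible way to reproduce the "Mean Word Count" column above (whitespace tokenization is an assumption; the exact counting method is not documented here):

```python
def mean_word_count(texts):
    """Mean whitespace-delimited word count over a list of texts."""
    return sum(len(t.split()) for t in texts) / len(texts)

print(mean_word_count(["one two three four", "five six"]))  # 3.0
```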
### Data Fields
- `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL the story was retrieved from
- `article`: a string containing the news article written by journalists
- `highlights`: a string containing the highlight of the article as written by the article author
- `chatgpt`: a string containing the news article written by ChatGPT
### Data Splits
This dataset has two splits: _train_ and _test_. Below are the statistics of the dataset.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 20,000 |
| Test | 5,000 |
## Dataset Creation
## ChatGPT Prompt
The number of words requested for each generated article (N) was set to the word count of the original article.
```python
import openai

# HIGHLIGHTS holds the human-written highlights for one article;
# N is the word count of the corresponding original article.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a AI assistant that generates news articles from a summary."},
        {"role": "user", "content": f'Write a news article using the following summary: {HIGHLIGHTS} \n Write about {N} words only'}
    ],
)
```
### Source Data
### Original Dataset Curators
The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gülçehre of Université de Montréal modified Hermann et al.'s collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions.
The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program (AFRL contract no. FA8750-13-2-0040).
#### Who are the source language producers?
The text was written by journalists at CNN and the Daily Mail, and by ChatGPT.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The original dataset is not anonymized, therefore individuals' names can be found in this dataset as well.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to assess the quality and writing style of ChatGPT when writing news articles from human-provided highlights, and to study any biases present.
### Discussion of Biases
Studies measuring gender bias in the original dataset may be of interest, e.g. [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf).
### Licensing Information
The ChatGPT CNN / Daily Mail dataset uses the same licence as the original dataset, which is [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
| [
-0.28084796667099,
-0.6646310091018677,
0.09583744406700134,
0.3137474060058594,
-0.49082478880882263,
0.0013092466397210956,
-0.3444465100765228,
-0.41949018836021423,
0.28291574120521545,
0.19646193087100983,
-0.4545876681804657,
-0.45990458130836487,
-0.7647945880889893,
0.3102653622627... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lansinuote/diffsion_from_scratch | lansinuote | 2023-04-14T06:36:47Z | 34 | 0 | null | [
"region:us"
] | 2023-04-14T06:36:47Z | 2023-04-14T06:34:05.000Z | 2023-04-14T06:34:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 119417305.0
num_examples: 833
download_size: 99672356
dataset_size: 119417305.0
---
# Dataset Card for "diffsion_from_scratch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6598742008209229,
-0.35081177949905396,
0.13559000194072723,
0.3784273862838745,
-0.30172011256217957,
0.18892526626586914,
0.3872877061367035,
-0.22537127137184143,
0.7505767345428467,
0.43530237674713135,
-0.9547174572944641,
-0.4384208023548126,
-0.8316697478294373,
-0.19199573993682... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/wall_following | mstz | 2023-04-16T18:03:59Z | 34 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<5K",
"language:en",
"license:cc",
"wall_following",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T18:03:59Z | 2023-04-14T15:49:57.000Z | 2023-04-14T15:49:57 | ---
language:
- en
tags:
- wall_following
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: WallFollowing
size_categories:
- 1K<n<5K
task_categories:
- tabular-classification
configs:
- wall_following
license: cc
---
# WallFollowing
The [WallFollowing dataset](https://archive-beta.ics.uci.edu/dataset/194/wall+following+robot+navigation+data) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wall_following | Multiclass classification.| |
| wall_following_0 | Binary classification. | Is the instance of class 0? |
| wall_following_1 | Binary classification. | Is the instance of class 1? |
| wall_following_2 | Binary classification. | Is the instance of class 2? |
| wall_following_3 | Binary classification. | Is the instance of class 3? | | [
-0.49535050988197327,
-0.4360840320587158,
0.2699003219604492,
0.35800138115882874,
0.14741434156894684,
-0.028447918593883514,
0.20287685096263885,
-0.00023078531376086175,
0.2968718111515045,
0.6474930047988892,
-0.7490897178649902,
-0.9236449599266052,
-0.5264368653297424,
-0.2234686464... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigcode/ta-prompt | bigcode | 2023-05-04T12:20:22Z | 34 | 160 | null | [
"language:code",
"license:apache-2.0",
"region:us"
] | 2023-05-04T12:20:22Z | 2023-05-03T14:04:39.000Z | 2023-05-03T14:04:39 | ---
license: apache-2.0
language:
- code
programming_language:
- Java
- JavaScript
- Python
---
# Dataset summary
This repository is dedicated to prompts used to perform in-context learning with [starcoder](https://huggingface.co/bigcode/starcoder). StarCoder is an autoregressive language model
trained on both code and natural-language text. It can be turned into an AI-powered technical assistant by prepending conversations to
its 8192-token context window.
# Format
The prompt is a .txt file which contains multiple conversations between a human and the assistant. Here is the format
```
-----
Human: <instruction>
Assistant: <answer>
-----
Human: <instruction>
Assistant: <answer>
Human: <instruction>
Assistant: <answer>
.
.
.
-----
```
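A minimal sketch of prepending such a prompt file at inference time. Assumptions: the `-----` separator delimits conversations as shown above, and a character budget stands in for real token counting against the 8192-token window:

```python
def build_prompt(ta_prompt, user_instruction, max_chars=6000):
    """Prepend the assistant conversations to a fresh instruction.

    Drops the oldest conversations first when the budget is exceeded.
    A character budget stands in for a real tokenizer here.
    """
    blocks = [b.strip() for b in ta_prompt.split("-----") if b.strip()]
    tail = f"Human: {user_instruction}\n\nAssistant:"

    def render(bs):
        return "\n-----\n".join(bs + [tail])

    while blocks and len(render(blocks)) > max_chars:
        blocks.pop(0)  # discard the oldest example first
    return render(blocks)

demo = "-----\nHuman: What is a list?\n\nAssistant: An ordered collection.\n-----"
print(build_prompt(demo, "Write a function to reverse a string."))
```

The returned string ends with `Assistant:` so the model completes the answer to the new instruction.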
# Use cases
We want the technical assistant to cover a diverse set of use cases
- **Code-to-text**:
- `What is the purpose of the following code?<code>`
- `What is the bug in the following code?<code>`
- **Text-to-code**:
- `Write/Design/Implement a function to <task>`
- **Code-to-code**:
- `Translate this <code> from <programming language> to <programming language>.`
- **Text-to-text**:
- `What is <technical concept>`
- **General-purpose Q&A**
- `What are you?`
- `What is your purpose?`
# Scope of the work
Since the model is designed for coding tasks, users should not expect relevant answers to general-purpose questions. For coding
requests, the model's output should be post-processed before being tested.
-0.4288449287414551,
-0.8466447591781616,
0.5186727643013,
-0.06440000981092453,
0.015582207590341568,
-0.07964614778757095,
-0.3215850293636322,
-0.23925670981407166,
0.022884894162416458,
0.7117053866386414,
-0.8581878542900085,
-0.6444356441497803,
-0.4042147397994995,
0.330551922321319... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jeremyc/Alpaca-Lora-GPT4-Swedish | jeremyc | 2023-05-06T08:20:32Z | 34 | 3 | null | [
"size_categories:10K<n<100K",
"language:sv",
"region:us"
] | 2023-05-06T08:20:32Z | 2023-05-05T15:20:12.000Z | 2023-05-05T15:20:12 | ---
language:
- sv
pretty_name: Alpaca-Lora GPT4 Swedish
size_categories:
- 10K<n<100K
---
This dataset is a machine translation of the GPT4 dataset provided in the Alpaca-Lora GitHub repository.
We provide two versions: the full translation, and a cleaned subset of ~50,000 entries that does not contain instances of "I am an AI language model" or similar.
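The cleaning described above could look roughly like this; the phrase list and field names are illustrative, not the actual procedure used:

```python
ARTIFACT_PHRASES = [
    "I am an AI language model",
    "As an AI language model",
]

def clean(entries):
    """Drop entries whose output leaks model self-references, then deduplicate."""
    seen, kept = set(), []
    for e in entries:
        if any(p.lower() in e["output"].lower() for p in ARTIFACT_PHRASES):
            continue
        key = (e["instruction"], e["output"])
        if key in seen:
            continue
        seen.add(key)
        kept.append(e)
    return kept

entries = [
    {"instruction": "Say hi", "output": "Hej!"},
    {"instruction": "Say hi", "output": "Hej!"},  # duplicate
    {"instruction": "Who are you?", "output": "As an AI language model, ..."},
]
print(len(clean(entries)))  # 1
```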
This work was inspired by the French Alpaca-Lora variant **Vigogne** and the Ukrainian Alpaca-Lora variant **Kruk**. | [
-0.3270246684551239,
-0.800794780254364,
0.3839057981967926,
0.05583861097693443,
-0.3609893023967743,
-0.016179576516151428,
0.050238896161317825,
-0.5315284132957458,
0.36605748534202576,
0.941389262676239,
-0.9169993996620178,
-0.6051246523857117,
-0.8737991452217102,
0.2302586734294891... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zetavg/CC-100-zh-Hant-merged | zetavg | 2023-05-06T11:11:44Z | 34 | 1 | null | [
"region:us"
] | 2023-05-06T11:11:44Z | 2023-05-06T04:28:11.000Z | 2023-05-06T04:28:11 | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 17882150544
num_examples: 12328228
download_size: 12940914691
dataset_size: 17882150544
---
# CC-100 zh-Hant (Traditional Chinese)
From https://data.statmt.org/cc-100/, only the zh-Hant (Traditional Chinese) portion. The text is broken into paragraphs, with each paragraph as a row.
Estimated to have around 4B tokens when tokenized with the [`bigscience/bloom`](https://huggingface.co/bigscience/bloom) tokenizer.
There's another version that the text is split by lines instead of paragraphs: [`zetavg/CC-100-zh-Hant`](https://huggingface.co/datasets/zetavg/CC-100-zh-Hant).
## References
Please cite the following if you found the resources in the CC-100 corpus useful.
* **Unsupervised Cross-lingual Representation Learning at Scale**, *Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov*, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), p. 8440-8451, July 2020, [pdf](https://www.aclweb.org/anthology/2020.acl-main.747.pdf), [bib](https://www.aclweb.org/anthology/2020.acl-main.747.bib) .
* **CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data**, *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave*, Proceedings of the 12th Language Resources and Evaluation Conference (LREC), p. 4003-4012, May 2020, [pdf](https://www.aclweb.org/anthology/2020.lrec-1.494.pdf), [bib](https://www.aclweb.org/anthology/2020.lrec-1.494.bib). | [
-0.37669098377227783,
-0.6955010890960693,
0.24857306480407715,
0.22390216588974,
-0.26973477005958557,
0.07408764213323593,
-0.6145340800285339,
-0.3262341022491455,
0.39072778820991516,
0.22477585077285767,
-0.5200884342193604,
-0.7128946185112,
-0.22879855334758759,
0.1802183985710144,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Thaweewat/gpteacher-20k-th | Thaweewat | 2023-05-09T17:54:22Z | 34 | 1 | null | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | 2023-05-09T17:54:22Z | 2023-05-09T17:34:31.000Z | 2023-05-09T17:34:31 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary
This is a 🇹🇭 Thai-instruction dataset translated with Google Cloud Translation from [GPTeacher](https://github.com/teknium1/GPTeacher), a collection of modular datasets generated by GPT-4 (General-Instruct & Roleplay-Instruct).
It comprises around 20,000 deduplicated examples. GPT-4 was asked to include reasoning and thought steps in the example responses where appropriate.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
| [
-0.28099894523620605,
-0.9054079055786133,
0.427196204662323,
0.04124359041452408,
-0.540155291557312,
-0.23255957663059235,
-0.06363285332918167,
-0.037682801485061646,
-0.06165546178817749,
0.856186032295227,
-0.8481410145759583,
-0.704159140586853,
-0.3648338317871094,
0.246380567550659... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yanchao/cifar10buqi | yanchao | 2023-05-19T07:00:52Z | 34 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"chemistry",
"region:us"
] | 2023-05-19T07:00:52Z | 2023-05-19T05:56:55.000Z | 2023-05-19T05:56:55 | ---
license: apache-2.0
language:
- en
tags:
- chemistry
pretty_name: buqi
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
buqi
### Supported Tasks and Leaderboards
buqi
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.3558424413204193,
-0.3667213022708893,
-0.08696829527616501,
0.4017181992530823,
-0.23690131306648254,
0.21175630390644073,
-0.26940295100212097,
-0.2527483403682709,
0.3723466694355011,
0.7039084434509277,
-0.845941424369812,
-1.222138524055481,
-0.6589949131011963,
0.1357823610305786,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
swaption2009/cyber-threat-intelligence-custom-data | swaption2009 | 2023-06-04T07:35:25Z | 34 | 3 | null | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"language:en",
"region:us"
] | 2023-06-04T07:35:25Z | 2023-06-04T07:31:03.000Z | 2023-06-04T07:31:03 | ---
task_categories:
- text-generation
- table-question-answering
language:
- en
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ignmilton/ign_clean_instruct_dataset_500k | ignmilton | 2023-06-13T07:45:51Z | 34 | 18 | null | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | 2023-06-13T07:45:51Z | 2023-06-12T07:12:30.000Z | 2023-06-12T07:12:30 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- code
pretty_name: ign_500k
size_categories:
- 100K<n<1M
---
This dataset contains ~508k prompt-instruction pairs with high quality responses. It was synthetically created from a subset of Ultrachat prompts. It does not contain any alignment focused responses or NSFW content.
Licensed under apache-2.0 | [
-0.35631299018859863,
-0.9234778881072998,
0.28559428453445435,
0.21104100346565247,
-0.2743692696094513,
-0.014597049914300442,
0.13221687078475952,
-0.0919480100274086,
0.28736773133277893,
0.7298001646995544,
-1.0946424007415771,
-0.525181233882904,
0.046210650354623795,
0.2318983227014... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kheder/quran | kheder | 2023-06-16T04:38:45Z | 34 | 1 | null | [
"region:us"
] | 2023-06-16T04:38:45Z | 2023-06-16T04:37:40.000Z | 2023-06-16T04:37:40 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChanceFocus/flare-sm-acl | ChanceFocus | 2023-06-25T18:16:24Z | 34 | 1 | null | [
"region:us"
] | 2023-06-25T18:16:24Z | 2023-06-25T17:56:25.000Z | 2023-06-25T17:56:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 70385369
num_examples: 20781
- name: valid
num_bytes: 9049127
num_examples: 2555
- name: test
num_bytes: 13359338
num_examples: 3720
download_size: 46311736
dataset_size: 92793834
---
# Dataset Card for "flare-sm-acl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6421414613723755,
-0.25389108061790466,
-0.05118058621883392,
0.11781902611255646,
-0.1452508121728897,
0.28086987137794495,
0.3520955741405487,
-0.15133072435855865,
0.9708675742149353,
0.5145798921585083,
-0.934830904006958,
-0.5254755616188049,
-0.5161901116371155,
-0.127093374729156... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuhsinchan/nmsqa_seg-dev_test | yuhsinchan | 2023-06-26T15:58:35Z | 34 | 0 | null | [
"region:us"
] | 2023-06-26T15:58:35Z | 2023-06-26T15:58:11.000Z | 2023-06-26T15:58:11 | ---
dataset_info:
features:
- name: case_id
dtype: string
- name: context_code
sequence: int16
- name: context_cnt
sequence: int16
- name: question_code
sequence: int16
- name: question_cnt
sequence: int16
- name: start_idx
dtype: int64
- name: end_idx
dtype: int64
- name: start_time
dtype: float64
- name: end_time
dtype: float64
splits:
- name: dev
num_bytes: 32879888
num_examples: 17155
- name: test
num_bytes: 455624
num_examples: 267
download_size: 9191201
dataset_size: 33335512
---
# Dataset Card for "nmsqa_seg-dev_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.717555046081543,
-0.3255188465118408,
0.017827924340963364,
0.16533826291561127,
-0.22295595705509186,
0.07089684903621674,
0.3611237108707428,
0.22648799419403076,
1.0162358283996582,
0.46794798970222473,
-0.9599223136901855,
-0.7511430382728577,
-0.3835763931274414,
-0.058996282517910... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IsDeeCee/StoryMaker | IsDeeCee | 2023-06-26T22:49:18Z | 34 | 1 | null | [
"region:us"
] | 2023-06-26T22:49:18Z | 2023-06-26T22:47:34.000Z | 2023-06-26T22:47:34 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigheiniuJ/MyC4Validation | bigheiniuJ | 2023-07-22T00:01:07Z | 34 | 0 | null | [
"region:us"
] | 2023-07-22T00:01:07Z | 2023-07-21T23:57:04.000Z | 2023-07-21T23:57:04 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: validation
num_bytes: 825766822
num_examples: 364608
download_size: 509372854
dataset_size: 825766822
---
# Dataset Card for "MyC4Validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7358858585357666,
-0.17674216628074646,
0.2544952929019928,
0.26742565631866455,
-0.07816783338785172,
0.19847248494625092,
0.39846310019493103,
-0.2771909534931183,
0.6330458521842957,
0.4971243441104889,
-0.8769786357879639,
-0.6949731707572937,
-0.27397239208221436,
0.205357685685157... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BrunoGR/Twitter_Sentiment_Analysis_Train_Corpus_in_Spanish | BrunoGR | 2023-08-10T01:48:16Z | 34 | 0 | null | [
"language:es",
"license:apache-2.0",
"region:us"
] | 2023-08-10T01:48:16Z | 2023-08-10T01:45:10.000Z | 2023-08-10T01:45:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: etiqueta
dtype: string
- name: texto
dtype: string
splits:
- name: train
num_bytes: 134544035
num_examples: 1082821
- name: test
num_bytes: 41458582
num_examples: 334641
download_size: 89208506
dataset_size: 176002617
license: apache-2.0
language:
- es
pretty_name: e
---
# Dataset Card for "Twitter_Sentiment_Analysis_Train_Corpus_in_Spanish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4605322778224945,
-0.22483369708061218,
0.06497615575790405,
0.8584027886390686,
-0.21012546122074127,
0.41084444522857666,
-0.054858285933732986,
-0.10787884891033173,
0.9481192231178284,
0.21791291236877441,
-0.8962715268135071,
-1.0628043413162231,
-0.8532810211181641,
-0.18669039011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mickume/alt_potterverse | mickume | 2023-11-03T06:34:35Z | 34 | 0 | null | [
"region:us"
] | 2023-11-03T06:34:35Z | 2023-09-01T08:15:27.000Z | 2023-09-01T08:15:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 633374028
num_examples: 3509338
download_size: 392101893
dataset_size: 633374028
---
# Dataset Card for "alt_potterverse"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5011711716651917,
-0.15676294267177582,
-0.04766688123345375,
0.21673743426799774,
-0.0033850923646241426,
0.05351465940475464,
0.23410667479038239,
-0.17976920306682587,
0.8620191216468811,
0.48738834261894226,
-1.0748093128204346,
-0.7231993675231934,
-0.5607254505157471,
-0.003683129... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Admin08077/Taxonomy | Admin08077 | 2023-10-21T05:38:46Z | 34 | 2 | null | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | 2023-10-21T05:38:46Z | 2023-09-03T08:06:18.000Z | 2023-09-03T08:06:18 | ---
license: other
task_categories:
- token-classification
- text-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- sentence-similarity
- audio-classification
- fill-mask
- text-to-speech
- automatic-speech-recognition
- voice-activity-detection
- depth-estimation
- audio-to-audio
- image-classification
- image-segmentation
- object-detection
- text-to-image
- image-to-text
- image-to-image
- unconditional-image-generation
- reinforcement-learning
- robotics
- tabular-classification
- video-classification
- tabular-to-text
- tabular-regression
- multiple-choice
- table-to-text
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
language:
- en
tags:
- finance
- quantum Banking
- '#U'
- XBRL
- 'TAXONOMY '
pretty_name: 'The Private Bank Taxonomy '
size_categories:
- n>1T
---
## API Calls
If you wish to programmatically fetch the Autonomous Private Banking Taxonomy dataset, you can do so via the following curl commands:
```bash
# Fetch rows of the dataset
curl -X GET "https://datasets-server.huggingface.co/rows?dataset=Admin08077%2FTaxonomy&config=default&split=train&offset=0&limit=100"
# Get dataset splits
curl -X GET "https://datasets-server.huggingface.co/splits?dataset=Admin08077%2FTaxonomy"
# Download the dataset in Parquet format
curl -X GET "https://huggingface.co/api/datasets/Admin08077/Taxonomy/parquet/default/train"
```
To clone the dataset repository, make sure you have git-lfs installed. Then run:
```bash
git lfs install
git clone https://huggingface.co/datasets/Admin08077/Taxonomy
```
If you want to clone without large files, you can use:
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/Admin08077/Taxonomy
```
### Python Code to Load Dataset
If you are using Python, you can easily load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Admin08077/Taxonomy")
```
## Citation
If you use this dataset in your research or project, please cite it using the following BibTeX entry:
```bibtex
@misc{james_burvel_o'callaghan_iii_2023,
author = {James Burvel O'Callaghan III},
title = {Taxonomy (Revision 9e2a198)},
year = 2023,
url = {https://huggingface.co/datasets/Admin08077/Taxonomy},
doi = {10.57967/hf/1070},
publisher = {Hugging Face}
}
``` | [
-0.5836746692657471,
-0.5761773586273193,
0.054572004824876785,
0.02091795578598976,
-0.2005694955587387,
0.38101112842559814,
0.16451309621334076,
-0.35577547550201416,
0.7876737117767334,
0.817119300365448,
-0.53416508436203,
-0.5811870098114014,
-0.37853726744651794,
0.12183468043804169... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sarahlintang/Alpaca_indo_instruct | sarahlintang | 2023-09-07T06:27:38Z | 34 | 0 | null | [
"language:id",
"region:us"
] | 2023-09-07T06:27:38Z | 2023-09-07T06:21:17.000Z | 2023-09-07T06:21:17 | ---
language:
- id
---
Translated from Stanford Alpaca using the Google Translate API.
| [
-0.09398803114891052,
-0.7560515999794006,
0.5689483284950256,
0.3498520255088806,
-0.7421035766601562,
-0.29908648133277893,
-0.10339720547199249,
-0.773759126663208,
0.7046616077423096,
0.7767497301101685,
-0.8950673341751099,
-0.5677374601364136,
-0.7820276618003845,
0.08497326821088791... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MU-NLPC/Calc-svamp | MU-NLPC | 2023-10-30T15:05:26Z | 34 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"arxiv:2305.15017",
"region:us"
] | 2023-10-30T15:05:26Z | 2023-09-08T14:56:46.000Z | 2023-09-08T14:56:46 | ---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-generation
tags:
- math world problems
- math
- arithmetics
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 335744
num_examples: 1000
download_size: 116449
dataset_size: 335744
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 335744
num_examples: 1000
download_size: 116449
dataset_size: 335744
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple HTML-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains three types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
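As an illustration, such a chain can be parsed even without third-party libraries using Python's standard-library `html.parser`; the chain string below is a made-up example in the format described above, not an instance from the dataset:

```python
from html.parser import HTMLParser

class ChainParser(HTMLParser):
    """Collect the contents of <gadget>, <output>, and <result> tags."""
    def __init__(self):
        super().__init__()
        self._tag = None
        self.steps = []  # list of (tag, content) pairs in order of appearance

    def handle_starttag(self, tag, attrs):
        if tag in ("gadget", "output", "result"):
            self._tag = tag

    def handle_data(self, data):
        if self._tag is not None:
            self.steps.append((self._tag, data.strip()))
            self._tag = None

# A made-up chain in the style described above
chain = "<gadget>5 - 2</gadget><output>3</output><result>3</result>"
parser = ChainParser()
parser.feed(chain)
print(parser.steps)  # [('gadget', '5 - 2'), ('output', '3'), ('result', '3')]
```

The same tag names can of course be handled with BeautifulSoup, as the card suggests; the stdlib parser is shown only to keep the sketch dependency-free.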
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction process
We created the dataset by converting the **equation** attribute in the original dataset to a sequence (chain) of calculations, the final one being the result of the math problem.
We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
However, for SVAMP specifically, we detected no data leaks and filtered no data.
## Content and data splits
The dataset contains the same data instances as the original dataset, except for the correction of an inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.
## Attributes:
- **id**: problem id from the original dataset
- **question**: the question to be answered
- **chain**: series of simple operations (derived from `equation`) that leads to the solution
- **result**: the result (number) as a string
- **result_float**: result converted to a floating point
- **equation**: a nested expression that evaluates to the correct result
- **problem_type**: a category of the problem
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original SVAMP dataset and repo**](https://github.com/arkilpatel/SVAMP/)
- [**original SVAMP paper**](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of dataset in research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` | [
-0.4545602798461914,
-0.3401229977607727,
0.2198212593793869,
0.16204658150672913,
-0.09143432974815369,
-0.096915103495121,
-0.15827727317810059,
-0.35712626576423645,
0.2039964497089386,
0.38826820254325867,
-0.6763849258422852,
-0.33514153957366943,
-0.5493366122245789,
0.10596074908971... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vlsp-2023-vllm/arithmetic_vi | vlsp-2023-vllm | 2023-09-19T03:54:17Z | 34 | 0 | null | [
"arxiv:2005.14165",
"region:us"
] | 2023-09-19T03:54:17Z | 2023-09-10T17:55:16.000Z | 2023-09-10T17:55:16 | ---
dataset_info:
features:
- name: context
dtype: string
- name: completion
dtype: string
- name: meta
dtype: string
splits:
- name: test
num_bytes: 1729595
num_examples: 26000
download_size: 515170
dataset_size: 1729595
---
# Arithmetic (OpenAI)
Source: https://github.com/openai/gpt-3
Vietnamese version of Arithmetic.
## Citation Information
```
@article{brown2020language,
title={Language Models are Few-Shot Learners},
author={Tom B. Brown and Benjamin Mann and Nick Ryder and Melanie Subbiah and Jared Kaplan and Prafulla Dhariwal and Arvind Neelakantan and Pranav Shyam and Girish Sastry and Amanda Askell and Sandhini Agarwal and Ariel Herbert-Voss and Gretchen Krueger and Tom Henighan and Rewon Child and Aditya Ramesh and Daniel M. Ziegler and Jeffrey Wu and Clemens Winter and Christopher Hesse and Mark Chen and Eric Sigler and Mateusz Litwin and Scott Gray and Benjamin Chess and Jack Clark and Christopher Berner and Sam McCandlish and Alec Radford and Ilya Sutskever and Dario Amodei},
year={2020},
eprint={2005.14165},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.02720937877893448,
-0.8870476484298706,
0.7375338673591614,
0.16682536900043488,
-0.2529018521308899,
-0.5800295472145081,
-0.05316206440329552,
-0.22992560267448425,
-0.10630370676517487,
0.21679025888442993,
-0.15629799664020538,
-0.5212072730064392,
-0.6207836270332336,
0.03651998937... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
m8than/long-context-QA-augmented | m8than | 2023-11-27T04:01:42Z | 34 | 1 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:en",
"license:cc-by-sa-3.0",
"language-modeling",
"masked-language-modeling",
"region:us"
] | 2023-11-27T04:01:42Z | 2023-09-20T20:49:25.000Z | 2023-09-20T20:49:25 | ---
license: cc-by-sa-3.0
language:
- en
task_categories:
- text-generation
- fill-mask
tags:
- language-modeling
- masked-language-modeling
pretty_name: LongContextQA
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "compiled/raccoon-xiii-large.jsonl"
---
Long-context QA with the following augmentations:
- Smart augmentation (changes the answer to the question and in the context)
- Changes the data around the answer within the chunk
- Random noise
- Random chunks of information
- Lots of varied lengths
- A few different prompt formats (aimed towards RWKV) | [
-0.5745952725410461,
-1.2958650588989258,
0.5188255310058594,
0.3508223295211792,
-0.33545982837677,
-0.07133235782384872,
0.21270440518856049,
-0.919691801071167,
0.6286771893501282,
0.7171981334686279,
-0.9681079983711243,
0.31735751032829285,
-0.02508074790239334,
0.3273634612560272,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kunalsharma/fake-news | kunalsharma | 2023-09-26T10:38:28Z | 34 | 0 | null | [
"license:cc",
"region:us"
] | 2023-09-26T10:38:28Z | 2023-09-26T10:36:42.000Z | 2023-09-26T10:36:42 | ---
license: cc
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/jadi_ide | SEACrowd | 2023-09-26T12:29:15Z | 34 | 0 | null | [
"language:ind",
"license:unknown",
"emotion-classification",
"region:us"
] | 2023-09-26T12:29:15Z | 2023-09-26T11:13:15.000Z | 2023-09-26T11:13:15 | ---
license: unknown
tags:
- emotion-classification
language:
- ind
---
# jadi_ide
The JaDi-Ide dataset is a Twitter dataset for Javanese dialect identification, containing 16,498
data samples. The dialect is classified into `Standard Javanese`, `Ngapak Javanese`, and `East
Javanese` dialects.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{hidayatullah2020attention,
title={Attention-based cnn-bilstm for dialect identification on javanese text},
author={Hidayatullah, Ahmad Fathan and Cahyaningtyas, Siwi and Pamungkas, Rheza Daffa},
journal={Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control},
pages={317--324},
year={2020}
}
```
## License
Unknown
## Homepage
[https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data](https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.8960732221603394,
-0.923948347568512,
-0.10924268513917923,
0.4490285813808441,
-0.37005582451820374,
0.1080102026462555,
-0.4032540023326874,
-0.23203440010547638,
0.6778843998908997,
0.722354531288147,
-0.5789973139762878,
-0.7703342437744141,
-0.5694570541381836,
0.46192115545272827,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
learn3r/gov_report_bp | learn3r | 2023-09-29T11:05:26Z | 34 | 0 | null | [
"region:us"
] | 2023-09-29T11:05:26Z | 2023-09-29T11:03:30.000Z | 2023-09-29T11:03:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1030500829
num_examples: 17457
- name: validation
num_bytes: 60867802
num_examples: 972
- name: test
num_bytes: 56606131
num_examples: 973
download_size: 547138870
dataset_size: 1147974762
---
# Dataset Card for "gov_report_bp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5515812635421753,
-0.2843734323978424,
0.3109672963619232,
0.13177677989006042,
-0.2961733937263489,
-0.17182937264442444,
0.3629344701766968,
-0.1830127090215683,
0.6896743178367615,
0.615447461605072,
-0.6833988428115845,
-0.8493311405181885,
-0.7011157870292664,
-0.3804393410682678,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
baebee/chatgpt-custom_inst | baebee | 2023-10-09T19:16:48Z | 34 | 0 | null | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"language:tl",
"license:mit",
"region:us"
] | 2023-10-09T19:16:48Z | 2023-10-09T07:31:02.000Z | 2023-10-09T07:31:02 | ---
license: mit
task_categories:
- summarization
- question-answering
- conversational
language:
- en
- tl
size_categories:
- n<1K
---
# Languages: English, Tagalog
## Collection Process:
- Dialogs generated by instructing ChatGPT to respond concisely
- Responses edited by Nuph researchers for naturalness
- Bilingual exchanges added for diversity
## Intended Use:
- Train conversational agents
- Research in straightforward dialog
## Limitations:
- Small scale (300 rows)
- Biased toward English
- Limited to text conversations
## Ethics and Privacy:
- No personal or offensive content
- ChatGPT instructed to avoid unethical responses
- Data anonymized - no personally identifiable information | [
-0.27020177245140076,
-0.9105072617530823,
-0.11925680935382843,
0.6920179128646851,
-0.4692635238170624,
0.2454821765422821,
-0.2970012426376343,
-0.46747520565986633,
0.38189491629600525,
0.7718316316604614,
-0.6647982001304626,
-0.0870455875992775,
-0.44682377576828003,
0.42688459157943... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sordonia/my-wiki-latex_mmlu_from_valid_all | sordonia | 2023-10-11T01:19:27Z | 34 | 0 | null | [
"region:us"
] | 2023-10-11T01:19:27Z | 2023-10-10T20:52:48.000Z | 2023-10-10T20:52:48 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: docno
dtype: int64
- name: score
dtype: float64
- name: dfq
dtype: int64
- name: text
dtype: string
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: revid
dtype: string
splits:
- name: train
num_bytes: 1139620543
num_examples: 137881
download_size: 0
dataset_size: 1139620543
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "my-wiki-latex_mmlu_from_valid_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3729252815246582,
-0.5196306109428406,
0.3816308379173279,
0.1400691568851471,
-0.08747997879981995,
-0.10422512143850327,
0.042647041380405426,
0.14656633138656616,
0.7755657434463501,
0.43754059076309204,
-0.8002128005027771,
-0.6718263626098633,
-0.5696259140968323,
0.365787655115127... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fair-rationales | coastalcph | 2023-10-13T12:54:10Z | 34 | 3 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"source_datasets:extended",
"language:en",
"license:mit",
"bias",
"fairness",
"rationale",
"demographic",
"region:us"
] | 2023-10-13T12:54:10Z | 2023-10-12T11:57:58.000Z | 2023-10-12T11:57:58 | ---
license: mit
language:
- en
annotations_creators:
- crowdsourced
source_datasets:
- extended
task_categories:
- text-classification
task_ids:
- sentiment-classification
- open-domain-qa
tags:
- bias
- fairness
- rationale
- demographic
pretty_name: FairRationales
---
# Dataset Card for "FairRationales"
## Dataset Summary
We present a new collection of annotations for a subset of CoS-E [[1]](#1), DynaSent [[2]](#2), and SST [[3]](#3)/Zuco [[4]](#4), augmented with annotator demographics and balanced across age and ethnicity.
We asked participants to choose a label and then provide supporting evidence (rationales) based on the input sentence for their answer.
Existing rationale datasets are typically constructed by giving annotators 'gold standard' labels,
and having them provide rationales for these labels.
Instead, we let annotators provide rationales for labels they choose themselves. This lets them engage
in the decision process, but it also acknowledges
that annotators with different backgrounds may disagree on classification decisions. Explaining other
people’s choices is error-prone [[5]](#5), and we do not want to bias the rationale
annotations by providing labels that align better
with the intuitions of some demographics than with
those of others.
Our annotators are balanced across age and ethnicity for six demographic groups, defined by
ethnicity {Black/African American, White/Caucasian, Latino/Hispanic} and age {Old, Young}.
Therefore, we can refer to our groups as their cross-product: **{BO, BY, WO, WY, LO, LY}**.
## Dataset Details
### DynaSent
We re-annotate N=480 instances
six times (for six demographic groups), comprising
240 instances labeled as positive, and 240 instances
labeled as negative in the DynaSent Round 2 **test**
set (see [[2]](#2)). This amounts to 2,880
annotations, in total.
To annotate rationales, we formulate the task as
marking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark
all the words in the sentence that they think show
evidence for their chosen label.
#### >Our annotations:
| Label | Count |
| --- | --- |
| negative | 1555 |
| positive | 1435 |
| no sentiment | 470 |
| **Total** | **3460** |
Note that all the data is uploaded under a single 'train' split (see the [Uses](#uses) section for further details).
### SST2
We re-annotate N=263 instances six
times (for six demographic groups), which are all
the positive and negative instances from the Zuco*
dataset of Hollenstein et al. (2018), comprising a
**mixture of train, validation and test** set instances
from SST-2, *which should be removed from the original SST
data before training any model*.
These 263 reannotated instances do not contain any instances originally marked as `neutral` (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,
we still allow annotators to evaluate a sentence as
neutral, since we do not want to force our annotators to provide rationales for positive and negative
sentiment that they do not see.
*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,
we add an extra layer of information for future research.
#### >Our annotations:
| Label | Count |
| --- | --- |
| positive | 1027 |
| negative | 900 |
| no sentiment | 163 |
| **Total** | **2090** |
Note that all the data is uploaded under a single 'train' split (see the [Uses](#uses) section for further details).
### CoS-E
We use the simplified version of CoS-E released by [[6]](#6).
We re-annotate N=500 instances from
the CoS-E **test** set six times (for six demographic groups)
and ask annotators to first select the answer to
the question that they find most correct and sensible, and then mark the words that justify that answer.
Following [[7]](#7), we specify the
rationale task with a wording that should guide
annotators to make short, precise rationale annotations:
‘For each word in the question, if you
think that removing it will decrease your
confidence toward your chosen label,
please mark it.’
#### Our annotations

Total: 3760
Note that all the data is uploaded under a single 'train' split (see [Uses](#uses) for further details).
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/terne/Being_Right_for_Whose_Right_Reasons
- **Paper:** [Being Right for Whose Right Reasons?](https://aclanthology.org/2023.acl-long.59/)
<a id="uses"></a>
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
In our paper, we present a collection of three
existing datasets (SST2, DynaSent and CoS-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided
by different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for, and fosters future research on performance equality and robustness.
For each dataset, we provide the data under a single **'train'** split, due to the current limitation that a dataset cannot be uploaded with only a *'test'* split.
Note, however, that the original intended use of this collection of datasets was to **test** the quality and alignment of post-hoc explainability methods.
If you use different splits, please document them to ease reproducibility of your work.
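As an illustration of the alignment profiling described above, one simple measure is token-level F1 between two binary rationale vectors (e.g. a model's rationale against a group's annotation). This is a sketch of one common metric, not necessarily the exact one used in the paper:

```python
def rationale_f1(gold, pred):
    """Token-level F1 between two binary rationale vectors of equal length."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    if tp == 0:
        return 0.0
    precision = tp / sum(pred)
    recall = tp / sum(gold)
    return 2 * precision * recall / (precision + recall)

# Two raters each marked two tokens and agree on one of them.
print(rationale_f1([1, 1, 0, 0], [1, 0, 0, 1]))  # -> 0.5
```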
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
| Variable | Description |
| --- | --- |
| QID | The ID of the question (i.e. the annotation element/sentence) in the Qualtrics survey. Every second question asked for the classification, and every other asked for the rationale of the classification to be marked. The two questions and answers for the same sentence are merged into one row, and therefore the QID looks as if every second ID is skipped. |
| text_id | A numerical ID given to each unique text/sentence for easy sorting before comparing annotations across groups. |
| sentence | The text/sentence that is annotated, in its original formatting. |
| label | The (new) label given by the respective annotator/participant from Prolific. |
| label_index | The numerical format of the (new) label. |
| original_label | The label from the original dataset (Cose/Dynasent/SST). |
| rationale | The tokens marked as rationales by our annotators. |
| rationale_index | The indices of the tokens marked as rationales. In the processed files the indices start at 0; in the unprocessed files ("_all.csv", "_before_exclussions.csv") they start at 1. |
| rationale_binary | A binary version of the rationales where a token marked as part of the rationale = 1 and tokens not marked = 0. |
| age | The reported age of the annotator/participant (i.e. their survey response). This may be different from the age-interval the participant was recruited by (see recruitment_age). |
| recruitment_age | The age interval specified for the Prolific job to recruit the participant by. A mismatch between this and the participant's reported age, when asked in our survey, may mean a number of things, such as: Prolific's information is wrong or outdated; the participant made a mistake when answering the question; the participant was inattentive. |
| ethnicity | The reported ethnicity of the annotator/participant. This may be different from the ethnicity the participant was recruited by (see recruitment_ethnicity). |
| recruitment_ethnicity | The ethnicity specified for the Prolific job to recruit the participant by. Sometimes there is a mismatch between the information Prolific has on participants (which we use for recruitment) and what the participants report when asked again in the survey/task. This seems especially prevalent with some ethnicities, likely because participants may in reality identify with more than one ethnic group. |
| gender | The reported gender of the annotator/participant. |
| english_proficiency | The reported English-speaking ability (proxy for English proficiency) of the annotator/participant. Options were "Not well", "Well" or "Very well". |
| attentioncheck | All participants were given a simple attention check question at the very end of the Qualtrics survey (i.e. after annotation) which was either PASSED or FAILED. Participants who failed the check were still paid for their work, but their response should be excluded from the analysis. |
| group_id | An id describing the socio-demographic subgroup a participant belongs to and was recruited by. |
| originaldata_id | The id given to the text/sentence in the original dataset. In the case of SST data, this refers to ids within the Zuco dataset – a subset of SST which was used in our study.|
| annotator_ID | Anonymised annotator ID, to enable analyses such as annotator (dis)agreement. |
| sst2_id | The processed SST annotations contain an extra column with the index of the text in the SST-2 dataset; -1 means that we were unable to match the text to an instance in SST-2. |
| sst2_split | The processed SST annotations contain an extra column referring to the set in which the instance appears within SST-2. Some instances are part of the train set and should therefore be removed before training a model on SST-2 and testing on our annotations. |
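For illustration, the `sst2_id` and `sst2_split` columns can be used to find the SST-2 train overlap that must be excluded. The toy DataFrame below stands in for a real processed SST annotation file (column names follow the table above; the rows are invented):

```python
import pandas as pd

# Toy stand-in for a processed SST annotation file; the real files
# contain many more columns and rows.
df = pd.DataFrame({
    "sentence": ["a fine film", "dull and slow", "unmatched text"],
    "sst2_split": ["train", "test", "test"],
    "sst2_id": [101, 202, -1],
})

# SST-2 train instances must be removed from SST-2 before training a
# model that will be evaluated on these annotations; sst2_id == -1
# marks texts that could not be matched to SST-2 at all.
overlap_ids = set(df.loc[df["sst2_split"] == "train", "sst2_id"])
evaluable = df[(df["sst2_id"] != -1) & (df["sst2_split"] != "train")]
print(overlap_ids, len(evaluable))  # -> {101} 1
```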
## Dataset Creation
### Curation Rationale
This dataset was introduced in: Terne Sasha Thorn Jakobsen, Laura Cabello, and Anders Søgaard. 2023. Being Right for Whose Right Reasons?
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
#### Annotation process
We refer to our [paper](https://aclanthology.org/2023.acl-long.59/) for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2).
#### Who are the annotators?
Annotators were recruited via Prolific and consented to the use of their responses and demographic information for research purposes.
The annotation tasks were conducted through Qualtrics surveys. The exact surveys can be found [here](https://github.com/terne/Being_Right_for_Whose_Right_Reasons/tree/main/data/qualtrics_survey_exports).
## References
<a id="1">[1]</a>
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
<a id="2">[2]</a>
Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A Dynamic Benchmark for Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388–2404, Online. Association for Computational Linguistics.
<a id="3">[3]</a>
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
<a id="4">[4]</a>
Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a Simultaneous EEG and Eye-Tracking Resource for Natural Sentence Reading. Scientific Data.
<a id="5">[5]</a>
Kate Barasz and Tami Kim. 2022. Choice perception: Making sense (and nonsense) of others’ decisions. Current opinion in psychology, 43:176–181.
<a id="6">[6]</a>
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. ERASER: A Benchmark to Evaluate Rationalized NLP Models.
<a id="7">[7]</a>
Cheng-Han Chiang and Hung-yi Lee. 2022. Re-Examining Human Annotations for Interpretable NLP.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@inproceedings{thorn-jakobsen-etal-2023-right,
title = "Being Right for Whose Right Reasons?",
author = "Thorn Jakobsen, Terne Sasha and
Cabello, Laura and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.59",
doi = "10.18653/v1/2023.acl-long.59",
pages = "1033--1054",
abstract = "Explainability methods are used to benchmark the extent to which model predictions align with human rationales i.e., are {`}right for the right reasons{'}. Previous work has failed to acknowledge, however, that what counts as a rationale is sometimes subjective. This paper presents what we think is a first of its kind, a collection of human rationale annotations augmented with the annotators demographic information. We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups (balanced across age and ethnicity). Such data enables us to ask both what demographics our predictions align with and whose reasoning patterns our models{'} rationales align with. We find systematic inter-group annotator disagreement and show how 16 Transformer-based models align better with rationales provided by certain demographic groups: We find that models are biased towards aligning best with older and/or white annotators. We zoom in on the effects of model size and model distillation, finding {--}contrary to our expectations{--} negative correlations between model size and rationale agreement as well as no evidence that either model size or model distillation improves fairness.",
}
```
## Dataset Card Contact
Thanks to [@lautel](https://github.com/lautel) for adding this dataset.
---

*Dataset: CollectiveCognition/chats-data-2023-10-16 (last modified 2023-10-16; license: MIT)*
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in other languages may appear as users contribute more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag for whether the chat used custom instructions (the instructions themselves are currently not included), and the messages, each with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform with ShareGPT standard.
This is an example entry in the dataset; note that the number of turns varies and can range from two upward:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
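A minimal sketch of reading one such ShareGPT-style entry; the JSON below is a shortened, hypothetical variant of the example above:

```python
import json

entry = json.loads("""
{
  "title": "Interest Formulas Clarification",
  "model_name": "GPT-4",
  "custom_instruction": false,
  "conversations": [
    {"from": "human", "value": "Is this correct? ..."},
    {"from": "assistant", "value": "There seems to be some confusion ..."}
  ]
}
""")

# Flatten the variable-length conversation into (role, text) pairs.
turns = [(t["from"], t["value"]) for t in entry["conversations"]]
print(entry["model_name"], len(turns))  # -> GPT-4 2
```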
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly reflecting what users most commonly use ChatGPT for.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT