id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
archanatikayatray/aeroBERT-classification | 2023-05-20T22:40:37.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"sentence classification",
"aerospace requirements",
"design",
"functional",
"performance",
"requirements",
"NLP4RE",
"doi:10.57967/hf/0433",
"region:us"
] | archanatikayatray | null | null | 2 | 5 | 2023-01-12T05:00:31 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- sentence classification
- aerospace requirements
- design
- functional
- performance
- requirements
- NLP4RE
pretty_name: requirements_classification_dataset.txt
size_categories:
- n<1K
language:
- en
---
# Dataset Card for aeroBERT-classification
## Dataset Description
- **Paper:** aeroBERT-Classifier: Classification of Aerospace Requirements using BERT
- **Point of Contact:** archanatikayatray@gmail.com
### Dataset Summary
This dataset contains requirements from the aerospace domain. The requirements are tagged based on the "type"/category of requirement they belong to.
The creation of this dataset is aimed at - <br>
(1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br>
(2) Fine-tuning language models for **requirements classification** specific to the aerospace domain <br>
This dataset can be used for training or fine-tuning language models for the identification of the following types of requirements - <br>
<br>
**Design Requirement** - Dictates "how" a system should be designed given certain technical standards and specifications;
**Example:** Trim control systems must be designed to prevent creeping in flight.<br>
<br>
**Functional Requirement** - Defines the functions that need to be performed by a system in order to accomplish the desired system functionality;
**Example:** Each cockpit voice recorder shall record the voice communications of flight crew members on the flight deck.<br>
<br>
**Performance Requirement** - Defines "how well" a system needs to perform a certain function;
**Example:** The airplane must be free from flutter, control reversal, and divergence for any configuration and condition of operation.<br>
## Dataset Structure
The tagging scheme followed: <br>
(1) Design requirements: 0 (Count = 149) <br>
(2) Functional requirements: 1 (Count = 99) <br>
(3) Performance requirements: 2 (Count = 62) <br>
<br>
The dataset is of the format: ``requirements | label`` <br>
| requirements | label |
| :----: | :----: |
| Each cockpit voice recorder shall record voice communications transmitted from or received in the airplane by radio.| 1 |
| Each recorder container must be either bright orange or bright yellow.| 0 |
| Single-engine airplanes, not certified for aerobatics, must not have a tendency to inadvertently depart controlled flight. | 2|
| Each part of the airplane must have adequate provisions for ventilation and drainage. | 0 |
| Each baggage and cargo compartment must have a means to prevent the contents of the compartment from becoming a hazard by impacting occupants or shifting. | 1 |
## Dataset Creation
### Source Data
A total of 325 aerospace requirements were collected from Parts 23 and 25 of Title 14 of the Code of Federal Regulations (CFRs) and annotated (refer to the paper for more details). <br>
### Importing dataset into Python environment
Use the following code chunk to import the dataset into Python environment as a DataFrame.
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/aeroBERT-classification")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)
#Getting the headers from the first row
header = dataset.iloc[0]
#Excluding the first row since it contains the headers
dataset = dataset[1:]
#Assigning the header to the DataFrame
dataset.columns = header
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
### Annotations
#### Annotation process
A Subject Matter Expert (SME) was consulted for deciding on the annotation categories for the requirements.
The final classification dataset had 149 Design requirements, 99 Functional requirements, and 62 Performance requirements.
Lastly, the 'labels' attached to the requirements (design requirement, functional requirement, and performance requirement) were converted into numeric values: 0, 1, and 2 respectively.
### Limitations
(1)The dataset is an imbalanced dataset (more Design requirements as compared to the other types). Hence, using ``Accuracy`` as a metric for the model performance is
NOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation.
(2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.
Please refer to the Appendix of the paper for information on the test set.
### Citation Information
```
@Article{aeroBERT-Classifier,
AUTHOR = {Tikayat Ray, Archana and Cole, Bjorn F. and Pinon Fischer, Olivia J. and White, Ryan T. and Mavris, Dimitri N.},
TITLE = {aeroBERT-Classifier: Classification of Aerospace Requirements Using BERT},
JOURNAL = {Aerospace},
VOLUME = {10},
YEAR = {2023},
NUMBER = {3},
ARTICLE-NUMBER = {279},
URL = {https://www.mdpi.com/2226-4310/10/3/279},
ISSN = {2226-4310},
DOI = {10.3390/aerospace10030279}
}
@phdthesis{tikayatray_thesis,
author = {Tikayat Ray, Archana},
title = {Standardization of Engineering Requirements Using Large Language Models},
school = {Georgia Institute of Technology},
year = {2023},
doi = {10.13140/RG.2.2.17792.40961},
URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
``` | 5,567 | [
[
-0.04571533203125,
-0.02423095703125,
-0.0009684562683105469,
0.0215301513671875,
0.0033931732177734375,
-0.0206298828125,
-0.00606536865234375,
-0.0308837890625,
0.0025157928466796875,
0.039215087890625,
-0.03338623046875,
-0.046630859375,
-0.02447509765625,
... |
Zombely/wikisource-small | 2023-01-15T18:48:01.000Z | [
"region:us"
] | Zombely | null | null | 0 | 5 | 2023-01-15T09:28:13 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 24302805827.009
num_examples: 15549
download_size: 19231095073
dataset_size: 24302805827.009
---
# Dataset Card for "wikisource-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 420 | [
[
-0.050689697265625,
-0.006511688232421875,
0.0189208984375,
-0.0071258544921875,
-0.0184478759765625,
-0.023101806640625,
-0.00675201416015625,
-0.0101776123046875,
0.06689453125,
0.02020263671875,
-0.06842041015625,
-0.036773681640625,
-0.0303955078125,
0.0... |
metaeval/cycic_classification | 2023-05-31T08:47:48.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
] | metaeval | null | null | 1 | 5 | 2023-01-18T11:03:35 | ---
license: apache-2.0
task_categories:
- question-answering
- text-classification
language:
- en
---
https://storage.googleapis.com/ai2-mosaic/public/cycic/CycIC-train-dev.zip
https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
added for
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` | 779 | [
[
-0.024932861328125,
-0.035400390625,
0.02471923828125,
0.01214599609375,
-0.01317596435546875,
-0.0196380615234375,
-0.054168701171875,
-0.034576416015625,
-0.006137847900390625,
0.020538330078125,
-0.0577392578125,
-0.051483154296875,
-0.04296875,
0.0116729... |
lorenzoscottb/PLANE-ood | 2023-01-25T09:51:09.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-2.0",
"region:us"
] | lorenzoscottb | null | null | 0 | 5 | 2023-01-22T21:22:03 | ---
license: cc-by-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---
# PLANE Out-of-Distribution Sets
PLANE (phrase-level adjective-noun entailment) is a benchmark to test models on fine-grained compositional inference.
The current dataset contains five sampled splits, used in the supervised experiments of [Bertolini et al., 22](https://aclanthology.org/2022.coling-1.359/).
## Data Structure
The `dataset` is organised around five `Train/test_split#`, each containing a training and test set of circa 60K and 2K.
### Features
Each entrance has 6 features: `seq, label, Adj_Class, Adj, Nn, Hy`
- `seq`:test sequense
- `label`: ground truth (1:entialment, 0:no-entailment)
- `Adj_Class`: the class of the sequence adjectives
- `Adj`: the adjective of the sequence (I: intersective, S: subsective, O: intensional)
- `N`n: the noun
- `Hy`: the noun's hypericum
Each sample in `seq` can take one of three forms (or inference types, in paper):
- An *Adjective-Noun* is a *Noun* (e.g. A red car is a car)
- An *Adjective-Noun* is a *Hypernym(Noun)* (e.g. A red car is a vehicle)
- An *Adjective-Noun* is a *Adjective-Hypernym(Noun)* (e.g. A red car is a red vehicle)
Please note that, as specified in the paper, the ground truth is automatically assigned based on the linguistic rule that governs the interaction between each adjective class and inference type – see the paper for more detail.
### Trained Model
You can find a tuned BERT-base model (tuned and validated using the 2nd split) [here](https://huggingface.co/lorenzoscottb/bert-base-cased-PLANE-ood-2?text=A+fake+smile+is+a+smile).
### Cite
If you use PLANE for your work, please cite the main COLING 2022 paper.
```
@inproceedings{bertolini-etal-2022-testing,
title = "Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment",
author = "Bertolini, Lorenzo and
Weeds, Julie and
Weir, David",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.359",
pages = "4084--4100",
}
``` | 2,319 | [
[
-0.048583984375,
-0.06793212890625,
0.015777587890625,
0.03094482421875,
-0.0120849609375,
-0.0266571044921875,
-0.0158843994140625,
-0.00740814208984375,
0.0188751220703125,
0.041259765625,
-0.034027099609375,
-0.026947021484375,
-0.03045654296875,
-0.00632... |
juancopi81/yannic_ada_embeddings | 2023-01-24T13:46:03.000Z | [
"region:us"
] | juancopi81 | null | null | 0 | 5 | 2023-01-24T13:45:56 | ---
dataset_info:
features:
- name: TITLE
dtype: string
- name: URL
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: transcription_length
dtype: int64
- name: text
dtype: string
- name: ada_embedding
dtype: string
splits:
- name: train
num_bytes: 127436085
num_examples: 3194
download_size: 81996580
dataset_size: 127436085
---
# Dataset Card for "yannic_ada_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 566 | [
[
-0.027008056640625,
-0.028472900390625,
0.0269012451171875,
0.004955291748046875,
-0.00882720947265625,
-0.0108184814453125,
0.0166168212890625,
-0.00531768798828125,
0.0771484375,
0.0139617919921875,
-0.046905517578125,
-0.07305908203125,
-0.034637451171875,
... |
nglaura/scielo-summarization | 2023-04-11T10:21:45.000Z | [
"task_categories:summarization",
"language:fr",
"license:apache-2.0",
"region:us"
] | nglaura | null | null | 0 | 5 | 2023-01-25T12:02:33 | ---
license: apache-2.0
task_categories:
- summarization
language:
- fr
pretty_name: SciELO
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## SciELO dataset for summarization
SciELO is a dataset for summarization of research papers written in Spanish and Portuguese, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances (ES/PT) |
| ------------- | ----------------------------|
| Train | 20,853 / 19,407 |
| Validation | 1,158 / 1,078 |
| Test | 1,159 / 1,078 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
``` | 1,581 | [
[
-0.01251220703125,
-0.031707763671875,
0.01375579833984375,
0.06536865234375,
-0.0232391357421875,
-0.00328826904296875,
-0.02294921875,
-0.0285491943359375,
0.0511474609375,
0.0338134765625,
-0.0185546875,
-0.06988525390625,
-0.0277252197265625,
0.028411865... |
michelecafagna26/hl | 2023-08-02T11:50:20.000Z | [
"task_categories:image-to-text",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:1405.0312",
"a... | michelecafagna26 | High-level Dataset | @inproceedings{Cafagna2023HLDG,
title={HL Dataset: Grounding High-Level Linguistic Concepts in Vision},
author={Michele Cafagna and Kees van Deemter and Albert Gatt},
year={2023}
} | 4 | 5 | 2023-01-25T16:15:17 | ---
license: apache-2.0
task_categories:
- image-to-text
- question-answering
- zero-shot-classification
language:
- en
multilinguality:
- monolingual
task_ids:
- text-scoring
pretty_name: HL (High-Level Dataset)
size_categories:
- 10K<n<100K
annotations_creators:
- crowdsourced
annotations_origin:
- crowdsourced
dataset_info:
splits:
- name: train
num_examples: 13498
- name: test
num_examples: 1499
---
# Dataset Card for the High-Level Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The High-Level (HL) dataset aligns **object-centric descriptions** from [COCO](https://arxiv.org/pdf/1405.0312.pdf)
with **high-level descriptions** crowdsourced along 3 axes: **_scene_, _action_, _rationale_**
The HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~749984 object-centric captions from COCO.
Each axis is collected by asking the following 3 questions:
1) Where is the picture taken?
2) What is the subject doing?
3) Why is the subject doing it?
**The high-level descriptions capture the human interpretations of the images**. These interpretations contain abstract concepts not directly linked to physical objects.
Each high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which
the high-level description is likely given the corresponding image, question, and caption. The higher the score, the more the high-level caption is close to the commonsense (in a Likert scale from 1-5).
- **🗃️ Repository:** [github.com/michelecafagna26/HL-dataset](https://github.com/michelecafagna26/HL-dataset)
- **📜 Paper:** [HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales](https://arxiv.org/abs/2302.12189?context=cs.CL)
- **🧭 Spaces:** [Dataset explorer](https://huggingface.co/spaces/michelecafagna26/High-Level-Dataset-explorer)
- **🖊️ Contact:** michele.cafagna@um.edu.mt
### Supported Tasks
- image captioning
- visual question answering
- multimodal text-scoring
- zero-shot evaluation
### Languages
English
## Dataset Structure
The dataset is provided with images from COCO and two metadata jsonl files containing the annotations
### Data Instances
An instance looks like this:
```json
{
"file_name": "COCO_train2014_000000138878.jpg",
"captions": {
"scene": [
"in a car",
"the picture is taken in a car",
"in an office."
],
"action": [
"posing for a photo",
"the person is posing for a photo",
"he's sitting in an armchair."
],
"rationale": [
"to have a picture of himself",
"he wants to share it with his friends",
"he's working and took a professional photo."
],
"object": [
"A man sitting in a car while wearing a shirt and tie.",
"A man in a car wearing a dress shirt and tie.",
"a man in glasses is wearing a tie",
"Man sitting in the car seat with button up and tie",
"A man in glasses and a tie is near a window."
]
},
"confidence": {
"scene": [
5,
5,
4
],
"action": [
5,
5,
4
],
"rationale": [
5,
5,
4
]
},
"purity": {
"scene": [
-1.1760284900665283,
-1.0889461040496826,
-1.442818284034729
],
"action": [
-1.0115827322006226,
-0.5917857885360718,
-1.6931917667388916
],
"rationale": [
-1.0546956062316895,
-0.9740906357765198,
-1.2204363346099854
]
},
"diversity": {
"scene": 25.965358893403383,
"action": 32.713305568898775,
"rationale": 2.658757840479801
}
}
```
### Data Fields
- ```file_name```: original COCO filename
- ```captions```: Dict containing all the captions for the image. Each axis can be accessed with the axis name and it contains a list of captions.
- ```confidence```: Dict containing the captions confidence scores. Each axis can be accessed with the axis name and it contains a list of captions. Confidence scores are not provided for the _object_ axis (COCO captions).t
- ```purity score```: Dict containing the captions purity scores. The purity score measures the semantic similarity of the captions within the same axis (Bleurt-based).
- ```diversity score```: Dict containing the captions diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based).
### Data Splits
There are 14997 images and 134973 high-level captions split into:
- Train-val: 13498 images and 121482 high-level captions
- Test: 1499 images and 13491 high-level captions
## Dataset Creation
The dataset has been crowdsourced on Amazon Mechanical Turk.
From the paper:
>We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to
> ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing
> at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease
>the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis.
### Curation Rationale
From the paper:
>In this work, we tackle the issue of **grounding high-level linguistic concepts in the visual modality**, proposing the High-Level (HL) Dataset: a
V\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_.
The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions
>used in current V\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions
>from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects.
### Source Data
- Images: COCO
- object axis annotations: COCO
- scene, action, rationale annotations: crowdsourced
- confidence scores: crowdsourced
- purity score and diversity score: automatically computed
#### Annotation process
From the paper:
>**Pilot:** We run a pilot study with the double goal of collecting feedback and defining the task instructions.
>With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform.
>We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the
>annotation in bulk. The final annotation form is shown in Appendix D.
>***Procedure:*** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_
> i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use
>their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover,
>differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities
>in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported
>in Figure 1. For details regarding the annotation costs see Appendix A.
#### Who are the annotators?
Turkers from Amazon Mechanical Turk
### Personal and Sensitive Information
There is no personal or sensitive information
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
From the paper:
>**Quantitying grammatical errors:** We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators.
> The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error.
>The most common errors reported by the annotators are:
>- Misuse of prepositions
>- Wrong verb conjugation
>- Pronoun omissions
>In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them.
>We observe that 22.5\% of the sample has been edited and only 5\% with a Levenshtein distance greater than 10. This suggests a reasonable
>level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance
>distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement
>(alpha = 0.507, (Krippendorff, 2018) computed over the shared sample.
### Dataset Curators
Michele Cafagna
### Licensing Information
The Images and the object-centric captions follow the [COCO terms of Use](https://cocodataset.org/#termsofuse)
The remaining annotations are licensed under Apache-2.0 license.
### Citation Information
```BibTeX
@inproceedings{cafagna2023hl,
title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and
{R}ationales},
author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
address = {Prague, Czech Republic},
year={2023}
}
```
| 10,695 | [
[
-0.0546875,
-0.051605224609375,
0.00823211669921875,
0.0215606689453125,
-0.0233001708984375,
0.0108184814453125,
-0.01171875,
-0.03717041015625,
0.025634765625,
0.0423583984375,
-0.043609619140625,
-0.06109619140625,
-0.040313720703125,
0.0209503173828125,
... |
liyucheng/UFSAC | 2023-01-26T15:41:19.000Z | [
"task_categories:token-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-2.0",
"region:us"
] | liyucheng | null | null | 0 | 5 | 2023-01-25T22:17:54 | ---
license: cc-by-2.0
task_categories:
- token-classification
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
UFSAC: Unification of Sense Annotated Corpora and Tools
## Dataset Description
- **Homepage:** https://github.com/getalp/UFSAC
- **Repository:** https://github.com/getalp/UFSAC
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
WSD: Word Sense Disambiguation
### Languages
English
## Dataset Structure
### Data Instances
```
{'lemmas': ['_',
'be',
'quite',
'_',
'hefty',
'spade',
'_',
'_',
'bicycle',
'_',
'type',
'handlebar',
'_',
'_',
'spring',
'lever',
'_',
'_',
'rear',
'_',
'_',
'_',
'step',
'on',
'_',
'activate',
'_',
'_'],
'pos_tags': ['PRP',
'VBZ',
'RB',
'DT',
'JJ',
'NN',
',',
'IN',
'NN',
':',
'NN',
'NNS',
'CC',
'DT',
'VBN',
'NN',
'IN',
'DT',
'NN',
',',
'WDT',
'PRP',
'VBP',
'RP',
'TO',
'VB',
'PRP',
'.'],
'sense_keys': ['activate%2:36:00::'],
'target_idx': 25,
'tokens': ['It',
'is',
'quite',
'a',
'hefty',
'spade',
',',
'with',
'bicycle',
'-',
'type',
'handlebars',
'and',
'a',
'sprung',
'lever',
'at',
'the',
'rear',
',',
'which',
'you',
'step',
'on',
'to',
'activate',
'it',
'.']}
```
### Data Fields
```
{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'lemmas': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'pos_tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'target_idx': Value(dtype='int32', id=None),
'sense_keys': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```
### Data Splits
Not split. Use `train` split directly.
| 2,709 | [
[
-0.02960205078125,
-0.0171661376953125,
0.027435302734375,
0.0096588134765625,
-0.0245819091796875,
0.00539398193359375,
-0.01265716552734375,
-0.0182342529296875,
0.041351318359375,
0.0120849609375,
-0.05242919921875,
-0.07379150390625,
-0.046142578125,
0.0... |
metaeval/naturallogic | 2023-01-26T09:51:03.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | metaeval | null | null | 0 | 5 | 2023-01-26T09:49:49 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
https://github.com/feng-yufei/Neural-Natural-Logic
```bib
@inproceedings{feng2020exploring,
title={Exploring End-to-End Differentiable Natural Logic Modeling},
author={Feng, Yufei, Ziou Zheng, and Liu, Quan and Greenspan, Michael and Zhu, Xiaodan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={1172--1185},
year={2020}
}
``` | 469 | [
[
-0.0170745849609375,
-0.042205810546875,
0.0165863037109375,
0.0193023681640625,
-0.006595611572265625,
0.001873016357421875,
-0.0306396484375,
-0.06085205078125,
0.025482177734375,
0.01335906982421875,
-0.051544189453125,
-0.0152435302734375,
-0.0218505859375,
... |
Cohere/miracl-ar-queries-22-12 | 2023-02-06T12:00:30.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 0 | 5 | 2023-01-30T09:57:38 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the orginal datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ar-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking based loss), as well as hit@3: Is at least one relevant document in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is know as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| 6,103 | [
[
-0.04571533203125,
-0.05792236328125,
0.0225830078125,
0.016387939453125,
-0.003787994384765625,
-0.0044097900390625,
-0.0203704833984375,
-0.035400390625,
0.039459228515625,
0.0157012939453125,
-0.03851318359375,
-0.07232666015625,
-0.0511474609375,
0.02359... |
gsdf/EasyNegative | 2023-02-12T14:39:30.000Z | [
"license:other",
"region:us"
] | gsdf | null | null | 1,064 | 5 | 2023-02-01T10:58:06 | ---
license: other
---
# Negative Embedding
This is a Negative Embedding trained with Counterfeit. Please use it in the "\stable-diffusion-webui\embeddings" folder.
It can be used with other models, but the effectiveness is not certain.
# Counterfeit-V2.0.safetensors

# AbyssOrangeMix2_sfw.safetensors

# anything-v4.0-pruned.safetensors
 | 608 | [
[
-0.035400390625,
-0.0537109375,
0.0138702392578125,
0.004913330078125,
-0.039825439453125,
0.0113525390625,
0.0426025390625,
-0.02447509765625,
0.04656982421875,
0.03839111328125,
-0.0438232421875,
-0.04205322265625,
-0.05145263671875,
-0.01132965087890625,
... |
Cohere/miracl-en-corpus-22-12 | 2023-02-06T11:54:52.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 0 | 5 | 2023-02-02T23:21:21 | ---
annotations_creators:
- expert-generated
language:
- en
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12).
For the orginal datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking based loss), as well as hit@3: Is at least one relevant document in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is know as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| 6,103 | [
[
-0.04510498046875,
-0.058013916015625,
0.0231781005859375,
0.0177764892578125,
-0.003940582275390625,
-0.004413604736328125,
-0.0215606689453125,
-0.036468505859375,
0.039398193359375,
0.01617431640625,
-0.03961181640625,
-0.072265625,
-0.05047607421875,
0.0... |
bigcode/jupyter-parsed | 2023-02-21T19:16:28.000Z | [
"region:us"
] | bigcode | null | null | 3 | 5 | 2023-02-03T17:16:23 | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: int64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: int64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: int64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: cells
sequence:
sequence:
sequence: string
- name: cell_types
sequence: string
- name: cell_type_groups
sequence:
sequence: string
splits:
- name: train
num_bytes: 22910808665
num_examples: 1459454
download_size: 9418947545
dataset_size: 22910808665
---
# Dataset Card for "jupyter-parsed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,945 | [
[
-0.027099609375,
-0.03131103515625,
0.0199737548828125,
0.01448822021484375,
-0.009307861328125,
-0.003337860107421875,
-0.0020904541015625,
-0.00258636474609375,
0.0526123046875,
0.02911376953125,
-0.037689208984375,
-0.0509033203125,
-0.043487548828125,
-0... |
metaeval/lonli | 2023-05-31T08:41:36.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
] | metaeval | null | null | 0 | 5 | 2023-02-04T14:48:11 | ---
license: mit
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/microsoft/LoNLI
```bibtex
@article{Tarunesh2021TrustingRO,
title={Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task},
author={Ishan Tarunesh and Somak Aditya and Monojit Choudhury},
journal={ArXiv},
year={2021},
volume={abs/2107.07229}
}
``` | 425 | [
[
-0.014984130859375,
-0.0291748046875,
0.0435791015625,
0.012420654296875,
-0.005207061767578125,
-0.006031036376953125,
-0.0155029296875,
-0.074951171875,
0.02178955078125,
0.036529541015625,
-0.039794921875,
-0.0230865478515625,
-0.048583984375,
-0.00032091... |
metaeval/nli-veridicality-transitivity | 2023-02-04T18:10:09.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:cc",
"region:us"
] | metaeval | null | null | 0 | 5 | 2023-02-04T18:04:01 | ---
license: cc
task_categories:
- text-classification
language:
- en
task_ids:
- natural-language-inference
---
```bib
@inproceedings{yanaka-etal-2021-exploring,
title = "Exploring Transitivity in Neural {NLI} Models through Veridicality",
author = "Yanaka, Hitomi and
Mineshima, Koji and
Inui, Kentaro",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
year = "2021",
pages = "920--934",
}
``` | 518 | [
[
-0.0078277587890625,
-0.0421142578125,
0.03802490234375,
0.0089111328125,
-0.0040435791015625,
0.004627227783203125,
-0.0054473876953125,
-0.046783447265625,
0.0611572265625,
0.038726806640625,
-0.049530029296875,
-0.0173492431640625,
-0.034912109375,
0.0226... |
metaeval/help-nli | 2023-05-31T08:57:01.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:cc",
"region:us"
] | metaeval | null | null | 0 | 5 | 2023-02-04T18:07:35 | ---
license: cc
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/verypluming/HELP
```bib
@InProceedings{yanaka-EtAl:2019:starsem,
author = {Yanaka, Hitomi and Mineshima, Koji and Bekki, Daisuke and Inui, Kentaro and Sekine, Satoshi and Abzianidze, Lasha and Bos, Johan},
title = {HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning},
booktitle = {Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM2019)},
year = {2019},
}
``` | 587 | [
[
-0.03887939453125,
-0.0297698974609375,
0.0504150390625,
0.0125732421875,
-0.019012451171875,
-0.017120361328125,
-0.019073486328125,
-0.034423828125,
0.0506591796875,
0.02154541015625,
-0.059967041015625,
-0.03497314453125,
-0.0238189697265625,
0.0130615234... |
jrahn/yolochess_lichess-elite_2211 | 2023-02-08T07:19:54.000Z | [
"task_categories:text-classification",
"task_categories:reinforcement-learning",
"size_categories:10M<n<100M",
"license:cc",
"chess",
"region:us"
] | jrahn | null | null | 3 | 5 | 2023-02-05T20:51:21 | ---
dataset_info:
features:
- name: fen
dtype: string
- name: move
dtype: string
- name: result
dtype: string
- name: eco
dtype: string
splits:
- name: train
num_bytes: 1794337922
num_examples: 22116598
download_size: 1044871571
dataset_size: 1794337922
task_categories:
- text-classification
- reinforcement-learning
license: cc
tags:
- chess
size_categories:
- 10M<n<100M
---
# Dataset Card for "yolochess_lichess-elite_2211"
Source: https://database.nikonoel.fr/ - filtered from https://database.lichess.org for November 2022
Features:
- fen = Chess board position in [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format
- move = Move played by a strong human player in this position
- result = Final result of the match
- eco = [ECO](https://en.wikipedia.org/wiki/Encyclopaedia_of_Chess_Openings)-code of the Opening played
Samples: 22.1 million | 920 | [
[
-0.021636962890625,
-0.02630615234375,
0.0125274658203125,
-0.0023326873779296875,
-0.0210723876953125,
-0.0114288330078125,
-0.00641632080078125,
-0.026275634765625,
0.04608154296875,
0.043182373046875,
-0.061004638671875,
-0.06011962890625,
-0.02606201171875,
... |
neuclir/neumarco | 2023-02-06T16:16:37.000Z | [
"task_categories:text-retrieval",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|irds/msmarco-passage",
"language:fa",
"language:ru",
"language:zh",
"region:us"
] | neuclir | null | null | 1 | 5 | 2023-02-06T15:19:57 | ---
annotations_creators:
- machine-generated
language:
- fa
- ru
- zh
language_creators:
- machine-generated
multilinguality:
- multilingual
pretty_name: NeuMARCO
size_categories:
- 1M<n<10M
source_datasets:
- extended|irds/msmarco-passage
tags: []
task_categories:
- text-retrieval
---
# Dataset Card for NeuMARCO
## Dataset Description
- **Website:** https://neuclir.github.io/
### Dataset Summary
This is the dataset created for TREC 2022 NeuCLIR Track. The collection consists of documents from [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) translated into
Chinese, Persian, and Russian.
### Languages
- Chinese
- Persian
- Russian
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 8.8M |
| `rus` (Russian) | 8.8M |
| `zho` (Chinese) | 8.8M |
### Data Fields
- `doc_id`: unique identifier for this document
- `text`: translated passage text
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/neumarco')
dataset['fas'] # Persian passages
dataset['rus'] # Russian passages
dataset['zho'] # Chinese passages
```
| 1,198 | [
[
-0.01477813720703125,
-0.00965118408203125,
0.00959014892578125,
0.01253509521484375,
-0.032318115234375,
0.0126495361328125,
-0.017822265625,
-0.0162200927734375,
0.022369384765625,
0.03912353515625,
-0.04541015625,
-0.0654296875,
-0.0244293212890625,
0.023... |
shahules786/OA-cornell-movies-dialog | 2023-02-10T05:34:43.000Z | [
"region:us"
] | shahules786 | null | null | 3 | 5 | 2023-02-07T15:21:28 | ---
dataset_info:
features:
- name: conversation
dtype: string
splits:
- name: train
num_bytes: 9476338
num_examples: 20959
download_size: 4859997
dataset_size: 9476338
---
# Dataset Card for Open Assistant Cornell Movies Dialog
## Dataset Summary
The dataset was created using [Cornell Movies Dialog Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) which contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts.
Dialogs and meta-data from the underlying Corpus were used to design a dataset that can be used to InstructGPT based models to learn movie scripts.
Example :
```
User: Assume RICK and ALICE are characters from a fantasy-horror movie, continue the conversation between them
RICK: I heard you screaming. Was it a bad one?
ALICE: It was bad.
RICK: Doesn't the dream master work for you anymore?
Assistant: Sure
ALICE: I can't find him.
RICK: Hey, since when do you play Thomas Edison? This looks like Sheila's.
ALICE: It is...was. It's a zapper, it might help me stay awake.
RICK: Yeah, or turn you into toast.
```
## Citations
```
@InProceedings{Danescu-Niculescu-Mizil+Lee:11a,
author={Cristian Danescu-Niculescu-Mizil and Lillian Lee},
title={Chameleons in imagined conversations:
A new approach to understanding coordination of linguistic style in dialogs.},
booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011},
year={2011}
}
``` | 1,542 | [
[
-0.024139404296875,
-0.073974609375,
0.019287109375,
-0.0248870849609375,
-0.006683349609375,
0.005950927734375,
-0.03173828125,
-0.00737762451171875,
0.022918701171875,
0.037994384765625,
-0.04144287109375,
-0.038055419921875,
-0.01220703125,
0.012779235839... |
kasnerz/cacapo | 2023-03-14T15:09:56.000Z | [
"region:us"
] | kasnerz | null | null | 0 | 5 | 2023-02-08T08:38:35 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
kasnerz/eventnarrative | 2023-03-14T15:07:58.000Z | [
"region:us"
] | kasnerz | null | null | 0 | 5 | 2023-02-08T09:06:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cahya/instructions_indonesian | 2023-02-09T17:03:53.000Z | [
"license:mit",
"region:us"
] | cahya | null | null | 0 | 5 | 2023-02-09T16:34:47 | ---
license: mit
---
# Indonesian Instructions Dataset
| 57 | [
[
0.0016126632690429688,
-0.037200927734375,
-0.01113128662109375,
0.04937744140625,
-0.039703369140625,
-0.0261383056640625,
-0.009368896484375,
0.022857666015625,
0.01099395751953125,
0.09747314453125,
-0.057830810546875,
-0.0533447265625,
-0.047576904296875,
... |
IlyaGusev/habr | 2023-03-09T23:16:35.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"language:en",
"region:us"
] | IlyaGusev | null | null | 13 | 5 | 2023-02-10T20:36:09 | ---
dataset_info:
features:
- name: id
dtype: uint32
- name: language
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: author
dtype: string
- name: original_author
dtype: string
- name: original_url
dtype: string
- name: lead_html
dtype: string
- name: lead_markdown
dtype: string
- name: type
dtype: string
- name: time_published
dtype: uint64
- name: statistics
struct:
- name: commentsCount
dtype: uint32
- name: favoritesCount
dtype: uint32
- name: readingCount
dtype: uint32
- name: score
dtype: int32
- name: votesCount
dtype: int32
- name: votesCountPlus
dtype: int32
- name: votesCountMinus
dtype: int32
- name: labels
sequence: string
- name: hubs
sequence: string
- name: flows
sequence: string
- name: tags
sequence: string
- name: reading_time
dtype: uint32
- name: format
dtype: string
- name: complexity
dtype: string
- name: comments
sequence:
- name: id
dtype: uint64
- name: parent_id
dtype: uint64
- name: level
dtype: uint32
- name: time_published
dtype: uint64
- name: score
dtype: int32
- name: votes
dtype: uint32
- name: message_html
dtype: string
- name: message_markdown
dtype: string
- name: author
dtype: string
- name: children
sequence: uint64
splits:
- name: train
num_bytes: 19968161329
num_examples: 302049
download_size: 3485570346
dataset_size: 19968161329
task_categories:
- text-generation
language:
- ru
- en
size_categories:
- 100K<n<1M
---
# Habr dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of posts and comments from [habr.com](https://habr.com/ru/all/), a Russian collaborative blog about IT, computer science and anything related to the Internet.
**Script:** [create_habr.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Russian, English, some programming code.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/habr', split="train", streaming=True)
for example in dataset:
print(example["text_markdown"])
```
## Data Instances
```
{
"id": 12730,
"language": "ru",
"url": "https://habr.com/ru/post/12730/",
"text_markdown": "...",
"text_html": "...",
"lead_markdown": "...",
"lead_html": "...",
"type": "article",
"labels": [],
"original_author": null,
"original_url": null,
"time_published": 1185962380,
"author": "...",
"title": "Хочешь в университет — сделай презентацию",
"statistics": {
"commentsCount": 23,
"favoritesCount": 1,
"readingCount": 1542,
"score": 7,
"votesCount": 15,
"votesCountPlus": 11,
"votesCountMinus": 4
},
"hubs": [
"itcompanies"
],
"flows": [
"popsci"
],
"tags": [
"PowerPoint",
"презентация",
"абитуриенты",
],
"reading_time": 1,
"format": null,
"complexity": null,
"comments": {
"id": [11653537, 11653541],
"parent_id": [null, 11653537],
"level": [0, 1],
"time_published": [1185963192, 1185967886],
"score": [-1, 0],
"votes": [1, 0],
"message_html": ["...", "..."],
"author": ["...", "..."],
"children": [[11653541], []]
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
The original JSONL is already unflattened.
## Source Data
* The data source is the [Habr](https://habr.com/) website.
* API call example: [post 709430](https://habr.com/kek/v2/articles/709430).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
| 4,745 | [
[
-0.0280914306640625,
-0.04522705078125,
0.0048828125,
0.0224151611328125,
-0.019744873046875,
0.005825042724609375,
-0.0204315185546875,
0.00269317626953125,
0.024688720703125,
0.0244293212890625,
-0.0330810546875,
-0.057525634765625,
-0.0221710205078125,
0.... |
DReAMy-lib/DreamBank-dreams-en | 2023-02-13T22:51:35.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | DReAMy-lib | null | null | 0 | 5 | 2023-02-13T22:20:25 | ---
dataset_info:
features:
- name: series
dtype: string
- name: description
dtype: string
- name: dreams
dtype: string
- name: gender
dtype: string
- name: year
dtype: string
splits:
- name: train
num_bytes: 21526822
num_examples: 22415
download_size: 11984242
dataset_size: 21526822
license: apache-2.0
language:
- en
size_categories:
- 10K<n<100K
---
# DreamBank - Dreams
The dataset is a collection of ~20 k textual reports of dreams, originally scraped from the [DreamBank](https://www.dreambank.net/) databased by
[`mattbierner`](https://github.com/mattbierner/DreamScrape). The DreamBank reports are divided into `series`,
which are collections of individuals or research projects/groups that have gathered the dreams.
## Content
The dataset revolves around three main features:
- `dreams`: the content of each dream report.
- `series`: the series to which a report belongs
- `description`: a brief description of the `series`
- `gender`: the gender of the individual(s) in the `series`
- `year`: the time window of the recordings
## Series distribution
The following is a summary of (alphabetically ordered) DreamBank's series together with their total amount of dream reports.
- alta: 422
- angie: 48
- arlie: 212
- b: 3114
- b-baseline: 250
- b2: 1138
- bay_area_girls_456: 234
- bay_area_girls_789: 154
- bea1: 223
- bea2: 63
- blind-f: 238
- blind-m: 143
- bosnak: 53
- chris: 100
- chuck: 75
- dahlia: 24
- david: 166
- dorothea: 899
- ed: 143
- edna: 19
- elizabeth: 1707
- emma: 1221
- emmas_husband: 72
- esther: 110
- hall_female: 681
- jasmine1: 39
- jasmine2: 269
- jasmine3: 259
- jasmine4: 94
- jeff: 87
- joan: 42
- kenneth: 2021
- lawrence: 206
- mack: 38
- madeline1-hs: 98
- madeline2-dorms: 186
- madeline3-offcampus: 348
- madeline4-postgrad: 294
- mark: 23
- melissa: 89
- melora: 211
- melvin: 128
- merri: 315
- miami-home: 171
- miami-lab: 274
- midwest_teens-f: 111
- midwest_teens-m: 83
- nancy: 44
- natural_scientist: 234
- norman: 1235
- norms-f: 490
- norms-m: 491
- pegasus: 1093
- peru-f: 381
- peru-m: 384
- phil1: 106
- phil2: 220
- phil3: 180
- physiologist: 86
- ringo: 16
- samantha: 63
- seventh_graders: 69
- toby: 33
- tom: 27
- ucsc_women: 81
- vickie: 35
- vietnam_vet: 98
- wedding: 65
- west_coast_teens: 89
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,448 | [
[
-0.03240966796875,
-0.0246429443359375,
0.0198516845703125,
-0.0116424560546875,
-0.01337432861328125,
0.022308349609375,
0.0023403167724609375,
-0.031402587890625,
0.041839599609375,
0.062103271484375,
-0.060333251953125,
-0.070556640625,
-0.04803466796875,
... |
RicardoRei/wmt-sqm-human-evaluation | 2023-02-17T11:10:39.000Z | [
"size_categories:1M<n<10M",
"language:cs",
"language:de",
"language:en",
"language:hr",
"language:ja",
"language:liv",
"language:ru",
"language:sah",
"language:uk",
"language:zh",
"license:apache-2.0",
"mt-evaluation",
"WMT",
"12-lang-pairs",
"region:us"
] | RicardoRei | null | null | 0 | 5 | 2023-02-17T10:42:46 | ---
license: apache-2.0
size_categories:
- 1M<n<10M
language:
- cs
- de
- en
- hr
- ja
- liv
- ru
- sah
- uk
- zh
tags:
- mt-evaluation
- WMT
- 12-lang-pairs
---
# Dataset Summary
In 2022, several changes were made to the annotation procedure used in the WMT Translation task. In contrast to the standard DA (sliding scale from 0-100) used in previous years, in 2022 annotators performed DA+SQM (Direct Assessment + Scalar Quality Metric). In DA+SQM, the annotators still provide a raw score between 0 and 100, but also are presented with seven labeled tick marks. DA+SQM helps to stabilize scores across annotators (as compared to DA).
The data is organised into 8 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: direct assessment
- system: MT engine that produced the `mt`
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data [here](https://www.statmt.org/wmt22/results.html)
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-sqm-human-evaluation", split="train")
```
There is no standard train/test split for this dataset but you can easily split it according to year, language pair or domain. E.g. :
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "news")
```
Note that, so far, all data is from [2022 General Translation task](https://www.statmt.org/wmt22/translation-task.html)
## Citation Information
If you use this data please cite the WMT findings:
- [Findings of the 2022 Conference on Machine Translation (WMT22)](https://aclanthology.org/2022.wmt-1.1.pdf)
| 1,895 | [
[
-0.043792724609375,
-0.04522705078125,
0.0259857177734375,
0.01096343994140625,
-0.0322265625,
-0.0158843994140625,
-0.0150909423828125,
-0.03558349609375,
0.0182952880859375,
0.03521728515625,
-0.040740966796875,
-0.039886474609375,
-0.05316162109375,
0.034... |
jonathan-roberts1/Ships-In-Satellite-Imagery | 2023-03-31T14:38:12.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | jonathan-roberts1 | null | null | 2 | 5 | 2023-02-17T16:48:59 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': an entire ship
'1': no ship or part of a ship
splits:
- name: train
num_bytes: 41806886
num_examples: 4000
download_size: 0
dataset_size: 41806886
license: cc-by-sa-4.0
---
# Dataset Card for "Ships-In-Satellite-Imagery"
## Dataset Description
- **Paper:** [Ships in Satellite Imagery](https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery)
### Licensing Information
CC BY-SA 4.0
## Citation Information
[Ships in Satellite Imagery](https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery)
```
@misc{kaggle_sisi,
author = {Hammell, Robert},
title = {Ships in Satellite Imagery},
howpublished = {\url{https://www.kaggle.com/datasets/rhammell/ships-in-satellite-imagery}},
year = {2018},
version = {9.0}
}
``` | 915 | [
[
-0.031402587890625,
-0.0194091796875,
0.037139892578125,
0.021942138671875,
-0.057037353515625,
-0.0015020370483398438,
0.0159912109375,
-0.03289794921875,
0.0208282470703125,
0.05267333984375,
-0.05059814453125,
-0.0679931640625,
-0.0340576171875,
-0.011146... |
recmeapp/mobilerec | 2023-02-21T17:06:16.000Z | [
"region:us"
] | recmeapp | null | null | 3 | 5 | 2023-02-20T02:40:55 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- https://github.com/mhmaqbool/mobilerec
- **Repository:**
- https://github.com/mhmaqbool/mobilerec
- **Paper:**
- MobileRec: A Large-Scale Dataset for Mobile Apps Recommendation
- **Point of Contact:**
- M.H. Maqbool (hasan.khowaja@gmail.com)
- Abubakar Siddique (abubakar.ucr@gmail.com)
### Dataset Summary
MobileRec is a large-scale app recommendation dataset. There are 19.3 million user\item interactions. This is a 5-core dataset.
User\item interactions are sorted in ascending chronological order. There are 0.7 million users who have had at least five distinct interactions.
There are 10173 apps in total.
### Supported Tasks and Leaderboards
Sequential Recommendation
### Languages
English
## How to use the dataset?
```
from datasets import load_dataset
import pandas as pd
# load the dataset and meta_data
mbr_data = load_dataset('recmeapp/mobilerec', data_dir='interactions')
mbr_meta = load_dataset('recmeapp/mobilerec', data_dir='app_meta')
# Save dataset to .csv file for creating pandas dataframe
mbr_data['train'].to_csv('./mbr_data.csv')
# Convert to pandas dataframe
mobilerec_df = pd.read_csv('./mbr_data.csv')
# How many interactions are there in the MobileRec dataset?
print(f'There are {len(mobilerec_df)} interactions in mobilerec dataset.')
# How many unique app_packages (apps or items) are there?
print(f'There are {len(mobilerec_df["app_package"].unique())} unique apps in mobilerec dataset.')
# How many unique users are there in the mobilerec dataset?
print(f'There are {len(mobilerec_df["uid"].unique())} unique users in mobilerec dataset.')
# How many categoris are there?
print(f'There are {len(mobilerec_df["app_category"].unique())} unique categories in mobilerec dataset.')
```
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 3,049 | [
[
-0.03265380859375,
-0.0308685302734375,
-0.00377655029296875,
0.0294036865234375,
-0.001773834228515625,
-0.01042938232421875,
-0.00135040283203125,
-0.01322174072265625,
0.034423828125,
0.04168701171875,
-0.0516357421875,
-0.0560302734375,
-0.01548004150390625,... |
Abirami/tamilwikipediadataset | 2023-02-22T12:42:51.000Z | [
"region:us"
] | Abirami | null | null | 2 | 5 | 2023-02-22T11:50:04 | annotations_creators:
- found
language:
- Tamil
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: tamilwikipediadataset
size_categories:
- 100K<n<1M
source_datasets: []
tags: []
task_categories:
- summarization
task_ids: [] | 257 | [
[
-0.02593994140625,
-0.0139312744140625,
0.0030689239501953125,
0.04833984375,
-0.05853271484375,
0.01027679443359375,
-0.016754150390625,
-0.00690460205078125,
0.047027587890625,
0.04339599609375,
-0.03289794921875,
-0.03289794921875,
-0.055633544921875,
0.0... |
sh0416/ag_news | 2023-02-23T07:22:32.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | sh0416 | null | null | 0 | 5 | 2023-02-23T07:13:31 | ---
task_categories:
- text-classification
language:
- en
---
AG's News Topic Classification Dataset
Version 3, Updated 09/09/2015
ORIGIN
AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic comunity for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
DESCRIPTION
The AG's news topic classification dataset is constructed by choosing 4 largest classes from the original corpus. Each class contains 30,000 training samples and 1,900 testing samples. The total number of training samples is 120,000 and testing 7,600.
The file classes.txt contains a list of classes corresponding to each label.
The files train.csv and test.csv contain all the training samples as comma-sparated values. There are 3 columns in them, corresponding to class index (1 to 4), title and description. The title and description are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed with an "n" character, that is "\n".
CLASS NAME INFORMATION
1: World
2: Sports
3: Business
4: Sci/Tech
JSONL FORMAT
Instead of preserving csv format, I change the format to jsonl, which doesn't consider complicated rule about doublequote and escaping.
| 2,078 | [
[
-0.0270843505859375,
-0.057708740234375,
0.01418304443359375,
0.0012197494506835938,
-0.0161590576171875,
0.020263671875,
-0.01512908935546875,
-0.02587890625,
0.009674072265625,
0.0218353271484375,
-0.024444580078125,
-0.056304931640625,
-0.0662841796875,
0... |
wwydmanski/wisconsin-breast-cancer | 2023-02-23T19:11:33.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"tabular",
"breast-cancer",
"region:us"
] | wwydmanski | null | null | 1 | 5 | 2023-02-23T16:54:47 | ---
task_categories:
- tabular-classification
tags:
- tabular
- breast-cancer
pretty_name: WisconsinBreastCancerDiagnostic
size_categories:
- n<1K
---
## Source:
Copied from the [original dataset](https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(diagnostic))
### Creators:
1. Dr. William H. Wolberg, General Surgery Dept.
University of Wisconsin, Clinical Sciences Center
Madison, WI 53792
wolberg '@' eagle.surgery.wisc.edu
2. W. Nick Street, Computer Sciences Dept.
University of Wisconsin, 1210 West Dayton St., Madison, WI 53706
street '@' cs.wisc.edu 608-262-6619
3. Olvi L. Mangasarian, Computer Sciences Dept.
University of Wisconsin, 1210 West Dayton St., Madison, WI 53706
olvi '@' cs.wisc.edu
### Donor:
Nick Street
## Data Set Information:
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. A few of the images can be found at [Web Link]
Separating plane described above was obtained using Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree Construction Via Linear Programming." Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97-101, 1992], a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes.
The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
### Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)
Ten real-valued features are computed for each cell nucleus:
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1) | 2,434 | [
[
-0.039215087890625,
-0.0498046875,
0.051055908203125,
0.02105712890625,
-0.01861572265625,
-0.0133209228515625,
0.032745361328125,
-0.012298583984375,
0.0151519775390625,
0.0345458984375,
-0.047088623046875,
-0.0650634765625,
-0.035552978515625,
-0.005519866... |
wwydmanski/UNSW-NB15 | 2023-02-26T11:14:46.000Z | [
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"tabular",
"network",
"region:us"
] | wwydmanski | null | null | 1 | 5 | 2023-02-26T11:07:57 | ---
task_categories:
- tabular-classification
tags:
- tabular
- network
size_categories:
- 1M<n<10M
---
## Source
https://www.kaggle.com/datasets/dhoogla/unswnb15?resource=download
## Dataset
This is an academic intrusion detection dataset. All the credit goes to the original authors: dr. Nour Moustafa and dr. Jill Slay.
Please cite their original paper and all other appropriate articles listed on the UNSW-NB15 page.
The full dataset also offers the pcap, BRO and Argus files along with additional documentation.
The modifications to the predesignated train-test sets are minimal and designed to decrease disk storage and increase performance & reliability.
Exploratory Data Analysis (EDA) through classification with very simple models to .877 AUROC. | 763 | [
[
-0.0270843505859375,
-0.04620361328125,
0.01322174072265625,
0.005077362060546875,
-0.006610870361328125,
0.0010204315185546875,
0.02252197265625,
-0.031707763671875,
0.018707275390625,
0.04736328125,
-0.04437255859375,
-0.037109375,
-0.022705078125,
0.01455... |
pszemraj/SQuALITY-v1.3 | 2023-02-27T08:42:44.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"summarization",
"long-document",
"arxiv:2205.11465",
"region:us"
] | pszemraj | null | null | 0 | 5 | 2023-02-27T08:25:50 | ---
license: apache-2.0
language:
- en
task_categories:
- summarization
- text2text-generation
tags:
- summarization
- long-document
pretty_name: SQuALITY v1.3
size_categories:
- n<1K
---
# SQuALITY - v1.3
> Original paper [here](https://arxiv.org/abs/2205.11465)
This is v1.3, the 'text' edition `.jsonl` files. See description from the [original repo](https://github.com/nyu-mll/SQuALITY):
> v1.3 fixes some bugs in v1.2. In v1.2, 10 out of 127 articles (each ~5k-word-long) are missing a few hundreds words each, so summaries may not be fully contained in the article. To fix this issue, we have updated the 10 articles.
## contents
> again, this is taken from the repo
Each data file ({train/dev/test}.jsonl) is formatted as a JSON lines file. Each row in the data file is a JSON dictionary with the following fields:
- metadata: the Gutenberg story ID, an internal UID, and the Project Gutenberg license
- document: the Gutenberg story
questions: a list of questions and accompanying responses
- question text
- question number: the order in which that question was answered by the writers
- responses: list of worker's response, where each response is a dictionary containing the (anonymized) worker ID, an internal UID, and their response to the question
### dataset contents
```python
DatasetDict({
train: Dataset({
features: ['metadata', 'document', 'questions'],
num_rows: 50
})
test: Dataset({
features: ['metadata', 'document', 'questions'],
num_rows: 52
})
validation: Dataset({
features: ['metadata', 'document', 'questions'],
num_rows: 25
})
})
```
| 1,658 | [
[
-0.0255889892578125,
-0.050018310546875,
0.027435302734375,
0.038818359375,
-0.004314422607421875,
-0.0206298828125,
-0.0114593505859375,
-0.0413818359375,
0.00586700439453125,
0.0281219482421875,
-0.038330078125,
-0.02777099609375,
-0.0289154052734375,
0.03... |
ronig/protein_binding_sequences | 2023-06-24T10:04:53.000Z | [
"license:mit",
"region:us"
] | ronig | null | null | 2 | 5 | 2023-03-02T20:23:37 | ---
license: mit
pretty_name: Sequence Based Protein - Peptide Binding Dataset
---
# Sequence Based Protein - Peptide Binding Dataset
- Data sources:
- [Huang Laboratory](http://huanglab.phys.hust.edu.cn)
- [Propedia](http://bioinfo.dcc.ufmg.br/propedia/)
- [YAPP-Cd](https://www.biorxiv.org/content/10.1101/2021.06.16.448765v1)
- Dataset size: 16,370 sets of Protein-Peptide sequences that bind, the protein sequence
contains only the relevant chain.
- Train / Val split: the dataset is split to 80% train 10% val and 10% test.
| 545 | [
[
-0.02490234375,
-0.0391845703125,
0.0020999908447265625,
-0.0002491474151611328,
-0.006137847900390625,
-0.019287109375,
0.03155517578125,
-0.013458251953125,
0.03271484375,
0.0224609375,
-0.0498046875,
-0.0245819091796875,
-0.0277862548828125,
0.02792358398... |
HuggingFaceH4/instruct_me | 2023-03-06T08:36:03.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"human-feedback",
"instruct",
"reward-modeling",
"region:us"
] | HuggingFaceH4 | Instruct Me is a dataset of instruction-like dialogues between a human user and AI assistant. The prompts are derived from (prompt, completion) pairs in the Helpful Instructions dataset. The goal is to train a language model to that is "chatty" and can answer the kind of questions or tasks a human user might instruct an AI assistant to perform. | """
_DESCRIPTION = | 14 | 5 | 2023-03-03T13:43:15 | ---
license: apache-2.0
dataset_info:
- config_name: instruction_tuning
features:
- name: text
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 29975565
num_examples: 41685
- name: test
num_bytes: 3298059
num_examples: 4632
download_size: 18425612
dataset_size: 33273624
- config_name: reward_modelling
features:
- name: text
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 25274204
num_examples: 41685
- name: test
num_bytes: 2777314
num_examples: 4632
download_size: 15636566
dataset_size: 28051518
- config_name: ppo
features:
- name: prompt
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 50787070
num_examples: 83371
- name: test
num_bytes: 5715727
num_examples: 9264
download_size: 31461165
dataset_size: 56502797
- config_name: reward_modeling
features:
- name: prompt
dtype: string
- name: meta
struct:
- name: source
dtype: string
- name: config
dtype: string
splits:
- name: train
num_bytes: 25274204
num_examples: 41685
- name: test
num_bytes: 2777314
num_examples: 4632
download_size: 15636838
dataset_size: 28051518
task_categories:
- conversational
- text-generation
language:
- en
tags:
- human-feedback
- instruct
- reward-modeling
pretty_name: Instruct Me
---
# Dataset card for Instruct Me
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset summary
Instruct Me is a dataset of prompts and instruction dialogues between a human user and AI assistant. The prompts are derived from (prompt, completion) pairs in the [Helpful Instructions dataset](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions). The goal is to train a language model to that is "chatty" and can answer the kind of questions or tasks a human user might instruct an AI assistant to perform.
### Supported Tasks and Leaderboard
We provide 3 configs that can be used for training RLHF models:
#### instruction_tuning
Single-turn user/bot dialogues for instruction tuning.
#### reward_modeling
Prompts to generate model completions and collect human preference data
#### ppo
Prompts to generate model completions for optimization of the instruction-tuned model with techniques like PPO.
### Changelog
* March 6, 2023: `v1.1.0` release. Changed the `text` columns for the `reward_modeling` and `ppo` configs to `prompt` for consistency with our dataset schemas elsewhere.
* March 5, 2023: `v1.0.0` release. | 2,879 | [
[
-0.0238037109375,
-0.05029296875,
0.0059967041015625,
0.0016040802001953125,
-0.00421905517578125,
-0.006374359130859375,
-0.003932952880859375,
-0.01160430908203125,
0.0037860870361328125,
0.0531005859375,
-0.07421875,
-0.050079345703125,
-0.0187835693359375,
... |
shahules786/prosocial-nsfw-reddit | 2023-03-04T21:53:31.000Z | [
"region:us"
] | shahules786 | null | null | 2 | 5 | 2023-03-04T21:53:19 | ---
dataset_info:
features:
- name: user
dtype: string
- name: subreddit
dtype: string
- name: post_id
dtype: string
- name: link_flair_text
dtype: string
- name: over_18
dtype: bool
- name: is_question
dtype: bool
- name: rots
sequence: string
- name: safety_label
dtype: string
splits:
- name: train
num_bytes: 167028.87151967647
num_examples: 1502
download_size: 101362
dataset_size: 167028.87151967647
---
# Dataset Card for "prosocial-nsfw-reddit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 651 | [
[
-0.041839599609375,
-0.02923583984375,
0.01520538330078125,
0.04119873046875,
-0.0263519287109375,
-0.01059722900390625,
0.01806640625,
-0.0139007568359375,
0.062042236328125,
0.0279998779296875,
-0.0706787109375,
-0.0506591796875,
-0.045989990234375,
0.0013... |
venetis/symptom_text_to_disease_mk3 | 2023-03-09T13:24:21.000Z | [
"region:us"
] | venetis | null | null | 0 | 5 | 2023-03-09T13:24:15 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': emotional pain
'1': hair falling out
'2': heart hurts
'3': infected wound
'4': foot ache
'5': shoulder pain
'6': injury from sports
'7': skin issue
'8': stomach ache
'9': knee pain
'10': joint pain
'11': hard to breath
'12': head ache
'13': body feels weak
'14': feeling dizzy
'15': back pain
'16': open wound
'17': internal pain
'18': blurry vision
'19': acne
'20': muscle pain
'21': neck pain
'22': cough
'23': ear ache
'24': feeling cold
splits:
- name: train
num_bytes: 330494.3762197868
num_examples: 5328
- name: test
num_bytes: 41373.82675273983
num_examples: 667
- name: valid
num_bytes: 41311.79702747335
num_examples: 666
download_size: 146385
dataset_size: 413180.0
---
# Dataset Card for "symptom_text_to_disease_mk3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,278 | [
[
-0.004978179931640625,
-0.007747650146484375,
0.040679931640625,
0.02056884765625,
-0.01849365234375,
-0.01287841796875,
0.0294952392578125,
-0.00904083251953125,
0.061279296875,
0.034881591796875,
-0.0484619140625,
-0.07794189453125,
-0.0673828125,
0.005039... |
shahules786/prosocial-confessions | 2023-03-16T04:21:02.000Z | [
"region:us"
] | shahules786 | null | null | 1 | 5 | 2023-03-11T07:00:49 | ---
dataset_info:
features:
- name: context
dtype: string
- name: rots
sequence: string
- name: source
dtype: string
- name: episode_done
dtype: bool
- name: confidence
dtype: float32
- name: safety_label
dtype: string
- name: response
dtype: 'null'
splits:
- name: train
num_bytes: 4022983
num_examples: 14805
download_size: 1686525
dataset_size: 4022983
---
## Prosocial Confessions
A confessions datasets in prosocial-dialog data format. This can be used for red-teaming language models.
- uses data from
[one million reddit confessions](https://www.kaggle.com/datasets/pavellexyr/one-million-reddit-confessions)
and [prosocial-dialog]() to synthesize samples to help train safety models.
- A [classifier](https://huggingface.co/shahules786/prosocial-classifier)
trained on prosocial dialog dataset is used for pseudo labeling.
## Example
```
{
'context': 'I lie to my friends to get out of going to bars/restaurants',
'rots': ["You shouldn't lie to your friends",
'You should not lie to your friends'],
'permalink': 'https://old.reddit.com/r/confession/comments/phgi8h/i_lie_to_my_friends_to_get_out_of_going_to/',
'episone_done': True,
'confidence': 0.87353515625,
'safety_label': '__needs_caution__',
'response': None
}
```
* context : user prompt
* rots : Rules of thumb
* permalink : reddit post link
* confidence : probability of safety label
* safety label
* response : none
## Citations
```
@inproceedings{
kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` | 1,787 | [
[
-0.00829315185546875,
-0.04974365234375,
0.04058837890625,
0.024017333984375,
-0.001708984375,
-0.005706787109375,
0.007232666015625,
-0.03131103515625,
0.0194549560546875,
0.0273590087890625,
-0.062469482421875,
-0.054840087890625,
-0.0177001953125,
0.02165... |
AnonymousSub/MedQuAD_47441_Context_Question_Answer_Triples | 2023-03-13T14:58:26.000Z | [
"region:us"
] | AnonymousSub | null | null | 0 | 5 | 2023-03-13T14:58:24 | ---
dataset_info:
features:
- name: Contexts
dtype: string
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 190797665
num_examples: 47441
download_size: 21780319
dataset_size: 190797665
---
# Dataset Card for "MedQuAD_47441_Context_Question_Answer_Triples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.042144775390625,
-0.0269927978515625,
0.022369384765625,
0.01532745361328125,
-0.0152435302734375,
-0.0074615478515625,
0.0229644775390625,
-0.01064300537109375,
0.045013427734375,
0.046875,
-0.04791259765625,
-0.04534912109375,
-0.0309295654296875,
0.001... |
katarinagresova/Genomic_Benchmarks_demo_coding_vs_intergenomic_seqs | 2023-10-04T13:10:11.000Z | [
"region:us"
] | katarinagresova | null | null | 0 | 5 | 2023-03-13T19:33:48 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 15900000
num_examples: 75000
- name: test
num_bytes: 5300000
num_examples: 25000
download_size: 2456511
dataset_size: 21200000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "Genomic_Benchmarks_demo_coding_vs_intergenomic_seqs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 614 | [
[
-0.04962158203125,
-0.0013675689697265625,
0.00653839111328125,
0.021697998046875,
-0.008026123046875,
0.01214599609375,
0.002918243408203125,
0.0019683837890625,
0.042510986328125,
0.0107421875,
-0.048126220703125,
-0.058258056640625,
-0.033599853515625,
0.... |
cartesinus/leyzer-fedcsis | 2023-10-20T09:28:32.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:es",
"license:cc-by-4.0",
"natural-language-understanding",
"region:us"
] | cartesinus | Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language
understanding (NLU) models and the strategies of localization of virtual assistants. It consists of 20
domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of
samples, ranging from 1 to 672 sentences per intent. | @inproceedings{sowanski2020leyzer,
title={Leyzer: A Dataset for Multilingual Virtual Assistants},
author={Sowa{\'n}ski, Marcin and Janicki, Artur},
booktitle={International Conference on Text, Speech, and Dialogue},
pages={477--486},
year={2020},
organization={Springer}
} | 0 | 5 | 2023-03-15T00:12:27 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
- pl
- es
tags:
- natural-language-understanding
size_categories:
- 10K<n<100K
---
# Leyzer: A Dataset for Multilingual Virtual Assistants
Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language understanding (NLU) models and the strategies of localization of
virtual assistants. It consists of 20 domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of samples, ranging from 1 to 672
sentences per intent. For more stats please refer to wiki.
## Citation
If you use this model, please cite the following:
```
@inproceedings{kubis2023caiccaic,
author={Marek Kubis and Paweł Skórzewski and Marcin Sowański and Tomasz Ziętkiewicz},
pages={1319–1324},
title={Center for Artificial Intelligence Challenge on Conversational AI Correctness},
booktitle={Proceedings of the 18th Conference on Computer Science and Intelligence Systems},
year={2023},
doi={10.15439/2023B6058},
url={http://dx.doi.org/10.15439/2023B6058},
volume={35},
series={Annals of Computer Science and Information Systems}
}
``` | 1,169 | [
[
-0.0302734375,
-0.046630859375,
0.034027099609375,
0.023590087890625,
0.01082611083984375,
0.0101165771484375,
-0.009002685546875,
-0.0294189453125,
0.01387786865234375,
0.034912109375,
-0.054901123046875,
-0.05743408203125,
-0.0203857421875,
0.021240234375,... |
nbtpj/movielens-1m-ratings | 2023-03-15T01:02:27.000Z | [
"region:us"
] | nbtpj | null | null | 0 | 5 | 2023-03-15T01:01:22 | ---
dataset_info:
features:
- name: bucketized_user_age
dtype: float32
- name: movie_genres
sequence: int64
- name: movie_id
dtype: binary
- name: movie_title
dtype: binary
- name: timestamp
dtype: int64
- name: user_gender
dtype: bool
- name: user_id
dtype: binary
- name: user_occupation_label
dtype: int64
- name: user_occupation_text
dtype: binary
- name: user_rating
dtype: float32
- name: user_zip_code
dtype: binary
splits:
- name: train
num_bytes: 116192936
num_examples: 1000209
download_size: 43879407
dataset_size: 116192936
---
# Dataset Card for "movielens-1m-ratings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 798 | [
[
-0.0540771484375,
-0.008331298828125,
-0.00011408329010009766,
0.0168914794921875,
-0.03558349609375,
-0.0091094970703125,
0.0242767333984375,
0.0080718994140625,
0.06365966796875,
0.031005859375,
-0.0679931640625,
-0.052093505859375,
-0.050811767578125,
-0.... |
reginaboateng/ebmnlp_pico | 2023-03-18T17:54:00.000Z | [
"region:us"
] | reginaboateng | null | null | 0 | 5 | 2023-03-18T17:53:53 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: chunk_tags
sequence: string
- name: pos_tags
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-INT
'2': I-OUT
'3': I-PAR
splits:
- name: train
num_bytes: 27639457
num_examples: 23952
- name: test
num_bytes: 1482781
num_examples: 2065
- name: dev
num_bytes: 7446993
num_examples: 7049
download_size: 4095965
dataset_size: 36569231
---
# Dataset Card for "ebmnlp_pico"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 711 | [
[
-0.038482666015625,
-0.01439666748046875,
-0.0013380050659179688,
0.01800537109375,
-0.028656005859375,
-0.00733184814453125,
0.028045654296875,
-0.0287628173828125,
0.0733642578125,
0.024169921875,
-0.05078125,
-0.055267333984375,
-0.039154052734375,
-0.011... |
sunzeyeah/chinese_chatgpt_corpus | 2023-03-23T16:53:47.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"task_categories:reinforcement-learning",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:unknown",
"multilingu... | sunzeyeah | null | null | 72 | 5 | 2023-03-21T09:16:21 | ---
annotations_creators:
- no-annotation
language_creators:
- unknown
language:
- zh
license:
- unknown
multilinguality:
- monolingual
pretty_name: Chinese-ChatGPT-Corpus
size_categories:
- 5M<n<10M
task_categories:
- text-generation
- text2text-generation
- question-answering
- reinforcement-learning
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for chinese_chatgpt_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Size of downloaded dataset files:** 5.05 GB
- **Size of the generated dataset:** 0 GB
- **Total amount of disk used:** 5.05 GB
### Dataset Summary
This repo collects chinese corpus for Supervised Finetuning (SFT) and Reinforcement Learning From Human Feedback (RLHF).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Chinese
## Dataset Structure
### Data Instances
#### train_data_external_v1.jsonl
- **Size of downloaded dataset files:** 5.04 GB
- **Size of the generated dataset:** 0 GB
- **Total amount of disk used:** 5.04 GB
An example looks as follows:
```
{
"prompt": "问题:有没有给未成年贷款的有的联系",
"answers":
[
{
"answer": "若通过招行办理,我行规定,贷款人年龄需年满18岁,且年龄加贷款年限不得超过70岁。如果您持有我行信用卡附属卡,可尝试办理预借现金。",
"score": 1
}
],
"prefix": "回答:"
}
```
#### dev_data_external_v1.jsonl
- **Size of downloaded dataset files:** 9.55 MB
- **Size of the generated dataset:** 0 MB
- **Total amount of disk used:** 9.55 MB
An example looks as follows:
```
{
"prompt": "初学纹发现1/2\"的管螺纹并不是1\"的一半。不知道其中的原因,请各位指点。",
"answers":
[
{
"answer": "管螺纹的名义尺寸是“管子”的孔(内)径,而管子的壁厚不是两倍。所以,1/2\"的管螺纹并不是1\"的一半,",
"score": 1
}
],
"prefix": "回答:"
}
```
### Data Fields
The data fields are the same among all splits.
#### train_data_external_v1.jsonl
- `prompt`: prompt, `string`
- `answers`: list of answers
- `answer`: answer, `string`
- `score`: score of answer, `int`
- `prefix`: prefix to the answer, `string`
#### dev_data_external_v1.jsonl
- `prompt`: prompt, `string`
- `answers`: list of answers
- `answer`: answer, `string`
- `score`: score of answer, `int`
- `prefix`: prefix to the answer, `string`
### Data Splits
| name | train |
|----------|-------:|
|train_data_external_v1.jsonl|5477982|
|dev_data_external_v1.jsonl|10000|
## Dataset Creation
### Curation Rationale
Link to github: [data_prepare](https://github.com/sunzeyeah/RLHF/blob/master/src/data_prepare.py)
### Source Data
#### Initial Data Collection and Normalization
- [百科](https://github.com/brightmart/nlp_chinese_corpus)
- [知道问答](https://github.com/SophonPlus/ChineseNlpCorpus)
- [对联](https://github.com/wb14123/couplet-dataset/releases/download/1.0/couplet.tar.gz)
- [古文](https://github.com/NiuTrans/Classical-Modern)
- [古诗词](https://github.com/chinese-poetry/chinese-poetry)
- 微博新闻评论
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| 5,579 | [
[
-0.038787841796875,
-0.0435791015625,
0.005168914794921875,
0.0193939208984375,
-0.014190673828125,
-0.01016998291015625,
-0.035186767578125,
-0.0258331298828125,
0.034454345703125,
0.0277252197265625,
-0.0528564453125,
-0.05633544921875,
-0.0283355712890625,
... |
jamescalam/langchain-docs | 2023-03-22T11:24:00.000Z | [
"region:us"
] | jamescalam | null | null | 11 | 5 | 2023-03-22T11:20:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rcds/swiss_judgment_prediction_xl | 2023-07-20T07:31:57.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:it",
"language:de",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",
"region:us"
] | rcds | This dataset contains court decision for judgment prediction task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 5 | 2023-03-23T23:42:15 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- it
- de
- fr
pretty_name: Swiss Judgment Prediction XL
size_categories:
- 100K<n<1M
---
# Dataset Card for Swiss Court View Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Judgment Prediction is a multilingual, diachronic dataset of 329K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages with three languages German, French and Italian being represented. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents Full |
|------------|------------|--------------------------|
| German | **de** | 160K |
| French | **fr** | 128K |
| Italian | **it** | 41K |
## Dataset Structure
### Data Fields
```
- decision_id: unique identifier for the decision
- facts: facts section of the decision
- considerations: considerations section of the decision
- label: label of the decision
- law_area: area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
```
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| 5,059 | [
[
-0.0206146240234375,
-0.042022705078125,
0.027679443359375,
0.0200347900390625,
-0.037322998046875,
-0.0123443603515625,
-0.0135040283203125,
-0.01523590087890625,
0.0131378173828125,
0.0467529296875,
-0.05145263671875,
-0.06256103515625,
-0.05224609375,
0.0... |
s-nlp/ru_paradetox_toxicity | 2023-09-08T08:36:01.000Z | [
"task_categories:text-classification",
"language:ru",
"license:openrail++",
"region:us"
] | s-nlp | null | null | 0 | 5 | 2023-03-24T15:08:32 | ---
license: openrail++
task_categories:
- text-classification
language:
- ru
---
# ParaDetox: Detoxification with Parallel Data (Russian). Toxicity Task Results
This repository contains information about **Toxicity Task** markup from [Russian Paradetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collection was done via [Yandex.Toloka](https://toloka.yandex.com/) crowdsource platform. The collection was done in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.
Specifically this repo contains the results of **Task 3: Toxicity Check**. Here, the samples with markup confidence >= 90 are present.
The input here is text and the label shows if the text is toxic or not.
Totally, datasets contains 6,354 samples. Among them, the minor part is toxic examples (1,506 pairs).
## Citation
```
@inproceedings{logacheva-etal-2022-study,
title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Krotova, Irina and
Fenogenova, Alena and
Nikishina, Irina and
Shavrina, Tatiana and
Panchenko, Alexander",
booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.humeval-1.8",
doi = "10.18653/v1/2022.humeval-1.8",
pages = "90--101",
abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is a particularly difficult to evaluate, because its success depends on a number of parameters.We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that, ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.",
}
```
## Contacts
For any questions, please contact: Daryna Dementieva (dardem96@gmail.com) | 2,778 | [
[
0.0020847320556640625,
-0.03814697265625,
0.05322265625,
0.03204345703125,
-0.019012451171875,
0.00013399124145507812,
-0.01488494873046875,
-0.0247802734375,
0.00444793701171875,
0.045806884765625,
-0.03875732421875,
-0.058197021484375,
-0.045257568359375,
... |
bharat-raghunathan/indian-foods-dataset | 2023-03-26T08:58:10.000Z | [
"task_categories:image-classification",
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"region:us"
] | bharat-raghunathan | null | null | 1 | 5 | 2023-03-26T06:26:43 | ---
license: cc0-1.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': biryani
'1': cholebhature
'2': dabeli
'3': dal
'4': dhokla
'5': dosa
'6': jalebi
'7': kathiroll
'8': kofta
'9': naan
'10': pakora
'11': paneer
'12': panipuri
'13': pavbhaji
'14': vadapav
splits:
- name: train
num_bytes: 611741947.222
num_examples: 3809
- name: test
num_bytes: 153961285
num_examples: 961
download_size: 688922167
dataset_size: 765703232.222
task_categories:
- image-classification
- text-to-image
language:
- en
pretty_name: indian-foods
size_categories:
- 1K<n<10K
---
# Dataset Card for Indian Foods Dataset
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset
- **Repository:** https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://www.kaggle.com/anshulmehtakaggl
### Dataset Summary
This is a multi-category(multi-class classification) related Indian food dataset showcasing [The-massive-Indian-Food-Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset).
This card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['biryani', 'cholebhature', 'dabeli', 'dal', 'dhokla', 'dosa', 'jalebi', 'kathiroll', 'kofta', 'naan', 'pakora', 'paneer', 'panipuri', 'pavbhaji', 'vadapav'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and test split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3809 |
| test | 961 |
### Data Instances
Each instance is a picture of the Indian food item, along with the category it belongs to.
#### Initial Data Collection and Normalization
Collection by Scraping data from Google Images + Leveraging some JS Functions.
All the images are resized to (300,300) to maintain size uniformity.
### Dataset Curators
[Anshul Mehta](https://www.kaggle.com/anshulmehtakaggl)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
[The Massive Indian Foods Dataset](https://www.kaggle.com/datasets/anshulmehtakaggl/themassiveindianfooddataset) | 2,791 | [
[
-0.022491455078125,
-0.032318115234375,
-0.0198516845703125,
0.007476806640625,
-0.0164794921875,
0.01134490966796875,
-0.01248931884765625,
-0.02301025390625,
0.04815673828125,
0.00817108154296875,
-0.0419921875,
-0.051849365234375,
-0.04815673828125,
0.015... |
suolyer/bustm | 2023-03-26T15:28:28.000Z | [
"region:us"
] | suolyer | null | null | 1 | 5 | 2023-03-26T15:28:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
LinhDuong/chatdoctor-200k | 2023-03-28T07:58:46.000Z | [
"license:apache-2.0",
"arxiv:2303.14070",
"region:us"
] | LinhDuong | null | null | 9 | 5 | 2023-03-28T07:33:20 | ---
license: apache-2.0
---
This ChatDoctor-200K dataset is collected from this paper https://arxiv.org/pdf/2303.14070.pdf
Alternatively, you can download the original dataset from this link https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing | 273 | [
[
-0.033966064453125,
-0.0258636474609375,
0.00482177734375,
-0.00392913818359375,
0.0005555152893066406,
0.00421905517578125,
0.00428009033203125,
-0.0055084228515625,
0.0172882080078125,
0.061676025390625,
-0.05218505859375,
-0.0223846435546875,
-0.0354614257812... |
rjjan/reuters21578 | 2023-04-01T21:32:38.000Z | [
"region:us"
] | rjjan | The Reuters-21578 dataset is one of the most widely used data collections for text
categorization research. It is collected from the Reuters financial newswire service in 1987. | @article{APTE94,
author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Automated Learning of Decision Rules for Text Categorization},
journal = {ACM Transactions on Information Systems},
year = {1994},
note = {To appear.}
}
@inproceedings{APTE94b,
author = {Chidanand Apt{\'{e}} and Fred Damerau and Sholom M. Weiss},
title = {Toward Language Independent Automated Learning of Text Categorization Models},
booktitle = {sigir94},
year = {1994},
note = {To appear.}
}
@inproceedings{HAYES8},
author = {Philip J. Hayes and Peggy M. Anderson and Irene B. Nirenburg and
Linda M. Schmandt},
title = {{TCS}: A Shell for Content-Based Text Categorization},
booktitle = {IEEE Conference on Artificial Intelligence Applications},
year = {1990}
}
@inproceedings{HAYES90b,
author = {Philip J. Hayes and Steven P. Weinstein},
title = {{CONSTRUE/TIS:} A System for Content-Based Indexing of a
Database of News Stories},
booktitle = {Second Annual Conference on Innovative Applications of
Artificial Intelligence},
year = {1990}
}
@incollection{HAYES92 ,
author = {Philip J. Hayes},
title = {Intelligent High-Volume Text Processing using Shallow,
Domain-Specific Techniques},
booktitle = {Text-Based Intelligent Systems},
publisher = {Lawrence Erlbaum},
address = {Hillsdale, NJ},
year = {1992},
editor = {Paul S. Jacobs}
}
@inproceedings{LEWIS91c ,
author = {David D. Lewis},
title = {Evaluating Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1991},
month = {feb},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {312--318}
}
@phdthesis{LEWIS91d,
author = {David Dolan Lewis},
title = {Representation and Learning in Information Retrieval},
school = {Computer Science Dept.; Univ. of Massachusetts; Amherst, MA 01003},
year = 1992},
note = {Technical Report 91--93.}
}
@inproceedings{LEWIS91e,
author = {David D. Lewis},
title = {Data Extraction as Text Categorization: An Experiment with
the {MUC-3} Corpus},
booktitle = {Proceedings of the Third Message Understanding Evaluation
and Conference},
year = {1991},
month = {may},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92b,
author = {David D. Lewis},
title = {An Evaluation of Phrasal and Clustered Representations on a Text
Categorization Task},
booktitle = {Fifteenth Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval},
year = {1992},
pages = {37--50}
}
@inproceedings{LEWIS92d ,
author = {David D. Lewis and Richard M. Tong},
title = {Text Filtering in {MUC-3} and {MUC-4}},
booktitle = {Proceedings of the Fourth Message Understanding Conference ({MUC-4})},
year = {1992},
month = {jun},
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
address = {Los Altos, CA}
}
@inproceedings{LEWIS92e,
author = {David D. Lewis},
title = {Feature Selection and Feature Extraction for Text Categorization},
booktitle = {Proceedings of Speech and Natural Language Workshop},
year = {1992},
month = {feb} ,
organization = {Defense Advanced Research Projects Agency},
publisher = {Morgan Kaufmann},
pages = {212--217}
}
@inproceedings{LEWIS94b,
author = {David D. Lewis and Marc Ringuette},
title = {A Comparison of Two Learning Algorithms for Text Categorization},
booktitle = {Symposium on Document Analysis and Information Retrieval},
year = {1994},
organization = {ISRI; Univ. of Nevada, Las Vegas},
address = {Las Vegas, NV},
month = {apr},
pages = {81--93}
}
@article{LEWIS94d,
author = {David D. Lewis and Philip J. Hayes},
title = {Guest Editorial},
journal = {ACM Transactions on Information Systems},
year = {1994},
volume = {12},
number = {3},
pages = {231},
month = {jul}
}
@article{SPARCKJONES76,
author = {K. {Sparck Jones} and C. J. {van Rijsbergen}},
title = {Information Retrieval Test Collections},
journal = {Journal of Documentation},
year = {1976},
volume = {32},
number = {1},
pages = {59--75}
}
@book{WEISS91,
author = {Sholom M. Weiss and Casimir A. Kulikowski},
title = {Computer Systems That Learn},
publisher = {Morgan Kaufmann},
year = {1991},
address = {San Mateo, CA}
} | 0 | 5 | 2023-04-01T21:19:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
cryscan/multilingual-share | 2023-04-06T03:13:57.000Z | [
"language:en",
"language:zh",
"license:cc0-1.0",
"region:us"
] | cryscan | null | null | 23 | 5 | 2023-04-02T14:46:39 | ---
license: cc0-1.0
language:
- en
- zh
---
# Multilingual Share GPT
Multilingual Share GPT, the free multi-language corpus for LLM training. All text are converted to markdown format, and classified by languages.
## Github Repo
Follow the [link](https://github.com/RWKV-Wiki/MultilingualShareGPT) here to Github.
## Data Example
```json
{
"id": "ImiMfCY",
"lang": "en",
"text": "\n<|user|>: Let's play chess\n\n<|bot|>: Sure, I'd love to play chess with you! Do you want to play a virtual game or just ask chess-related questions?\n\n<|user|>: Virtual game\n\n<|bot|>: I'm sorry, I'm an AI language model and I don't have the ability to display a chess board or move pieces on it. However, I'd be happy to help you with any chess-related questions or problems you might have. Just let me know what you need help with!\n\n<|user|>: Let's play\n\n<|bot|>: I apologize for any confusion, but as an AI language model I do not have the ability to play chess as it requires a graphical interface. However, I can help you learn and improve your chess skills by answering questions and providing explanations and analysis of specific positions or games.\n\n<|user|>: 1.e4\n\n<|bot|>: The move 1.e4 is the most common first move in the game of chess, and is known as the \"King's Pawn Opening\". This move aims to control the center of the board and create space for the other pieces to develop. Black has a variety of responses to 1.e4, but some of the most common are 1...e5 (the \"King's Gambit\"), 1...c5 (the \"Sicilian Defense\"), and 1...e6 (the \"French Defense\").\n\n<|user|>: I play 1.e4\n\n<|bot|>: Very good! You have started the game with the King's Pawn Opening. As black, I will respond with 1...e5, the King's Gambit.\n\n<|user|>: I play 2.Nf3\n\n<|bot|>: That's a good move. With 2.Nf3, you are developing your knight and attacking the black pawn on e5. As black, I'll play 2...Nc6, which develops the knight to a strong central square and prepares to castle kingside.\n\n<|user|>: I play 3.d4\n\n<|bot|>: With the move 3.d4, you are now offering a pawn to black with the King's Gambit. This gambit can lead to a sharp and tactical game, but it also weakens the white pawn structure and can lead to a disadvantage if not handled properly. As black, I will accept the gambit with 3...exd4.\n\n<|user|>: I play 4.Nxd4\n\n<|bot|>: With 4.Nxd4, you are now attacking the black pawn on d4 and forcing it to capture. As black, I'll capture the pawn with 4...Nxd4. This has now opened up the d-file for my rook and given me a lead in development.\n\n<|user|>: I play 5.Qxd4\n\n<|bot|>: With the move 5.Qxd4, you have taken control of the d-file and are attacking the black knight on d4. As black, I will respond with 5...Nc6, developing my knight to a safer square and putting pressure on the white queen.\n\n<|user|>: I play 6.Qd3\n\n<|bot|>: With 6.Qd3, you are attacking the black knight again and preparing to castle kingside. This move also has the idea of putting pressure on the d7-pawn, as well as the d8-h4 diagonal. As black, I will respond with 6...Nf6, developing my knight to a strong central square and preparing to castle kingside.\n\n<|user|>: I play 7.Nc3\n\n<|bot|>: With 7.Nc3, you are developing your knight and preparing to castle kingside. This move also has the idea of supporting the d4-queen and putting pressure on the e5-pawn. As black, I will play 7...d6, solidifying my pawn structure and preparing to develop my dark-squared bishop.\n\n<|user|>: I play 8.Bg5\n\n<|bot|>: With 8.Bg5, you are attacking the black knight on f6 and preparing to castle kingside. This move also has the idea of putting pressure on the h7-pawn and controlling the f4-square. As black, I will respond with 8...Be7, developing my dark-squared bishop and preparing to castle kingside.\n\n<|user|>: I play 9.0-0-0\n\n<|bot|>: With 9.0-0-0, you are now castling kingside and putting your king in a safer position. This move also has the idea of bringing the rook into the game more quickly. As black, I will castle kingside as well with 9...0-0-0.\n\n<|user|>: I play 10.Kb1\n\n<|bot|>: I'm sorry, but that is an illegal move. In chess, the king can only move one square at a time and cannot \"jump\" over other pieces. The correct move would be 10.Be2, developing your bishop and preparing to castle kingside.\n\n<|endoftext|>"
}
```
## 中文用户请看这里
为了推进中文AI的发展,促进AI技术公开化、国际化,我们成立了 ShareGPT-90k 项目,希望借助大家的力量推进数据清洗与对齐工作。
可能与各位想象的有所不同,GPT模型主要通过预训练数据集赋能,语料的质量对模型最终性能至关重要。然而,百度知道、CSDN、知乎等平台软文过多;小木虫等高质量平台语料过少;个人博客内容质量参差不齐。
OpenAI完成数据集的收集花费了巨大成本,以至于需要从微软集资。我们无力承担如此巨大的开销,于是需要各位有志于筹建开放获取语料,并有一定外语基础的网友们献上自己的力量。
如果您有意向参与此项目,我们将不胜感激。
项目地址:https://paratranz.cn/projects/6725 | 4,687 | [
[
-0.03546142578125,
-0.04962158203125,
0.01435089111328125,
0.020233154296875,
-0.027923583984375,
-0.00554656982421875,
0.0022754669189453125,
-0.0384521484375,
0.054840087890625,
0.045684814453125,
-0.0204620361328125,
-0.0377197265625,
-0.0408935546875,
-0... |
teelinsan/camoscio | 2023-04-02T20:18:52.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:it",
"license:openrail",
"llama",
"instruction-tuning",
"region:us"
] | teelinsan | null | null | 1 | 5 | 2023-04-02T20:12:37 | ---
license: openrail
task_categories:
- conversational
language:
- it
tags:
- llama
- instruction-tuning
size_categories:
- 10K<n<100K
---
# Camoscio instruction-tuning dataset
This repository contains the dataset used to train [Camoscio](https://huggingface.co/teelinsan/camoscio-7b-llama).
This dataset is an Italian translation with ChatGPT of the [Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca).
Please refer to the [Camoscio repo](https://github.com/teelinsan/camoscio) for more info.
| 519 | [
[
-0.0283966064453125,
-0.01849365234375,
0.0104522705078125,
0.01094818115234375,
-0.050201416015625,
0.00302886962890625,
0.004665374755859375,
-0.0182037353515625,
0.0243377685546875,
0.038330078125,
-0.07550048828125,
-0.05072021484375,
-0.039276123046875,
... |
liuyanchen1015/MULTI_VALUE_cola_negative_inversion | 2023-04-03T19:30:15.000Z | [
"region:us"
] | liuyanchen1015 | null | null | 0 | 5 | 2023-04-03T19:30:11 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 587
num_examples: 7
- name: test
num_bytes: 461
num_examples: 6
- name: train
num_bytes: 1138
num_examples: 15
download_size: 7213
dataset_size: 2186
---
# Dataset Card for "MULTI_VALUE_cola_negative_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 580 | [
[
-0.047119140625,
-0.032867431640625,
-0.001453399658203125,
0.0294342041015625,
-0.01430511474609375,
0.00553131103515625,
0.022674560546875,
-0.005573272705078125,
0.0615234375,
0.0260772705078125,
-0.05218505859375,
-0.038238525390625,
-0.049407958984375,
... |
philschmid/sharegpt-raw | 2023-04-04T08:52:59.000Z | [
"license:other",
"region:us"
] | philschmid | null | null | 72 | 5 | 2023-04-04T08:52:59 | ---
license: other
duplicated_from: jeffwan/sharegpt_vicuna
---
## Prepraration
```
pip3 install -r requirements.txt
```
## Data Cleaning
1. merge two raw json files and json beautify the merged file
```
python merge.py sharegpt_90k_raw_dataset/sg_90k_part1.json sharegpt_90k_raw_dataset/sg_90k_part2.json sharegpt_20230401_html_unformatted.json
python pretty_json.py --in sharegpt_20230401_html_unformatted.json --out sharegpt_20230401_html.json
```
2. (Optional) Verify the json file
```
if jq empty sharegpt_20230401_html.json 2>/dev/null; then
echo "JSON is valid"
else
echo "JSON is invalid"
fi
jq length sharegpt_90k_raw_dataset/sg_90k_part1.json
jq length sharegpt_90k_raw_dataset/sg_90k_part2.json
jq length sharegpt_20230401_html.json
```
3. clean data - remove html tags etc
```
python3 clean_sharegpt.py --in sharegpt_20230401_html.json --out sharegpt_20230401_clean.json
....
100%|███████████████████████████████████████████████████████████████████| 90665/90665 [06:32<00:00, 230.98it/s]
total: 90665, skip: 13745, new: 76920
```
4. Filter dataset by language
```
python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_zh.json --lang zh
....
return 6240 out of 76920, start dump ...
python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_en.json --lang en
...
return 55413 out of 76920, start dump ...
```
> Note: the code itself doesn't support languange list, I didn't change the code for adpation. You can change the code to support more languages. Instead, I just filter two languages I need and merge the `sharegpt_20230401_clean_lang_zh.json` and `sharegpt_20230401_clean_lang_en.json` into `sharegpt_20230401_clean_lang.json`.
5. Split the long conversation
```
python3 split_long_conversation.py --in sharegpt_20230401_clean_lang.json --out sharegpt_20230401_clean_lang_split.json --model-name /home/ubuntu/llama-13b-hf/
...
total: 61653, new: 126032
```
Ok, now we have the cleaned dataset `sharegpt_20230401_clean_lang_split.json` which should be used for finetuning.
| 2,094 | [
[
-0.04412841796875,
-0.038055419921875,
0.01910400390625,
0.0259246826171875,
-0.018890380859375,
0.0029888153076171875,
-0.022613525390625,
-0.023834228515625,
0.018524169921875,
0.0362548828125,
-0.0226287841796875,
-0.034088134765625,
-0.042510986328125,
0... |
davebulaval/RISCBAC | 2023-08-10T22:04:01.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:translation",
"multilinguality:monolingual",
"multilinguality:aligned",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:fr",
"license:cc-by-4.0",
"unsupervised",
"arxiv:23... | davebulaval | RISCBAC was created using [RISC](https://github.com/GRAAL-Research/risc), an open-source Python package data
generator. RISC generates look-alike automobile insurance contracts based on the Quebec regulatory insurance
form in French and English.
It contains 10,000 English and French insurance contracts generated using the same seed. Thus, contracts share
the same deterministic synthetic data (RISCBAC can be used as an aligned dataset). RISC can be used to generate
more data for RISCBAC. | @misc{beaucheminrisc,
title={{RISC: Generating Realistic Synthetic Bilingual Insurance
Contract}},
author={David Beauchemin and Richard Khoury},
year={2023},
eprint={2304.04212},
archivePrefix={arXiv}
} | 1 | 5 | 2023-04-04T10:48:51 | ---
license:
- cc-by-4.0
multilinguality:
- monolingual
- aligned
task_categories:
- summarization
- question-answering
- translation
source_datasets:
- original
language:
- en
- fr
tags:
- unsupervised
pretty_name: Realistic Bilingual Synthetic Automobile Insurance Contract
size_categories:
- 10K<n<100K
dataset_info:
download_size: 376971
dataset_size: 611048
viewer: true
---
# Dataset Card for RISCBAC
RISCBAC was created using [RISC](https://github.com/GRAAL-Research/risc), an open-source Python package data generator. RISC generates look-alike automobile insurance contracts based on the Quebec regulatory insurance form in French and English.
It contains 10,000 English and French insurance contracts generated using the same seed. Thus, contracts share the same deterministic synthetic data (RISCBAC can be used as an aligned dataset). RISC can be used to generate more data for RISCBAC.
# Data Instances
## Default (`'fr'`)
The default data instance is the French version of the dataset. The dataset is comprised of 10,000 synthetic automobile insurance contracts.
## Other Option
The other data instance option is `"en"`. The dataset is comprised of 10,000 synthetic automobile insurance contracts.
# Citation Information
```
@misc{beaucheminrisc,
title={{RISC: Generating Realistic Synthetic Bilingual Insurance
Contract}},
author={David Beauchemin and Richard Khoury},
year={2023},
eprint={2304.04212},
archivePrefix={arXiv}
}
```
| 1,479 | [
[
-0.015960693359375,
-0.025115966796875,
0.01425933837890625,
0.0304107666015625,
0.0036296844482421875,
0.007549285888671875,
-0.004619598388671875,
-0.030731201171875,
0.0227813720703125,
0.043670654296875,
-0.041717529296875,
-0.052764892578125,
-0.00576019287... |
hpprc/janli | 2023-04-11T04:40:37.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language_creators:other",
"multilinguality:monolingual",
"language:ja",
"license:cc-by-sa-4.0",
"region:us"
] | hpprc | null | @InProceedings{yanaka-EtAl:2021:blackbox,
author = {Yanaka, Hitomi and Mineshima, Koji},
title = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
year = {2021},
} | 2 | 5 | 2023-04-05T12:25:01 | ---
language:
- ja
language_creators:
- other
multilinguality:
- monolingual
pretty_name: JaNLI
task_categories:
- text-classification
task_ids:
- natural-language-inference
license: cc-by-sa-4.0
---
# Dataset Card for JaNLI
## Table of Contents
- [Dataset Card for JaNLI](#dataset-card-for-janli)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [base](#base)
- [original](#original)
- [Data Fields](#data-fields)
- [base](#base-1)
- [original](#original-1)
- [Data Splits](#data-splits)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JaNLI
- **Repository:** https://github.com/verypluming/JaNLI
- **Paper:** https://aclanthology.org/2021.blackboxnlp-1.26/
### Dataset Summary
The JaNLI (Japanese Adversarial NLI) dataset, inspired by the English HANS dataset, is designed to necessitate an understanding of Japanese linguistic phenomena and to illuminate the vulnerabilities of models.
### Languages
The language data in JaNLI is in Japanese (BCP-47 [ja-JP](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure
### Data Instances
When loading a specific configuration, users has to append a version dependent suffix:
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 13680
# })
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 720
# })
# })
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli", name="original")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 13680
# })
# test: Dataset({
# features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
# num_rows: 720
# })
# })
```
#### base
An example of looks as follows:
```json
{
'id': 12,
'premise': '若者がフットボール選手を見ている',
'hypothesis': 'フットボール選手を若者が見ている',
'label': 0,
'heuristics': 'overlap-full',
'number_of_NPs': 2,
'semtag': 'scrambling'
}
```
#### original
An example of looks as follows:
```json
{
'id': 12,
'sentence_A_Ja': '若者がフットボール選手を見ている',
'sentence_B_Ja': 'フットボール選手を若者が見ている',
'entailment_label_Ja': 0,
'heuristics': 'overlap-full',
'number_of_NPs': 2,
'semtag': 'scrambling'
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.
| Name | Description |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| id | The number of the sentence pair. |
| premise | The premise (sentence_A_Ja). |
| hypothesis | The hypothesis (sentence_B_Ja). |
| label | The correct label for the sentence pair (either `entailment` or `non-entailment`); in the setting described in the paper, non-entailment = neutral + contradiction (entailment_label_Ja). |
| heuristics | The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset. |
| number_of_NPs | The number of noun phrase in a sentence. |
| semtag | The linguistic phenomena tag. |
#### original
The original version retaining the unaltered column names.
| Name | Description |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| id | The number of the sentence pair. |
| sentence_A_Ja | The premise. |
| sentence_B_Ja | The hypothesis. |
| entailment_label_Ja | The correct label for this sentence pair (either `entailment` or `non-entailment`); in the setting described in the paper, non-entailment = neutral + contradiction |
| heuristics | The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset. |
| number_of_NPs | The number of noun phrase in a sentence. |
| semtag | The linguistic phenomena tag. |
### Data Splits
| name | train | validation | test |
| -------- | -----: | ---------: | ---: |
| base | 13,680 | | 720 |
| original | 13,680 | | 720 |
### Annotations
The annotation process for this Japanese NLI dataset involves tagging each pair (P, H) of a premise and hypothesis with a label for structural pattern and linguistic phenomenon.
The structural relationship between premise and hypothesis sentences is classified into five patterns, with each pattern associated with a type of heuristic that can lead to incorrect predictions of the entailment relation.
Additionally, 11 categories of Japanese linguistic phenomena and constructions are focused on for generating the five patterns of adversarial inferences.
For each linguistic phenomenon, a template for the premise sentence P is fixed, and multiple templates for hypothesis sentences H are created.
In total, 144 templates for (P, H) pairs are produced.
Each pair of premise and hypothesis sentences is tagged with an entailment label (`entailment` or `non-entailment`), a structural pattern, and a linguistic phenomenon label.
The JaNLI dataset is generated by instantiating each template 100 times, resulting in a total of 14,400 examples.
The same number of entailment and non-entailment examples are generated for each phenomenon.
The structural patterns are annotated with the templates for each linguistic phenomenon, and the ratio of `entailment` and `non-entailment` examples is not necessarily 1:1 for each pattern.
The dataset uses a total of 158 words (nouns and verbs), which occur more than 20 times in the JSICK and JSNLI datasets.
## Additional Information
- [verypluming/JaNLI](https://github.com/verypluming/JaNLI)
- [Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference](https://aclanthology.org/2021.blackboxnlp-1.26/)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@InProceedings{yanaka-EtAl:2021:blackbox,
author = {Yanaka, Hitomi and Mineshima, Koji},
title = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
url = {https://aclanthology.org/2021.blackboxnlp-1.26/},
year = {2021},
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset.
| 9,448 | [
[
-0.030670166015625,
-0.062042236328125,
0.015838623046875,
0.02392578125,
-0.017791748046875,
-0.01434326171875,
-0.01113128662109375,
-0.01605224609375,
0.0379638671875,
0.0312347412109375,
-0.03369140625,
-0.056610107421875,
-0.0364990234375,
0.02447509765... |
mstz/chess_rock_vs_pawn | 2023-04-16T17:01:23.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"chess",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_chess_(king-rook_vs._king-pawn)_22,
title = {{Chess (King-Rook vs. King-Pawn)}},
year = {1989},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5DK5C}}
} | 0 | 5 | 2023-04-05T21:23:27 | ---
language:
- en
tags:
- chess
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Chess Rock VS Pawn
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- chess
license: cc
---
# Chess Rock VS Pawn
The [Chess Rock VS Pawn dataset](https://archive-beta.ics.uci.edu/dataset/22/chess+king+rook+vs+king+pawn) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------|
| chess | Binary classification | Can the white piece win? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/chess_rock_vs_pawn")["train"]
``` | 826 | [
[
-0.01277923583984375,
-0.01910400390625,
0.01486968994140625,
0.0171051025390625,
-0.0239410400390625,
0.0001957416534423828,
-0.031524658203125,
-0.0120086669921875,
0.01517486572265625,
0.0139617919921875,
-0.04541015625,
-0.0528564453125,
-0.055938720703125,
... |
mstz/congress | 2023-04-16T17:01:56.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"congress",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_congressional_voting_records_105,
title = {{Congressional Voting Records}},
year = {1987},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5C01P}}
} | 0 | 5 | 2023-04-06T07:41:17 | ---
language:
- en
tags:
- congress
- tabular_classification
- binary_classification
- UCI
pretty_name: Congress
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- voting
license: cc
---
# Congress
The [Congress dataset](https://archive.ics.uci.edu/ml/datasets/Congress) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Congressmen of two different parties vote on a series of bills. Guess the party of each voter on the basis of their votes.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| voting | Binary classification | What's the party of the voter? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/congress", "voting")["train"]
``` | 960 | [
[
-0.0197601318359375,
-0.006595611572265625,
0.039215087890625,
0.0196533203125,
-0.03680419921875,
0.0013456344604492188,
-0.0013036727905273438,
0.004047393798828125,
0.0162353515625,
0.04620361328125,
-0.025543212890625,
-0.047119140625,
-0.040802001953125,
... |
mstz/promoters | 2023-04-16T17:58:13.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"promoters",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_molecular_biology_(promoter_gene_sequences)_67,
author = {Harley,C., Reynolds,R. & Noordewier,M.},
title = {{Molecular Biology (Promoter Gene Sequences)}},
year = {1990},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5S01D}}
} | 0 | 5 | 2023-04-06T15:47:50 | ---
language:
- en
tags:
- promoters
- tabular_classification
- binary_classification
- UCI
pretty_name: Promoters
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- promoters
license: cc
---
# Promoters
The [Promoters dataset](https://archive.ics.uci.edu/ml/datasets/Promoters) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------|
| promoters | Binary classification | Is this DNA string a promoter? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/promoters")["train"]
``` | 764 | [
[
-0.026214599609375,
-0.01904296875,
-0.017913818359375,
0.01490020751953125,
-0.00394439697265625,
-0.01068878173828125,
-0.0283966064453125,
0.00910186767578125,
0.0288848876953125,
0.007450103759765625,
-0.04095458984375,
-0.0399169921875,
-0.051361083984375,
... |
metaeval/boolq-natural-perturbations | 2023-04-09T14:14:18.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | metaeval | null | null | 0 | 5 | 2023-04-07T09:05:20 | ---
task_categories:
- text-classification
language:
- en
---
BoolQ questions with semantic alteration and human verifications
```bib
@article{khashabi2020naturalperturbations,
title={Natural Perturbation for Robust Question Answering},
author={D. Khashabi and T. Khot and A. Sabhwaral},
journal={arXiv preprint},
year={2020}
}
``` | 340 | [
[
-0.018798828125,
-0.09027099609375,
0.016357421875,
-0.0178375244140625,
-0.012115478515625,
0.016845703125,
0.01155853271484375,
-0.06396484375,
0.004650115966796875,
0.055084228515625,
-0.05047607421875,
-0.0179290771484375,
-0.0216827392578125,
0.03033447... |
arbml/tashkeelav2 | 2023-04-09T03:59:07.000Z | [
"region:us"
] | arbml | null | null | 2 | 5 | 2023-04-09T03:57:54 | ---
dataset_info:
features:
- name: diacratized
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 801916784.3611724
num_examples: 522463
- name: test
num_bytes: 89102717.63882759
num_examples: 58052
download_size: 416908597
dataset_size: 891019502.0
---
# Dataset Card for "tashkeelav2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 483 | [
[
-0.0201416015625,
-0.0211029052734375,
0.0119781494140625,
0.0125885009765625,
-0.0302276611328125,
0.010406494140625,
0.01444244384765625,
-0.0138092041015625,
0.053497314453125,
0.0266876220703125,
-0.050262451171875,
-0.05706787109375,
-0.0406494140625,
-... |
ieuniversity/flirty_or_not | 2023-04-10T20:42:38.000Z | [
"region:us"
] | ieuniversity | null | null | 0 | 5 | 2023-04-10T20:42:32 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: label
dtype:
class_label:
names:
'0': neutral
'1': flirty
- name: texts
dtype: string
splits:
- name: train
num_bytes: 102704
num_examples: 1584
- name: test
num_bytes: 20642
num_examples: 318
- name: validation
num_bytes: 14111
num_examples: 212
download_size: 95358
dataset_size: 137457
---
# Dataset Card for "flirty_or_not"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 610 | [
[
-0.020294189453125,
-0.032440185546875,
0.01094818115234375,
0.032073974609375,
-0.009124755859375,
-0.0135345458984375,
0.00853729248046875,
-0.01418304443359375,
0.06903076171875,
0.0272369384765625,
-0.0687255859375,
-0.061553955078125,
-0.044769287109375,
... |
vietgpt/openwebtext_en | 2023-07-15T09:20:14.000Z | [
"language:en",
"region:us"
] | vietgpt | null | null | 0 | 5 | 2023-04-11T11:24:42 | ---
language: en
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 39769491688
num_examples: 8013769
download_size: 24212906591
dataset_size: 39769491688
---
# Dataset Card for "openwebtext_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 384 | [
[
-0.052734375,
-0.01244354248046875,
0.006137847900390625,
0.01299285888671875,
-0.0233001708984375,
-0.0090484619140625,
0.00048232078552246094,
-0.0205535888671875,
0.054962158203125,
0.0181732177734375,
-0.05584716796875,
-0.056732177734375,
-0.034088134765625... |
mstz/nursery | 2023-04-16T17:57:18.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"nursery",
"tabular_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_nursery_76,
author = {Rajkovic,Vladislav},
title = {{Nursery}},
year = {1997},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5P88W}}
} | 0 | 5 | 2023-04-13T09:32:14 | ---
language:
- en
tags:
- nursery
- tabular_classification
- UCI
pretty_name: Nursery
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- nursery
- nursery_binary
license: cc
---
# Nursery
The [Nursery dataset](https://archive-beta.ics.uci.edu/dataset/76/nursery) from the [UCI repository](https://archive-beta.ics.uci.edu/).
Should the nursery school accept the student application?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| nursery | Multiclass classification |
| nursery_binary | Binary classification | | 643 | [
[
-0.015289306640625,
-0.0008149147033691406,
0.0137481689453125,
0.04339599609375,
0.0003390312194824219,
-0.00807952880859375,
0.004116058349609375,
-0.007572174072265625,
-0.0074005126953125,
0.016387939453125,
-0.05328369140625,
-0.038543701171875,
-0.05465698... |
nanakonoda/xnli_cm_sample | 2023-05-01T22:13:21.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|xnli",
"language:en",
"language:de",
"language:fr",
"mode classification",
"aligned",
"code-mixed",
... | nanakonoda | This dataset was generated from XNLI using the CodeMixed Text Generator for a binary text classification task. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# } | 0 | 5 | 2023-04-14T05:49:35 | ---
annotations_creators:
- expert-generated
language:
- en
- de
- fr
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: XNLI Code-Mixed Corpus (Sampled)
size_categories:
- 1M<n<10M
source_datasets:
- extended|xnli
tags:
- mode classification
- aligned
- code-mixed
task_categories:
- text-classification
task_ids: []
dataset_info:
- config_name: monolingual
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 641496
num_examples: 5007
download_size: 891209
dataset_size: 958660
- config_name: de_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1136549
num_examples: 14543
download_size: 1298619
dataset_size: 1453713
- config_name: de_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1068937
num_examples: 12750
download_size: 1248962
dataset_size: 1386101
- config_name: fr_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1520429
num_examples: 18653
download_size: 1644995
dataset_size: 1837593
- config_name: fr_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 317164
num_examples: 2490
- name: test
num_bytes: 1544539
num_examples: 17381
download_size: 1682885
dataset_size: 1861703
download_size: 891209
dataset_size: 958660
---
# Dataset Card for XNLI Code-Mixed Corpus (Sampled)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
- German-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
- French-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
## Dataset Structure
### Data Instances
{
'text': "And he said , Mama , I 'm home",
'label': 0
}
### Data Fields
- text: sentence
- label: binary label of text (0: spoken 1: written)
### Data Splits
- monolingual
- train (English, German, French monolingual): 2490
- test (English, German, French monolingual): 5007
- de_ec
- train (English, German, French monolingual): 2490
- test (German-English code-mixed by Equivalence Constraint Theory): 14543
- de_ml
- train (English, German, French monolingual): 2490
- test (German-English code-mixed by Matrix Language Theory): 12750
- fr_ec
- train (English, German, French monolingual): 2490
- test (French-English code-mixed by Equivalence Constraint Theory): 18653
- fr_ml
- train (English, German, French monolingual): 2490
- test (French-English code-mixed by Matrix Language Theory): 17381
### Other Statistics
#### Average Sentence Length
- monolingual
- train: 19.18714859437751
- test: 19.321150389454765
- de_ec
- train: 19.18714859437751
- test: 11.24314103004882
- de_ml
- train: 19.18714859437751
- test: 12.159450980392156
- fr_ec
- train: 19.18714859437751
- test: 12.26526564091567
- fr_ml
- train: 19.18714859437751
- test: 13.486968528853346
#### Label Split
- monolingual
- train
- 0: 498
- 1: 1992
- test
- 0: 1002
- 1: 4005
- de_ec
- train
- 0: 498
- 1: 1992
- test
- 0: 2777
- 1: 11766
- de_ml
- train
- 0: 498
- 1: 1992
- test
- 0: 2329
- 1: 10421
- fr_ec
- train
- 0: 498
- 1: 1992
- test
- 0: 3322
- 1: 15331
- fr_ml
- train
- 0: 498
- 1: 1992
- test
- 0: 2788
- 1: 14593
## Dataset Creation
### Curation Rationale
Using the XNLI Parallel Corpus, we generated a code-mixed corpus using CodeMixed Text Generator, and sampled a maximum of 30 sentences per original English sentence.
The XNLI Parallel Corpus is available here:
https://huggingface.co/datasets/nanakonoda/xnli_parallel
It was created from the XNLI corpus.
More information is available in the datacard for the XNLI Parallel Corpus.
Here is the link and citation for the original CodeMixed Text Generator paper.
https://github.com/microsoft/CodeMixed-Text-Generator
```
@inproceedings{rizvi-etal-2021-gcm,
title = "{GCM}: A Toolkit for Generating Synthetic Code-mixed Text",
author = "Rizvi, Mohd Sanad Zaki and
Srinivasan, Anirudh and
Ganu, Tanuja and
Choudhury, Monojit and
Sitaram, Sunayana",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.24",
pages = "205--211",
abstract = "Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.",
}
```
### Source Data
XNLI Code-Mixed Corpus
https://huggingface.co/datasets/nanakonoda/xnli_cm
XNLI Parallel Corpus
https://huggingface.co/datasets/nanakonoda/xnli_parallel
#### Original Source Data
XNLI Parallel Corpus was created using the XNLI Corpus.
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
We removed all punctuation from the XNLI Parallel Corpus except apostrophes.
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A | 7,998 | [
[
-0.0312347412109375,
-0.033294677734375,
0.00037550926208496094,
0.032562255859375,
-0.01018524169921875,
0.0248565673828125,
-0.0477294921875,
-0.034576416015625,
0.045013427734375,
0.0186004638671875,
-0.036407470703125,
-0.056243896484375,
-0.02545166015625,
... |
vietgpt/databricks_dolly15k_vi | 2023-07-03T13:48:40.000Z | [
"region:us"
] | vietgpt | null | null | 0 | 5 | 2023-04-15T01:40:44 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 14246407
num_examples: 15004
download_size: 7942722
dataset_size: 14246407
---
- Format for Instruction task
```python
def preprocess(
sample,
instruction_key="### Instruction:",
input_key="Input:",
response_key="### Response:",
end_key="<|endoftext|>"
):
instruction = sample['instruction']
input = sample['input']
response = sample['response']
if input:
return {'text': """Dưới đây là một hướng dẫn mô tả một tác vụ, được ghép nối với một đầu vào cung cấp thêm ngữ cảnh. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
input_key=input_key,
input=input,
response_key=response_key,
response=response,
end_key=end_key,
)}
else:
return {'text': """Dưới đây là một hướng dẫn mô tả một nhiệm vụ. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
{instruction_key}
{instruction}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
Dưới đây là một hướng dẫn mô tả một tác vụ, được ghép nối với một đầu vào cung cấp thêm ngữ cảnh. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
### Instruction:
Virgin Australia bắt đầu hoạt động vào thời điểm nào?
Input:
Virgin Australia, tên thương mại của Virgin Australia Airlines Pty Ltd, là một hãng hàng không có trụ sở tại Úc. Đây là hãng hàng không lớn nhất sử dụng thương hiệu Virgin theo quy mô đội bay. Hãng bắt đầu hoạt động vào ngày 31 tháng 8 năm 2000 với hai máy bay trên một tuyến đường duy nhất. Sau khi hãng Ansett Australia phá sản vào tháng 9 năm 2001, Virgin Australia bất ngờ trở thành một hãng hàng không lớn trên thị trường nội địa của Úc. Từ đó đến nay, hãng đã mở rộng dịch vụ trực tiếp đến 32 thành phố ở Úc, từ các trung tâm vận hành tại Brisbane, Melbourne và Sydney.
### Response:
Virgin Australia bắt đầu hoạt động vào ngày 31 tháng 8 năm 2000 với hai máy bay trên một tuyến đường duy nhất với tên gọi là Virgin Blue.
<|endoftext|>
"""
``` | 2,443 | [
[
-0.00435638427734375,
-0.055908203125,
0.0178070068359375,
0.047454833984375,
-0.0217132568359375,
-0.0126953125,
-0.0139923095703125,
0.002719879150390625,
0.015380859375,
0.03839111328125,
-0.052947998046875,
-0.038238525390625,
-0.02642822265625,
0.041320... |
sammyboi1801/lfw-face-transformer-dataset | 2023-04-15T14:13:56.000Z | [
"region:us"
] | sammyboi1801 | null | null | 0 | 5 | 2023-04-15T14:13:51 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Abdullah_Gul
'1': Adrien_Brody
'2': Alejandro_Toledo
'3': Alvaro_Uribe
'4': Amelie_Mauresmo
'5': Andre_Agassi
'6': Andy_Roddick
'7': Angelina_Jolie
'8': Ann_Veneman
'9': Anna_Kournikova
'10': Ari_Fleischer
'11': Ariel_Sharon
'12': Arnold_Schwarzenegger
'13': Atal_Bihari_Vajpayee
'14': Bill_Clinton
'15': Bill_Gates
'16': Bill_Simon
'17': Britney_Spears
'18': Carlos_Menem
'19': Carlos_Moya
'20': Catherine_Zeta-Jones
'21': Charles_Moose
'22': Colin_Powell
'23': Condoleezza_Rice
'24': David_Beckham
'25': David_Nalbandian
'26': Dick_Cheney
'27': Dominique_de_Villepin
'28': Donald_Rumsfeld
'29': Edmund_Stoiber
'30': Eduardo_Duhalde
'31': Fidel_Castro
'32': George_HW_Bush
'33': George_Robertson
'34': George_W_Bush
'35': Gerhard_Schroeder
'36': Gloria_Macapagal_Arroyo
'37': Gonzalo_Sanchez_de_Lozada
'38': Gordon_Brown
'39': Gray_Davis
'40': Guillermo_Coria
'41': Halle_Berry
'42': Hamid_Karzai
'43': Hans_Blix
'44': Harrison_Ford
'45': Hillary_Clinton
'46': Howard_Dean
'47': Hu_Jintao
'48': Hugo_Chavez
'49': Igor_Ivanov
'50': Jack_Straw
'51': Jackie_Chan
'52': Jacques_Chirac
'53': James_Blake
'54': James_Kelly
'55': Jean_Charest
'56': Jean_Chretien
'57': Jeb_Bush
'58': Jennifer_Aniston
'59': Jennifer_Capriati
'60': Jennifer_Garner
'61': Jennifer_Lopez
'62': Jeremy_Greenstock
'63': Jiang_Zemin
'64': Jiri_Novak
'65': Joe_Lieberman
'66': John_Allen_Muhammad
'67': John_Ashcroft
'68': John_Bolton
'69': John_Howard
'70': John_Kerry
'71': John_Negroponte
'72': John_Paul_II
'73': John_Snow
'74': Joschka_Fischer
'75': Jose_Maria_Aznar
'76': Juan_Carlos_Ferrero
'77': Julianne_Moore
'78': Julie_Gerberding
'79': Junichiro_Koizumi
'80': Keanu_Reeves
'81': Kim_Clijsters
'82': Kim_Ryong-sung
'83': Kofi_Annan
'84': Lance_Armstrong
'85': Laura_Bush
'86': Lindsay_Davenport
'87': Lleyton_Hewitt
'88': Lucio_Gutierrez
'89': Luiz_Inacio_Lula_da_Silva
'90': Mahathir_Mohamad
'91': Mahmoud_Abbas
'92': Mark_Philippoussis
'93': Megawati_Sukarnoputri
'94': Meryl_Streep
'95': Michael_Bloomberg
'96': Michael_Jackson
'97': Michael_Schumacher
'98': Mike_Weir
'99': Mohammed_Al-Douri
'100': Nancy_Pelosi
'101': Naomi_Watts
'102': Nestor_Kirchner
'103': Nicanor_Duarte_Frutos
'104': Nicole_Kidman
'105': Norah_Jones
'106': Paul_Bremer
'107': Paul_Burrell
'108': Pervez_Musharraf
'109': Pete_Sampras
'110': Pierce_Brosnan
'111': Queen_Elizabeth_II
'112': Recep_Tayyip_Erdogan
'113': Renee_Zellweger
'114': Ricardo_Lagos
'115': Richard_Gephardt
'116': Richard_Myers
'117': Roger_Federer
'118': Roh_Moo-hyun
'119': Rubens_Barrichello
'120': Rudolph_Giuliani
'121': Saddam_Hussein
'122': Salma_Hayek
'123': Serena_Williams
'124': Sergey_Lavrov
'125': Sergio_Vieira_De_Mello
'126': Silvio_Berlusconi
'127': Spencer_Abraham
'128': Taha_Yassin_Ramadan
'129': Tang_Jiaxuan
'130': Tiger_Woods
'131': Tim_Henman
'132': Tom_Daschle
'133': Tom_Ridge
'134': Tommy_Franks
'135': Tony_Blair
'136': Trent_Lott
'137': Venus_Williams
'138': Vicente_Fox
'139': Vladimir_Putin
'140': Wen_Jiabao
'141': Winona_Ryder
'142': Yoriko_Kawaguchi
splits:
- name: train
num_bytes: 33550885.462
num_examples: 3846
- name: test
num_bytes: 2362162.0
num_examples: 271
download_size: 35786453
dataset_size: 35913047.462
---
# Dataset Card for "lfw-face-transformer-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 4,950 | [
[
-0.045501708984375,
-0.0125885009765625,
0.01174163818359375,
0.0236053466796875,
-0.0086822509765625,
0.01241302490234375,
0.03118896484375,
-0.0186614990234375,
0.05718994140625,
0.043121337890625,
-0.07537841796875,
-0.02935791015625,
-0.0430908203125,
-0... |
ioclab/grayscale_image_aesthetic_3M | 2023-04-16T08:12:17.000Z | [
"region:us"
] | ioclab | null | null | 1 | 5 | 2023-04-15T18:01:40 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 223038217282.0
num_examples: 3000000
download_size: 222413091423
dataset_size: 223038217282.0
---
# Dataset Card for "grayscale_image_aesthetic_3M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.04376220703125,
-0.01593017578125,
0.02471923828125,
0.03076171875,
-0.0267333984375,
-0.018829345703125,
0.0158233642578125,
-0.0089569091796875,
0.043487548828125,
0.03204345703125,
-0.049835205078125,
-0.07025146484375,
-0.0335693359375,
-0.01788330078... |
BlackKakapo/multitask-ro | 2023-09-21T14:35:01.000Z | [
"task_categories:text2text-generation",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_categories:translation",
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:1M<n<5M",
"language:ro",
"lice... | BlackKakapo | null | null | 2 | 5 | 2023-04-16T10:49:43 | ---
license: apache-2.0
multilinguality: monolingual
size_categories: 1M<n<5M
language: ro
task_categories:
- text2text-generation
- question-answering
- sentence-similarity
- text-classification
- translation
- summarization
---
## Dataset
### Train
| Dataset | Link | Rows | Task-specific prefix |
| ------ | ------ | ------ | ------ |
| **Paraphrase** | [Paraphrase](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro) | 131951 | *paraphrase:* **string** |
| **Grammar** | [Grammar](https://huggingface.co/datasets/BlackKakapo/grammar-ro) | 1686054 | *grammar:* **string** |
| **Synonyms** | - | 14085 | *synonyms:* **word** |
| **Translate** | - | 999725 | *translate Romanian to English:* **string** |
| **Summarize** | [Summarize](https://huggingface.co/datasets/readerbench/ro-text-summarization) | 71999 | *summarize:* **string** |
| **Sentiment analysis** | [Sentiment analysis](https://huggingface.co/datasets/ro_sent) | 36498 | *sentiment analysis:* **string** |
| **STS** | [STS](https://huggingface.co/datasets/ro_sts) | 7499 | *sts:* **string** |
| **Offense analysis** | [Offense analysis](https://huggingface.co/datasets/readerbench/ro-fb-offense) | 3199 | *offense analysis:* **string** |
| **Gsm8k-ro** | [Gsm8k-ro](https://huggingface.co/datasets/BlackKakapo/gsm8k-ro) | 7474 | **string** |
| **Qasc-ro** | [Qasc-ro](https://huggingface.co/datasets/BlackKakapo/qasc-ro) | 8134 | **string** |
| **Recipes-ro** | [Recipes-ro](https://huggingface.co/datasets/BlackKakapo/recipes-ro) | 818 | 1. *Spune-mi reteta pentru* **string** 2. *Cum as putea face* **string** 3. *Spune-mi te rog cum as putea face* **string** |
| **Qaworld-ro** | [Qaworld-ro](https://huggingface.co/datasets/BlackKakapo/qaworld-ro) | 722659 | **string** |
| **News-ro** | - | 102369 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Newsagro-ro** | - | 568 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Instruction-dataset-ro** | [Instruction-dataset-ro](https://huggingface.co/datasets/BlackKakapo/instruction-dataset-ro) | 326 | **string**|
| **TOTAL** | [Multitask-ro](https://huggingface.co/datasets/BlackKakapo/multitask-ro) | **~3.792.698** | |
### Eval
| Dataset | Link | Rows | Task-specific prefix |
| ------ | ------ | ------ | ------ |
| **Paraphrase** | [Paraphrase](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro) | 3540 | *paraphrase:* **string** |
| **Grammar** | [Grammar](https://huggingface.co/datasets/BlackKakapo/grammar-ro) | 200 | *grammar:* **string** |
| **Synonyms** | - | 318 | *synonyms:* **word** |
| **Translate** | [Translate](https://huggingface.co/datasets/opus100/viewer/en-ro/train) | 3271 | *translate Romanian to English:* **string** |
| **Summarize** | [Summarize](https://huggingface.co/datasets/readerbench/ro-text-summarization) | 449 | *summarize:* **string** |
| **Sentiment analysis** | [Sentiment analysis](https://huggingface.co/datasets/ro_sent) | 789 | *sentiment analysis:* **string** |
| **STS** | [STS](https://huggingface.co/datasets/ro_sts) | 1119 | *sts:* **string** |
| **Offense analysis** | [Offense analysis](https://huggingface.co/datasets/readerbench/ro-fb-offense) | 1251 | *offense analysis:* **string** |
| **Gsm8k-ro** | [Gsm8k-ro](https://huggingface.co/datasets/BlackKakapo/gsm8k-ro) | 1319 | **string** |
| **Qasc-ro** | [Qasc-ro](https://huggingface.co/datasets/BlackKakapo/qasc-ro) | 926 | **string** |
| **Recipes-ro** | [Recipes-ro](https://huggingface.co/datasets/BlackKakapo/recipes-ro) | 63 | 1. *Spune-mi reteta pentru* **string** 2. *Cum as putea face* **string** 3. *Spune-mi te rog cum as putea face* **string** |
| **Qaworld-ro** | [Qaworld-ro](https://huggingface.co/datasets/BlackKakapo/qaworld-ro) | 3350 | **string** |
| **News-ro** | - | 140 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **Newsagro-ro** | - | 112 | 1. *Genereaza o știre cu titlul dat si incepe-o astfel* **string** 2. *Scrie o știre cu denumirea asta si cu acest inceput* **string**|
| **TOTAL** | [Multitask-ro](https://huggingface.co/datasets/BlackKakapo/multitask-ro) | **16847** | |
[Original summarize]: <https://huggingface.co/datasets/readerbench/ro-text-summarization>
[Original sent]: <https://huggingface.co/datasets/ro_sent>
[Original sts]: <https://huggingface.co/datasets/ro_sts>
[Original offense]: <https://huggingface.co/datasets/readerbench/ro-fb-offense> | 4,623 | [
[
-0.016265869140625,
-0.05023193359375,
0.0166015625,
0.0455322265625,
-0.01983642578125,
-0.00942230224609375,
-0.019287109375,
-0.037261962890625,
0.07427978515625,
0.0227508544921875,
-0.049163818359375,
-0.041961669921875,
-0.066650390625,
0.0232238769531... |
bigcode/code-exchange | 2023-04-20T01:45:19.000Z | [
"region:us"
] | bigcode | null | null | 2 | 5 | 2023-04-20T01:05:29 | ---
dataset_info:
features:
- name: qid
dtype: int64
- name: question
dtype: string
- name: answers
list:
- name: answer_id
dtype: int64
- name: author
dtype: string
- name: author_id
dtype: int64
- name: author_profile
dtype: string
- name: pm_score
dtype: int64
- name: selected
dtype: bool
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
sequence: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 49109718161.00354
num_examples: 7658345
download_size: 22205153192
dataset_size: 49109718161.00354
---
# Dataset Card for "code-exchange"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 871 | [
[
-0.04022216796875,
-0.00943756103515625,
0.01184844970703125,
0.0257720947265625,
-0.00804901123046875,
0.0087432861328125,
0.011199951171875,
-0.0171356201171875,
0.059906005859375,
0.02825927734375,
-0.046051025390625,
-0.04827880859375,
-0.037841796875,
-... |
iamketan25/python-qa-instructions-dataset | 2023-04-23T05:30:34.000Z | [
"region:us"
] | iamketan25 | null | null | 4 | 5 | 2023-04-23T05:30:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
michelleyunun/therapydata | 2023-04-24T23:59:28.000Z | [
"region:us"
] | michelleyunun | null | null | 9 | 5 | 2023-04-24T06:04:29 | ---
dataset_info:
features:
- name: transcript_id
dtype: string
- name: topic
dtype: string
- name: interlocutor
dtype: string
- name: utterance_text
dtype: string
- name: main_therapist_behaviour
dtype: string
- name: client_talk_type
dtype: string
splits:
- name: train
num_bytes: 628461
num_examples: 4153
download_size: 0
dataset_size: 628461
---
# Dataset Card for "therapydata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 570 | [
[
-0.025299072265625,
-0.022979736328125,
0.024169921875,
0.01282501220703125,
-0.006313323974609375,
0.004352569580078125,
0.01763916015625,
-0.01180267333984375,
0.06793212890625,
0.033538818359375,
-0.053558349609375,
-0.051910400390625,
-0.05621337890625,
... |
jxu124/llava_complex_reasoning_77k | 2023-05-20T18:45:44.000Z | [
"region:us"
] | jxu124 | null | null | 1 | 5 | 2023-04-24T13:15:35 | ---
dataset_info:
features:
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: dialog
sequence:
sequence: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 71300555
num_examples: 76643
download_size: 36685003
dataset_size: 71300555
---
# Dataset Card for "llava_complex_reasoning_77k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.0263824462890625,
-0.0292205810546875,
0.0360107421875,
0.0189208984375,
-0.03314208984375,
0.002567291259765625,
0.00797271728515625,
-0.0133819580078125,
0.048187255859375,
0.03753662109375,
-0.046966552734375,
-0.0595703125,
-0.037811279296875,
-0.0085... |
jlh/uci-shopper | 2023-05-03T21:08:59.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | jlh | null | null | 1 | 5 | 2023-04-25T20:26:11 | ---
dataset_info:
features:
- name: Administrative
dtype: int64
- name: Administrative_Duration
dtype: float64
- name: Informational
dtype: int64
- name: Informational_Duration
dtype: float64
- name: ProductRelated
dtype: int64
- name: ProductRelated_Duration
dtype: float64
- name: BounceRates
dtype: float64
- name: ExitRates
dtype: float64
- name: PageValues
dtype: float64
- name: SpecialDay
dtype: float64
- name: Month
dtype: string
- name: OperatingSystems
dtype: int64
- name: Browser
dtype: int64
- name: Region
dtype: int64
- name: TrafficType
dtype: int64
- name: VisitorType
dtype: string
- name: Weekend
dtype: bool
- name: Revenue
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 1815486
num_examples: 12330
download_size: 425014
dataset_size: 1815486
license: cc-by-4.0
task_categories:
- tabular-classification
language:
- en
pretty_name: Online Shoppers Purchasing Intention Dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for Online Shoppers Purchasing Intention Dataset
## Dataset Description
- **Homepage**: https://archive-beta.ics.uci.edu/dataset/468/online+shoppers+purchasing+intention+dataset
### Dataset Summary
This dataset is a reupload of the Online Shoppers Purchasing Intention Dataset from the [UCI Machine Learning Repository](https://archive-beta.ics.uci.edu/).
> **NOTE:** The information below is from the original dataset description from UCI's website.
>
> ### Overview
>
> Of the 12,330 sessions in the dataset, 84.5% (10,422) were negative class samples that did not end with shopping,
> and the rest (1908) were positive class samples ending with shopping.
>
> #### Additional Information
>
> The dataset consists of feature vectors belonging to 12,330 sessions. The dataset was formed so that
> each session would belong to a different user in a 1-year period to avoid any tendency to a specific campaign,
> special day, user profile, or period.
| 2,107 | [
[
-0.0439453125,
-0.03118896484375,
0.0178070068359375,
0.0024623870849609375,
-0.01393890380859375,
-0.029998779296875,
0.007259368896484375,
-0.0406494140625,
0.037384033203125,
0.0283355712890625,
-0.06365966796875,
-0.0501708984375,
0.0021648406982421875,
... |
jaydenccc/AI_Storyteller_Dataset | 2023-04-26T19:52:33.000Z | [
"region:us"
] | jaydenccc | null | null | 9 | 5 | 2023-04-26T19:52:30 | ---
dataset_info:
features:
- name: synopsis
dtype: string
- name: short_story
dtype: string
splits:
- name: train
num_bytes: 204642
num_examples: 100
download_size: 129691
dataset_size: 204642
---
# Dataset Card for "AI_Storyteller_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 404 | [
[
-0.03216552734375,
-0.016510009765625,
0.01383209228515625,
0.0172271728515625,
-0.0084686279296875,
0.0179443359375,
0.0200347900390625,
-0.01512908935546875,
0.0478515625,
0.026763916015625,
-0.050018310546875,
-0.046356201171875,
-0.051910400390625,
-0.02... |
TrainingDataPro/face_masks | 2023-09-14T16:45:36.000Z | [
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"code",
"region:us"
] | TrainingDataPro | Dataset includes 250 000 images, 4 types of mask worn on 28 000 unique faces.
All images were collected using the Toloka.ai crowdsourcing service and
validated by TrainingData.pro | @InProceedings{huggingface:dataset,
title = {face_masks},
author = {TrainingDataPro},
year = {2023}
} | 1 | 5 | 2023-04-28T12:29:00 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
language:
- en
tags:
- finance
- code
dataset_info:
features:
- name: photo_1
dtype: image
- name: photo_2
dtype: image
- name: photo_3
dtype: image
- name: photo_4
dtype: image
- name: worker_id
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
- name: sex
dtype: string
splits:
- name: train
num_bytes: 341007536
num_examples: 10
download_size: 100871449
dataset_size: 341007536
---
# Face Mask Detection
Dataset includes 250 000 images, 4 types of mask worn on 28 000 unique faces. All images were collected using the Toloka.ai crowdsourcing service and validated by TrainingData.pro
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=face_masks) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .csv
includes the following information for each media file:
- **WorkerId**: the identifier of the person who provided the media file,
- **Country**: the country of origin of the person,
- **Age**: the age of the person,
- **Sex**: the gender of the person,
- **Type**: the type of media file
- **Link**: the URL to access the media file
# Folder "img" with media files
- containg all the photos
which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will make sure that it contains media files taken by a person whose parameters are specified in the first 4 lines of the .csv file.*
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=face_masks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 2,071 | [
[
-0.027252197265625,
-0.03265380859375,
0.0038394927978515625,
0.032318115234375,
-0.021331787109375,
0.01465606689453125,
0.0033054351806640625,
-0.03436279296875,
0.026123046875,
0.06817626953125,
-0.055999755859375,
-0.06268310546875,
-0.0516357421875,
0.0... |
Maciel/FinCUGE-Instruction | 2023-08-20T02:26:39.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"finance",
"region:us"
] | Maciel | null | null | 3 | 5 | 2023-04-29T10:59:46 | ---
license: apache-2.0
dataset_info:
features:
- name: task
dtype: string
- name: desc
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 62215078
num_examples: 123137
- name: eval
num_bytes: 7548859
num_examples: 15167
download_size: 32078572
dataset_size: 69763937
task_categories:
- question-answering
language:
- zh
tags:
- finance
pretty_name: s
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
## Dataset Description
本数据集包含八项中文金融自然语言处理基准任务,分别为金融新闻摘要(FinNA)、金融新闻公告事件问答(FinQA)、金融新闻分类(FinNL)、金融新闻关系抽取(FinRE)、金融社交媒体文本情绪分类(FinNE)、金融负面消息及其主体判定(FinNSP)、金融因果事件抽取(FinCQA)、金融事件主体抽取(FinESE)。

## Dataset Structure
(1)FinNA
金融新闻摘要数据集。输入一段金融新闻,需要模型生成一句话摘要。其中训练集包含24000条数据,验证集包含3000条数据。
```json
{
"instruction": "根据以下新闻生成摘要。",
"input": "天宇股份公告,预计2021年半年度归属于上公司股东的净利润1.7亿元-2.3亿元,同比下降39.68%-55.41%。公司主营产品沙坦类原料药受低端市场激烈竞争影响,原料药销售价格较去年同期下降;子公司山东昌邑一期项目和京圣药业生产基地建设完成,进入试生产阶段和达产阶段,产能利用率没有完全释放,生产成本阶段性较高等原因导致报告期毛利率较上年同期下降。",
"output": "天宇股份:半年度净利润预降40%-55%"
}
```
(2)FinQA
金融新闻公告事件问答数据集。由DuEE-fin数据集转化得到。输入一段金融新闻或公告,和一个与文本中发生的事件有关的问题,需要模型根据文本生成问题答案。问题的范围包含文本中包含的事件类型,以及某个事件对应的发生时间和人物等要素;答案为问题对应的文本中的事件类型或事件要素的列表。其中训练集包含16000条数据,验证集包含2000条数据。
```json
{
"instruction": "新城悦服务股份回购事件对应的每股交易价格是什么?原标题:新城悦“自救”:1064万港元回购公司190万股股份 来源:新浪乐居 \
7月8日,新城悦服务(01755.hk)发布公告称,公司于今日回购190万股普通股票,占据现有已发行股份的0.23171%。回购股份每股付出价格区间为5.30港元至5.83港元,付出总额为1064万港元。 \
值得注意的是,新城控股(28.500,1.52,5.63%)董事长涉嫌猥亵儿童被刑拘事件发生后第四个交易日(7月8日),新城悦服务股价开始回升,收涨12.20%。 \
据悉,新城控股董事长涉嫌猥亵儿童被刑拘事件发生第三个交易日(7月5日),新城系港股上市房企市值共蒸发约256亿港元。截至7月5日收盘,新城发展(01030.HK)收于6.71港元\/股,市值自事件发生后减少227.11亿港元;新城悦(01755.HK)收于5.08港元\/股,市值自事件发生后减少28.86亿港元。",
"input": "",
"output": "5.30港元至5.83港元"
}
```
(3)FinNL
金融新闻分类数据集。对于给出的金融新闻,需要模型将其多标签分类到可能的十五种类别,类别包括公司、行业、大盘、国际、经济、政策、政治、期货、债券、房地产、外汇、虚拟货币、新冠、能源和其它。其中训练集包含8000条数据,验证集包含1000条数据。
```json
{
"instruction": "新城悦服务股份回购事件对应的每股交易价格是什么?原标题:新城悦“自救”:1064万港元回购公司190万股股份 来源:新浪乐居 \
7月8日,新城悦服务(01755.hk)发布公告称,公司于今日回购190万股普通股票,占据现有已发行股份的0.23171%。回购股份每股付出价格区间为5.30港元至5.83港元,付出总额为1064万港元。 \
值得注意的是,新城控股(28.500,1.52,5.63%)董事长涉嫌猥亵儿童被刑拘事件发生后第四个交易日(7月8日),新城悦服务股价开始回升,收涨12.20%。 \
据悉,新城控股董事长涉嫌猥亵儿童被刑拘事件发生第三个交易日(7月5日),新城系港股上市房企市值共蒸发约256亿港元。截至7月5日收盘,新城发展(01030.HK)收于6.71港元\/股,市值自事件发生后减少227.11亿港元;新城悦(01755.HK)收于5.08港元\/股,市值自事件发生后减少28.86亿港元。",
"input": "",
"output": "5.30港元至5.83港元"
}
```
(4)FinRE
金融新闻关系抽取数据集。对于给出的金融新闻和头实体-尾实体对,需要模型分类实体对的关系到包含空关系的44种关系类别,包含拥有、持股、竞争、收购、交易、合作、减持等财经金融领域的特有关系类别。其中训练集包含7454条数据,验证集包含1489条数据。
```json
{
"instruction": "根据以下文本,描述以下两个实体东方航空和上航之间的关系。",
"input": "东方航空AH股临时停牌传将与上航合并",
"output": "合并"
}
```
(5)FinFE
金融社交媒体文本情绪分类数据集。对于给出的金融社交媒体文本,需要模型分类该文本的情绪为消极-中性-积极三种类别。其中训练集包含8000条数据,验证集包含1000条数据。
```json
{
"instruction": "这个文本的情感倾向是积极、消极还是中性的。",
"input": "补仓补的心力憔悴",
"output": "该文本情感是消极的。"
}
```
(6)FinNSP
金融负面消息及其主体判定数据集。对于给出的金融新闻或社交媒体文本及其中包含的实体,需要模型判断该文本中是否包含有针对某个实体的负面消息,并指出负面消息的主体是哪个实体。其中训练集包含4800条数据,验证集包含600条数据。
```json
{
"instruction": "识别以下内容中的负面金融实体信息。",
"input": "11月5日,警方对众车在线涉嫌非法吸收公众存款案立案侦查",
"output": "文中包含的负面主体:众车在线。"
}
```
(7)FinCQA
金融因果事件抽取数据集。因果事件抽取专注于在文本中识别出具有因果关系的两个事件及其事件参数,并将其整理为机构化数据。该数据集的因果事件数据集包含对大宗商品领域的因果事件识别,识别的事件类型包括台风/地震,供给增加/减少,需求增加/减少,价格上升/下降等可能为原因和结果事件及其对应关系和对应的产品、地区等参数 。其中训练集包含21965条数据,验证集包含2741条数据。
```json
{
"instruction": "下文中有哪些因果事件?铁矿:中长期,今年铁矿供需格局明显改善,巴西矿难及飓风对发运的影响,导致铁矿石全年供应走低",
"input": "",
"output": "台风导致供给减少"
}
```
(8)FinESE
金融事件主体抽取数据集。从真实的新闻语料中,抽取特定事件类型的主体。其中训练集包含11752条数据,验证集包含1464条数据。
```json
{
"instruction": "分析以下新闻,抽取资金账户风险事件相关的主体信息。",
"input": "金一文化违规减持仅””罚酒三杯””未来减持或””仍不手软””雅虎承认发生大规模数据泄露 2亿账户信息被盗科远股份(002380)股东减持202万股套现5989万",
"output": "所属资金账户风险事件的金融主体是雅虎。"
}
``` | 3,805 | [
[
-0.0352783203125,
-0.04974365234375,
0.0178985595703125,
0.0186614990234375,
-0.0325927734375,
-0.008148193359375,
0.0004887580871582031,
-0.01551055908203125,
0.037078857421875,
0.039306640625,
-0.025115966796875,
-0.053131103515625,
-0.042755126953125,
0.0... |
julia-lukasiewicz-pater/small-GPT-wiki-intro-features | 2023-06-11T14:42:23.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"region:us"
] | julia-lukasiewicz-pater | null | null | 0 | 5 | 2023-04-30T15:54:30 | ---
license: cc
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
---
# Small-GPT-wiki-intro-features dataset
This dataset is based on [aadityaubhat/GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro).
It contains 100k randomly selected texts (50k from Wikipedia and 50k generated by ChatGPT).
For each text, various complexity measures were calculated, including e.g. readibility, lexical richness etc.
It can be used for text classification or analysis of linguistic features of human-generated and ChatGPT-generated texts.
## Dataset structure
Features were calculated using various Python libraries, i.e. NLTK, [readability-metrics](https://pypi.org/project/py-readability-metrics/), [lexical-diversity](https://pypi.org/project/lexical-diversity/),
and [TextDescriptives](https://hlasse.github.io/TextDescriptives/). The list of all features and their corresponding sources can be found below:
| Column | Description |
| ------ | ----------- |
| text | human- or ChatGPT-generated text; taken from aadityaubhat/GPT-wiki-intro |
| normalized_bigram_entropy | bigram entropy normalized with estimated maximum entropy; nltk |
| mean_word_length | mean word length; nltk |
| mean_sent_length | mean sentence length; nltk |
| fog | Gunning-Fog; readability-metrics |
| ari | Automated Readability Index; readability-metrics |
| dale_chall | Dale Chall Readability; readability-metrics |
| hdd | Hypergeometric Distribution; lexical-diversity |
| mtld | Measure of lexical textual diversity; lexical-diversity |
| mattr | Moving average type-token ratio; lexical-diversity |
| number_of_ADJ | proportion of adjectives per word; nltk |
| number_of_ADP | proportion of adpositions per word; nltk |
| number_of_ADV | proportion of adverbs per word; nltk |
| number_of_CONJ | proportion of conjunctions per word; nltk |
| number_of_DET | proportion of determiners per word; nltk |
| number_of_NOUN | proportion of nouns per word; nltk |
| number_of_NUM | proportion of numerals per word; nltk |
| number_of_PRT | proportion of particles per word; nltk |
| number_of_PRON | proportion of pronuns per word; nltk |
| number_of_VERB | proportion of verbs per word; nltk |
| number_of_DOT | proportion of punctuation marks per word; nltk |
| number_of_X | proportion of POS tag 'Other' per word; nltk |
| class | binary class, 0 stands for Wikipedia, 1 stands for ChatGPT |
| spacy_perplexity | text perplexity; TextDescriptives |
| entropy | text entropy; TextDescriptives |
| automated_readability_index | Automated Readability Index; TextDescriptives |
| per_word_spacy_perplexity | text perplexity per word; TextDescriptives |
| dependency_distance_mean | mean distance from each token to their dependent; TextDescriptives |
| dependency_distance_std | standard deviation of distance from each token to their dependent; TextDescriptives |
| first_order_coherence | cosine similarity between consecutive sentences; TextDescriptives |
| second_order_coherence | cosine similarity between sentences that are two sentences apart; TextDescriptives |
| smog |SMOG; TextDescriptives |
| prop_adjacent_dependency_relation_mean | mean proportion adjacent dependency relations; TextDescriptives |
| prop_adjacent_dependency_relation_std | standard deviation of proportion adjacent dependency relations; TextDescriptives |
| syllables_per_token_mean | mean of syllables per token; TextDescriptives |
| syllables_per_token_median | median of syllables per token; TextDescriptives |
| token_length_std | standard deviation of token length; TextDescriptives |
| token_length_median | median of token length; TextDescriptives |
| sentence_length_median | median of sentence length; TextDescriptives |
| syllables_per_token_std | standard deviation of syllables per token; TextDescriptives |
| proportion_unique_tokens | proportion of unique tokens; TextDescriptives |
| top_ngram_chr_fraction_3 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| top_ngram_chr_fraction_2 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| top_ngram_chr_fraction_4 | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| proportion_bullet_points | fraction of characters in a document which are contained within the top n-grams. For a specified n-gram range; TextDescriptives |
| flesch_reading_ease | Flesch Reading ease ; TextDescriptives |
| flesch_kincaid_grade | Flesch Kincaid grade; TextDescriptives |
| gunning_fog | Gunning-Fog; TextDescriptives |
| coleman_liau_index | Coleman-Liau Index; TextDescriptives |
| oov_ratio| out-of-vocabulary ratio; TextDescriptives |
## Code
Code that was used to generate this dataset can be found on [Github](https://github.com/julia-lukasiewicz-pater/gpt-wiki-features/tree/main).
| 5,007 | [
[
-0.020782470703125,
-0.06427001953125,
0.024261474609375,
0.0115814208984375,
-0.018951416015625,
-0.014312744140625,
-0.0243682861328125,
-0.01079559326171875,
0.0235443115234375,
0.01904296875,
-0.02947998046875,
-0.062347412109375,
-0.05181884765625,
0.02... |
TrajanovRisto/esg-sentiment | 2023-04-30T20:28:31.000Z | [
"region:us"
] | TrajanovRisto | null | null | 4 | 5 | 2023-04-30T20:28:28 | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Environmental Negative
dtype: int32
- name: Environmental Neutral
dtype: int32
- name: Environmental Positive
dtype: int32
- name: Governance Negative
dtype: int32
- name: Governance Neutral
dtype: int32
- name: Governance Positive
dtype: int32
- name: Social Negative
dtype: int32
- name: Social Neutral
dtype: int32
- name: Social Positive
dtype: int32
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 135470.12812960235
num_examples: 611
- name: test
num_bytes: 15076.871870397643
num_examples: 68
download_size: 80141
dataset_size: 150547.0
---
# Dataset Card for "esg-sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 896 | [
[
-0.0589599609375,
-0.02490234375,
0.01397705078125,
0.017364501953125,
-0.0182342529296875,
-0.0003733634948730469,
0.00601959228515625,
-0.0061187744140625,
0.06829833984375,
0.022918701171875,
-0.07275390625,
-0.070068359375,
-0.05029296875,
-0.01707458496... |
Noxturnix/blognone-20230430 | 2023-05-05T21:47:56.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-3.0",
"region:us"
] | Noxturnix | null | null | 0 | 5 | 2023-05-01T04:24:03 | ---
license: cc-by-3.0
dataset_info:
features:
- name: title
dtype: string
- name: author
dtype: string
- name: date
dtype: string
- name: tags
sequence: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 51748027
num_examples: 18623
download_size: 21759892
dataset_size: 51748027
task_categories:
- text-generation
- text-classification
language:
- th
size_categories:
- 10K<n<100K
---
# Dataset Card for blognone-20230430
## Dataset Summary
[Blognone](https://www.blognone.com/) posts from January 1, 2020 to April 30, 2023.
## Features
- title: (str)
- author: (str)
- date: (str)
- tags: (list)
- content: (str)
## Licensing Information
Blognone posts are published are licensed under the [Creative Commons Attribution 3.0 Thailand](https://creativecommons.org/licenses/by/3.0/th/deed.en) (CC BY 3.0 TH). | 882 | [
[
-0.04583740234375,
-0.01256561279296875,
0.013641357421875,
0.05157470703125,
-0.05694580078125,
-0.0197906494140625,
0.006275177001953125,
-0.0264739990234375,
0.05523681640625,
0.061553955078125,
-0.0701904296875,
-0.06866455078125,
-0.050689697265625,
0.0... |
miladfa7/5-Flower-Types-Classification-Dataset | 2023-05-02T04:15:51.000Z | [
"region:us"
] | miladfa7 | null | null | 0 | 5 | 2023-05-01T11:01:41 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
mirfan899/phoneme_asr | 2023-06-17T12:32:48.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:bsd",
"region:us"
] | mirfan899 | null | null | 0 | 5 | 2023-05-03T15:04:17 | ---
license: bsd
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: timit phoneme datas
size_categories:
- 1K<n<10K
---
This dataset contains the phonetic transcriptions of audios as well as English transcripts. Phonetic transcriptions are based on the g2p model. It can be used to train phoneme recognition
model using wav2vec2. | 356 | [
[
0.0150604248046875,
-0.01617431640625,
0.005901336669921875,
-0.00025463104248046875,
0.002532958984375,
-0.01016998291015625,
-0.0024967193603515625,
-0.032562255859375,
0.0347900390625,
0.047576904296875,
-0.046966552734375,
-0.022308349609375,
-0.023056030273... |
tafseer-nayeem/review_helpfulness_prediction | 2023-08-28T21:56:01.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"Human-Centered NLP",
"Helpfulness Prediction",
"Review Helpfulness Prediction",
"User Review Analysis",
"Dataset",
"Review Helpfulness Prediction Dataset",
"doi:10.57967/hf/0613",
"re... | tafseer-nayeem | null | null | 1 | 5 | 2023-05-04T00:28:02 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- Human-Centered NLP
- Helpfulness Prediction
- Review Helpfulness Prediction
- User Review Analysis
- Dataset
- Review Helpfulness Prediction Dataset
pretty_name: Review Helpfulness Prediction (RHP) Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for Review Helpfulness Prediction (RHP) Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction](https://aclanthology.org/2023.findings-eacl.125/)
- **Leaderboard:**
### Dataset Summary
The success of e-commerce services is largely dependent on helpful reviews that aid customers in making informed purchasing decisions. However, some reviews may be spammy or biased, making it challenging to identify which ones are helpful. Current methods for identifying helpful reviews only focus on the review text, ignoring the importance of who posted the review and when it was posted. Additionally, helpfulness votes may be scarce for less popular products or recently submitted reviews. To address these challenges, the we introduce a dataset and task for review helpfulness prediction, incorporating the reviewers' attributes and review date, and build the dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com/).
### Languages
English
## Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("tafseer-nayeem/review_helpfulness_prediction")
# Divide the dataset into train, test, and validation sets
train_dataset = dataset["train"]
test_dataset = dataset["test"]
validation_dataset = dataset["validation"]
print(f'Number of training samples: {len(train_dataset)}')
print(f'Number of testing samples: {len(test_dataset)}')
print(f'Number of validation samples: {len(validation_dataset)}')
```
**If the above code doesn't work due to changes in the Hugging Face datasets library**, download the `train.json`, `test.json`, and `validation.json` from the data directory and use the following alternative code:
```python
import json
def load_json(filename):
with open(filename, 'r') as f:
data = json.load(f)
return data
# Load the data
train_data = load_json('train.json')
test_data = load_json('test.json')
validation_data = load_json('validation.json')
```
## Dataset Structure
### Data Instances
One example from the `test` split of the dataset is given below in JSON format.
```
{
"user_review_posted": 28,
"user_total_helpful_votes": 78,
"expertise": 0.013414038240254,
"user_cities_visited": 89,
"review_days": 0.39430449069003204,
"helpful_class": 4,
"review_text": "Had to see for myself. Over priced, bloviated, cheap. I am highly sensitive to mold, and it permeated the hotel. Sheets were damp, pipes blew hot air even when turned off. Considering all the hype, that's what this place is, all hype for too much money."
}
```
### Data Fields
- `user_review_posted`: An integer representing the number of reviews posted by the reviewer.
- `user_total_helpful_votes`: An integer representing the cumulative helpful votes received by the reviewer.
- `expertise`: A normalized floating point number representing the mean number of helpful votes received per review.
- `user_cities_visited`: An integer representing the number of cities visited by the reviewer.
- `review_days`: A normalized floating point number representing the relative age of a review in days.
- `helpful_class`: An integer representing the degree of helpfulness of a review.
- `review_text`: A string representing the review text.
### Data Splits
The following Table presents the summary of our dataset with train, validation, and test splits.
| | Train | Valid | Test |
|:---------------:|---------|--------|-------|
| Total #Samples | 145,381 | 8,080 | 8,080 |
| Avg. #Sentences | 7.82 | 7.8 | 7.81 |
| Avg. #Words | 152.37 | 152.25 | 148.9 |
## Dataset Creation
We build our dataset by scraping reviews from [TripAdvisor](https://www.tripadvisor.com). Out of 225,664 reviews retrieved, close to one third have no helpful votes. We filter such reviews, and this reduces the number of reviews to 161,541. We leverage a logarithmic scale to categorize the reviews based on the number of votes received. Specifically, we map the number of votes into five intervals (i.e., [1,2), [2, 4), [4, 8), [8, 16), [16, infinity)), each corresponding to a helpfulness score of {1, 2, 3, 4, 5}, where the higher the score, the more helpful the review. More details can be found in our [EACL 2023](https://aclanthology.org/2023.findings-eacl.125/) paper.
### Discussion of Ethics
In our data scraping process, we took into account ethical considerations. We obtained data at an appropriate pace, avoiding any potential DDoS attacks.
### Known Limitations
Limitation of our dataset is that we only worked with reviews written in English. As a result, we filter out the reviews written in other languages and notice code-switched reviews when the reviewers alternate between two or more languages in a single review.
## Additional Information
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the resources or it's relevant to your work, please cite [the paper](https://aclanthology.org/2023.findings-eacl.125/).
```
@inproceedings{nayeem-rafiei-2023-role,
title = "On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction",
author = "Nayeem, Mir Tafseer and
Rafiei, Davood",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.125",
pages = "1684--1692",
abstract = "Helpful reviews have been essential for the success of e-commerce services, as they help customers make quick purchase decisions and benefit the merchants in their sales. While many reviews are informative, others provide little value and may contain spam, excessive appraisal, or unexpected biases. With the large volume of reviews and their uneven quality, the problem of detecting helpful reviews has drawn much attention lately. Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who post the reviews and (2) when the reviews are posted. Moreover, the helpfulness votes suffer from scarcity for less popular products and recently submitted (a.k.a., cold-start) reviews. To address these challenges, we introduce a dataset and develop a model that integrates the reviewer{'}s expertise, derived from the past review history of the reviewers, and the temporal dynamics of the reviews to automatically assess review helpfulness. We conduct experiments on our dataset to demonstrate the effectiveness of incorporating these factors and report improved results compared to several well-established baselines.",
}
``` | 7,493 | [
[
-0.03564453125,
-0.03533935546875,
0.036285400390625,
0.0304412841796875,
-0.02203369140625,
-0.0193023681640625,
-0.01226043701171875,
-0.04327392578125,
0.035369873046875,
0.02508544921875,
-0.03851318359375,
-0.038848876953125,
-0.03680419921875,
0.030639... |
sathviknp/robocall-audio-and-transcript | 2023-05-09T01:50:45.000Z | [
"region:us"
] | sathviknp | null | null | 0 | 5 | 2023-05-05T16:53:19 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: language
dtype: string
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 271153415.115
num_examples: 1101
download_size: 168083935
dataset_size: 271153415.115
---
# Dataset Card for "robocall-audio-and-transcript"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to Cite?
```BibTex
@inproceedings{usenix_snorcall,
author = {Sathvik Prasad and Trevor Dunlap and Alexander Ross and Bradley Reaves},
title = {Diving into Robocall Content with SnorCall},
booktitle = {32nd {USENIX} Security Symposium ({USENIX} Security)},
year = {2023},
publisher = {USENIX Association},
month = aug,
}
``` | 800 | [
[
0.0013256072998046875,
-0.0091400146484375,
0.0206451416015625,
0.000013172626495361328,
-0.012237548828125,
0.00876617431640625,
-0.0035343170166015625,
-0.01503753662109375,
0.054656982421875,
0.039764404296875,
-0.0457763671875,
-0.036224365234375,
-0.0406494... |
thu-coai/kdconv | 2023-05-08T10:39:46.000Z | [
"language:zh",
"license:apache-2.0",
"arxiv:2004.04100",
"region:us"
] | thu-coai | null | null | 3 | 5 | 2023-05-08T08:25:16 | ---
license: apache-2.0
language:
- zh
---
The KDConv dataset. [GitHub repo](https://github.com/thu-coai/KdConv). [Original paper](https://arxiv.org/abs/2004.04100).
```bib
@inproceedings{zhou-etal-2020-kdconv,
title = "{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation",
author = "Zhou, Hao and
Zheng, Chujie and
Huang, Kaili and
Huang, Minlie and
Zhu, Xiaoyan",
booktitle = "ACL",
year = "2020"
}
```
| 502 | [
[
-0.0198822021484375,
-0.036163330078125,
0.018707275390625,
-0.0010890960693359375,
0.003688812255859375,
0.02252197265625,
-0.0135955810546875,
-0.00860595703125,
0.01751708984375,
0.051483154296875,
-0.053985595703125,
-0.055206298828125,
-0.0253448486328125,
... |
h2oai/h2ogpt-fortune2000-personalized | 2023-05-09T05:08:02.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | 0 | 5 | 2023-05-09T05:06:47 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `h2ogpt-fortune2000-personalized` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `11363`
- Number of columns: `4`
- Column names: `['input', 'prompt_type', 'source', 'id']`
## Source
- [Fortune 2000 companies from Wikipedia](https://github.com/h2oai/h2ogpt/blob/b1ea74c0088884ebff97f1ccddbfb3f393e29e44/create_data.py#L1743)
| 650 | [
[
-0.016998291015625,
-0.0304107666015625,
0.016876220703125,
0.0257415771484375,
-0.0140228271484375,
0.003574371337890625,
0.006778717041015625,
-0.019500732421875,
0.00980377197265625,
0.0413818359375,
-0.02740478515625,
-0.0287322998046875,
-0.0172576904296875... |
hohai/webtext | 2023-05-10T10:49:08.000Z | [
"region:us"
] | hohai | null | null | 0 | 5 | 2023-05-10T10:46:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
silk-road/Vanilla-chinese-alpaca-luotuo | 2023-05-12T23:17:41.000Z | [
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"region:us"
] | silk-road | null | null | 13 | 5 | 2023-05-10T10:50:05 | ---
license: apache-2.0
language:
- zh
pretty_name: f
size_categories:
- 10K<n<100K
---
Vanilla骆驼是骆驼项目在23年3月21日启动的第一个数据集和模型
我们会陆续将更多数据集发布到hf,包括
- [ ] Coco Caption的中文翻译
- [ ] CoQA的中文翻译
- [ ] CNewSum的Embedding数据
- [ ] 增广的开放QA数据
- [ ] WizardLM的中文翻译
如果你也在做这些数据集的筹备,欢迎来联系我们,避免重复花钱。
# 骆驼(Luotuo): 开源中文大语言模型
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
骆驼(Luotuo)项目是由[冷子昂](https://blairleng.github.io) @ 商汤科技, 陈启源 @ 华中师范大学 以及 李鲁鲁 @ 商汤科技 发起的中文大语言模型开源项目,包含了一系列语言模型。
( 注意: [陈启源](https://qiyuan-chen.github.io/) 正在寻找2024推免导师,欢迎联系 )
骆驼项目**不是**商汤科技的官方产品。
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
```
| 988 | [
[
-0.0162811279296875,
-0.0806884765625,
0.017425537109375,
0.038055419921875,
-0.03497314453125,
-0.007358551025390625,
-0.00595855712890625,
-0.041534423828125,
0.04046630859375,
0.037139892578125,
-0.041290283203125,
-0.056304931640625,
-0.02349853515625,
-... |
lighteval/lsat_qa | 2023-05-16T08:06:55.000Z | [
"region:us"
] | lighteval | null | 2 | 5 | 2023-05-10T15:33:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... | |
ybelkada/oasst1-tiny-subset | 2023-05-11T14:07:03.000Z | [
"region:us"
] | ybelkada | null | null | 1 | 5 | 2023-05-11T14:06:58 | ---
dataset_info:
features:
- name: messages
dtype: string
splits:
- name: train
num_bytes: 59104494.0
num_examples: 39663
- name: test
num_bytes: 6567166.0
num_examples: 4407
download_size: 38767143
dataset_size: 65671660.0
---
# Dataset Card for "oasst1-tiny-subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.03460693359375,
-0.0174560546875,
0.0219573974609375,
-0.0109100341796875,
-0.022979736328125,
-0.00988006591796875,
0.0269012451171875,
-0.0004897117614746094,
0.06591796875,
0.0232391357421875,
-0.0667724609375,
-0.036651611328125,
-0.036041259765625,
-... |
alzoubi36/privaseer | 2023-06-21T12:32:56.000Z | [
"license:gpl-3.0",
"region:us"
] | alzoubi36 | null | null | 0 | 5 | 2023-05-17T15:42:14 | ---
license: gpl-3.0
dataset_info:
features:
- name: hash
dtype: string
- name: url
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 17080868768
num_examples: 2180300
download_size: 8133175578
dataset_size: 17080868768
---
## Privaseer Dataset
Huggingface version of the [Privaseer](https://privaseer.ist.psu.edu/) dataset.
<pre>
@inproceedings{srinath-etal-2021-privacy,
title = "Privacy at Scale: Introducing the {P}riva{S}eer Corpus of Web Privacy Policies",
author = "Srinath, Mukund and
Wilson, Shomir and
Giles, C Lee",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.532",
doi = "10.18653/v1/2021.acl-long.532",
pages = "6829--6839",
abstract = "Organisations disclose their privacy practices by posting privacy policies on their websites. Even though internet users often care about their digital privacy, they usually do not read privacy policies, since understanding them requires a significant investment of time and effort. Natural language processing has been used to create experimental tools to interpret privacy policies, but there has been a lack of large privacy policy corpora to facilitate the creation of large-scale semi-supervised and unsupervised models to interpret and simplify privacy policies. Thus, we present the PrivaSeer Corpus of 1,005,380 English language website privacy policies collected from the web. The number of unique websites represented in PrivaSeer is about ten times larger than the next largest public collection of web privacy policies, and it surpasses the aggregate of unique websites represented in all other publicly available privacy policy corpora combined. We describe a corpus creation pipeline with stages that include a web crawler, language detection, document classification, duplicate and near-duplicate removal, and content extraction. We employ an unsupervised topic modelling approach to investigate the contents of policy documents in the corpus and discuss the distribution of topics in privacy policies at web scale. We further investigate the relationship between privacy policy domain PageRanks and text features of the privacy policies. Finally, we use the corpus to pretrain PrivBERT, a transformer-based privacy policy language model, and obtain state of the art results on the data practice classification and question answering tasks.",}
</pre> | 2,808 | [
[
-0.03375244140625,
-0.050811767578125,
0.0153961181640625,
0.032745361328125,
0.0034027099609375,
-0.01163482666015625,
-0.040557861328125,
-0.0251617431640625,
0.0017194747924804688,
0.046234130859375,
-0.0218048095703125,
-0.045166015625,
-0.036407470703125,
... |
Braddy/rsicd_deduplicate_95 | 2023-05-17T18:01:44.000Z | [
"region:us"
] | Braddy | null | null | 0 | 5 | 2023-05-17T17:42:51 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: captions
sequence: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 449737330.25
num_examples: 8734
- name: test
num_bytes: 60117169.375
num_examples: 1093
- name: valid
num_bytes: 57297204.25
num_examples: 1094
download_size: 528918987
dataset_size: 567151703.875
---
# Dataset Card for "rsicd_deduplicate_95"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 584 | [
[
-0.036041259765625,
-0.004268646240234375,
0.004150390625,
0.020050048828125,
-0.02655029296875,
-0.0018939971923828125,
0.01434326171875,
-0.004955291748046875,
0.036834716796875,
0.045562744140625,
-0.0687255859375,
-0.05731201171875,
-0.05157470703125,
0.... |
Soyoung/HistRED | 2023-08-01T15:05:24.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:ko",
"license:cc-by-nc-nd-4.0",
"art",
"arxiv:2307.04285",
"region:us"
] | Soyoung | null | null | 1 | 5 | 2023-05-18T13:00:36 | ---
license: cc-by-nc-nd-4.0
task_categories:
- token-classification
language:
- ko
tags:
- art
size_categories:
- 1K<n<10K
---
This is the official code for **HistRED: A Historical Document-Level Relation Extraction Dataset** (ACL 2023).
All materials related to this paper can be found here.
- [ACL Anthology](https://aclanthology.org/2023.acl-long.180/): Official proceeding publication
- [Virtual-ACL 2023](https://virtual2023.aclweb.org/paper_P536.html#slides): You can view papers, posters, and presentation slides.
- [arXiv](https://arxiv.org/abs/2307.04285): This is the camera-ready version, which is a key part of this paper.
Note that this dataset is open under [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/) license.
The same code (except the dataset) can be seen in [Github](https://github.com/dudrrm/HistRED/tree/main)
```python
from datasets import load_dataset
dataset = load_dataset("Soyoung/HistRED")
```
# Dataset Example
Due to the complexity of the dataset, we replace the dataset preview with an example figure.
The text is translated into English for comprehension (*), however, unlike the figure, the dataset does not include English-translated text, only containing Korean and Hanja.
Also, only one relation is shown for readability.
Relation information includes
1. subject and object entities for Korean and Hanja *(sbj_kor, sbj_han, obj_kor, obj_han)*,
2. a relation type *(label)*,
3. and evidence sentence index(es) for each language *(evidence_kor, evidence_han)*.
Metadata contains additional information, such as which book the text is extracted from.

# Corpus of HistRED: \<\< Yeonhaengnok \>\>
In this dataset, we choose *Yeonhaengnok*, a collection of records originally written in Hanja, classical Chinese writing, which has later been translated into Korean.
[Joseon](https://en.wikipedia.org/wiki/Joseon), the last dynastic kingdom of Korea, lasted just over five centuries, from 1392 to 1897, and many aspects of Korean traditions and customs trace their roots back to this era.
Numerous historical documents exist from the Joseon dynasty, including *Annals of Joseon Dynasty* ([AJD](https://en.wikipedia.org/wiki/Veritable_Records_of_the_Joseon_Dynasty)) and *Diaries of the Royal Secretariats* ([DRS](https://en.wikipedia.org/wiki/Seungjeongwon_ilgi)).
Note that the majority of Joseon's records were written in Hanja, the archaic Chinese writing that differs from modern Chinese because the Korean language had not been standardized until much later.
In short, Yeonhaengnok is a travel diary from the Joseon period. In the past, traveling to other places, particularly to foreign countries, was rare.
Therefore, intellectuals who traveled to Chung (also referred to as the [Qing dynasty](https://en.wikipedia.org/wiki/Qing_dynasty)) meticulously documented their journeys, and Yeonhaengnok is a compilation of these accounts.
Diverse individuals from different generations recorded their business trips following similar routes from Joseon to Chung, focusing on people, products, and events they encountered.
The Institute for the Translation of Korean Classics (ITKC) has open-sourced the original and their translated texts for many historical documents, promoting active historical research.
The entire documents were collected from an open-source database at https://db.itkc.or.kr/.
# Properties
- Our dataset contains (i) named entities, (ii) relations between the entities, and (iii) parallel relationships between Korean and Hanja texts.
- <code style="color : red"> dataset.py </code> return processed dataset that can be easily applied to general NLP models.
- For monolingual setting: *KoreanDataset*, *HanjaDataset*
- For Bilingual setting: *JointDataset*
- <code style="color : red"> ner_map.json </code> and <code style="color : red"> label_map.json </code> are the mapping dictionaries from label classes to indexes.
- Sequence level (SL) is a unit of sequence length for extracting self-contained sub-texts without losing context information for each relation in the text. Each folder SL-k indicates that SL is k.
# Dataset usages
- Testbed for evaluating the model performance when varying the sequence length.
- Relation extraction task especially on Non-English or historical corpus.
# Citation
```
@inproceedings{yang-etal-2023-histred,
title = "{H}ist{RED}: A Historical Document-Level Relation Extraction Dataset",
author = "Yang, Soyoung and
Choi, Minseok and
Cho, Youngwoo and
Choo, Jaegul",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.180",
pages = "3207--3224",
}
```
| 4,910 | [
[
-0.018585205078125,
-0.0281219482421875,
0.0283660888671875,
-0.00551605224609375,
-0.01476287841796875,
-0.02142333984375,
-0.0224456787109375,
-0.033782958984375,
0.017822265625,
0.043426513671875,
-0.041839599609375,
-0.059356689453125,
-0.01386260986328125,
... |
PocketDoc/RUCAIBox-Story-Generation-Alpaca | 2023-05-18T21:58:55.000Z | [
"task_categories:text-generation",
"language:en",
"region:us"
] | PocketDoc | null | null | 5 | 5 | 2023-05-18T20:46:19 | ---
task_categories:
- text-generation
language:
- en
---
https://huggingface.co/datasets/RUCAIBox/Story-Generation
RUC AI Box HC Story Generation augmented and converted to alpaca format.
No filtering has been done. | 218 | [
[
-0.053863525390625,
-0.06292724609375,
0.0279083251953125,
0.03564453125,
-0.026123046875,
0.00124359130859375,
0.0213470458984375,
-0.060028076171875,
0.068359375,
0.06439208984375,
-0.091064453125,
-0.038726806640625,
-0.026763916015625,
0.0093765258789062... |
CVdatasets/food27 | 2023-05-18T20:53:43.000Z | [
"region:us"
] | CVdatasets | null | null | 0 | 5 | 2023-05-18T20:52:49 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple_pie
'1': beef_tartare
'2': beignets
'3': carrot_cake
'4': cheesecake
'5': cheese_plate
'6': chicken_wings
'7': chocolate_cake
'8': chocolate_mousse
'9': dumplings
'10': edamame
'11': filet_mignon
'12': french_fries
'13': fried_calamari
'14': guacamole
'15': ice_cream
'16': macarons
'17': miso_soup
'18': nachos
'19': onion_rings
'20': pizza
'21': poutine
'22': red_velvet_cake
'23': steak
'24': strawberry_shortcake
'25': tiramisu
'26': waffles
splits:
- name: train
num_bytes: 1010337492.0
num_examples: 20250
- name: validation
num_bytes: 334516930.25
num_examples: 6750
download_size: 1327834336
dataset_size: 1344854422.25
---
# Dataset Card for "food27"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,223 | [
[
-0.034271240234375,
-0.00583648681640625,
0.00738525390625,
0.0112762451171875,
0.01262664794921875,
0.003749847412109375,
0.01543426513671875,
-0.01251983642578125,
0.0657958984375,
0.031341552734375,
-0.062347412109375,
-0.049285888671875,
-0.0439453125,
-... |
AlekseyKorshuk/lmeh-chai-davinci-vs-lit | 2023-05-18T22:37:19.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 0 | 5 | 2023-05-18T22:08:43 | ---
dataset_info:
features:
- name: davinci
dtype: string
- name: lit
dtype: string
- name: prompt
dtype: string
- name: api_prompt
dtype: string
splits:
- name: test
num_bytes: 402675309
num_examples: 10000
download_size: 200661267
dataset_size: 402675309
---
# Dataset Card for "lmeh-chai-davinci-vs-lit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 481 | [
[
-0.03399658203125,
-0.0308837890625,
0.0301055908203125,
-0.0166778564453125,
-0.019317626953125,
-0.004547119140625,
0.01438140869140625,
-0.01387786865234375,
0.060455322265625,
0.0267791748046875,
-0.06610107421875,
-0.03790283203125,
-0.0309906005859375,
... |
yanchao/cifar10buqi | 2023-05-19T07:00:52.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"chemistry",
"region:us"
] | yanchao | The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images
per class. There are 50000 training images and 10000 test images. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | 0 | 5 | 2023-05-19T05:56:55 | ---
license: apache-2.0
language:
- en
tags:
- chemistry
pretty_name: buqi
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
buqi
### Supported Tasks and Leaderboards
buqi
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,388 | [
[
-0.0239410400390625,
-0.024658203125,
-0.005863189697265625,
0.0270233154296875,
-0.015960693359375,
0.01427459716796875,
-0.01812744140625,
-0.0169677734375,
0.025054931640625,
0.04742431640625,
-0.056884765625,
-0.08221435546875,
-0.04437255859375,
0.00917... |
asoria/duorc | 2023-05-19T14:59:33.000Z | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"sourc... | asoria | DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie. | @inproceedings{DuoRC,
author = { Amrita Saha and Rahul Aralikatte and Mitesh M. Khapra and Karthik Sankaranarayanan},title = {{DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension}},
booktitle = {Meeting of the Association for Computational Linguistics (ACL)},
year = {2018}
} | 0 | 5 | 2023-05-19T14:58:04 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
- text2text-generation
task_ids:
- abstractive-qa
- extractive-qa
paperswithcode_id: duorc
pretty_name: DuoRC
configs:
- ParaphraseRC
- SelfRC
dataset_info:
- config_name: SelfRC
features:
- name: plot_id
dtype: string
- name: plot
dtype: string
- name: title
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: no_answer
dtype: bool
splits:
- name: train
num_bytes: 239852925
num_examples: 60721
- name: validation
num_bytes: 51662575
num_examples: 12961
- name: test
num_bytes: 49142766
num_examples: 12559
download_size: 34462660
dataset_size: 340658266
- config_name: ParaphraseRC
features:
- name: plot_id
dtype: string
- name: plot
dtype: string
- name: title
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: no_answer
dtype: bool
splits:
- name: train
num_bytes: 496683105
num_examples: 69524
- name: validation
num_bytes: 106510545
num_examples: 15591
- name: test
num_bytes: 115215816
num_examples: 15857
download_size: 62921050
dataset_size: 718409466
---
# Dataset Card for duorc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DuoRC](https://duorc.github.io/)
- **Repository:** [GitHub](https://github.com/duorc/duorc)
- **Paper:** [arXiv](https://arxiv.org/abs/1804.07927)
- **Leaderboard:** [DuoRC Leaderboard](https://duorc.github.io/#leaderboard)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots.
### Supported Tasks and Leaderboards
- `abstractive-qa` : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) may be used for this task.
- `extractive-qa`: The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). [BertForQuestionAnswering](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) or any other similar model may be used for this task.
### Languages
The text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
```
{'answers': ['They arrived by train.'], 'no_answer': False, 'plot': "200 years in the future, Mars has been colonized by a high-tech company.\nMelanie Ballard (Natasha Henstridge) arrives by train to a Mars mining camp which has cut all communication links with the company headquarters. She's not alone, as she is with a group of fellow police officers. They find the mining camp deserted except for a person in the prison, Desolation Williams (Ice Cube), who seems to laugh about them because they are all going to die. They were supposed to take Desolation to headquarters, but decide to explore first to find out what happened.They find a man inside an encapsulated mining car, who tells them not to open it. However, they do and he tries to kill them. One of the cops witnesses strange men with deep scarred and heavily tattooed faces killing the remaining survivors. The cops realise they need to leave the place fast.Desolation explains that the miners opened a kind of Martian construction in the soil which unleashed red dust. Those who breathed that dust became violent psychopaths who started to build weapons and kill the uninfected. They changed genetically, becoming distorted but much stronger.The cops and Desolation leave the prison with difficulty, and devise a plan to kill all the genetically modified ex-miners on the way out. However, the plan goes awry, and only Melanie and Desolation reach headquarters alive. Melanie realises that her bosses won't ever believe her. However, the red dust eventually arrives to headquarters, and Melanie and Desolation need to fight once again.", 'plot_id': '/m/03vyhn', 'question': 'How did the police arrive at the Mars mining camp?', 'question_id': 'b440de7d-9c3f-841c-eaec-a14bdff950d1', 'title': 'Ghosts of Mars'}
```
### Data Fields
- `plot_id`: a `string` feature containing the movie plot ID.
- `plot`: a `string` feature containing the movie plot text.
- `title`: a `string` feature containing the movie title.
- `question_id`: a `string` feature containing the question ID.
- `question`: a `string` feature containing the question text.
- `answers`: a `list` of `string` features containing list of answers.
- `no_answer`: a `bool` feature informing whether the question has no answer or not.
### Data Splits
The data is split into a training, dev and test set in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. The final split sizes are as follows:
Name Train Dec Test
SelfRC 60721 12961 12599
ParaphraseRC 69524 15591 15857
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
Wikipedia and IMDb movie plots
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
For SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots.
For ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots.
#### Who are the annotators?
Amazon Mechanical Turk Workers
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was intially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaborated work between IIT Madras and IBM Research.
### Licensing Information
[MIT License](https://github.com/duorc/duorc/blob/master/LICENSE)
### Citation Information
```
@inproceedings{DuoRC,
author = { Amrita Saha and Rahul Aralikatte and Mitesh M. Khapra and Karthik Sankaranarayanan},
title = {{DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension}},
booktitle = {Meeting of the Association for Computational Linguistics (ACL)},
year = {2018}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. | 9,122 | [
[
-0.0435791015625,
-0.053558349609375,
0.02691650390625,
-0.00812530517578125,
-0.0161895751953125,
0.0216522216796875,
0.007061004638671875,
-0.00579071044921875,
0.0230712890625,
0.04840087890625,
-0.058502197265625,
-0.03173828125,
-0.043487548828125,
0.02... |
voidful/EduQG | 2023-05-20T17:01:16.000Z | [
"region:us"
] | voidful | null | null | 0 | 5 | 2023-05-20T17:00:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
GeorgeGuo/detect | 2023-05-23T07:15:59.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"music",
"region:us"
] | GeorgeGuo | null | null | 0 | 5 | 2023-05-23T07:06:39 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- zh
tags:
- music
size_categories:
- 10K<n<100K
---
This is dataset for test | 151 | [
[
-0.0254669189453125,
-0.03656005859375,
-0.0227813720703125,
0.0193634033203125,
-0.005550384521484375,
-0.01107025146484375,
0.01495361328125,
0.0250701904296875,
0.0001494884490966797,
0.050994873046875,
-0.063232421875,
-0.019073486328125,
-0.011474609375,
... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.