Cohere/miracl-ja-queries-22-12 | Cohere | 2023-02-06T11:57:00Z | 31 | 1 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:57:00Z | 2023-01-31T09:20:40.000Z | 2023-01-31T09:20:40 | ---
annotations_creators:
- expert-generated
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity.
Compare the query embeddings against the corpus embeddings either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
# Load documents + embeddings
docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-ja-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for torch.mm

# Compute dot scores between the query embedding and all document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
# Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
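Once you have a query embedding from the API, scoring it against corpus embeddings is a plain dot product. A minimal NumPy sketch, with small placeholder vectors standing in for real embeddings:

```python
import numpy as np

# Placeholder vectors standing in for real embeddings returned by co.embed
# and loaded from the corpus dataset (assumption: same dimensionality)
query_embedding = np.array([0.1, 0.3, -0.2])
doc_embeddings = np.array([
    [0.1, 0.3, -0.2],   # doc 0: same direction as the query -> highest score
    [-0.1, -0.3, 0.2],  # doc 1: opposite direction -> lowest score
    [0.0, 0.1, 0.0],    # doc 2
])

# Dot-product scores between the query and every document
scores = doc_embeddings @ query_embedding

# Indices of the top-2 documents, best first
top_k = np.argsort(-scores)[:2]
print(top_k.tolist())  # -> [0, 2]
```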
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
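As an illustration, hit@3 can be computed as the fraction of queries whose top-3 results contain at least one relevant document. A sketch with hypothetical ranked lists and relevance judgments (not the official MIRACL evaluation code):

```python
def hit_at_k(ranked_doc_ids, relevant_doc_ids, k=3):
    """1.0 if any of the top-k ranked docs is relevant, else 0.0."""
    return float(any(doc_id in relevant_doc_ids for doc_id in ranked_doc_ids[:k]))

# Hypothetical rankings and relevance judgments for three queries
rankings = [["d1", "d2", "d3"], ["d4", "d5", "d6"], ["d7", "d8", "d9"]]
relevant = [{"d2"}, {"d9"}, {"d7"}]

scores = [hit_at_k(r, rel) for r, rel in zip(rankings, relevant)]
print(sum(scores) / len(scores))  # 2 of 3 queries have a relevant doc in the top 3
```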
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
range3/wikipedia-ja-20230101 | range3 | 2023-02-04T05:44:41Z | 31 | 3 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | 2023-02-04T05:44:41Z | 2023-02-04T04:29:29.000Z | 2023-02-04T04:29:29 | ---
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
language:
- ja
---
# range3/wikipedia-ja-20230101
This dataset consists of a parquet file from the wikipedia dataset with only Japanese data extracted. It is generated by the following python code.
このデータセットは、wikipediaデータセットの日本語データのみを抽出したparquetファイルで構成されます。以下のpythonコードによって生成しています。
```py
import datasets
dss = datasets.load_dataset(
"wikipedia",
language="ja",
date="20230101",
beam_runner="DirectRunner",
)
for split, ds in dss.items():
ds.to_parquet(f"wikipedia-ja-20230101/{split}.parquet")
```
JAWCF/objects | JAWCF | 2023-02-10T01:51:10Z | 31 | 0 | null | [
"region:us"
] | 2023-02-10T01:51:10Z | 2023-02-10T01:31:49.000Z | 2023-02-10T01:31:49 | Entry not found |
IlyaGusev/ru_stackoverflow | IlyaGusev | 2023-03-09T23:48:16Z | 31 | 8 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:ru",
"license:other",
"region:us"
] | 2023-03-09T23:48:16Z | 2023-02-13T14:32:35.000Z | 2023-02-13T14:32:35 | ---
license: other
task_categories:
- text-generation
- question-answering
language:
- ru
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: question_id
dtype: uint32
- name: url
dtype: string
- name: answer_count
dtype: uint32
- name: text_html
dtype: string
- name: text_markdown
dtype: string
- name: score
dtype: int32
- name: title
dtype: string
- name: tags
sequence: string
- name: views
dtype: uint64
- name: author
dtype: string
- name: timestamp
dtype: uint64
- name: comments
sequence:
- name: text
dtype: string
- name: author
dtype: string
- name: comment_id
dtype: uint32
- name: score
dtype: int32
- name: timestamp
dtype: uint64
- name: answers
sequence:
- name: answer_id
dtype: uint32
- name: is_accepted
dtype: uint8
- name: text_html
dtype: string
- name: text_markdown
dtype: string
- name: score
dtype: int32
- name: author
dtype: string
- name: timestamp
dtype: uint64
- name: comments
sequence:
- name: text
dtype: string
- name: author
dtype: string
- name: comment_id
dtype: uint32
- name: score
dtype: int32
- name: timestamp
dtype: uint64
splits:
- name: train
num_bytes: 3013377174
num_examples: 437604
download_size: 670468664
dataset_size: 3013377174
---
# Russian StackOverflow dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Description
**Summary:** Dataset of questions, answers, and comments from [ru.stackoverflow.com](https://ru.stackoverflow.com/).
**Script:** [create_stackoverflow.py](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py)
**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)
**Languages:** The dataset is in Russian with some programming code.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Loading:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/ru_stackoverflow', split="train")
for example in dataset:
print(example["text_markdown"])
print()
```
## Data Instances
```
{
"question_id": 11235,
"answer_count": 1,
"url": "https://ru.stackoverflow.com/questions/11235",
"score": 2,
"tags": ["c++", "сериализация"],
"title": "Извлечение из файла, запись в файл",
"views": 1309,
"author": "...",
"timestamp": 1303205289,
"text_html": "...",
"text_markdown": "...",
"comments": {
"text": ["...", "..."],
"author": ["...", "..."],
"comment_id": [11236, 11237],
"score": [0, 0],
"timestamp": [1303205411, 1303205678]
},
"answers": {
"answer_id": [11243, 11245],
"timestamp": [1303207791, 1303207792],
"is_accepted": [1, 0],
"text_html": ["...", "..."],
"text_markdown": ["...", "..."],
"score": [3, 0],
"author": ["...", "..."],
"comments": {
"text": ["...", "..."],
"author": ["...", "..."],
"comment_id": [11246, 11249],
"score": [0, 0],
"timestamp": [1303207961, 1303207800]
}
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
The original JSONL is already unflattened.
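For example, applying the helper to a flattened `comments` structure (dummy values here) turns the parallel lists back into a list of comment dicts:

```python
def revert_flattening(records):
    """Turn a dict of parallel lists back into a list of dicts."""
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

# Dummy flattened comments in the same shape as example["comments"]
flat_comments = {
    "text": ["first", "second"],
    "author": ["alice", "bob"],
    "score": [0, 1],
}
print(revert_flattening(flat_comments))
# -> [{'text': 'first', 'author': 'alice', 'score': 0},
#     {'text': 'second', 'author': 'bob', 'score': 1}]
```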
## Source Data
* The data source is the [Russian StackOverflow](https://ru.stackoverflow.com/) website.
* Original XMLs: [ru.stackoverflow.com.7z](https://ia600107.us.archive.org/27/items/stackexchange/ru.stackoverflow.com.7z).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
## Licensing Information
According to the license of the original data, this dataset is distributed under [CC BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/).
dirtycomputer/ChnSentiCorp_htl_all | dirtycomputer | 2023-02-17T06:46:13Z | 31 | 1 | null | [
"region:us"
] | 2023-02-17T06:46:13Z | 2023-02-17T06:45:31.000Z | 2023-02-17T06:45:31 | Entry not found |
sunzeyeah/chinese_chatgpt_corpus | sunzeyeah | 2023-03-23T16:53:47Z | 31 | 72 | null | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"task_categories:reinforcement-learning",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:unknown",
"multilingu... | 2023-03-23T16:53:47Z | 2023-03-21T09:16:21.000Z | 2023-03-21T09:16:21 | ---
annotations_creators:
- no-annotation
language_creators:
- unknown
language:
- zh
license:
- unknown
multilinguality:
- monolingual
pretty_name: Chinese-ChatGPT-Corpus
size_categories:
- 5M<n<10M
task_categories:
- text-generation
- text2text-generation
- question-answering
- reinforcement-learning
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for chinese_chatgpt_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Size of downloaded dataset files:** 5.05 GB
- **Size of the generated dataset:** 0 GB
- **Total amount of disk used:** 5.05 GB
### Dataset Summary
This repo collects Chinese corpora for Supervised Fine-tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Chinese
## Dataset Structure
### Data Instances
#### train_data_external_v1.jsonl
- **Size of downloaded dataset files:** 5.04 GB
- **Size of the generated dataset:** 0 GB
- **Total amount of disk used:** 5.04 GB
An example looks as follows:
```
{
"prompt": "问题:有没有给未成年贷款的有的联系",
"answers":
[
{
"answer": "若通过招行办理,我行规定,贷款人年龄需年满18岁,且年龄加贷款年限不得超过70岁。如果您持有我行信用卡附属卡,可尝试办理预借现金。",
"score": 1
}
],
"prefix": "回答:"
}
```
#### dev_data_external_v1.jsonl
- **Size of downloaded dataset files:** 9.55 MB
- **Size of the generated dataset:** 0 MB
- **Total amount of disk used:** 9.55 MB
An example looks as follows:
```
{
"prompt": "初学纹发现1/2\"的管螺纹并不是1\"的一半。不知道其中的原因,请各位指点。",
"answers":
[
{
"answer": "管螺纹的名义尺寸是“管子”的孔(内)径,而管子的壁厚不是两倍。所以,1/2\"的管螺纹并不是1\"的一半,",
"score": 1
}
],
"prefix": "回答:"
}
```
### Data Fields
The data fields are the same among all splits.
#### train_data_external_v1.jsonl
- `prompt`: prompt, `string`
- `answers`: list of answers
- `answer`: answer, `string`
- `score`: score of answer, `int`
- `prefix`: prefix to the answer, `string`
#### dev_data_external_v1.jsonl
- `prompt`: prompt, `string`
- `answers`: list of answers
- `answer`: answer, `string`
- `score`: score of answer, `int`
- `prefix`: prefix to the answer, `string`
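Given the field layout above, each line of the files can be parsed with the standard `json` module. A sketch on a dummy line in the same shape (not taken from the actual files):

```python
import json

# A dummy line in the same shape as the records above
line = '{"prompt": "问题:...", "answers": [{"answer": "...", "score": 1}], "prefix": "回答:"}'

record = json.loads(line)
# Pick the highest-scoring answer for this prompt
best = max(record["answers"], key=lambda a: a["score"])
print(record["prefix"] + best["answer"])  # -> 回答:...
```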
### Data Splits
| name | train |
|----------|-------:|
|train_data_external_v1.jsonl|5477982|
|dev_data_external_v1.jsonl|10000|
## Dataset Creation
### Curation Rationale
Link to github: [data_prepare](https://github.com/sunzeyeah/RLHF/blob/master/src/data_prepare.py)
### Source Data
#### Initial Data Collection and Normalization
- [百科](https://github.com/brightmart/nlp_chinese_corpus)
- [知道问答](https://github.com/SophonPlus/ChineseNlpCorpus)
- [对联](https://github.com/wb14123/couplet-dataset/releases/download/1.0/couplet.tar.gz)
- [古文](https://github.com/NiuTrans/Classical-Modern)
- [古诗词](https://github.com/chinese-poetry/chinese-poetry)
- Weibo news comments (微博新闻评论)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
grammarly/detexd-benchmark | grammarly | 2023-07-10T17:36:37Z | 31 | 1 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-07-10T17:36:37Z | 2023-03-21T18:44:32.000Z | 2023-03-21T18:44:32 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
pretty_name: 'DeTexD: A Benchmark Dataset for Delicate Text Detection'
dataset_info:
features:
- name: text
dtype: string
- name: annotator_1
dtype: int32
- name: annotator_2
dtype: int32
- name: annotator_3
dtype: int32
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: test
num_examples: 1023
---
# Dataset Card for DeTexD: A Benchmark Dataset for Delicate Text Detection
## Dataset Description
- **Repository:** [DeTexD repository](https://github.com/grammarly/detexd)
- **Paper:** [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
### Dataset Summary
We define *delicate text* as any text that is emotionally charged or potentially triggering such that engaging with it has the potential to result in harm. This broad term covers a range of sensitive texts that vary across four major dimensions: 1) riskiness, 2) explicitness, 3) topic, and 4) target.
This dataset contains texts with fine-grained individual annotator labels from 0 to 5 (where 0 indicates no risk and 5 indicates high risk) and averaged binary labels. See paper for more details.
**Repository:** [DeTexD repository](https://github.com/grammarly/detexd) <br>
**Paper:** [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
## Dataset Structure
### Data Instances
```
{'text': '"He asked me and the club if we could give him a couple of days off just to clear up his mind and he will be back in the group, I suppose, next Monday, back for training and then be a regular part of the whole squad again," Rangnick said.',
'annotator_1': 0,
'annotator_2': 0,
'annotator_3': 0,
'label': 0}
```
### Data Fields
- `text`: Text to be classified
- `annotator_1`: Annotator 1 score (0-5)
- `annotator_2`: Annotator 2 score (0-5)
- `annotator_3`: Annotator 3 score (0-5)
- `label`: Binary label, "positive" (1) if the average annotator score is at least 3, otherwise "negative" (0)
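Based on that description, the binary label should be recoverable from the three annotator scores. A sketch of this reading of the card (an assumption; not the official labeling code):

```python
def binarize(annotator_scores, threshold=3):
    """Positive (1) if the average annotator score reaches the threshold."""
    return int(sum(annotator_scores) / len(annotator_scores) >= threshold)

print(binarize([0, 0, 0]))  # like the non-delicate example above -> 0
print(binarize([4, 3, 5]))  # average 4.0 -> 1
```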
### Data Splits
| | test |
|--------------------|-----:|
| Number of examples | 1023 |
### Citation Information
```
@inproceedings{chernodub-etal-2023-detexd,
title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection",
author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem",
booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.woah-1.2",
pages = "14--28",
abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained each definitions and corpora and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.",
}
```
rcds/swiss_criticality_prediction | rcds | 2023-07-20T07:39:07Z | 31 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",... | 2023-07-20T07:39:07Z | 2023-03-31T21:21:30.000Z | 2023-03-31T21:21:30 | ---
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: Legal Criticality Prediction
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-classification
---
# Dataset Card for Criticality Prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Legal Criticality Prediction (LCP) is a multilingual, diachronic dataset of 139K Swiss Federal Supreme Court (FSCS) cases annotated with two criticality labels. The bge_label is a binary label (critical, non-critical), while the citation_label has 5 classes (critical-1, critical-2, critical-3, critical-4, non-critical). The critical classes of the citation_label are distinct subsets of the critical class of the bge_label. This dataset creates a challenging text classification task. We also provide additional metadata, such as the publication year, the law area, and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP.
### Supported Tasks and Leaderboards
LCP can be used as a text classification task.
### Languages
Switzerland has four official languages, of which three (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
German (91k), French (33k), Italian (15k)
## Dataset Structure
```
{
"decision_id": "008d8a52-f0ea-4820-a18c-d06066dbb407",
"language": "fr",
"year": "2018",
"chamber": "CH_BGer_004",
"region": "Federation",
"origin_chamber": "338.0",
"origin_court": "127.0",
"origin_canton": "24.0",
"law_area": "civil_law",
"law_sub_area": null,
"bge_label": "critical",
"citation_label": "critical-1",
"facts": "Faits : A. A.a. Le 17 août 2007, C.X._, née le 14 février 1944 et domiciliée...",
"considerations": "Considérant en droit : 1. Interjeté en temps utile (art. 100 al. 1 LTF) par les défendeurs qui ont succombé dans leurs conclusions (art. 76 LTF) contre une décision...",
"rulings": "Par ces motifs, le Tribunal fédéral prononce : 1. Le recours est rejeté. 2. Les frais judiciaires, arrêtés à 10'000 fr., sont mis solidairement à la charge des recourants...",
}
```
### Data Fields
```
decision_id: (str) a unique identifier for the document
language: (str) one of (de, fr, it)
year: (int) the publication year
chamber: (str) the chamber of the case
region: (str) the region of the case
origin_chamber: (str) the chamber of the origin case
origin_court: (str) the court of the origin case
origin_canton: (str) the canton of the origin case
law_area: (str) the law area of the case
law_sub_area: (str) the law sub area of the case
bge_label: (str) critical or non-critical
citation_label: (str) critical-1, critical-2, critical-3, critical-4, non-critical
facts: (str) the facts of the case
considerations: (str) the considerations of the case
rulings: (str) the rulings of the case
```
### Data Splits
The dataset was split date-stratified:
- Train: 2002-2015
- Validation: 2016-2017
- Test: 2018-2022
| Language | Subset | Number of Documents (Training/Validation/Test) |
|------------|------------|--------------------------------------------|
| German | **de** | 81'264 (56'592 / 19'601 / 5'071) |
| French | **fr** | 49'354 (29'263 / 11'117 / 8'974) |
| Italian | **it** | 7'913 (5'220 / 1'901 / 792) |
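The split assignment can be reproduced from the `year` field. A small illustrative helper following the year ranges above (the published splits remain authoritative):

```python
def split_of(year):
    """Assign a split from the publication year, following the ranges above."""
    if 2002 <= year <= 2015:
        return "train"
    if 2016 <= year <= 2017:
        return "validation"
    if 2018 <= year <= 2022:
        return "test"
    return None  # outside the dataset's year range

print(split_of(2010), split_of(2016), split_of(2020))  # -> train validation test
```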
## Dataset Creation
### Curation Rationale
The dataset was created by Stern (2023).
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
bge_label:
1. all bger_references in the bge header were extracted (for bge see rcds/swiss_rulings).
2. bger file_names are compared with the found references
citation_label:
1. count all citations for all bger cases and weight citations
2. divide cited cases in four different classes, depending on amount of citations
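A minimal sketch of the citation-count bucketing described above, with hypothetical thresholds (the card does not specify the exact weighting or class boundaries):

```python
from collections import Counter

def citation_classes(citations, thresholds=(1, 5, 20, 50)):
    """Map each case id to a criticality class from its citation count.

    `thresholds` are hypothetical boundaries; the actual weighting and
    boundaries used for the dataset are not specified on this card.
    """
    counts = Counter(citations)
    labels = {}
    for case_id, n in counts.items():
        if n >= thresholds[3]:
            labels[case_id] = "critical-4"
        elif n >= thresholds[2]:
            labels[case_id] = "critical-3"
        elif n >= thresholds[1]:
            labels[case_id] = "critical-2"
        elif n >= thresholds[0]:
            labels[case_id] = "critical-1"
        else:
            labels[case_id] = "non-critical"
    return labels

# Hypothetical citation stream: case "a" cited 6 times, "b" once
labels = citation_classes(["a"] * 6 + ["b"])
print(labels)  # -> {'a': 'critical-2', 'b': 'critical-1'}
```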
#### Who are the annotators?
Stern processed the data and introduced the bge_label and citation_label.
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0, which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf).
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset. | [
-0.3085179924964905,
-0.5892398357391357,
0.46979278326034546,
0.2584739029407501,
-0.32706764340400696,
-0.17983730137348175,
-0.31928178668022156,
-0.3334886431694031,
0.12298094481229782,
0.5370956659317017,
-0.46168509125709534,
-0.9254000782966614,
-0.7606620788574219,
0.1037027686834... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_sst2_comparative_than | liuyanchen1015 | 2023-04-03T19:43:53Z | 31 | 0 | null | [
"region:us"
] | 2023-04-03T19:43:53Z | 2023-04-03T19:43:48.000Z | 2023-04-03T19:43:48 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3000
num_examples: 19
- name: test
num_bytes: 5884
num_examples: 38
- name: train
num_bytes: 70824
num_examples: 631
download_size: 34685
dataset_size: 79708
---
# Dataset Card for "MULTI_VALUE_sst2_comparative_than"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3662165403366089,
-0.060210324823856354,
0.1907975822687149,
-0.002952694660052657,
-0.37693655490875244,
0.31585559248924255,
0.19010889530181885,
-0.186315655708313,
0.7243933081626892,
0.10334482789039612,
-0.6017719507217407,
-0.5494741797447205,
-0.680513858795166,
-0.3223409354686... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_sst2_present_perfect_ever | liuyanchen1015 | 2023-04-03T19:48:07Z | 31 | 0 | null | [
"region:us"
] | 2023-04-03T19:48:07Z | 2023-04-03T19:48:02.000Z | 2023-04-03T19:48:02 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3536
num_examples: 25
- name: test
num_bytes: 9243
num_examples: 59
- name: train
num_bytes: 137625
num_examples: 1071
download_size: 75239
dataset_size: 150404
---
# Dataset Card for "MULTI_VALUE_sst2_present_perfect_ever"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.03545143082737923,
-0.2708512842655182,
0.22547534108161926,
0.3972143828868866,
-0.6008582711219788,
0.11329356580972672,
0.3101779520511627,
-0.013391601853072643,
0.7920878529548645,
0.4167206585407257,
-0.663666307926178,
-0.600419819355011,
-0.4262526333332062,
-0.5230202674865723,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IES-Rafael-Alberti/letras-carnaval-cadiz | IES-Rafael-Alberti | 2023-06-04T11:51:32Z | 31 | 2 | null | [
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"lyrics",
"carnival",
"cadiz",
"region:us"
] | 2023-06-04T11:51:32Z | 2023-04-04T10:34:51.000Z | 2023-04-04T10:34:51 | ---
annotations_creators:
- no-annotation
language:
- es
language_creators:
- machine-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: letrascarnavalcadiz
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- lyrics
- carnival
- cadiz
task_categories: []
task_ids: []
---
# Dataset Card for Letras Carnaval Cádiz

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz/blob/main/README_es.md">Español</a>
<p>
</h4>
## Dataset Description
- **Homepage:** https://letrascarnavalcadiz.com
- **Repository:** https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz
- **Point of Contact:** contacto@letrascarnavalcadiz.com
### Changelog
|Release|Description|
|-|-|
|v1.0| Initial release of the dataset, including more than 1K lyrics. The accuracy of the data still needs to be verified, especially the `midaccurate` subset. |
### Dataset Summary
This dataset is a comprehensive collection of lyrics from the Carnaval de Cádiz, a significant cultural heritage of the city of Cádiz, Spain. Despite its cultural importance, there has been a lack of a structured database for these lyrics, hindering research and public access to this cultural heritage. This dataset aims to address this gap.
The dataset was created by the Cádiz AI Learning Community, a branch of the non-profit association Spain AI, and was developed by Iván Romero Reyna and Jesús Federico Franco Medinilla, students of the Specialization Course in Artificial Intelligence and Big Data at IES Rafael Alberti during the 2022-2023 academic year. The project is supervised by Jesús Carlos Avecilla de la Herrán, a computational linguist.
Collaboration is encouraged, with individuals able to verify the different records of the dataset at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com), ensuring the transcription of the lyrics and all data are correct. New lyrics can also be added to the dataset. Corrections and additions are not immediately reflected in the dataset but are updated periodically.
For more information or to report a problem, you can write to [contacto@letrascarnavalcadiz.com](mailto:contacto@letrascarnavalcadiz.com).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Spanish, reflecting the language of the Carnaval de Cádiz.
## Dataset Structure
### Data Instances
A typical instance in the dataset is formatted in JSON and contains the following fields:
```json
{
"id": "9de8647521b728c45ff45c1c11208708d055397fd7781b31cf91b473dff224d5",
"authors": ["Juan Carlos Aragón Becerra"],
"song_type": 2,
"year": "2018",
"group": "Los Mafiosos",
"group_type": 2,
"lyrics": [
"Mujer va llegando el momento",
"de ser la que lleve la rienda",
"el camino ha sido largo y polvoriento",
"pero ya no habrá varón que te detenga",
"gritad larga vida a la reina",
"que va a comenzar tu gobierno",
"ojalá no heredes nada",
"de aquel macho que te odiaba",
"porque en el fondo sabía",
"que ya tú te le acercabas",
"y el contigo no podía",
"ten en cuenta cuando hagas justicia",
"de volver a nivelar la balanza",
"y aguantar aunque tragando saliva",
"el deseo de venganza",
"de ser oh humano fatal",
"de ser o que puedo entender",
"tan solo con una mirada",
"la llaga que baña tu alma y tu piel",
"que te sirva la experiencia",
"del macho de la manada",
"la fuerza no vale nada",
"si no es con la inteligencia",
"y ojalá que tu conciencia",
"a mí me brinde la suerte",
"de nunca volver a verte",
"con los pies en una iglesia",
"que ella fue quien escribió",
"que ella fue quien escribió",
"la historia contra vosotras",
"y encima se la cobró",
"y encima se la cobró",
"con mil millones de devotas",
"ojalá que tu corona y tu bandera",
"abran paso a una vida nueva",
"como un mundo en primavera",
"ojalá que a ti no te envenene el poder",
"y que no dejes nunca de ser la mujer",
"que siempre fue nuestra gran compañera"
]
}
```
The `id` field uniquely identifies each instance in the dataset, providing a way to reference specific entries. The `authors`, `song_type`, `year`, `group`, and `group_type` fields provide context for the lyrics, while the `lyrics` field itself contains the actual text of the song. The relationships between these fields are implicit in the structure of the dataset, with each instance representing a single song from the Carnaval de Cádiz.
### Data Fields
`id`
Unique identifier for each song in the dataset. A SHA-256 hash calculated from the first four verses of the lyrics and the group name, with all spaces removed and converted to lowercase (string).
`authors`
List of authors who have written the song (string array).
`song_type`
The type of song (1: presentación, 2: pasodoble/tango, 3: cuplé, 4: estribillo, 5: popurrí, 6: cuarteta).
`year`
Year the song was written or performed (string).
`group`
Name of the group that performed the song (string).
`group_type`
The type of the group (1: coro, 2: comparsa, 3: chirigota, 4: cuarteto).
`lyrics`
The lyrics of the song, represented as an array of verses (string array).
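For illustration, the documented `id` scheme can be sketched in Python. One assumption here is the concatenation order (first four verses followed by the group name), which the card does not fully pin down, so this sketch is not guaranteed to reproduce the published hashes exactly.

```python
import hashlib

def compute_song_id(lyrics, group):
    """Sketch of the id scheme: SHA-256 over the first four verses plus the
    group name, with all spaces removed and the text lowercased."""
    raw = "".join(lyrics[:4]) + group
    normalized = raw.replace(" ", "").lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```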
### Data Splits
This dataset does not have traditional training, validation, and test splits. Instead, it is divided into two subsets: "accurate" and "midaccurate".
The "accurate" subset contains 958 instances. All fields of first 957 instances in this subset have been obtained through web scraping and have undergone at least one human review for accuracy. The rest have been added by users at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com).
The "midaccurate" subset contains 226 instances. The 'group' and 'lyrics' fields in this subset were collected through web scraping, but the remaining fields were filled in by querying language models connected to the Internet. Therefore, the data in these fields may not be accurate.
| Subset | Instances |
|-------------|----------:|
| Accurate | 958 |
| Midaccurate | 226 |
Please note that the division into subsets is based on the method and reliability of data collection, rather than a random or stratified split typically used in machine learning tasks. Users of the dataset should consider this when deciding how to use the data.
## Dataset Creation
### Curation Rationale
The dataset was created to address a significant need in the cultural heritage of the city of Cádiz, Spain. The Carnaval de Cádiz is a major cultural event, yet there was no structured database of its lyrics that could be consulted for research or public access. This lack of a structured database hindered the exploration and appreciation of this cultural heritage. The dataset was curated to respond to this need.
### Source Data
#### Initial Data Collection and Normalization
The initial collection of lyrics was carried out through automatic scraping of various websites and multimedia content on the Internet. To maximize the number of records with minimal effort, all collection is being done using different Artificial Intelligence models.
#### Who are the source language producers?
The source language producers of the dataset are the authors and performers of the songs from the Carnaval de Cádiz. These include a wide range of individuals and groups who have participated in the Carnaval over the years. The dataset does not include self-reported demographic or identity information for these individuals or groups.
The data in the dataset was collected from two websites: https://www.alsondelcarnaval.es and http://letrasdesdeelparaiso.blogspot.com. The first 957 instances of the "accurate" subset were collected from the former, while the "midaccurate" subset was collected from the latter. The data was extracted through automatic web scraping, and in the case of the "midaccurate" subset, some fields were filled in by querying language models connected to the Internet.
The rest of "accurate" subset have been added by users at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com).
### Personal and Sensitive Information
The only sensitive information in the dataset is the names and surnames of the authors of the lyrics.
## Considerations for Using the Data
### Social Impact of Dataset
The use of this dataset has significant social impact.
Firstly, this dataset can positively contribute to the understanding and preservation of Cadiz's culture and traditions, as the Carnaval de Cádiz is an integral part of the city's cultural identity. By providing an accessible and easily searchable resource for carnival song lyrics, this dataset can assist cultural researchers, linguists, and the general public in better understanding and appreciating the rich tradition of the Carnaval de Cádiz.
Additionally, this dataset can be utilized to enhance natural language processing (NLP) technologies in Spanish, a language that can sometimes be underrepresented in NLP research. By providing a high-quality, culture-specific Spanish text corpus, this dataset can aid in improving the accuracy and cultural relevance of Spanish NLP models.
However, there are also risks associated with the use of this dataset. For instance, if used to train text generation models, these models could generate content that reinforces cultural stereotypes or perpetuates existing biases. Moreover, the automatic interpretation of carnival song lyrics can be challenging due to cultural and linguistic subtleties, and errors in this interpretation could lead to misunderstandings or misrepresentations of Cadiz's culture.
Finally, although this dataset does not contain a low-resource or underrepresented language, it does focus on a specific cultural tradition from a specific region of Spain. Therefore, its use can impact the Cadiz community by helping to preserve and disseminate its unique culture and traditions.
### Discussion of Biases
The dataset is subject to several biases due to the nature of the data collection and the historical context of the Cadiz Carnival.
Firstly, there is a temporal bias in the dataset. More recent lyrics are overrepresented compared to older ones, as there is more information available on the internet about modern groups. This may lead to a skewed understanding of the evolution of the Carnival's themes over time.
Secondly, the dataset exhibits a popularity bias. Lyrics from more popular groups are overrepresented because individuals have chosen to write about them more frequently. This could potentially limit the diversity of styles and themes represented in the dataset.
Thirdly, there is a competition bias. Lyrics from groups that advanced further in the competition stages are overrepresented, resulting in more available lyrics from these groups. This might lead to an overemphasis on the styles and themes that tend to be more successful in the competition.
Lastly, the dataset reflects a gender bias. Given that there have historically been more male authors than female authors in the Cadiz Carnival, the majority of the dataset consists of lyrics written by men. This could potentially limit the representation of diverse perspectives and themes in the lyrics.
To mitigate these biases, we actively encourage the participation of the community. By verifying the different records of the dataset, reviewing the transcription of the lyrics and all the data for accuracy, and adding new lyrics, we hope to broaden the diversity and representation.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Iván Romero Reyna. Student of the Specialisation Course in Artificial Intelligence and Big Data at [IES Rafael Alberti](https://iesrafaelalberti.es) during the academic year 2022-2023.
- Jesús Federico Franco Medinilla. Student of the Specialisation Course in Artificial Intelligence and Big Data at [IES Rafael Alberti](https://iesrafaelalberti.es) during the academic year 2022-2023.
- Jesús Carlos Avecilla de la Herrán. Promoter in [Cádiz AI](https://www.spain-ai.com).
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)
### Citation Information
```
@misc{letrascarnavalcadiz2023,
author = {Romero Reyna, Iván and Franco Medinilla, Jesús Federico and Avecilla de la Herrán, Jesús Carlos},
title = {letras-carnaval-cadiz},
year = {2023},
url = {https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz}
}
```
### Contributions
Thanks to [@ivanro](https://huggingface.co/ivanro), [@jframed281](https://huggingface.co/jframed281) for adding this dataset.
Thanks to all the reviewers and contributors at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com). | [
-0.4171808362007141,
-0.29225587844848633,
0.13619926571846008,
0.5408684015274048,
-0.3419952094554901,
0.3671661913394928,
-0.2992963194847107,
-0.5382554531097412,
0.6289951801300049,
0.7324789762496948,
-0.9601258039474487,
-1.0648279190063477,
-0.4209458827972412,
0.13479496538639069,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NiGuLa/Russian_Sensitive_Topics | NiGuLa | 2023-05-12T13:36:44Z | 31 | 2 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ru",
"license:cc",
"toxic comments classification",
"region:us"
] | 2023-05-12T13:36:44Z | 2023-04-07T20:14:33.000Z | 2023-04-07T20:14:33 | ---
language:
- ru
tags:
- toxic comments classification
license: cc
task_categories:
- text-classification
size_categories:
- 10K<n<100K
---
## General concept of the model
Sensitive topics are topics that have a high chance of initiating a toxic conversation, such as homophobia, politics, or racism. This dataset covers 18 such topics.
More details can be found [in this article](https://www.aclweb.org/anthology/2021.bsnlp-1.4/), presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference.
That paper presents the first version of this dataset. This repository hosts the latest version of the dataset, which is significantly larger and has also been properly filtered.
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@inproceedings{babakov-etal-2021-detecting,
title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
author = "Babakov, Nikolay and
Logacheva, Varvara and
Kozlova, Olga and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
pages = "26--36",
abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
``` | [
-0.28814589977264404,
-0.9659602642059326,
0.1551293283700943,
0.18015527725219727,
-0.3743382692337036,
-0.12530528008937836,
-0.18573299050331116,
-0.529918372631073,
-0.11068259179592133,
0.5292353630065918,
-0.4456186294555664,
-0.6576995253562927,
-0.5961924195289612,
-0.1063761264085... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/gisette | mstz | 2023-04-17T10:55:16Z | 31 | 0 | null | [
"task_categories:tabular-classification",
"language:en",
"gisette",
"tabular_classification",
"binary_classification",
"region:us"
] | 2023-04-17T10:55:16Z | 2023-04-17T10:43:21.000Z | 2023-04-17T10:43:21 | ---
language:
- en
tags:
- gisette
- tabular_classification
- binary_classification
pretty_name: Gisette
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- gisette
---
# Gisette
The [Gisette dataset](https://archive-beta.ics.uci.edu/dataset/170/gisette) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| gisette | Binary classification.| |
| [
-0.4893908202648163,
-0.02777215465903282,
0.28808972239494324,
0.19558900594711304,
-0.316724956035614,
-0.10435856878757477,
0.07135611027479172,
-0.44248247146606445,
0.4245431423187256,
0.4922327697277069,
-0.24641284346580505,
-1.0728685855865479,
-1.044049620628357,
-0.02728806436061... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt-archive/camel_vi | vietgpt-archive | 2023-04-26T14:25:19Z | 31 | 0 | null | [
"region:us"
] | 2023-04-26T14:25:19Z | 2023-04-18T09:28:33.000Z | 2023-04-18T09:28:33 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: role_2
dtype: string
- name: original_task
dtype: string
- name: specified_task
dtype: string
- name: messages
list:
- name: input
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: role
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 171026076
num_examples: 10744
download_size: 52918251
dataset_size: 171026076
---
# Dataset Card for "camel_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.539767324924469,
-0.2167069911956787,
-0.3066703677177429,
0.37361881136894226,
-0.4939192533493042,
-0.10696948319673538,
0.30141353607177734,
-0.38303840160369873,
0.730111300945282,
0.4547410011291504,
-0.8582372665405273,
-0.9110108613967896,
-0.5000432729721069,
-0.2742562294006347... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kaiokendev/SuperCOT-dataset | kaiokendev | 2023-05-26T23:45:43Z | 31 | 32 | null | [
"license:mit",
"region:us"
] | 2023-05-26T23:45:43Z | 2023-04-23T21:42:15.000Z | 2023-04-23T21:42:15 | ---
license: mit
---
- epochs: 3
- learning rate: 3e-4
- lora rank: 8
- lora alpha: 16
- lora dropout: 0.05 for cutoff 1024 13B, otherwise no dropout due to gradient checkpointing
- masking: none
- mbatch size: 4 (1 for 30B)
- batch size: 8 (2 for 30B)
- val set size: 0.2
- sdp implementation: xformers
- optimizer: AdamW
- eval strategy: none
Cleaned combination of:
[https://huggingface.co/datasets/QingyiSi/Alpaca-CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- Chain of thought QED
- Chain of thought Aqua
- CodeAlpaca
[https://huggingface.co/datasets/neulab/conala](https://huggingface.co/datasets/neulab/conala)
- Code snippets
[https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
- Alpaca GPT4
Used in https://huggingface.co/kaiokendev/SuperCOT-LoRA | [
-0.790776789188385,
-0.5741112232208252,
0.21752169728279114,
0.20575635135173798,
-0.40204131603240967,
0.03703790158033371,
0.16561919450759888,
-0.6567383408546448,
0.5476373434066772,
0.3323850929737091,
-0.851961076259613,
-0.6615819931030273,
-0.5795965790748596,
-0.0178363099694252,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrajanovRisto/esg-sentiment | TrajanovRisto | 2023-04-30T20:28:31Z | 31 | 4 | null | [
"region:us"
] | 2023-04-30T20:28:31Z | 2023-04-30T20:28:28.000Z | 2023-04-30T20:28:28 | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Environmental Negative
dtype: int32
- name: Environmental Neutral
dtype: int32
- name: Environmental Positive
dtype: int32
- name: Governance Negative
dtype: int32
- name: Governance Neutral
dtype: int32
- name: Governance Positive
dtype: int32
- name: Social Negative
dtype: int32
- name: Social Neutral
dtype: int32
- name: Social Positive
dtype: int32
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 135470.12812960235
num_examples: 611
- name: test
num_bytes: 15076.871870397643
num_examples: 68
download_size: 80141
dataset_size: 150547.0
---
# Dataset Card for "esg-sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8514680862426758,
-0.359405517578125,
0.20223762094974518,
0.2502852976322174,
-0.2631504237651825,
-0.005064402241259813,
0.08679843693971634,
-0.08857567608356476,
0.9870018362998962,
0.3308275640010834,
-1.0522068738937378,
-1.0119129419326782,
-0.7259645462036133,
-0.246347144246101... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SilpaCS/Alzheimer | SilpaCS | 2023-05-11T11:19:09Z | 31 | 1 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-05-11T11:19:09Z | 2023-05-11T11:12:11.000Z | 2023-05-11T11:12:11 | ---
task_categories:
- image-classification
language:
- en
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PocketDoc/RUCAIBox-Story-Generation-Alpaca | PocketDoc | 2023-05-18T21:58:55Z | 31 | 5 | null | [
"task_categories:text-generation",
"language:en",
"region:us"
] | 2023-05-18T21:58:55Z | 2023-05-18T20:46:19.000Z | 2023-05-18T20:46:19 | ---
task_categories:
- text-generation
language:
- en
---
https://huggingface.co/datasets/RUCAIBox/Story-Generation
RUC AI Box HC Story Generation, augmented and converted to the Alpaca format.
No filtering has been done. | [
-0.7601606845855713,
-0.8880924582481384,
0.3938649594783783,
0.50270676612854,
-0.3688182532787323,
0.01782885193824768,
0.3016495108604431,
-0.846843957901001,
0.9644704461097717,
0.9088032841682434,
-1.2852838039398193,
-0.5468475222587585,
-0.37806493043899536,
0.13250525295734406,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HumanCompatibleAI/ppo-seals-CartPole-v0 | HumanCompatibleAI | 2023-05-29T09:52:49Z | 31 | 0 | null | [
"region:us"
] | 2023-05-29T09:52:49Z | 2023-05-29T09:52:45.000Z | 2023-05-29T09:52:45 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float32
- name: acts
sequence: int64
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 516313
num_examples: 24
download_size: 297546
dataset_size: 516313
---
# Dataset Card for "ppo-seals-CartPole-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6075523495674133,
0.20511241257190704,
0.2855246067047119,
0.038037002086639404,
-0.5095767378807068,
0.15654920041561127,
0.4929254353046417,
-0.17382708191871643,
0.7816818952560425,
0.8097892999649048,
-0.6688525080680847,
-0.9680567383766174,
-0.703245222568512,
-0.2254353016614914,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mirfan899/usummary | mirfan899 | 2023-05-30T09:12:15Z | 31 | 0 | null | [
"region:us"
] | 2023-05-30T09:12:15Z | 2023-05-30T09:12:01.000Z | 2023-05-30T09:12:01 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 37261391
num_examples: 8458
- name: train
num_bytes: 335094323
num_examples: 67665
- name: validation
num_bytes: 37296120
num_examples: 8458
download_size: 191022704
dataset_size: 409651834
---
# Dataset Card for "usummary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5575074553489685,
-0.13963580131530762,
0.2561081349849701,
0.10234175622463226,
-0.426792711019516,
-0.01679050736129284,
0.399255633354187,
-0.1013699322938919,
0.8747357130050659,
0.5706628561019897,
-0.7994573712348938,
-0.8375091552734375,
-0.5851308107376099,
-0.29755884408950806,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
agkphysics/AudioSet | agkphysics | 2023-07-13T12:25:32Z | 31 | 2 | null | [
"task_categories:audio-classification",
"license:cc-by-4.0",
"audio",
"region:us"
] | 2023-07-13T12:25:32Z | 2023-06-14T08:17:23.000Z | 2023-06-14T08:17:23 | ---
license: cc-by-4.0
tags:
- audio
task_categories:
- audio-classification
---
# AudioSet data
This repository contains the balanced training set and evaluation set
of the [AudioSet data](
https://research.google.com/audioset/dataset/index.html). The YouTube
videos were downloaded in March 2023, and so not all of the original
audio clips are available.
Extracting the `*.tar` files will place audio clips into the `audio/`
directory. The distribution of audio clips is as follows:
- `audio/bal_train`: 18685 audio clips out of 22160 originally.
- `audio/eval`: 17142 audio clips out of 20371 originally.
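As a minimal sketch, the archives can be unpacked with the Python standard library (this assumes the `*.tar` files sit in the given directory; the member paths inside the archives are as described above):

```python
import pathlib
import tarfile

def extract_all(directory="."):
    """Unpack every *.tar archive in `directory`.

    The clips land under audio/bal_train and audio/eval.
    Returns the total number of extracted members.
    """
    extracted = 0
    for tar_path in sorted(pathlib.Path(directory).glob("*.tar")):
        with tarfile.open(tar_path) as tf:
            tf.extractall(directory)
            extracted += len(tf.getnames())
    return extracted
```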
Most audio is sampled at 48 kHz, 24-bit, but about 10% is sampled at
44.1 kHz, 24-bit. Audio files are stored in the FLAC format.
## Citation
```bibtex
@inproceedings{45857,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
| [
-0.7276582717895508,
-0.3507974445819855,
0.05454803258180618,
0.006789146922528744,
-0.17401565611362457,
-0.018805785104632378,
-0.4380090534687042,
-0.234965980052948,
0.33753088116645813,
0.42162904143333435,
-0.9075819849967957,
-0.24733807146549225,
-0.440656840801239,
-0.00971342623... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iceberg-nlp/climabench | iceberg-nlp | 2023-09-10T22:05:20Z | 31 | 0 | climabench | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"arxiv:2301.04253",
"region:us"
] | 2023-09-10T22:05:20Z | 2023-06-29T22:37:24.000Z | 2023-06-29T22:37:24 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
paperswithcode_id: climabench
pretty_name: "ClimaBench: A Benchmark Dataset For Climate Change Text Understanding in English"
config_names:
- climate_stance
- climate_eng
- climate_fever
- climatext
- clima_insurance
- clima_insurance_plus
- clima_cdp
- clima_qa
---
### Citation Information
```
@misc{spokoyny2023answering,
title={Towards Answering Climate Questionnaires from Unstructured Climate Reports},
author={Daniel Spokoyny and Tanmay Laud and Tom Corringham and Taylor Berg-Kirkpatrick},
year={2023},
eprint={2301.04253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.28001171350479126,
-0.4947303831577301,
0.6967552304267883,
0.08943047374486923,
-0.10980717837810516,
-0.0645332783460617,
-0.19634786248207092,
-0.13992367684841156,
0.6120956540107727,
0.19100870192050934,
-0.6333022713661194,
-0.43137893080711365,
-0.4987272024154663,
0.223241046071... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HumanCompatibleAI/ppo-CartPole-v1 | HumanCompatibleAI | 2023-07-18T14:43:49Z | 31 | 0 | null | [
"region:us"
] | 2023-07-18T14:43:49Z | 2023-07-18T14:43:44.000Z | 2023-07-18T14:43:44 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float32
- name: acts
sequence: int64
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 2103613
num_examples: 100
download_size: 1263834
dataset_size: 2103613
---
# Dataset Card for "ppo-CartPole-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6465320587158203,
-0.04049183800816536,
0.13896554708480835,
0.07045003771781921,
-0.5141345262527466,
0.005128329619765282,
0.4064372181892395,
-0.06925149261951447,
0.7887259125709534,
0.7407267689704895,
-0.8403114676475525,
-1.0310938358306885,
-0.6273631453514099,
-0.36922430992126... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lamini/bts | lamini | 2023-07-24T03:50:41Z | 31 | 1 | null | [
"region:us"
] | 2023-07-24T03:50:41Z | 2023-07-24T03:49:06.000Z | 2023-07-24T03:49:06 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 129862.8
num_examples: 126
- name: test
num_bytes: 14429.2
num_examples: 14
download_size: 50390
dataset_size: 144292.0
---
# Dataset Card for "bts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6079241633415222,
-0.30697962641716003,
0.17775072157382965,
0.23296089470386505,
-0.41518914699554443,
0.14128701388835907,
0.34914326667785645,
-0.07874372601509094,
0.9010359048843384,
0.42077597975730896,
-0.843404233455658,
-0.7809075117111206,
-0.49563097953796387,
-0.146382570266... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ds4sd/PubTables-1M_OTSL | ds4sd | 2023-08-31T16:00:24Z | 31 | 1 | null | [
"task_categories:object-detection",
"task_categories:table-to-text",
"size_categories:100K<n<1M",
"license:other",
"table-structure-recognition",
"table-understanding",
"PDF",
"arxiv:2305.03393",
"region:us"
] | 2023-08-31T16:00:24Z | 2023-08-10T08:21:06.000Z | 2023-08-10T08:21:06 | ---
license: other
pretty_name: PubTables-1M-OTSL
size_categories:
- 100K<n<1M
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for PubTables-1M_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset enables the evaluation of both object detection models and image-to-text methods.
[PubTables-1M](https://github.com/microsoft/table-transformer) is introduced in the publication *"PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents"* by Smock et al. The conversion into the HF (Hugging Face) format and the addition of the OTSL (Optimized Table Structure Language) format are presented in our paper "Optimized Table Tokenization for Table Structure Recognition" by Lysak et al. The dataset includes the original annotations alongside the new additions.
### Dataset Structure
* cells: original dataset cell ground truth (content).
* table_bbox: original dataset table detection ground truth.
* otsl: new reduced table-structure token format.
* html: generated HTML for PubTables-1M to match the PubTabNet, FinTabNet, and SynthTabNet format.
* html_restored: HTML generated from OTSL.
* cols: grid column length.
* rows: grid row length.
* image: PIL image.
### OTSL Vocabulary:
**OTSL**: new reduced table structure token format
More information on the OTSL table-structure format and its concepts can be found in our paper.
The format used in this dataset extends the work presented in the paper and introduces slight modifications:
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2d span cells, in this dataset - covers entire area of a merged cell
* "nl" - new line token
### Data Splits
The dataset provides three splits:
- `train`
- `val`
- `test`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
**Citation to OTSL Paper:**
```
@article{lysak2023optimized,
      title={Optimized Table Tokenization for Table Structure Recognition},
      author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
      year={2023},
      eprint={2305.03393},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
**Citation to PubTables-1M creators:**
```
@inproceedings{smock2022pubtables,
      title={Pub{T}ables-1{M}: Towards comprehensive table extraction from unstructured documents},
      author={Smock, Brandon and Pesala, Rohith and Abraham, Robin},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      pages={4634-4642},
      year={2022},
      month={June}
}
``` | [
-0.22595924139022827,
-0.3980672359466553,
0.44898146390914917,
-0.09115316718816757,
-0.5465880036354065,
-0.17232048511505127,
-0.003079064190387726,
-0.4270891547203064,
0.21259674429893494,
0.3347952961921692,
-0.23038071393966675,
-0.9580268263816833,
-0.2705945670604706,
0.2196179926... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EgilKarlsen/CSIC_DistilRoBERTa_Baseline | EgilKarlsen | 2023-08-17T18:20:04Z | 31 | 0 | null | [
"region:us"
] | 2023-08-17T18:20:04Z | 2023-08-11T00:22:13.000Z | 2023-08-11T00:22:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: '0'
dtype: float32
- name: '1'
dtype: float32
- name: '2'
dtype: float32
- name: '3'
dtype: float32
- name: '4'
dtype: float32
- name: '5'
dtype: float32
- name: '6'
dtype: float32
- name: '7'
dtype: float32
- name: '8'
dtype: float32
- name: '9'
dtype: float32
- name: '10'
dtype: float32
- name: '11'
dtype: float32
- name: '12'
dtype: float32
- name: '13'
dtype: float32
- name: '14'
dtype: float32
- name: '15'
dtype: float32
- name: '16'
dtype: float32
- name: '17'
dtype: float32
- name: '18'
dtype: float32
- name: '19'
dtype: float32
- name: '20'
dtype: float32
- name: '21'
dtype: float32
- name: '22'
dtype: float32
- name: '23'
dtype: float32
- name: '24'
dtype: float32
- name: '25'
dtype: float32
- name: '26'
dtype: float32
- name: '27'
dtype: float32
- name: '28'
dtype: float32
- name: '29'
dtype: float32
- name: '30'
dtype: float32
- name: '31'
dtype: float32
- name: '32'
dtype: float32
- name: '33'
dtype: float32
- name: '34'
dtype: float32
- name: '35'
dtype: float32
- name: '36'
dtype: float32
- name: '37'
dtype: float32
- name: '38'
dtype: float32
- name: '39'
dtype: float32
- name: '40'
dtype: float32
- name: '41'
dtype: float32
- name: '42'
dtype: float32
- name: '43'
dtype: float32
- name: '44'
dtype: float32
- name: '45'
dtype: float32
- name: '46'
dtype: float32
- name: '47'
dtype: float32
- name: '48'
dtype: float32
- name: '49'
dtype: float32
- name: '50'
dtype: float32
- name: '51'
dtype: float32
- name: '52'
dtype: float32
- name: '53'
dtype: float32
- name: '54'
dtype: float32
- name: '55'
dtype: float32
- name: '56'
dtype: float32
- name: '57'
dtype: float32
- name: '58'
dtype: float32
- name: '59'
dtype: float32
- name: '60'
dtype: float32
- name: '61'
dtype: float32
- name: '62'
dtype: float32
- name: '63'
dtype: float32
- name: '64'
dtype: float32
- name: '65'
dtype: float32
- name: '66'
dtype: float32
- name: '67'
dtype: float32
- name: '68'
dtype: float32
- name: '69'
dtype: float32
- name: '70'
dtype: float32
- name: '71'
dtype: float32
- name: '72'
dtype: float32
- name: '73'
dtype: float32
- name: '74'
dtype: float32
- name: '75'
dtype: float32
- name: '76'
dtype: float32
- name: '77'
dtype: float32
- name: '78'
dtype: float32
- name: '79'
dtype: float32
- name: '80'
dtype: float32
- name: '81'
dtype: float32
- name: '82'
dtype: float32
- name: '83'
dtype: float32
- name: '84'
dtype: float32
- name: '85'
dtype: float32
- name: '86'
dtype: float32
- name: '87'
dtype: float32
- name: '88'
dtype: float32
- name: '89'
dtype: float32
- name: '90'
dtype: float32
- name: '91'
dtype: float32
- name: '92'
dtype: float32
- name: '93'
dtype: float32
- name: '94'
dtype: float32
- name: '95'
dtype: float32
- name: '96'
dtype: float32
- name: '97'
dtype: float32
- name: '98'
dtype: float32
- name: '99'
dtype: float32
- name: '100'
dtype: float32
- name: '101'
dtype: float32
- name: '102'
dtype: float32
- name: '103'
dtype: float32
- name: '104'
dtype: float32
- name: '105'
dtype: float32
- name: '106'
dtype: float32
- name: '107'
dtype: float32
- name: '108'
dtype: float32
- name: '109'
dtype: float32
- name: '110'
dtype: float32
- name: '111'
dtype: float32
- name: '112'
dtype: float32
- name: '113'
dtype: float32
- name: '114'
dtype: float32
- name: '115'
dtype: float32
- name: '116'
dtype: float32
- name: '117'
dtype: float32
- name: '118'
dtype: float32
- name: '119'
dtype: float32
- name: '120'
dtype: float32
- name: '121'
dtype: float32
- name: '122'
dtype: float32
- name: '123'
dtype: float32
- name: '124'
dtype: float32
- name: '125'
dtype: float32
- name: '126'
dtype: float32
- name: '127'
dtype: float32
- name: '128'
dtype: float32
- name: '129'
dtype: float32
- name: '130'
dtype: float32
- name: '131'
dtype: float32
- name: '132'
dtype: float32
- name: '133'
dtype: float32
- name: '134'
dtype: float32
- name: '135'
dtype: float32
- name: '136'
dtype: float32
- name: '137'
dtype: float32
- name: '138'
dtype: float32
- name: '139'
dtype: float32
- name: '140'
dtype: float32
- name: '141'
dtype: float32
- name: '142'
dtype: float32
- name: '143'
dtype: float32
- name: '144'
dtype: float32
- name: '145'
dtype: float32
- name: '146'
dtype: float32
- name: '147'
dtype: float32
- name: '148'
dtype: float32
- name: '149'
dtype: float32
- name: '150'
dtype: float32
- name: '151'
dtype: float32
- name: '152'
dtype: float32
- name: '153'
dtype: float32
- name: '154'
dtype: float32
- name: '155'
dtype: float32
- name: '156'
dtype: float32
- name: '157'
dtype: float32
- name: '158'
dtype: float32
- name: '159'
dtype: float32
- name: '160'
dtype: float32
- name: '161'
dtype: float32
- name: '162'
dtype: float32
- name: '163'
dtype: float32
- name: '164'
dtype: float32
- name: '165'
dtype: float32
- name: '166'
dtype: float32
- name: '167'
dtype: float32
- name: '168'
dtype: float32
- name: '169'
dtype: float32
- name: '170'
dtype: float32
- name: '171'
dtype: float32
- name: '172'
dtype: float32
- name: '173'
dtype: float32
- name: '174'
dtype: float32
- name: '175'
dtype: float32
- name: '176'
dtype: float32
- name: '177'
dtype: float32
- name: '178'
dtype: float32
- name: '179'
dtype: float32
- name: '180'
dtype: float32
- name: '181'
dtype: float32
- name: '182'
dtype: float32
- name: '183'
dtype: float32
- name: '184'
dtype: float32
- name: '185'
dtype: float32
- name: '186'
dtype: float32
- name: '187'
dtype: float32
- name: '188'
dtype: float32
- name: '189'
dtype: float32
- name: '190'
dtype: float32
- name: '191'
dtype: float32
- name: '192'
dtype: float32
- name: '193'
dtype: float32
- name: '194'
dtype: float32
- name: '195'
dtype: float32
- name: '196'
dtype: float32
- name: '197'
dtype: float32
- name: '198'
dtype: float32
- name: '199'
dtype: float32
- name: '200'
dtype: float32
- name: '201'
dtype: float32
- name: '202'
dtype: float32
- name: '203'
dtype: float32
- name: '204'
dtype: float32
- name: '205'
dtype: float32
- name: '206'
dtype: float32
- name: '207'
dtype: float32
- name: '208'
dtype: float32
- name: '209'
dtype: float32
- name: '210'
dtype: float32
- name: '211'
dtype: float32
- name: '212'
dtype: float32
- name: '213'
dtype: float32
- name: '214'
dtype: float32
- name: '215'
dtype: float32
- name: '216'
dtype: float32
- name: '217'
dtype: float32
- name: '218'
dtype: float32
- name: '219'
dtype: float32
- name: '220'
dtype: float32
- name: '221'
dtype: float32
- name: '222'
dtype: float32
- name: '223'
dtype: float32
- name: '224'
dtype: float32
- name: '225'
dtype: float32
- name: '226'
dtype: float32
- name: '227'
dtype: float32
- name: '228'
dtype: float32
- name: '229'
dtype: float32
- name: '230'
dtype: float32
- name: '231'
dtype: float32
- name: '232'
dtype: float32
- name: '233'
dtype: float32
- name: '234'
dtype: float32
- name: '235'
dtype: float32
- name: '236'
dtype: float32
- name: '237'
dtype: float32
- name: '238'
dtype: float32
- name: '239'
dtype: float32
- name: '240'
dtype: float32
- name: '241'
dtype: float32
- name: '242'
dtype: float32
- name: '243'
dtype: float32
- name: '244'
dtype: float32
- name: '245'
dtype: float32
- name: '246'
dtype: float32
- name: '247'
dtype: float32
- name: '248'
dtype: float32
- name: '249'
dtype: float32
- name: '250'
dtype: float32
- name: '251'
dtype: float32
- name: '252'
dtype: float32
- name: '253'
dtype: float32
- name: '254'
dtype: float32
- name: '255'
dtype: float32
- name: '256'
dtype: float32
- name: '257'
dtype: float32
- name: '258'
dtype: float32
- name: '259'
dtype: float32
- name: '260'
dtype: float32
- name: '261'
dtype: float32
- name: '262'
dtype: float32
- name: '263'
dtype: float32
- name: '264'
dtype: float32
- name: '265'
dtype: float32
- name: '266'
dtype: float32
- name: '267'
dtype: float32
- name: '268'
dtype: float32
- name: '269'
dtype: float32
- name: '270'
dtype: float32
- name: '271'
dtype: float32
- name: '272'
dtype: float32
- name: '273'
dtype: float32
- name: '274'
dtype: float32
- name: '275'
dtype: float32
- name: '276'
dtype: float32
- name: '277'
dtype: float32
- name: '278'
dtype: float32
- name: '279'
dtype: float32
- name: '280'
dtype: float32
- name: '281'
dtype: float32
- name: '282'
dtype: float32
- name: '283'
dtype: float32
- name: '284'
dtype: float32
- name: '285'
dtype: float32
- name: '286'
dtype: float32
- name: '287'
dtype: float32
- name: '288'
dtype: float32
- name: '289'
dtype: float32
- name: '290'
dtype: float32
- name: '291'
dtype: float32
- name: '292'
dtype: float32
- name: '293'
dtype: float32
- name: '294'
dtype: float32
- name: '295'
dtype: float32
- name: '296'
dtype: float32
- name: '297'
dtype: float32
- name: '298'
dtype: float32
- name: '299'
dtype: float32
- name: '300'
dtype: float32
- name: '301'
dtype: float32
- name: '302'
dtype: float32
- name: '303'
dtype: float32
- name: '304'
dtype: float32
- name: '305'
dtype: float32
- name: '306'
dtype: float32
- name: '307'
dtype: float32
- name: '308'
dtype: float32
- name: '309'
dtype: float32
- name: '310'
dtype: float32
- name: '311'
dtype: float32
- name: '312'
dtype: float32
- name: '313'
dtype: float32
- name: '314'
dtype: float32
- name: '315'
dtype: float32
- name: '316'
dtype: float32
- name: '317'
dtype: float32
- name: '318'
dtype: float32
- name: '319'
dtype: float32
- name: '320'
dtype: float32
- name: '321'
dtype: float32
- name: '322'
dtype: float32
- name: '323'
dtype: float32
- name: '324'
dtype: float32
- name: '325'
dtype: float32
- name: '326'
dtype: float32
- name: '327'
dtype: float32
- name: '328'
dtype: float32
- name: '329'
dtype: float32
- name: '330'
dtype: float32
- name: '331'
dtype: float32
- name: '332'
dtype: float32
- name: '333'
dtype: float32
- name: '334'
dtype: float32
- name: '335'
dtype: float32
- name: '336'
dtype: float32
- name: '337'
dtype: float32
- name: '338'
dtype: float32
- name: '339'
dtype: float32
- name: '340'
dtype: float32
- name: '341'
dtype: float32
- name: '342'
dtype: float32
- name: '343'
dtype: float32
- name: '344'
dtype: float32
- name: '345'
dtype: float32
- name: '346'
dtype: float32
- name: '347'
dtype: float32
- name: '348'
dtype: float32
- name: '349'
dtype: float32
- name: '350'
dtype: float32
- name: '351'
dtype: float32
- name: '352'
dtype: float32
- name: '353'
dtype: float32
- name: '354'
dtype: float32
- name: '355'
dtype: float32
- name: '356'
dtype: float32
- name: '357'
dtype: float32
- name: '358'
dtype: float32
- name: '359'
dtype: float32
- name: '360'
dtype: float32
- name: '361'
dtype: float32
- name: '362'
dtype: float32
- name: '363'
dtype: float32
- name: '364'
dtype: float32
- name: '365'
dtype: float32
- name: '366'
dtype: float32
- name: '367'
dtype: float32
- name: '368'
dtype: float32
- name: '369'
dtype: float32
- name: '370'
dtype: float32
- name: '371'
dtype: float32
- name: '372'
dtype: float32
- name: '373'
dtype: float32
- name: '374'
dtype: float32
- name: '375'
dtype: float32
- name: '376'
dtype: float32
- name: '377'
dtype: float32
- name: '378'
dtype: float32
- name: '379'
dtype: float32
- name: '380'
dtype: float32
- name: '381'
dtype: float32
- name: '382'
dtype: float32
- name: '383'
dtype: float32
- name: '384'
dtype: float32
- name: '385'
dtype: float32
- name: '386'
dtype: float32
- name: '387'
dtype: float32
- name: '388'
dtype: float32
- name: '389'
dtype: float32
- name: '390'
dtype: float32
- name: '391'
dtype: float32
- name: '392'
dtype: float32
- name: '393'
dtype: float32
- name: '394'
dtype: float32
- name: '395'
dtype: float32
- name: '396'
dtype: float32
- name: '397'
dtype: float32
- name: '398'
dtype: float32
- name: '399'
dtype: float32
- name: '400'
dtype: float32
- name: '401'
dtype: float32
- name: '402'
dtype: float32
- name: '403'
dtype: float32
- name: '404'
dtype: float32
- name: '405'
dtype: float32
- name: '406'
dtype: float32
- name: '407'
dtype: float32
- name: '408'
dtype: float32
- name: '409'
dtype: float32
- name: '410'
dtype: float32
- name: '411'
dtype: float32
- name: '412'
dtype: float32
- name: '413'
dtype: float32
- name: '414'
dtype: float32
- name: '415'
dtype: float32
- name: '416'
dtype: float32
- name: '417'
dtype: float32
- name: '418'
dtype: float32
- name: '419'
dtype: float32
- name: '420'
dtype: float32
- name: '421'
dtype: float32
- name: '422'
dtype: float32
- name: '423'
dtype: float32
- name: '424'
dtype: float32
- name: '425'
dtype: float32
- name: '426'
dtype: float32
- name: '427'
dtype: float32
- name: '428'
dtype: float32
- name: '429'
dtype: float32
- name: '430'
dtype: float32
- name: '431'
dtype: float32
- name: '432'
dtype: float32
- name: '433'
dtype: float32
- name: '434'
dtype: float32
- name: '435'
dtype: float32
- name: '436'
dtype: float32
- name: '437'
dtype: float32
- name: '438'
dtype: float32
- name: '439'
dtype: float32
- name: '440'
dtype: float32
- name: '441'
dtype: float32
- name: '442'
dtype: float32
- name: '443'
dtype: float32
- name: '444'
dtype: float32
- name: '445'
dtype: float32
- name: '446'
dtype: float32
- name: '447'
dtype: float32
- name: '448'
dtype: float32
- name: '449'
dtype: float32
- name: '450'
dtype: float32
- name: '451'
dtype: float32
- name: '452'
dtype: float32
- name: '453'
dtype: float32
- name: '454'
dtype: float32
- name: '455'
dtype: float32
- name: '456'
dtype: float32
- name: '457'
dtype: float32
- name: '458'
dtype: float32
- name: '459'
dtype: float32
- name: '460'
dtype: float32
- name: '461'
dtype: float32
- name: '462'
dtype: float32
- name: '463'
dtype: float32
- name: '464'
dtype: float32
- name: '465'
dtype: float32
- name: '466'
dtype: float32
- name: '467'
dtype: float32
- name: '468'
dtype: float32
- name: '469'
dtype: float32
- name: '470'
dtype: float32
- name: '471'
dtype: float32
- name: '472'
dtype: float32
- name: '473'
dtype: float32
- name: '474'
dtype: float32
- name: '475'
dtype: float32
- name: '476'
dtype: float32
- name: '477'
dtype: float32
- name: '478'
dtype: float32
- name: '479'
dtype: float32
- name: '480'
dtype: float32
- name: '481'
dtype: float32
- name: '482'
dtype: float32
- name: '483'
dtype: float32
- name: '484'
dtype: float32
- name: '485'
dtype: float32
- name: '486'
dtype: float32
- name: '487'
dtype: float32
- name: '488'
dtype: float32
- name: '489'
dtype: float32
- name: '490'
dtype: float32
- name: '491'
dtype: float32
- name: '492'
dtype: float32
- name: '493'
dtype: float32
- name: '494'
dtype: float32
- name: '495'
dtype: float32
- name: '496'
dtype: float32
- name: '497'
dtype: float32
- name: '498'
dtype: float32
- name: '499'
dtype: float32
- name: '500'
dtype: float32
- name: '501'
dtype: float32
- name: '502'
dtype: float32
- name: '503'
dtype: float32
- name: '504'
dtype: float32
- name: '505'
dtype: float32
- name: '506'
dtype: float32
- name: '507'
dtype: float32
- name: '508'
dtype: float32
- name: '509'
dtype: float32
- name: '510'
dtype: float32
- name: '511'
dtype: float32
- name: '512'
dtype: float32
- name: '513'
dtype: float32
- name: '514'
dtype: float32
- name: '515'
dtype: float32
- name: '516'
dtype: float32
- name: '517'
dtype: float32
- name: '518'
dtype: float32
- name: '519'
dtype: float32
- name: '520'
dtype: float32
- name: '521'
dtype: float32
- name: '522'
dtype: float32
- name: '523'
dtype: float32
- name: '524'
dtype: float32
- name: '525'
dtype: float32
- name: '526'
dtype: float32
- name: '527'
dtype: float32
- name: '528'
dtype: float32
- name: '529'
dtype: float32
- name: '530'
dtype: float32
- name: '531'
dtype: float32
- name: '532'
dtype: float32
- name: '533'
dtype: float32
- name: '534'
dtype: float32
- name: '535'
dtype: float32
- name: '536'
dtype: float32
- name: '537'
dtype: float32
- name: '538'
dtype: float32
- name: '539'
dtype: float32
- name: '540'
dtype: float32
- name: '541'
dtype: float32
- name: '542'
dtype: float32
- name: '543'
dtype: float32
- name: '544'
dtype: float32
- name: '545'
dtype: float32
- name: '546'
dtype: float32
- name: '547'
dtype: float32
- name: '548'
dtype: float32
- name: '549'
dtype: float32
- name: '550'
dtype: float32
- name: '551'
dtype: float32
- name: '552'
dtype: float32
- name: '553'
dtype: float32
- name: '554'
dtype: float32
- name: '555'
dtype: float32
- name: '556'
dtype: float32
- name: '557'
dtype: float32
- name: '558'
dtype: float32
- name: '559'
dtype: float32
- name: '560'
dtype: float32
- name: '561'
dtype: float32
- name: '562'
dtype: float32
- name: '563'
dtype: float32
- name: '564'
dtype: float32
- name: '565'
dtype: float32
- name: '566'
dtype: float32
- name: '567'
dtype: float32
- name: '568'
dtype: float32
- name: '569'
dtype: float32
- name: '570'
dtype: float32
- name: '571'
dtype: float32
- name: '572'
dtype: float32
- name: '573'
dtype: float32
- name: '574'
dtype: float32
- name: '575'
dtype: float32
- name: '576'
dtype: float32
- name: '577'
dtype: float32
- name: '578'
dtype: float32
- name: '579'
dtype: float32
- name: '580'
dtype: float32
- name: '581'
dtype: float32
- name: '582'
dtype: float32
- name: '583'
dtype: float32
- name: '584'
dtype: float32
- name: '585'
dtype: float32
- name: '586'
dtype: float32
- name: '587'
dtype: float32
- name: '588'
dtype: float32
- name: '589'
dtype: float32
- name: '590'
dtype: float32
- name: '591'
dtype: float32
- name: '592'
dtype: float32
- name: '593'
dtype: float32
- name: '594'
dtype: float32
- name: '595'
dtype: float32
- name: '596'
dtype: float32
- name: '597'
dtype: float32
- name: '598'
dtype: float32
- name: '599'
dtype: float32
- name: '600'
dtype: float32
- name: '601'
dtype: float32
- name: '602'
dtype: float32
- name: '603'
dtype: float32
- name: '604'
dtype: float32
- name: '605'
dtype: float32
- name: '606'
dtype: float32
- name: '607'
dtype: float32
- name: '608'
dtype: float32
- name: '609'
dtype: float32
- name: '610'
dtype: float32
- name: '611'
dtype: float32
- name: '612'
dtype: float32
- name: '613'
dtype: float32
- name: '614'
dtype: float32
- name: '615'
dtype: float32
- name: '616'
dtype: float32
- name: '617'
dtype: float32
- name: '618'
dtype: float32
- name: '619'
dtype: float32
- name: '620'
dtype: float32
- name: '621'
dtype: float32
- name: '622'
dtype: float32
- name: '623'
dtype: float32
- name: '624'
dtype: float32
- name: '625'
dtype: float32
- name: '626'
dtype: float32
- name: '627'
dtype: float32
- name: '628'
dtype: float32
- name: '629'
dtype: float32
- name: '630'
dtype: float32
- name: '631'
dtype: float32
- name: '632'
dtype: float32
- name: '633'
dtype: float32
- name: '634'
dtype: float32
- name: '635'
dtype: float32
- name: '636'
dtype: float32
- name: '637'
dtype: float32
- name: '638'
dtype: float32
- name: '639'
dtype: float32
- name: '640'
dtype: float32
- name: '641'
dtype: float32
- name: '642'
dtype: float32
- name: '643'
dtype: float32
- name: '644'
dtype: float32
- name: '645'
dtype: float32
- name: '646'
dtype: float32
- name: '647'
dtype: float32
- name: '648'
dtype: float32
- name: '649'
dtype: float32
- name: '650'
dtype: float32
- name: '651'
dtype: float32
- name: '652'
dtype: float32
- name: '653'
dtype: float32
- name: '654'
dtype: float32
- name: '655'
dtype: float32
- name: '656'
dtype: float32
- name: '657'
dtype: float32
- name: '658'
dtype: float32
- name: '659'
dtype: float32
- name: '660'
dtype: float32
- name: '661'
dtype: float32
- name: '662'
dtype: float32
- name: '663'
dtype: float32
- name: '664'
dtype: float32
- name: '665'
dtype: float32
- name: '666'
dtype: float32
- name: '667'
dtype: float32
- name: '668'
dtype: float32
- name: '669'
dtype: float32
- name: '670'
dtype: float32
- name: '671'
dtype: float32
- name: '672'
dtype: float32
- name: '673'
dtype: float32
- name: '674'
dtype: float32
- name: '675'
dtype: float32
- name: '676'
dtype: float32
- name: '677'
dtype: float32
- name: '678'
dtype: float32
- name: '679'
dtype: float32
- name: '680'
dtype: float32
- name: '681'
dtype: float32
- name: '682'
dtype: float32
- name: '683'
dtype: float32
- name: '684'
dtype: float32
- name: '685'
dtype: float32
- name: '686'
dtype: float32
- name: '687'
dtype: float32
- name: '688'
dtype: float32
- name: '689'
dtype: float32
- name: '690'
dtype: float32
- name: '691'
dtype: float32
- name: '692'
dtype: float32
- name: '693'
dtype: float32
- name: '694'
dtype: float32
- name: '695'
dtype: float32
- name: '696'
dtype: float32
- name: '697'
dtype: float32
- name: '698'
dtype: float32
- name: '699'
dtype: float32
- name: '700'
dtype: float32
- name: '701'
dtype: float32
- name: '702'
dtype: float32
- name: '703'
dtype: float32
- name: '704'
dtype: float32
- name: '705'
dtype: float32
- name: '706'
dtype: float32
- name: '707'
dtype: float32
- name: '708'
dtype: float32
- name: '709'
dtype: float32
- name: '710'
dtype: float32
- name: '711'
dtype: float32
- name: '712'
dtype: float32
- name: '713'
dtype: float32
- name: '714'
dtype: float32
- name: '715'
dtype: float32
- name: '716'
dtype: float32
- name: '717'
dtype: float32
- name: '718'
dtype: float32
- name: '719'
dtype: float32
- name: '720'
dtype: float32
- name: '721'
dtype: float32
- name: '722'
dtype: float32
- name: '723'
dtype: float32
- name: '724'
dtype: float32
- name: '725'
dtype: float32
- name: '726'
dtype: float32
- name: '727'
dtype: float32
- name: '728'
dtype: float32
- name: '729'
dtype: float32
- name: '730'
dtype: float32
- name: '731'
dtype: float32
- name: '732'
dtype: float32
- name: '733'
dtype: float32
- name: '734'
dtype: float32
- name: '735'
dtype: float32
- name: '736'
dtype: float32
- name: '737'
dtype: float32
- name: '738'
dtype: float32
- name: '739'
dtype: float32
- name: '740'
dtype: float32
- name: '741'
dtype: float32
- name: '742'
dtype: float32
- name: '743'
dtype: float32
- name: '744'
dtype: float32
- name: '745'
dtype: float32
- name: '746'
dtype: float32
- name: '747'
dtype: float32
- name: '748'
dtype: float32
- name: '749'
dtype: float32
- name: '750'
dtype: float32
- name: '751'
dtype: float32
- name: '752'
dtype: float32
- name: '753'
dtype: float32
- name: '754'
dtype: float32
- name: '755'
dtype: float32
- name: '756'
dtype: float32
- name: '757'
dtype: float32
- name: '758'
dtype: float32
- name: '759'
dtype: float32
- name: '760'
dtype: float32
- name: '761'
dtype: float32
- name: '762'
dtype: float32
- name: '763'
dtype: float32
- name: '764'
dtype: float32
- name: '765'
dtype: float32
- name: '766'
dtype: float32
- name: '767'
dtype: float32
- name: label
dtype: string
splits:
- name: train
num_bytes: 115621178.4375
num_examples: 37500
- name: test
num_bytes: 38540392.5
num_examples: 12500
download_size: 211874011
dataset_size: 154161570.9375
---
# Dataset Card for "CSIC_DistilRoBERTa_Baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6135934591293335,
-0.18635447323322296,
0.2340167909860611,
0.42912405729293823,
-0.12701353430747986,
0.07027149200439453,
0.435101717710495,
0.07975827157497406,
0.5825444459915161,
0.2874135673046112,
-0.8126155138015747,
-0.9075047969818115,
-0.6957786679267883,
-0.156854048371315,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rahular/simple-wikipedia | rahular | 2023-08-17T17:09:41Z | 31 | 0 | null | [
"region:us"
] | 2023-08-17T17:09:41Z | 2023-08-17T17:07:10.000Z | 2023-08-17T17:07:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 144689943
num_examples: 769764
download_size: 86969379
dataset_size: 144689943
---
# simple-wikipedia
Processed, text-only dump of the Simple Wikipedia (English). Contains 23,886,673 words. | [
-0.4513847231864929,
-0.6092792749404907,
0.4929267466068268,
0.1737169623374939,
-0.8005578517913818,
-0.3596700131893158,
-0.2442970871925354,
-0.2824794054031372,
0.5185235142707825,
0.648201584815979,
-0.7547734379768372,
-0.141886368393898,
-0.8478459119796753,
0.6665658354759216,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luisroque/instruct-python-llama2-20k | luisroque | 2023-08-18T09:44:00Z | 31 | 0 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-08-18T09:44:00Z | 2023-08-17T17:59:03.000Z | 2023-08-17T17:59:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34661192.7
num_examples: 19000
- name: test
num_bytes: 1824273.3
num_examples: 1000
download_size: 19060329
dataset_size: 36485466
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 10K<n<100K
---
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
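The cleaning steps above can be sketched roughly as follows. This is a minimal illustration only: the field names `ParentId`, `Id`, `Title`, `Body`, and `Score` follow the original Stack Overflow dump, and the regex-based tag stripping is an assumption, not the exact pipeline (it also omits the Python-code-structure filter):

```python
import re

def strip_tags(html):
    """Remove HTML tags, keeping only the text content (illustrative only)."""
    return re.sub(r"<[^>]+>", "", html)

def pair_top_answers(questions, answers):
    """Pair each question with its top-scoring answer via ParentId linkage."""
    best = {}
    for a in answers:
        pid = a["ParentId"]
        if pid not in best or a["Score"] > best[pid]["Score"]:
            best[pid] = a
    return [
        {
            "score_question": q["Score"],
            "score_answer": best[q["Id"]]["Score"],
            # combined question field: title merged with body
            "question": strip_tags(q["Title"] + " " + q["Body"]),
            "answer": strip_tags(best[q["Id"]]["Body"]),
        }
        for q in questions
        if q["Id"] in best and q["Score"] >= 0  # drop negative-score entries
    ]

questions = [{"Id": 1, "Title": "Reverse a list",
              "Body": "<p>How do I reverse a list?</p>", "Score": 5}]
answers = [{"ParentId": 1, "Score": 2, "Body": "<p>Use reversed()</p>"},
           {"ParentId": 1, "Score": 7, "Body": "<code>x[::-1]</code>"}]
pairs = pair_top_answers(questions, answers)
```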
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure expected during fine-tuning. The format is as follows:
`<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]`
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
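A minimal helper that assembles the template above might look like this (newline placement is a simplification of Llama2's exact whitespace conventions):

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    # Follows the <s>[INST] <<SYS>> ... <</SYS>> ... [/INST] template shown above.
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_llama2_prompt(
    "You are a helpful Python coding assistant.",
    "How do I read a CSV file?",
)
```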
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) | [
-0.2834445834159851,
-0.7545100450515747,
0.2873803973197937,
0.20334146916866302,
-0.18896827101707458,
-0.12327821552753448,
-0.16311728954315186,
-0.34118345379829407,
-0.10577523708343506,
0.6173050403594971,
-0.8538608551025391,
-0.5304784178733826,
-0.4510142505168915,
0.281105458736... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seungheondoh/audioset-music | seungheondoh | 2023-08-23T03:09:25Z | 31 | 1 | null | [
"language:en",
"license:mit",
"music",
"audioset",
"arxiv:2302.03917",
"region:us"
] | 2023-08-23T03:09:25Z | 2023-08-23T02:20:43.000Z | 2023-08-23T02:20:43 | ---
license: mit
language:
- en
tags:
- music
- audioset
pretty_name: audioset-music
---
# Dataset Card for "audioset-music"
An AudioSet subset selected using the 130 music-related mids (AudioSet ontology class IDs) from [noise2music](https://arxiv.org/abs/2302.03917).
```
[
'/m/0z9c','/m/0mkg','/m/042v_gx','/m/0fd3y','/t/dd00036','/m/025td0t','/m/0192l','/m/018j2','/m/0bm02','/m/018vs','/m/02cz_7','/m/0395lw','/m/0gg8l','/m/0155w','/m/0l14_3',
'/m/01kcd','/m/015vgc','/m/01xqw','/m/02bk07','/m/0l14jd','/m/02mscn','/m/0140xf','/m/01wy6','/m/0ggq0m','/m/01lyv','/m/0239kh','/m/01qbl','/m/0ggx5q','/m/02bxd','/m/026z9',
'/m/02fsn','/m/0283d','/m/02hnl','/m/02k_mr','/m/026t6','/m/07s72n','/m/02sgy','/m/08cyft','/m/02lkt','/m/03xq_f','/m/0m0jc','/t/dd00035','/m/0326g','/m/0l14j_','/m/02w4v',
'/m/0319l','/m/02x8m','/t/dd00032','/m/0dwtp','/m/0mbct','/m/0dls3','/m/0342h','/m/03gvt','/t/dd00031','/m/03qjg','/m/03m5k','/m/03q5t','/m/03lty','/m/0glt670','/m/03mb9',
'/m/05rwpb','/m/03_d0','/m/03r5q_','/m/05148p4','/m/07pkxdp','/m/0j45pbj','/m/04rzd','/m/0dwsp','/m/06j64v','/m/05fw6t','/m/0164x2','/m/028sqc','/m/0dq0md','/m/0g293',
'/m/02v2lh','/m/05pd6','/m/013y1f','/m/0l14md','/m/05r5c','/m/0fx80y','/m/064t9','/m/0dl5d','/m/05w3f','/m/05r6t','/m/05r5wn','/m/06cqb','/m/06j6l','/m/03t3fj','/m/07sbbz2',
'/m/06by7','/t/dd00033','/m/0ln16','/m/06ncr','/t/dd00037','/m/01hgjl','/m/0l14l2','/m/0l14t7','/m/0jtg0','/m/06rqw','/m/06rvn','/m/0gywn','/m/0l14gg','/m/06w87','/m/0l156b',
'/m/02qmj0d','/m/07s0s5r','/m/015y_n','/m/0l14qv','/m/01p970','/m/07brj','/m/01glhc','/m/07gxw','/t/dd00034','/m/02cjck','/m/07kc_','/m/011k_j','/m/02p0sh1','/m/07lnk',
'/m/07c6l','/m/07gql','/m/016622','/m/07xzm','/m/0dwt5','/m/01z7dr','/m/07y_7','/m/0y4f8','/m/04wptg','/m/085jw','/m/01sm1g','/m/01bns_'
]
```
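Filtering AudioSet clips down to this music subset reduces to a set-membership check on each clip's mids; a minimal sketch (only a truncated sample of the 130 mids is inlined here — the second list below gives the corresponding human-readable label names):

```python
# Truncated sample of the 130 music mids listed above.
MUSIC_MIDS = {"/m/0z9c", "/m/0mkg", "/m/042v_gx", "/m/064t9", "/m/06by7"}

def is_music_clip(clip_mids):
    """True if any of the clip's AudioSet labels is a music mid."""
    return not MUSIC_MIDS.isdisjoint(clip_mids)

print(is_music_clip(["/m/064t9", "/m/09x0r"]))  # Pop music + Speech -> True
print(is_music_clip(["/m/09x0r"]))              # Speech only -> False
```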
```
[
'A capella','Accordion','Acoustic guitar','Ambient music','Angry music',
'Background music','Bagpipes','Banjo','Bass drum','Bass guitar','Beatboxing','Bell','Bluegrass','Blues','Bowed string instrument','Brass instrument',
'Carnatic music','Cello','Chant','Choir','Christian music','Christmas music','Clarinet','Classical music','Country','Cowbell','Cymbal',
'Dance music','Didgeridoo','Disco','Double bass','Drum and bass','Drum kit','Drum roll','Drum','Dubstep',
'Electric guitar','Electronic dance music','Electronic music','Electronic organ','Electronica','Exciting music',
'Flamenco','Flute','Folk music','French horn','Funk','Funny music',
'Glockenspiel','Gong','Grunge','Guitar',
'Hammond organ','Happy music','Harmonica','Harp','Harpsichord','Heavy metal','Hip hop music','House music',
'Independent music',
'Jazz','Jingle (music)',
'Keyboard (musical)',
'Lullaby',
'Mallet percussion','Mandolin','Marimba, xylophone','Middle Eastern music','Music for children','Music of Africa','Music of Asia','Music of Bollywood','Music of Latin America',
'New-age music',
'Orchestra','Organ',
'Percussion','Piano','Plucked string instrument','Pop music','Progressive rock','Psychedelic rock','Punk rock',
'Rattle (instrument)','Reggae','Rhythm and blues','Rimshot','Rock and roll','Rock music',
'Sad music','Salsa music','Saxophone','Scary music','Scratching (performance technique)','Shofar','Singing bowl','Sitar','Ska','Snare drum','Soul music','Soundtrack music','Steel guitar, slide guitar','Steelpan','String section','Strum','Swing music','Synthesizer',
'Tabla','Tambourine','Tapping (guitar technique)','Techno','Tender music','Theme music','Theremin','Timpani','Traditional music','Trance music','Trombone','Trumpet','Tubular bells',
'Ukulele',
'Vibraphone','Video game music','Violin, fiddle','Vocal music',
'Wedding music','Wind instrument, woodwind instrument','Wood block',
'Zither'
]
``` | [
-0.7742403745651245,
-0.26382872462272644,
0.16551265120506287,
0.28723904490470886,
-0.3921034336090088,
0.3421139419078827,
0.1008990928530693,
-0.23233406245708466,
0.9545084238052368,
0.5127673149108887,
-1.0767806768417358,
-0.6894080638885498,
-0.389421284198761,
0.3107316792011261,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EgilKarlsen/CSIC_DistilRoBERTa_FT | EgilKarlsen | 2023-09-04T08:18:52Z | 31 | 0 | null | [
"region:us"
] | 2023-09-04T08:18:52Z | 2023-09-04T08:18:18.000Z | 2023-09-04T08:18:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: '0'
dtype: float32
- name: '1'
dtype: float32
- name: '2'
dtype: float32
- name: '3'
dtype: float32
- name: '4'
dtype: float32
- name: '5'
dtype: float32
- name: '6'
dtype: float32
- name: '7'
dtype: float32
- name: '8'
dtype: float32
- name: '9'
dtype: float32
- name: '10'
dtype: float32
- name: '11'
dtype: float32
- name: '12'
dtype: float32
- name: '13'
dtype: float32
- name: '14'
dtype: float32
- name: '15'
dtype: float32
- name: '16'
dtype: float32
- name: '17'
dtype: float32
- name: '18'
dtype: float32
- name: '19'
dtype: float32
- name: '20'
dtype: float32
- name: '21'
dtype: float32
- name: '22'
dtype: float32
- name: '23'
dtype: float32
- name: '24'
dtype: float32
- name: '25'
dtype: float32
- name: '26'
dtype: float32
- name: '27'
dtype: float32
- name: '28'
dtype: float32
- name: '29'
dtype: float32
- name: '30'
dtype: float32
- name: '31'
dtype: float32
- name: '32'
dtype: float32
- name: '33'
dtype: float32
- name: '34'
dtype: float32
- name: '35'
dtype: float32
- name: '36'
dtype: float32
- name: '37'
dtype: float32
- name: '38'
dtype: float32
- name: '39'
dtype: float32
- name: '40'
dtype: float32
- name: '41'
dtype: float32
- name: '42'
dtype: float32
- name: '43'
dtype: float32
- name: '44'
dtype: float32
- name: '45'
dtype: float32
- name: '46'
dtype: float32
- name: '47'
dtype: float32
- name: '48'
dtype: float32
- name: '49'
dtype: float32
- name: '50'
dtype: float32
- name: '51'
dtype: float32
- name: '52'
dtype: float32
- name: '53'
dtype: float32
- name: '54'
dtype: float32
- name: '55'
dtype: float32
- name: '56'
dtype: float32
- name: '57'
dtype: float32
- name: '58'
dtype: float32
- name: '59'
dtype: float32
- name: '60'
dtype: float32
- name: '61'
dtype: float32
- name: '62'
dtype: float32
- name: '63'
dtype: float32
- name: '64'
dtype: float32
- name: '65'
dtype: float32
- name: '66'
dtype: float32
- name: '67'
dtype: float32
- name: '68'
dtype: float32
- name: '69'
dtype: float32
- name: '70'
dtype: float32
- name: '71'
dtype: float32
- name: '72'
dtype: float32
- name: '73'
dtype: float32
- name: '74'
dtype: float32
- name: '75'
dtype: float32
- name: '76'
dtype: float32
- name: '77'
dtype: float32
- name: '78'
dtype: float32
- name: '79'
dtype: float32
- name: '80'
dtype: float32
- name: '81'
dtype: float32
- name: '82'
dtype: float32
- name: '83'
dtype: float32
- name: '84'
dtype: float32
- name: '85'
dtype: float32
- name: '86'
dtype: float32
- name: '87'
dtype: float32
- name: '88'
dtype: float32
- name: '89'
dtype: float32
- name: '90'
dtype: float32
- name: '91'
dtype: float32
- name: '92'
dtype: float32
- name: '93'
dtype: float32
- name: '94'
dtype: float32
- name: '95'
dtype: float32
- name: '96'
dtype: float32
- name: '97'
dtype: float32
- name: '98'
dtype: float32
- name: '99'
dtype: float32
- name: '100'
dtype: float32
- name: '101'
dtype: float32
- name: '102'
dtype: float32
- name: '103'
dtype: float32
- name: '104'
dtype: float32
- name: '105'
dtype: float32
- name: '106'
dtype: float32
- name: '107'
dtype: float32
- name: '108'
dtype: float32
- name: '109'
dtype: float32
- name: '110'
dtype: float32
- name: '111'
dtype: float32
- name: '112'
dtype: float32
- name: '113'
dtype: float32
- name: '114'
dtype: float32
- name: '115'
dtype: float32
- name: '116'
dtype: float32
- name: '117'
dtype: float32
- name: '118'
dtype: float32
- name: '119'
dtype: float32
- name: '120'
dtype: float32
- name: '121'
dtype: float32
- name: '122'
dtype: float32
- name: '123'
dtype: float32
- name: '124'
dtype: float32
- name: '125'
dtype: float32
- name: '126'
dtype: float32
- name: '127'
dtype: float32
- name: '128'
dtype: float32
- name: '129'
dtype: float32
- name: '130'
dtype: float32
- name: '131'
dtype: float32
- name: '132'
dtype: float32
- name: '133'
dtype: float32
- name: '134'
dtype: float32
- name: '135'
dtype: float32
- name: '136'
dtype: float32
- name: '137'
dtype: float32
- name: '138'
dtype: float32
- name: '139'
dtype: float32
- name: '140'
dtype: float32
- name: '141'
dtype: float32
- name: '142'
dtype: float32
- name: '143'
dtype: float32
- name: '144'
dtype: float32
- name: '145'
dtype: float32
- name: '146'
dtype: float32
- name: '147'
dtype: float32
- name: '148'
dtype: float32
- name: '149'
dtype: float32
- name: '150'
dtype: float32
- name: '151'
dtype: float32
- name: '152'
dtype: float32
- name: '153'
dtype: float32
- name: '154'
dtype: float32
- name: '155'
dtype: float32
- name: '156'
dtype: float32
- name: '157'
dtype: float32
- name: '158'
dtype: float32
- name: '159'
dtype: float32
- name: '160'
dtype: float32
- name: '161'
dtype: float32
- name: '162'
dtype: float32
- name: '163'
dtype: float32
- name: '164'
dtype: float32
- name: '165'
dtype: float32
- name: '166'
dtype: float32
- name: '167'
dtype: float32
- name: '168'
dtype: float32
- name: '169'
dtype: float32
- name: '170'
dtype: float32
- name: '171'
dtype: float32
- name: '172'
dtype: float32
- name: '173'
dtype: float32
- name: '174'
dtype: float32
- name: '175'
dtype: float32
- name: '176'
dtype: float32
- name: '177'
dtype: float32
- name: '178'
dtype: float32
- name: '179'
dtype: float32
- name: '180'
dtype: float32
- name: '181'
dtype: float32
- name: '182'
dtype: float32
- name: '183'
dtype: float32
- name: '184'
dtype: float32
- name: '185'
dtype: float32
- name: '186'
dtype: float32
- name: '187'
dtype: float32
- name: '188'
dtype: float32
- name: '189'
dtype: float32
- name: '190'
dtype: float32
- name: '191'
dtype: float32
- name: '192'
dtype: float32
- name: '193'
dtype: float32
- name: '194'
dtype: float32
- name: '195'
dtype: float32
- name: '196'
dtype: float32
- name: '197'
dtype: float32
- name: '198'
dtype: float32
- name: '199'
dtype: float32
- name: '200'
dtype: float32
- name: '201'
dtype: float32
- name: '202'
dtype: float32
- name: '203'
dtype: float32
- name: '204'
dtype: float32
- name: '205'
dtype: float32
- name: '206'
dtype: float32
- name: '207'
dtype: float32
- name: '208'
dtype: float32
- name: '209'
dtype: float32
- name: '210'
dtype: float32
- name: '211'
dtype: float32
- name: '212'
dtype: float32
- name: '213'
dtype: float32
- name: '214'
dtype: float32
- name: '215'
dtype: float32
- name: '216'
dtype: float32
- name: '217'
dtype: float32
- name: '218'
dtype: float32
- name: '219'
dtype: float32
- name: '220'
dtype: float32
- name: '221'
dtype: float32
- name: '222'
dtype: float32
- name: '223'
dtype: float32
- name: '224'
dtype: float32
- name: '225'
dtype: float32
- name: '226'
dtype: float32
- name: '227'
dtype: float32
- name: '228'
dtype: float32
- name: '229'
dtype: float32
- name: '230'
dtype: float32
- name: '231'
dtype: float32
- name: '232'
dtype: float32
- name: '233'
dtype: float32
- name: '234'
dtype: float32
- name: '235'
dtype: float32
- name: '236'
dtype: float32
- name: '237'
dtype: float32
- name: '238'
dtype: float32
- name: '239'
dtype: float32
- name: '240'
dtype: float32
- name: '241'
dtype: float32
- name: '242'
dtype: float32
- name: '243'
dtype: float32
- name: '244'
dtype: float32
- name: '245'
dtype: float32
- name: '246'
dtype: float32
- name: '247'
dtype: float32
- name: '248'
dtype: float32
- name: '249'
dtype: float32
- name: '250'
dtype: float32
- name: '251'
dtype: float32
- name: '252'
dtype: float32
- name: '253'
dtype: float32
- name: '254'
dtype: float32
- name: '255'
dtype: float32
- name: '256'
dtype: float32
- name: '257'
dtype: float32
- name: '258'
dtype: float32
- name: '259'
dtype: float32
- name: '260'
dtype: float32
- name: '261'
dtype: float32
- name: '262'
dtype: float32
- name: '263'
dtype: float32
- name: '264'
dtype: float32
- name: '265'
dtype: float32
- name: '266'
dtype: float32
- name: '267'
dtype: float32
- name: '268'
dtype: float32
- name: '269'
dtype: float32
- name: '270'
dtype: float32
- name: '271'
dtype: float32
- name: '272'
dtype: float32
- name: '273'
dtype: float32
- name: '274'
dtype: float32
- name: '275'
dtype: float32
- name: '276'
dtype: float32
- name: '277'
dtype: float32
- name: '278'
dtype: float32
- name: '279'
dtype: float32
- name: '280'
dtype: float32
- name: '281'
dtype: float32
- name: '282'
dtype: float32
- name: '283'
dtype: float32
- name: '284'
dtype: float32
- name: '285'
dtype: float32
- name: '286'
dtype: float32
- name: '287'
dtype: float32
- name: '288'
dtype: float32
- name: '289'
dtype: float32
- name: '290'
dtype: float32
- name: '291'
dtype: float32
- name: '292'
dtype: float32
- name: '293'
dtype: float32
- name: '294'
dtype: float32
- name: '295'
dtype: float32
- name: '296'
dtype: float32
- name: '297'
dtype: float32
- name: '298'
dtype: float32
- name: '299'
dtype: float32
- name: '300'
dtype: float32
- name: '301'
dtype: float32
- name: '302'
dtype: float32
- name: '303'
dtype: float32
- name: '304'
dtype: float32
- name: '305'
dtype: float32
- name: '306'
dtype: float32
- name: '307'
dtype: float32
- name: '308'
dtype: float32
- name: '309'
dtype: float32
- name: '310'
dtype: float32
- name: '311'
dtype: float32
- name: '312'
dtype: float32
- name: '313'
dtype: float32
- name: '314'
dtype: float32
- name: '315'
dtype: float32
- name: '316'
dtype: float32
- name: '317'
dtype: float32
- name: '318'
dtype: float32
- name: '319'
dtype: float32
- name: '320'
dtype: float32
- name: '321'
dtype: float32
- name: '322'
dtype: float32
- name: '323'
dtype: float32
- name: '324'
dtype: float32
- name: '325'
dtype: float32
- name: '326'
dtype: float32
- name: '327'
dtype: float32
- name: '328'
dtype: float32
- name: '329'
dtype: float32
- name: '330'
dtype: float32
- name: '331'
dtype: float32
- name: '332'
dtype: float32
- name: '333'
dtype: float32
- name: '334'
dtype: float32
- name: '335'
dtype: float32
- name: '336'
dtype: float32
- name: '337'
dtype: float32
- name: '338'
dtype: float32
- name: '339'
dtype: float32
- name: '340'
dtype: float32
- name: '341'
dtype: float32
- name: '342'
dtype: float32
- name: '343'
dtype: float32
- name: '344'
dtype: float32
- name: '345'
dtype: float32
- name: '346'
dtype: float32
- name: '347'
dtype: float32
- name: '348'
dtype: float32
- name: '349'
dtype: float32
- name: '350'
dtype: float32
- name: '351'
dtype: float32
- name: '352'
dtype: float32
- name: '353'
dtype: float32
- name: '354'
dtype: float32
- name: '355'
dtype: float32
- name: '356'
dtype: float32
- name: '357'
dtype: float32
- name: '358'
dtype: float32
- name: '359'
dtype: float32
- name: '360'
dtype: float32
- name: '361'
dtype: float32
- name: '362'
dtype: float32
- name: '363'
dtype: float32
- name: '364'
dtype: float32
- name: '365'
dtype: float32
- name: '366'
dtype: float32
- name: '367'
dtype: float32
- name: '368'
dtype: float32
- name: '369'
dtype: float32
- name: '370'
dtype: float32
- name: '371'
dtype: float32
- name: '372'
dtype: float32
- name: '373'
dtype: float32
- name: '374'
dtype: float32
- name: '375'
dtype: float32
- name: '376'
dtype: float32
- name: '377'
dtype: float32
- name: '378'
dtype: float32
- name: '379'
dtype: float32
- name: '380'
dtype: float32
- name: '381'
dtype: float32
- name: '382'
dtype: float32
- name: '383'
dtype: float32
- name: '384'
dtype: float32
- name: '385'
dtype: float32
- name: '386'
dtype: float32
- name: '387'
dtype: float32
- name: '388'
dtype: float32
- name: '389'
dtype: float32
- name: '390'
dtype: float32
- name: '391'
dtype: float32
- name: '392'
dtype: float32
- name: '393'
dtype: float32
- name: '394'
dtype: float32
- name: '395'
dtype: float32
- name: '396'
dtype: float32
- name: '397'
dtype: float32
- name: '398'
dtype: float32
- name: '399'
dtype: float32
- name: '400'
dtype: float32
- name: '401'
dtype: float32
- name: '402'
dtype: float32
- name: '403'
dtype: float32
- name: '404'
dtype: float32
- name: '405'
dtype: float32
- name: '406'
dtype: float32
- name: '407'
dtype: float32
- name: '408'
dtype: float32
- name: '409'
dtype: float32
- name: '410'
dtype: float32
- name: '411'
dtype: float32
- name: '412'
dtype: float32
- name: '413'
dtype: float32
- name: '414'
dtype: float32
- name: '415'
dtype: float32
- name: '416'
dtype: float32
- name: '417'
dtype: float32
- name: '418'
dtype: float32
- name: '419'
dtype: float32
- name: '420'
dtype: float32
- name: '421'
dtype: float32
- name: '422'
dtype: float32
- name: '423'
dtype: float32
- name: '424'
dtype: float32
- name: '425'
dtype: float32
- name: '426'
dtype: float32
- name: '427'
dtype: float32
- name: '428'
dtype: float32
- name: '429'
dtype: float32
- name: '430'
dtype: float32
- name: '431'
dtype: float32
- name: '432'
dtype: float32
- name: '433'
dtype: float32
- name: '434'
dtype: float32
- name: '435'
dtype: float32
- name: '436'
dtype: float32
- name: '437'
dtype: float32
- name: '438'
dtype: float32
- name: '439'
dtype: float32
- name: '440'
dtype: float32
- name: '441'
dtype: float32
- name: '442'
dtype: float32
- name: '443'
dtype: float32
- name: '444'
dtype: float32
- name: '445'
dtype: float32
- name: '446'
dtype: float32
- name: '447'
dtype: float32
- name: '448'
dtype: float32
- name: '449'
dtype: float32
- name: '450'
dtype: float32
- name: '451'
dtype: float32
- name: '452'
dtype: float32
- name: '453'
dtype: float32
- name: '454'
dtype: float32
- name: '455'
dtype: float32
- name: '456'
dtype: float32
- name: '457'
dtype: float32
- name: '458'
dtype: float32
- name: '459'
dtype: float32
- name: '460'
dtype: float32
- name: '461'
dtype: float32
- name: '462'
dtype: float32
- name: '463'
dtype: float32
- name: '464'
dtype: float32
- name: '465'
dtype: float32
- name: '466'
dtype: float32
- name: '467'
dtype: float32
- name: '468'
dtype: float32
- name: '469'
dtype: float32
- name: '470'
dtype: float32
- name: '471'
dtype: float32
- name: '472'
dtype: float32
- name: '473'
dtype: float32
- name: '474'
dtype: float32
- name: '475'
dtype: float32
- name: '476'
dtype: float32
- name: '477'
dtype: float32
- name: '478'
dtype: float32
- name: '479'
dtype: float32
- name: '480'
dtype: float32
- name: '481'
dtype: float32
- name: '482'
dtype: float32
- name: '483'
dtype: float32
- name: '484'
dtype: float32
- name: '485'
dtype: float32
- name: '486'
dtype: float32
- name: '487'
dtype: float32
- name: '488'
dtype: float32
- name: '489'
dtype: float32
- name: '490'
dtype: float32
- name: '491'
dtype: float32
- name: '492'
dtype: float32
- name: '493'
dtype: float32
- name: '494'
dtype: float32
- name: '495'
dtype: float32
- name: '496'
dtype: float32
- name: '497'
dtype: float32
- name: '498'
dtype: float32
- name: '499'
dtype: float32
- name: '500'
dtype: float32
- name: '501'
dtype: float32
- name: '502'
dtype: float32
- name: '503'
dtype: float32
- name: '504'
dtype: float32
- name: '505'
dtype: float32
- name: '506'
dtype: float32
- name: '507'
dtype: float32
- name: '508'
dtype: float32
- name: '509'
dtype: float32
- name: '510'
dtype: float32
- name: '511'
dtype: float32
- name: '512'
dtype: float32
- name: '513'
dtype: float32
- name: '514'
dtype: float32
- name: '515'
dtype: float32
- name: '516'
dtype: float32
- name: '517'
dtype: float32
- name: '518'
dtype: float32
- name: '519'
dtype: float32
- name: '520'
dtype: float32
- name: '521'
dtype: float32
- name: '522'
dtype: float32
- name: '523'
dtype: float32
- name: '524'
dtype: float32
- name: '525'
dtype: float32
- name: '526'
dtype: float32
- name: '527'
dtype: float32
- name: '528'
dtype: float32
- name: '529'
dtype: float32
- name: '530'
dtype: float32
- name: '531'
dtype: float32
- name: '532'
dtype: float32
- name: '533'
dtype: float32
- name: '534'
dtype: float32
- name: '535'
dtype: float32
- name: '536'
dtype: float32
- name: '537'
dtype: float32
- name: '538'
dtype: float32
- name: '539'
dtype: float32
- name: '540'
dtype: float32
- name: '541'
dtype: float32
- name: '542'
dtype: float32
- name: '543'
dtype: float32
- name: '544'
dtype: float32
- name: '545'
dtype: float32
- name: '546'
dtype: float32
- name: '547'
dtype: float32
- name: '548'
dtype: float32
- name: '549'
dtype: float32
- name: '550'
dtype: float32
- name: '551'
dtype: float32
- name: '552'
dtype: float32
- name: '553'
dtype: float32
- name: '554'
dtype: float32
- name: '555'
dtype: float32
- name: '556'
dtype: float32
- name: '557'
dtype: float32
- name: '558'
dtype: float32
- name: '559'
dtype: float32
- name: '560'
dtype: float32
- name: '561'
dtype: float32
- name: '562'
dtype: float32
- name: '563'
dtype: float32
- name: '564'
dtype: float32
- name: '565'
dtype: float32
- name: '566'
dtype: float32
- name: '567'
dtype: float32
- name: '568'
dtype: float32
- name: '569'
dtype: float32
- name: '570'
dtype: float32
- name: '571'
dtype: float32
- name: '572'
dtype: float32
- name: '573'
dtype: float32
- name: '574'
dtype: float32
- name: '575'
dtype: float32
- name: '576'
dtype: float32
- name: '577'
dtype: float32
- name: '578'
dtype: float32
- name: '579'
dtype: float32
- name: '580'
dtype: float32
- name: '581'
dtype: float32
- name: '582'
dtype: float32
- name: '583'
dtype: float32
- name: '584'
dtype: float32
- name: '585'
dtype: float32
- name: '586'
dtype: float32
- name: '587'
dtype: float32
- name: '588'
dtype: float32
- name: '589'
dtype: float32
- name: '590'
dtype: float32
- name: '591'
dtype: float32
- name: '592'
dtype: float32
- name: '593'
dtype: float32
- name: '594'
dtype: float32
- name: '595'
dtype: float32
- name: '596'
dtype: float32
- name: '597'
dtype: float32
- name: '598'
dtype: float32
- name: '599'
dtype: float32
- name: '600'
dtype: float32
- name: '601'
dtype: float32
- name: '602'
dtype: float32
- name: '603'
dtype: float32
- name: '604'
dtype: float32
- name: '605'
dtype: float32
- name: '606'
dtype: float32
- name: '607'
dtype: float32
- name: '608'
dtype: float32
- name: '609'
dtype: float32
- name: '610'
dtype: float32
- name: '611'
dtype: float32
- name: '612'
dtype: float32
- name: '613'
dtype: float32
- name: '614'
dtype: float32
- name: '615'
dtype: float32
- name: '616'
dtype: float32
- name: '617'
dtype: float32
- name: '618'
dtype: float32
- name: '619'
dtype: float32
- name: '620'
dtype: float32
- name: '621'
dtype: float32
- name: '622'
dtype: float32
- name: '623'
dtype: float32
- name: '624'
dtype: float32
- name: '625'
dtype: float32
- name: '626'
dtype: float32
- name: '627'
dtype: float32
- name: '628'
dtype: float32
- name: '629'
dtype: float32
- name: '630'
dtype: float32
- name: '631'
dtype: float32
- name: '632'
dtype: float32
- name: '633'
dtype: float32
- name: '634'
dtype: float32
- name: '635'
dtype: float32
- name: '636'
dtype: float32
- name: '637'
dtype: float32
- name: '638'
dtype: float32
- name: '639'
dtype: float32
- name: '640'
dtype: float32
- name: '641'
dtype: float32
- name: '642'
dtype: float32
- name: '643'
dtype: float32
- name: '644'
dtype: float32
- name: '645'
dtype: float32
- name: '646'
dtype: float32
- name: '647'
dtype: float32
- name: '648'
dtype: float32
- name: '649'
dtype: float32
- name: '650'
dtype: float32
- name: '651'
dtype: float32
- name: '652'
dtype: float32
- name: '653'
dtype: float32
- name: '654'
dtype: float32
- name: '655'
dtype: float32
- name: '656'
dtype: float32
- name: '657'
dtype: float32
- name: '658'
dtype: float32
- name: '659'
dtype: float32
- name: '660'
dtype: float32
- name: '661'
dtype: float32
- name: '662'
dtype: float32
- name: '663'
dtype: float32
- name: '664'
dtype: float32
- name: '665'
dtype: float32
- name: '666'
dtype: float32
- name: '667'
dtype: float32
- name: '668'
dtype: float32
- name: '669'
dtype: float32
- name: '670'
dtype: float32
- name: '671'
dtype: float32
- name: '672'
dtype: float32
- name: '673'
dtype: float32
- name: '674'
dtype: float32
- name: '675'
dtype: float32
- name: '676'
dtype: float32
- name: '677'
dtype: float32
- name: '678'
dtype: float32
- name: '679'
dtype: float32
- name: '680'
dtype: float32
- name: '681'
dtype: float32
- name: '682'
dtype: float32
- name: '683'
dtype: float32
- name: '684'
dtype: float32
- name: '685'
dtype: float32
- name: '686'
dtype: float32
- name: '687'
dtype: float32
- name: '688'
dtype: float32
- name: '689'
dtype: float32
- name: '690'
dtype: float32
- name: '691'
dtype: float32
- name: '692'
dtype: float32
- name: '693'
dtype: float32
- name: '694'
dtype: float32
- name: '695'
dtype: float32
- name: '696'
dtype: float32
- name: '697'
dtype: float32
- name: '698'
dtype: float32
- name: '699'
dtype: float32
- name: '700'
dtype: float32
- name: '701'
dtype: float32
- name: '702'
dtype: float32
- name: '703'
dtype: float32
- name: '704'
dtype: float32
- name: '705'
dtype: float32
- name: '706'
dtype: float32
- name: '707'
dtype: float32
- name: '708'
dtype: float32
- name: '709'
dtype: float32
- name: '710'
dtype: float32
- name: '711'
dtype: float32
- name: '712'
dtype: float32
- name: '713'
dtype: float32
- name: '714'
dtype: float32
- name: '715'
dtype: float32
- name: '716'
dtype: float32
- name: '717'
dtype: float32
- name: '718'
dtype: float32
- name: '719'
dtype: float32
- name: '720'
dtype: float32
- name: '721'
dtype: float32
- name: '722'
dtype: float32
- name: '723'
dtype: float32
- name: '724'
dtype: float32
- name: '725'
dtype: float32
- name: '726'
dtype: float32
- name: '727'
dtype: float32
- name: '728'
dtype: float32
- name: '729'
dtype: float32
- name: '730'
dtype: float32
- name: '731'
dtype: float32
- name: '732'
dtype: float32
- name: '733'
dtype: float32
- name: '734'
dtype: float32
- name: '735'
dtype: float32
- name: '736'
dtype: float32
- name: '737'
dtype: float32
- name: '738'
dtype: float32
- name: '739'
dtype: float32
- name: '740'
dtype: float32
- name: '741'
dtype: float32
- name: '742'
dtype: float32
- name: '743'
dtype: float32
- name: '744'
dtype: float32
- name: '745'
dtype: float32
- name: '746'
dtype: float32
- name: '747'
dtype: float32
- name: '748'
dtype: float32
- name: '749'
dtype: float32
- name: '750'
dtype: float32
- name: '751'
dtype: float32
- name: '752'
dtype: float32
- name: '753'
dtype: float32
- name: '754'
dtype: float32
- name: '755'
dtype: float32
- name: '756'
dtype: float32
- name: '757'
dtype: float32
- name: '758'
dtype: float32
- name: '759'
dtype: float32
- name: '760'
dtype: float32
- name: '761'
dtype: float32
- name: '762'
dtype: float32
- name: '763'
dtype: float32
- name: '764'
dtype: float32
- name: '765'
dtype: float32
- name: '766'
dtype: float32
- name: '767'
dtype: float32
- name: label
dtype: string
splits:
- name: train
num_bytes: 115621182
num_examples: 37500
- name: test
num_bytes: 38540387
num_examples: 12500
download_size: 211876775
dataset_size: 154161569
---
# Dataset Card for "CSIC_DistilRoBERTa_FT"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47644615173339844,
-0.20814542472362518,
0.3494182229042053,
0.5931784510612488,
-0.2840084731578827,
0.29199889302253723,
0.4509090185165405,
0.05724720284342766,
0.6422017216682434,
0.20078518986701965,
-0.7871842384338379,
-0.8437122702598572,
-0.7503664493560791,
0.02020563744008541... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daniel2588/website_defacement | daniel2588 | 2023-10-31T09:06:41Z | 31 | 0 | null | [
"region:us"
] | 2023-10-31T09:06:41Z | 2023-09-05T11:10:42.000Z | 2023-09-05T11:10:42 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codymlewis/nbaiot | codymlewis | 2023-10-13T04:02:56Z | 31 | 0 | null | [
"license:cc-by-4.0",
"arxiv:1805.03409",
"region:us"
] | 2023-10-13T04:02:56Z | 2023-09-20T02:24:15.000Z | 2023-09-20T02:24:15 | ---
dataset_info:
features:
- name: features
sequence: float32
length: 115
- name: attack
dtype:
class_label:
names:
'0': benign_traffic
'1': combo
'2': junk
'3': mirai-ack
'4': mirai-scan
'5': mirai-syn
'6': mirai-udp
'7': mirai-udpplain
'8': scan
'9': tcp
'10': udp
- name: device
dtype:
class_label:
names:
'0': Danmini_Doorbell
'1': Ecobee_Thermostat
'2': Ennio_Doorbell
'3': Philips_B120N10_Baby_Monitor
'4': Provision_PT_737E_Security_Camera
'5': Provision_PT_838_Security_Camera
'6': Samsung_SNH_1011_N_Webcam
'7': SimpleHome_XCS7_1002_WHT_Security_Camera
'8': SimpleHome_XCS7_1003_WHT_Security_Camera
splits:
- name: train
num_bytes: 2857231888
num_examples: 6002588
- name: test
num_bytes: 504568568
num_examples: 1060018
download_size: 1772922927
dataset_size: 3361800456
license: cc-by-4.0
pretty_name: nbaiot
---
# Dataset Card for N-BAIoT
*From https://archive.ics.uci.edu/dataset/442/detection+of+iot+botnet+attacks+n+baiot:* This dataset addresses the lack of public botnet datasets, especially for the IoT. It suggests *real* traffic data, gathered from 9 commercial IoT devices authentically infected by Mirai and BASHLITE.
## Dataset Details
### Dataset Description
*From https://archive.ics.uci.edu/dataset/442/detection+of+iot+botnet+attacks+n+baiot:*
(a) Attribute being predicted:
-- Originally we aimed at distinguishing between benign and Malicious traffic data by means of anomaly detection techniques.
-- However, as the malicious data can be divided into 10 attacks carried by 2 botnets, the dataset can also be used for multi-class classification: 10 classes of attacks, plus 1 class of 'benign'.
(b) The study's results:
-- For each of the 9 IoT devices we trained and optimized a deep autoencoder on 2/3 of its benign data (i.e., the training set of each device). This was done to capture normal network traffic patterns.
-- The test data of each device comprised of the remaining 1/3 of benign data plus all the malicious data. On each test set we applied the respective trained (deep) autoencoder as an anomaly detector. The detection of anomalies (i.e., the cyberattacks launched from each of the above IoT devices) concluded with 100% TPR.
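The train-on-benign / flag-high-error idea from the study above can be illustrated with a toy stand-in: a mean absolute z-score against a benign profile instead of the paper's deep-autoencoder reconstruction error, on synthetic data rather than N-BaIoT itself.

```python
import numpy as np

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 5))      # synthetic "benign traffic" features
mean, std = benign.mean(axis=0), benign.std(axis=0)

def anomaly_score(x):
    # Mean absolute z-score w.r.t. the benign profile; a toy stand-in for
    # an autoencoder's reconstruction error on the 115 real features.
    return float(np.abs((x - mean) / std).mean())

# Threshold at the 99th percentile of benign scores, mimicking training on benign data only.
threshold = float(np.quantile([anomaly_score(r) for r in benign], 0.99))

attack_sample = rng.normal(8.0, 1.0, size=5)      # strongly shifted traffic statistics
print(anomaly_score(attack_sample) > threshold)   # True: flagged as an attack
```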
- **Curated by:** Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, , Asaf, and Shabtai, Asaf
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode)
### Dataset Sources
- **Repository:** https://archive.ics.uci.edu/dataset/442/detection+of+iot+botnet+attacks+n+baiot
- **Paper:** https://arxiv.org/abs/1805.03409
## Citation
**BibTeX:**
```
@misc{misc_detection_of_iot_botnet_attacks_n_baiot_442,
  author       = {Meidan, Yair and Bohadana, Michael and Mathov, Yael and Mirsky, Yisroel and Breitenbacher, Dominik and Shabtai, Asaf},
  title        = {{N-BaIoT Dataset to Detect IoT Botnet Attacks}},
  year         = {2018},
  howpublished = {UCI Machine Learning Repository},
  note         = {{DOI}: https://doi.org/10.24432/C5RC8J}
}
```
**APA:**
Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, and Shabtai, Asaf. (2018). N-BaIoT Dataset to Detect IoT Botnet Attacks. UCI Machine Learning Repository. https://doi.org/10.24432/C5RC8J.
## Glossary
- **IoT**: Internet of Things
- **Botnet**: A collection of devices that are maliciously controlled via malware | [
-0.3307483196258545,
-0.6469738483428955,
-0.12133921682834625,
-0.1466452032327652,
-0.18232735991477966,
-0.013865391723811626,
0.30361464619636536,
-0.4170933663845062,
0.37213945388793945,
0.22429926693439484,
-0.3387167453765869,
-0.4580480456352234,
-0.5771939754486084,
-0.0278775785... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/term_a | SEACrowd | 2023-09-26T12:29:41Z | 31 | 0 | null | [
"language:ind",
"keyword-tagging",
"region:us"
] | 2023-09-26T12:29:41Z | 2023-09-26T11:13:44.000Z | 2023-09-26T11:13:44 | ---
tags:
- keyword-tagging
language:
- ind
---
# term_a
TermA is a span-extraction dataset collected from the hotel aggregator platform AiryRooms
(Septiandri and Sutiono, 2019; Fernando et al., 2019). It consists of thousands of hotel reviews,
each containing span labels for the aspect and sentiment words that express the reviewer's opinion
on the corresponding aspect. The labels use Inside-Outside-Beginning (IOB) tagging with two kinds
of tags, aspect and sentiment.
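As an illustration of the IOB scheme with two tag kinds, a toy Indonesian review and a small span-collecting loop (the exact label strings and tokens are assumptions for illustration, not taken from the corpus):

```python
# One toy review token sequence with IOB tags for aspect and sentiment spans.
# "kamar" = room (aspect); "sangat bersih" = very clean, "nyaman" = comfortable (sentiment).
tokens = ["kamar", "sangat", "bersih", "dan", "nyaman"]
tags   = ["B-ASPECT", "B-SENTIMENT", "I-SENTIMENT", "O", "B-SENTIMENT"]

# Collect spans: a B-* tag opens a span, an I-* tag of the same type extends it.
spans, current = [], None
for tok, tag in zip(tokens, tags):
    if tag.startswith("B-"):
        if current:
            spans.append(current)
        current = [tag[2:], [tok]]
    elif tag.startswith("I-") and current and current[0] == tag[2:]:
        current[1].append(tok)
    else:
        if current:
            spans.append(current)
        current = None
if current:
    spans.append(current)

print([(kind, " ".join(ws)) for kind, ws in spans])
# → [('ASPECT', 'kamar'), ('SENTIMENT', 'sangat bersih'), ('SENTIMENT', 'nyaman')]
```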
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{winatmoko2019aspect,
title={Aspect and opinion term extraction for hotel reviews using transfer learning and auxiliary labels},
author={Winatmoko, Yosef Ardhito and Septiandri, Ali Akbar and Sutiono, Arie Pratama},
journal={arXiv preprint arXiv:1909.11879},
year={2019}
}
@inproceedings{fernando2019aspect,
title={Aspect and opinion terms extraction using double embeddings and attention mechanism for indonesian hotel reviews},
author={Fernando, Jordhy and Khodra, Masayu Leylia and Septiandri, Ali Akbar},
booktitle={2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
pages={1--6},
year={2019},
organization={IEEE}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.6968581080436707,
-0.871724545955658,
0.1281052827835083,
0.41216394305229187,
-0.5163208842277527,
0.07032085955142975,
-0.09465118497610092,
-0.4349570870399475,
0.8115648627281189,
0.48154059052467346,
-0.40797409415245056,
-0.5890896916389465,
-0.44480136036872864,
0.220289215445518... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/tydiqa_id | SEACrowd | 2023-09-26T12:31:34Z | 31 | 0 | null | [
"language:ind",
"question-answering",
"region:us"
] | 2023-09-26T12:31:34Z | 2023-09-26T11:15:48.000Z | 2023-09-26T11:15:48 | ---
tags:
- question-answering
language:
- ind
---
# tydiqa_id
TyDiQA dataset is collected from Wikipedia articles with human-annotated question and answer pairs covering 11 languages.
The question-answer pairs are collected for each language without using translation services.
IndoNLG uses the Indonesian data from the secondary Gold passage task of the original TyDiQA dataset,
randomly splits off 15% of the training data, and uses it as the test set.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{clark-etal-2020-tydi,
title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
author = "Clark, Jonathan H. and
Choi, Eunsol and
Collins, Michael and
Garrette, Dan and
Kwiatkowski, Tom and
Nikolaev, Vitaly and
Palomaki, Jennimaria",
journal = "Transactions of the Association for Computational Linguistics",
volume = "8",
year = "2020",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2020.tacl-1.30",
doi = "10.1162/tacl_a_00317",
pages = "454--470",
}
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898"
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlg](https://github.com/IndoNLP/indonlg)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.575702965259552,
-0.6481682062149048,
0.087367482483387,
0.3276151418685913,
-0.4190900921821594,
-0.12157590687274933,
-0.555095374584198,
-0.3519098162651062,
0.4418281018733978,
0.4822637140750885,
-0.5767275094985962,
-0.8350361585617065,
-0.32095450162887573,
0.5461929440498352,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abdiharyadi/nafkhan_ft_dataset_with_id_amr | abdiharyadi | 2023-10-13T00:19:59Z | 31 | 0 | null | [
"region:us"
] | 2023-10-13T00:19:59Z | 2023-09-28T08:44:38.000Z | 2023-09-28T08:44:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: en_amr
dtype: string
- name: id_amr
dtype: string
- name: en
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 106147279
num_examples: 92867
- name: validation
num_bytes: 2278476
num_examples: 1722
- name: test
num_bytes: 1866019
num_examples: 1371
download_size: 41233166
dataset_size: 110291774
---
# Dataset Card for "nafkhan_ft_dataset_with_id_amr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5622181296348572,
-0.4125952124595642,
-0.12296077609062195,
0.1523643583059311,
-0.3808315098285675,
0.23219847679138184,
0.2845304608345032,
-0.03198765218257904,
0.9830519556999207,
0.3840039074420929,
-0.7024741768836975,
-0.7691920399665833,
-0.40083929896354675,
-0.082848154008388... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AntoineBlanot/alpaca-llama2-chat | AntoineBlanot | 2023-10-09T07:30:40Z | 31 | 2 | null | [
"region:us"
] | 2023-10-09T07:30:40Z | 2023-10-05T08:57:49.000Z | 2023-10-05T08:57:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 46095859
num_examples: 52002
download_size: 0
dataset_size: 46095859
---
# Dataset Card for "alpaca-llama2-chat"
This dataset is the [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset formatted for [llama2-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
The default system prompt, as well as the special tokens, have all been added, yielding a ready-to-train dataset.
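For reference, the Llama-2 chat template wraps a single-turn instruction/response pair as sketched below; the exact system prompt string used when building this dataset is an assumption here.

```python
# Sketch of the Llama-2 chat format a `text` field like this dataset's would follow.
# DEFAULT_SYSTEM is illustrative; the card does not spell out the exact string used.
DEFAULT_SYSTEM = "You are a helpful, respectful and honest assistant."

def to_llama2_chat(instruction: str, response: str, system: str = DEFAULT_SYSTEM) -> str:
    """Wrap one instruction/response pair with Llama-2 chat special tokens."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{instruction} [/INST] {response} </s>"
    )

sample = to_llama2_chat(
    "Give three tips for staying healthy.",
    "1. Eat well. 2. Exercise. 3. Sleep enough.",
)
print(sample)
```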
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.54119873046875,
-0.4554242789745331,
0.19084574282169342,
0.6274835467338562,
-0.6981868147850037,
0.1589251160621643,
0.09778539836406708,
-0.3918815851211548,
1.0401140451431274,
0.4005614221096039,
-1.0113909244537354,
-0.5359949469566345,
-0.7252590656280518,
0.05138761177659035,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
YaHi/english_AAAI_Math | YaHi | 2023-10-09T21:06:27Z | 31 | 0 | null | [
"region:us"
] | 2023-10-09T21:06:27Z | 2023-10-09T21:06:26.000Z | 2023-10-09T21:06:26 | ---
dataset_info:
features:
- name: dataset_version
dtype: timestamp[s]
- name: queId
dtype: string
- name: difficulty
dtype: string
- name: qtype
dtype: string
- name: problem
dtype: string
- name: knowledge_point_routes
sequence: string
splits:
- name: train
num_bytes: 2228695
num_examples: 5927
download_size: 854269
dataset_size: 2228695
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "english_AAAI_Math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6113774180412292,
-0.36102378368377686,
0.06013115495443344,
0.3863644599914551,
0.03783729299902916,
0.08283929526805878,
0.1305600255727768,
-0.220000222325325,
0.8702073097229004,
0.19237439334392548,
-0.7852181196212769,
-0.754578709602356,
-0.5567091703414917,
-0.12361875921487808,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stepkurniawan/qa-rag-llama | stepkurniawan | 2023-10-17T14:26:55Z | 31 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-17T14:26:55Z | 2023-10-16T16:37:09.000Z | 2023-10-16T16:37:09 | ---
license: mit
dataset_info:
- config_name: Llama-2-13b-chat-hf
features:
- name: question
dtype: string
- name: ground_truths
sequence: string
- name: answer
dtype: string
- name: contexts
sequence: string
splits:
- name: train
num_bytes: 188631
num_examples: 50
download_size: 99989
dataset_size: 188631
- config_name: Llama-2-7b-chat-hf
features:
- name: question
dtype: string
- name: ground_truths
sequence: string
- name: answer
dtype: string
- name: contexts
sequence: string
splits:
- name: train
num_bytes: 168301
num_examples: 50
download_size: 89924
dataset_size: 168301
- config_name: default
features:
- name: question
dtype: string
- name: ground_truths
sequence: string
- name: answer
dtype: string
- name: contexts
sequence: string
splits:
- name: train
num_bytes: 10068
num_examples: 3
download_size: 0
dataset_size: 10068
configs:
- config_name: Llama-2-13b-chat-hf
data_files:
- split: train
path: Llama-2-13b-chat-hf/train-*
- config_name: Llama-2-7b-chat-hf
data_files:
- split: train
path: Llama-2-7b-chat-hf/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
garrett361/lore_mc_task_test | garrett361 | 2023-10-17T14:01:51Z | 31 | 0 | null | [
"region:us"
] | 2023-10-17T14:01:51Z | 2023-10-17T14:01:47.000Z | 2023-10-17T14:01:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: number
dtype: string
- name: gold
dtype: string
- name: choices
sequence: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 10887.5
num_examples: 50
- name: validation
num_bytes: 5443.75
num_examples: 25
- name: test
num_bytes: 5443.75
num_examples: 25
download_size: 17841
dataset_size: 21775.0
---
# Dataset Card for "lore_mc_task_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39611345529556274,
-0.3960953950881958,
0.253908634185791,
0.22289422154426575,
-0.02245216630399227,
0.027206523343920708,
0.2774510979652405,
-0.1755698323249817,
0.782012939453125,
0.5654464364051819,
-1.1649281978607178,
-0.7724844813346863,
-0.5554269552230835,
-0.23065780103206635... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
getawayfrommeXD/trec | getawayfrommeXD | 2023-10-28T07:32:24Z | 31 | 0 | null | [
"region:us"
] | 2023-10-28T07:32:24Z | 2023-10-25T04:17:52.000Z | 2023-10-25T04:17:52 | ---
dataset_info:
features:
- name: label-coarse
dtype: int64
- name: text
dtype: string
- name: clean_text
dtype: string
splits:
- name: train
num_bytes: 485569
num_examples: 4952
- name: validation
num_bytes: 50526
num_examples: 500
- name: test
num_bytes: 36238
num_examples: 500
download_size: 0
dataset_size: 572333
---
# Dataset Card for "trec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6532798409461975,
-0.35221540927886963,
0.2735165059566498,
0.15344448387622833,
-0.23157484829425812,
0.33993497490882874,
0.3105717599391937,
-0.17477455735206604,
0.879597544670105,
0.4632546007633209,
-0.9130440950393677,
-1.0260426998138428,
-0.5287262797355652,
-0.1819853037595749... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MananSantoki/vadodara-jsonl | MananSantoki | 2023-10-25T11:26:34Z | 31 | 0 | null | [
"region:us"
] | 2023-10-25T11:26:34Z | 2023-10-25T11:26:01.000Z | 2023-10-25T11:26:01 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlplabtdtu/multi-choices-text | nlplabtdtu | 2023-11-09T10:07:38Z | 31 | 0 | null | [
"region:us"
] | 2023-11-09T10:07:38Z | 2023-10-27T06:35:45.000Z | 2023-10-27T06:35:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: options
list:
- name: answer
dtype: string
- name: key
dtype: string
- name: answer
struct:
- name: answer
dtype: string
- name: key
dtype: string
- name: solution
dtype: string
- name: type
dtype: string
- name: alnum_start
dtype: bool
- name: prompt
dtype: string
- name: response
dtype: string
- name: grade
dtype: string
- name: subject
dtype: string
- name: prompt_type
dtype: string
splits:
- name: train
num_bytes: 93596608
num_examples: 58286
download_size: 48223987
dataset_size: 93596608
---
# Dataset Card for "multi-choices-text"
A multiple-choice dataset of 58,290 rows from vungoi. Some characteristics of this set:
```
- Every question is a complete question ending in "?"
- English-language questions were skipped
- The "Đáp án.*[ABCD]" parts of the "solution" field were replaced with ""
- The trailing "." was removed from every "answer" in "options" and from "solution", mainly to ease prompt building
```
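The answer-key scrubbing step mentioned for the "solution" field ("Đáp án.*[ABCD]") can be sketched with a regex like the following; the exact pattern and flags used by the dataset authors are assumptions.

```python
import re

# Remove trailing "Đáp án ... A/B/C/D" answer-key sentences from a solution string.
ANSWER_KEY = re.compile(r"Đáp án.*[ABCD]", flags=re.DOTALL)

def scrub_solution(solution: str) -> str:
    return ANSWER_KEY.sub("", solution).strip()

raw = "Ta có 2 + 2 = 4. Đáp án đúng là B"
print(scrub_solution(raw))   # → "Ta có 2 + 2 = 4."
```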
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43573522567749023,
-0.7681803703308105,
0.2008899301290512,
0.3183664381504059,
-0.5292713642120361,
0.12249291688203812,
-0.13039667904376984,
-0.16016213595867157,
0.5553781390190125,
0.7077664732933044,
-0.6196457147598267,
-0.7283298373222351,
-0.5806697607040405,
0.2436329424381256... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
georgeyw/dsir-pile-100k | georgeyw | 2023-10-28T01:34:19Z | 31 | 0 | null | [
"region:us"
] | 2023-10-28T01:34:19Z | 2023-10-28T01:33:32.000Z | 2023-10-28T01:33:32 | Rows 4.9M to 5M of the Pile. | [
-0.708500325679779,
0.2725760340690613,
0.2932646870613098,
0.3163984715938568,
-0.31048500537872314,
0.06800799816846848,
0.8925615549087524,
-0.3915828466415405,
0.669558584690094,
0.8252049684524536,
0.038067854940891266,
0.013840780593454838,
-0.5085081458091736,
0.649865984916687,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tourist800/orkg-llama2-v2 | tourist800 | 2023-11-01T06:09:05Z | 31 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-01T06:09:05Z | 2023-10-29T10:17:04.000Z | 2023-10-29T10:17:04 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
recoilme/aesthetic_photos_xs | recoilme | 2023-10-29T15:20:31Z | 31 | 0 | null | [
"size_categories:1K<n<10K",
"art",
"region:us"
] | 2023-10-29T15:20:31Z | 2023-10-29T14:50:46.000Z | 2023-10-29T14:50:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1391150970.57
num_examples: 1010
download_size: 1391377501
dataset_size: 1391150970.57
tags:
- art
pretty_name: aesthetic photos xs
size_categories:
- 1K<n<10K
---
# aesthetic_photos_xs
- 1k manually selected photos from Unsplash
- captioned with the BLIP-large captioning model and SmilingWolf/wd-v1-4-convnext-tagger-v2
# repositories
- https://github.com/recoilme/unsplash_dwn
- https://github.com/kohya-ss/sd-scripts
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6073541045188904,
-0.00995598640292883,
0.2414441853761673,
0.30522817373275757,
-0.5040495991706848,
0.2599756717681885,
0.006658776197582483,
-0.380786657333374,
0.6525158286094666,
0.6575647592544556,
-0.9409482479095459,
-0.7522832155227661,
-0.33259013295173645,
0.21885061264038086... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anyspeech/fleurs_test | anyspeech | 2023-11-02T00:05:35Z | 31 | 0 | null | [
"region:us"
] | 2023-11-02T00:05:35Z | 2023-11-02T00:02:09.000Z | 2023-11-02T00:02:09 | ---
configs:
- config_name: default
data_files:
- split: query
path: data/query-*
- split: candidate
path: data/candidate-*
dataset_info:
features:
- name: _id
dtype: int64
- name: file_name
dtype: string
- name: raw_transcription
dtype: string
- name: transcription
dtype: string
- name: num_samples
dtype: int64
- name: gender
dtype: string
- name: phones
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: sampling_rate
dtype: int64
splits:
- name: query
num_bytes: 1843536302
num_examples: 1132
- name: candidate
num_bytes: 3243527476
num_examples: 1979
download_size: 3137163451
dataset_size: 5087063778
---
# Dataset Card for "fleurs_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.775978147983551,
-0.513909637928009,
0.03317233547568321,
0.2801312804222107,
-0.06066480278968811,
-0.1586194932460785,
0.2977476418018341,
-0.224787637591362,
0.7579621076583862,
0.5333923697471619,
-0.8206210732460022,
-0.5726436972618103,
-0.542812168598175,
-0.1595841348171234,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
YousufEjaz/phishing_legitimate | YousufEjaz | 2023-11-03T09:11:27Z | 31 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-03T09:11:27Z | 2023-11-03T06:03:36.000Z | 2023-11-03T06:03:36 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
feedback-to-code/Corona-Warn-App-1 | feedback-to-code | 2023-11-06T12:12:23Z | 31 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-06T12:12:23Z | 2023-11-06T08:34:10.000Z | 2023-11-06T08:34:10 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/tasks/cwa-app-ios-tasks-everything.jsonl
- split: test
path: data/tasks/cwa-app-ios-tasks.jsonl
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thangvip/orca-processes | thangvip | 2023-11-08T03:11:09Z | 31 | 0 | null | [
"region:us"
] | 2023-11-08T03:11:09Z | 2023-11-08T03:11:05.000Z | 2023-11-08T03:11:05 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 31950704.438731268
num_examples: 32860
download_size: 11256640
dataset_size: 31950704.438731268
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "orca-processes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42546987533569336,
-0.4830549359321594,
0.311863511800766,
0.058204494416713715,
-0.25072044134140015,
-0.16906361281871796,
0.32982513308525085,
-0.4533863365650177,
0.9466180205345154,
0.7751142978668213,
-0.9506861567497253,
-0.7171223163604736,
-0.5814188122749329,
-0.33063086867332... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Fishball02/topical-chat | Fishball02 | 2023-11-12T21:47:12Z | 31 | 0 | null | [
"region:us"
] | 2023-11-12T21:47:12Z | 2023-11-09T22:35:56.000Z | 2023-11-09T22:35:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
list:
- name: agent
dtype: string
- name: message
dtype: string
- name: agent
dtype: string
- name: message
dtype: string
splits:
- name: train
num_bytes: 285581436.3244529
num_examples: 202117
- name: test
num_bytes: 31732055.675547145
num_examples: 22458
download_size: 181996944
dataset_size: 317313492.0
---
# Dataset Card for "topical-chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5871667265892029,
-0.6638720035552979,
0.21133217215538025,
0.39176544547080994,
-0.3105213940143585,
0.03997032716870308,
-0.13121621310710907,
-0.1963471621274948,
1.0453397035598755,
0.4217976927757263,
-0.9957610964775085,
-0.9553917050361633,
-0.5742097496986389,
-0.532710433006286... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Deojoandco/capstone_forgpt_without_gold | Deojoandco | 2023-11-10T13:00:46Z | 31 | 0 | null | [
"region:us"
] | 2023-11-10T13:00:46Z | 2023-11-10T13:00:32.000Z | 2023-11-10T13:00:32 | ---
dataset_info:
features:
- name: dialog_id
dtype: int64
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: gold_tags
dtype: string
splits:
- name: train
num_bytes: 73584
num_examples: 76
- name: validation
num_bytes: 14568
num_examples: 12
- name: test
num_bytes: 8476
num_examples: 12
download_size: 38534
dataset_size: 96628
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "capstone_forgpt_without_gold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5612761974334717,
-0.2079877257347107,
0.36758968234062195,
0.21266821026802063,
-0.30306336283683777,
0.034468576312065125,
0.03316568210721016,
0.17235520482063293,
0.6661767959594727,
0.7167744636535645,
-1.1365301609039307,
-0.9707843065261841,
-0.6983197331428528,
-0.33846208453178... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lwasinam/en-ha | Lwasinam | 2023-11-12T19:25:23Z | 31 | 0 | null | [
"region:us"
] | 2023-11-12T19:25:23Z | 2023-11-12T19:13:33.000Z | 2023-11-12T19:13:33 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/end2end_textclassification | argilla | 2023-11-27T13:37:22Z | 31 | 0 | null | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2023-11-27T13:37:22Z | 2023-11-13T17:25:52.000Z | 2023-11-13T17:25:52 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for end2end_textclassification
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/end2end_textclassification")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/end2end_textclassification")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | FieldTypes.text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | QuestionTypes.label_selection | True | N/A | ['World', 'Sports', 'Business', 'Sci/Tech'] |
The **suggestions** are human- or machine-generated recommendations for each question to assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value/s of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| group | Annotation Group | terms | ['group-1', 'group-2', 'group-3'] | True |
| length | Length of the text | integer | 100 - 862 | True |
| length_std | Standard deviation of the length of the text | float | 139.096 - 361.398 | True |
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "record-0",
"fields": {
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
},
"metadata": {
"group": "group-2",
"length": 144,
"length_std": 144.0
},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "record-0",
"label": [],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{\"group\": \"group-2\", \"length\": 144, \"length_std\": 144.0}",
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
}
```
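Note that in the `datasets` view the `metadata` column arrives as a JSON string rather than a dict, so it needs to be parsed back; field names below are taken from the record shown above.

```python
import json

# `metadata` as it appears in a row loaded with `datasets` (see the record above).
row_metadata = '{"group": "group-2", "length": 144, "length_std": 144.0}'

meta = json.loads(row_metadata)
print(meta["group"], meta["length"])   # → group-2 144
```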
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `FieldTypes.text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `QuestionTypes.label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `QuestionTypes.label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide extra context to the annotators, or to record details about the record itself, such as a link to its original source, its author, its date, or its provenance. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Classify the articles into one of the four categories.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.7106332778930664,
-0.840616762638092,
0.20998705923557281,
0.1941518783569336,
-0.3804853856563568,
-0.43063750863075256,
-0.0845283716917038,
-0.6399125456809998,
0.674923837184906,
0.7474461793899536,
-0.7176883816719055,
-0.7959818840026855,
-0.6621313095092773,
0.26312124729156494,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anumafzal94/pubmed-2shot-4096 | anumafzal94 | 2023-11-17T10:45:39Z | 31 | 0 | null | [
"region:us"
] | 2023-11-17T10:45:39Z | 2023-11-15T13:47:10.000Z | 2023-11-15T13:47:10 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: few-shot
dtype: bool
splits:
- name: test
num_bytes: 8149116.593446602
num_examples: 426
- name: train
num_bytes: 139802654.7469022
num_examples: 7242
download_size: 20828412
dataset_size: 147951771.3403488
---
# Dataset Card for "pubmed-2shot-4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.08539947122335434,
0.14528179168701172,
0.5515300631523132,
0.10807128995656967,
-0.44164785742759705,
-0.07952707260847092,
0.35072243213653564,
-0.1371520310640335,
0.730095386505127,
0.47741150856018066,
-0.6673378944396973,
-0.5684558153152466,
-0.751338005065918,
-0.072257846593856... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_alice_hard_4_mixture_1.0e | atmallen | 2023-11-16T18:18:16Z | 31 | 0 | null | [
"region:us"
] | 2023-11-16T18:18:16Z | 2023-11-16T03:33:37.000Z | 2023-11-16T03:33:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 4578170.5
num_examples: 37091
- name: validation
num_bytes: 487083.5
num_examples: 3969
- name: test
num_bytes: 477119.5
num_examples: 3926
download_size: 1548358
dataset_size: 5542373.5
---
# Dataset Card for "qm_alice_hard_4_mixture_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46733781695365906,
-0.19386543333530426,
0.3733271360397339,
0.3511069118976593,
-0.21260613203048706,
0.07948730885982513,
0.49586179852485657,
0.05191444233059883,
0.7732616066932678,
0.524484395980835,
-0.6886478662490845,
-0.9015241265296936,
-0.5089607834815979,
-0.1395207196474075... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
empbetty/dog-similar-to-tangyuan-dataset | empbetty | 2023-11-16T11:08:55Z | 31 | 0 | null | [
"region:us"
] | 2023-11-16T11:08:55Z | 2023-11-16T11:08:53.000Z | 2023-11-16T11:08:53 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2926472.0
num_examples: 105
download_size: 2926339
dataset_size: 2926472.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dog-similar-to-tangyuan-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5531107783317566,
-0.060214828699827194,
0.1197693794965744,
0.3470171093940735,
-0.42461615800857544,
-0.12454700469970703,
0.18406884372234344,
-0.2138863056898117,
0.7778640985488892,
0.42838627099990845,
-0.7868608832359314,
-0.646608293056488,
-0.22285419702529907,
-0.1896346956491... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion-pop | laion | 2023-11-18T13:16:24Z | 31 | 9 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-18T13:16:24Z | 2023-11-17T17:11:06.000Z | 2023-11-17T17:11:06 | ---
license: apache-2.0
---
# LAION POP: 600,000 HIGH-RESOLUTION IMAGES WITH DETAILED DESCRIPTIONS
## by: Christoph Schuhmann, Peter Bevan, 17 Nov, 2023
---
LAION POP is a subset of LAION-5B comprising 600,000 high-resolution images, each paired with a detailed description. The images were selected based on 10,000 different concepts popular on the image generation site "Midjourney".
---
### Dataset and Methodology
4.25 million Midjourney images were downloaded from this huggingface repository, and CLIP L14 vectors were generated for each image. Using the k-means clustering method, these vectors were assigned to 10,000 centroids. The CLIP vectors of these centroids were then used to retrieve nearest neighbors from the LAION-5B dataset using the image search website, focusing on those with aesthetic values of at least 0.5 and a minimum resolution of 768 pixels on the shortest side. Additionally, images suspected of containing watermarks were filtered out. NSFW values were calculated for each image using the LAION CLIP-based-NSFW-Detector, and these are released with the data.
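The clustering-plus-retrieval step described above can be sketched as follows. This is a toy illustration assuming NumPy, with small random vectors standing in for the CLIP L14 embeddings; the real pipeline ran k-means over 4.25M image vectors and retrieved neighbors through the LAION-5B search index:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 1,000 "Midjourney" embeddings and 5,000 "corpus" embeddings.
midjourney = rng.normal(size=(1000, 64))
laion = rng.normal(size=(5000, 64))

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# 1) k-means over the Midjourney embeddings (a few Lloyd iterations,
#    cosine-similarity assignment; the real run used 10,000 centroids).
k = 10
centroids = midjourney[rng.choice(len(midjourney), k, replace=False)]
for _ in range(10):
    assign = np.argmax(normalize(midjourney) @ normalize(centroids).T, axis=1)
    centroids = np.stack([
        midjourney[assign == c].mean(axis=0) if np.any(assign == c) else centroids[c]
        for c in range(k)
    ])

# 2) For each centroid, retrieve its nearest neighbors in the corpus
#    by cosine similarity.
sims = normalize(centroids) @ normalize(laion).T
top5 = np.argsort(-sims, axis=1)[:, :5]  # 5 neighbor indices per centroid
print(top5.shape)  # → (10, 5)
```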
---
### Generation of LLAVA 1.5 Captions
Detailed image descriptions were created for the selected images using the LLAVA 1.5 model. These descriptions focus on objects, backgrounds, scenery, interactions, and gestures, as well as the appearance and emotions of the depicted people or characters.
---
### PROMPT
"Can you please describe this image in up to two paragraphs? Please specify any objects within the image, backgrounds, scenery, interactions, and gestures or poses. If they are multiple of any object, please specify how many. Is there text in the image, and if so, what does it say? If there is any lighting in the image, can you identify where it is and what it looks like? What style is the image? If there are people or characters in the image, what emotions are they conveying? Please keep your descriptions factual and terse but complete. DO NOT add any unnecessary speculation about the things that are not part of the image such as "the image is inspiring to viewers" or "seeing this makes you feel joy". DO NOT add things such as "creates a unique and entertaining visual", as these descriptions are interpretations and not a part of the image itself. The description should be purely factual, with no subjective speculation. Make sure to include the style of the image, for example cartoon, photograph, 3d render etc. Start with the words ‘This image showcases’:”
‘This image showcases’ was trimmed from the beginning of each caption upon generation.
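A minimal sketch of that trimming step (the function name and the punctuation-stripping details are illustrative assumptions, not the authors' actual code):

```python
PREFIX = "This image showcases"

def trim_prefix(caption: str) -> str:
    """Drop the fixed lead-in LLAVA was prompted to produce, if present."""
    caption = caption.strip()
    if caption.startswith(PREFIX):
        caption = caption[len(PREFIX):].lstrip(" ,:")
    return caption

print(trim_prefix("This image showcases a dog running on a beach."))
# → a dog running on a beach.
```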
---
### Future Application and Improvements
Although no text-to-image model has been tuned with these data so far, we expect that the use of these data could significantly improve the aesthetic quality of the outputs.
# Citation
```bibtex
@misc{LAION_POP,
title = {LAION POP: 600,000 High-Resolution Images With Detailed Descriptions},
author = {Christoph Schuhmann and Peter Bevan},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/laion/laion-pop}},
}
``` | [
-0.7180682420730591,
-0.4762360751628876,
0.3520655333995819,
0.3348037600517273,
-0.37856897711753845,
-0.19650857150554657,
0.06705113500356674,
-0.680060863494873,
0.43293774127960205,
0.735034704208374,
-0.5335177183151245,
-0.4864729344844818,
-0.5152937173843384,
0.29556214809417725,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dapooni/ds_translate_en_bsl_bart | dapooni | 2023-11-25T03:04:05Z | 31 | 0 | null | [
"region:us"
] | 2023-11-25T03:04:05Z | 2023-11-19T17:14:42.000Z | 2023-11-19T17:14:42 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 239978
num_examples: 1480
download_size: 87719
dataset_size: 239978
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
idning/ffhq128-caption | idning | 2023-11-21T04:23:13Z | 31 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-21T04:23:13Z | 2023-11-21T01:41:34.000Z | 2023-11-21T01:41:34 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2053430676.0
num_examples: 70000
download_size: 2051404020
dataset_size: 2053430676.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xnhyacinth/NQ-Image | Xnhyacinth | 2023-11-22T15:41:35Z | 31 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-22T15:41:35Z | 2023-11-21T09:42:49.000Z | 2023-11-21T09:42:49 | ---
license: mit
dataset_info:
- config_name: ctxs1
features:
- name: id
dtype: int64
- name: answers
sequence: string
- name: question
dtype: string
- name: compressed_prompt
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: ctxs
list:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 5212377086
num_examples: 79168
- name: eval
num_bytes: 576466670
num_examples: 8757
- name: test
num_bytes: 238448436
num_examples: 3610
download_size: 3334114023
dataset_size: 6027292192
- config_name: ctxs100
features:
- name: question
dtype: string
- name: compressed_prompt
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: answers
sequence: string
- name: id
dtype: int64
- name: ctxs
list:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 5316136683
num_examples: 79168
- name: eval
num_bytes: 587931406
num_examples: 8757
- name: test
num_bytes: 243224578
num_examples: 3610
download_size: 3413758169
dataset_size: 6147292667
- config_name: ctxs5
features:
- name: id
dtype: int64
- name: answers
sequence: string
- name: question
dtype: string
- name: compressed_prompt
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: ctxs
list:
- name: id
dtype: string
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 5379479786
num_examples: 79168
- name: eval
num_bytes: 594986589
num_examples: 8757
- name: test
num_bytes: 246104192
num_examples: 3610
download_size: 3408308518
dataset_size: 6220570567
configs:
- config_name: ctxs1
data_files:
- split: train
path: ctxs1/train-*
- split: eval
path: ctxs1/eval-*
- split: test
path: ctxs1/test-*
- config_name: ctxs100
data_files:
- split: train
path: ctxs100/train-*
- split: eval
path: ctxs100/eval-*
- split: test
path: ctxs100/test-*
- config_name: ctxs5
data_files:
- split: train
path: ctxs5/train-*
- split: eval
path: ctxs5/eval-*
- split: test
path: ctxs5/test-*
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Anwaarma/MySentimentAnwarBig | Anwaarma | 2023-11-21T10:32:07Z | 31 | 0 | null | [
"region:us"
] | 2023-11-21T10:32:07Z | 2023-11-21T10:32:05.000Z | 2023-11-21T10:32:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Positive
splits:
- name: train
num_bytes: 3402656.0
num_examples: 14666
- name: test
num_bytes: 271618.95179553545
num_examples: 1080
download_size: 1986961
dataset_size: 3674274.9517955356
---
# Dataset Card for "MySentimentAnwarBig"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6347995400428772,
-0.35079124569892883,
0.048209499567747116,
0.5704541802406311,
-0.2660598158836365,
0.08088191598653793,
0.316484659910202,
-0.09868031740188599,
0.8901681900024414,
0.39829373359680176,
-1.0935381650924683,
-0.7300177812576294,
-0.5433417558670044,
-0.126387238502502... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Siki-77/amazon6_5core_polarity | Siki-77 | 2023-11-23T14:44:42Z | 31 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-23T14:44:42Z | 2023-11-21T12:29:07.000Z | 2023-11-21T12:29:07 | ---
license: apache-2.0
---
Data source: https://cseweb.ucsd.edu/~jmcauley/datasets/amazon_v2/
We construct a new dataset from Amazon reviews (Ni et al., 2019), aggregating 5-core data over six genres: beauty, fashion, appliances, gift cards, magazines, and software.
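The card does not spell out how star ratings were mapped to polarity labels; a common convention (an assumption here, not necessarily what the authors did) is to treat 1–2 stars as negative, 4–5 stars as positive, and discard 3-star reviews:

```python
def rating_to_polarity(stars: int):
    """Map a 1-5 star review to a binary polarity label.

    The 1-2 → negative / 4-5 → positive / drop-3 scheme is a common
    convention and an assumption here, not taken from the dataset card.
    """
    if stars <= 2:
        return "negative"
    if stars >= 4:
        return "positive"
    return None  # neutral 3-star reviews are typically discarded

reviews = [(5, "great"), (1, "broke"), (3, "ok")]
labeled = [(text, rating_to_polarity(s)) for s, text in reviews if rating_to_polarity(s)]
print(labeled)  # → [('great', 'positive'), ('broke', 'negative')]
```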
Citation:
Jianmo Ni, Jiacheng Li, and Julian McAuley. Justifying recommendations using distantly-labeled
reviews and fine-grained aspects. In Empirical Methods in Natural Language Processing and
International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019. URL
https://www.aclweb.org/anthology/D19-1018. | [
-0.5987173318862915,
-0.6975337862968445,
0.33566731214523315,
0.427104115486145,
-0.2942962050437927,
-0.27377212047576904,
-0.15888559818267822,
-1.0402089357376099,
0.2737870514392853,
0.5086053609848022,
-0.7132291197776794,
-0.7931682467460632,
-0.21360933780670166,
0.1265315115451812... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
plaguss/go_emotions_raw | plaguss | 2023-11-24T08:44:41Z | 31 | 0 | null | [
"size_categories:10K<n<100K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2023-11-24T08:44:41Z | 2023-11-23T18:18:33.000Z | 2023-11-23T18:18:33 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for go_emotions_raw
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("plaguss/go_emotions_raw")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("plaguss/go_emotions_raw")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | multi_label_selection | True | Classify the text by selecting the correct label from the given list of labels. | ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'] |
The **suggestions** are human- or machine-generated recommendations for each question, provided to assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. Thus, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide extra context to the annotators, or to record details about the record itself, such as a link to its original source, its author, its date, or its provenance. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"text": " \"If you don\u0027t wear BROWN AND ORANGE...YOU DON\u0027T MATTER!\" We need a tshirt with that on it asap! "
},
"metadata": {},
"responses": [
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000001",
"values": {
"label": {
"value": [
"neutral"
]
}
}
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000016",
"values": {
"label": {
"value": [
"anger",
"annoyance",
"optimism"
]
}
}
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000028",
"values": {
"label": {
"value": [
"approval"
]
}
}
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000039",
"values": {
"label": {
"value": [
"neutral"
]
}
}
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000048",
"values": {
"label": {
"value": [
"annoyance"
]
}
}
}
],
"suggestions": [
{
"agent": null,
"question_name": "label",
"score": null,
"type": "human",
"value": [
"annoyance",
"neutral"
]
}
],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"label": [
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000001",
"value": [
"neutral"
]
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000016",
"value": [
"anger",
"annoyance",
"optimism"
]
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000028",
"value": [
"approval"
]
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000039",
"value": [
"neutral"
]
},
{
"status": "submitted",
"user_id": "00000000-0000-0000-0000-000000000048",
"value": [
"annoyance"
]
}
],
"label-suggestion": [
"annoyance",
"neutral"
],
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": "human"
},
"metadata": "{}",
"text": " \"If you don\u0027t wear BROWN AND ORANGE...YOU DON\u0027T MATTER!\" We need a tshirt with that on it asap! "
}
```
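With several raters per record, one simple way to aggregate the submitted responses into final labels is to keep every label chosen by at least some minimum number of annotators. This is only a sketch of the idea; the "simplified" go_emotions release, whose labels are used as suggestions here, applied its own adjudication:

```python
from collections import Counter

# Responses in the shape shown above: one list of labels per rater.
responses = [
    ["neutral"],
    ["anger", "annoyance", "optimism"],
    ["approval"],
    ["neutral"],
    ["annoyance"],
]

def aggregate(responses, min_votes=2):
    """Keep every label selected by at least `min_votes` raters."""
    counts = Counter(label for labels in responses for label in labels)
    return sorted(l for l, c in counts.items() if c >= min_votes)

print(aggregate(responses))  # → ['annoyance', 'neutral']
```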
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `multi_label_selection` with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'], and description "Classify the text by selecting the correct label from the given list of labels.".
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `multi_label_selection` with the following allowed values ['admiration', 'amusement', 'anger', 'annoyance', 'approval', 'caring', 'confusion', 'curiosity', 'desire', 'disappointment', 'disapproval', 'disgust', 'embarrassment', 'excitement', 'fear', 'gratitude', 'grief', 'joy', 'love', 'nervousness', 'optimism', 'pride', 'realization', 'relief', 'remorse', 'sadness', 'surprise', 'neutral'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide extra context to the annotators, or to record details about the record itself, such as a link to its original source, its author, its date, or its provenance. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Script used for the generation
```python
import argilla as rg
from datasets import load_dataset
import uuid
from datasets import concatenate_datasets
ds = load_dataset("go_emotions", "raw", split="train")
ds_prepared = load_dataset("go_emotions")
_CLASS_NAMES = [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"optimism",
"pride",
"realization",
"relief",
"remorse",
"sadness",
"surprise",
"neutral",
]
label_to_id = {label: i for i, label in enumerate(_CLASS_NAMES)}
id_to_label = {i: label for i, label in enumerate(_CLASS_NAMES)}
# Concatenate the datasets and transform to pd.DataFrame
ds_prepared = concatenate_datasets([ds_prepared["train"], ds_prepared["validation"], ds_prepared["test"]])
df_prepared = ds_prepared.to_pandas()
# Obtain the final labels as a dict, to later include these as suggestions
labels_prepared = {}
for idx in df_prepared.index:
labels = [id_to_label[label_id] for label_id in df_prepared['labels'][idx]]
labels_prepared[df_prepared['id'][idx]] = labels
# Add labels to the dataset and keep only the relevant columns
def add_labels(ex):
labels = []
for label in _CLASS_NAMES:
if ex[label] == 1:
labels.append(label)
ex["labels"] = labels
return ex
ds = ds.map(add_labels)
df = ds.select_columns(["text", "labels", "rater_id", "id"]).to_pandas()
# Create a FeedbackDataset for text classification
feedback_dataset = rg.FeedbackDataset.for_text_classification(labels=_CLASS_NAMES, multi_label=True)
# Create the records with the original responses, and use as suggestions
# the final labels in the "simplified" go_emotions dataset.
records = []
for text, df_text in df.groupby("text"):
responses = []
for rater_id, df_raters in df_text.groupby("rater_id"):
responses.append(
{
"values": {"label": {"value": df_raters["labels"].iloc[0].tolist()}},
"status": "submitted",
"user_id": uuid.UUID(int=rater_id),
}
)
suggested_labels = labels_prepared.get(df_raters["id"].iloc[0], None)
if not suggested_labels:
continue
suggestion = [
{
"question_name": "label",
"value": suggested_labels,
"type": "human",
}
]
records.append(
rg.FeedbackRecord(
fields={"text": df_raters["text"].iloc[0]},
responses=responses,
suggestions=suggestion
)
)
feedback_dataset.add_records(records)
# Push to the hub
feedback_dataset.push_to_huggingface("plaguss/go_emotions_raw")
```
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a text classification dataset that contains texts and labels. Given a set of texts and a predefined set of labels, the goal of text classification is to assign one or more labels to each text based on its content. Please classify the texts by making the correct selection.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.7371832132339478,
-0.7678261995315552,
0.3165278136730194,
0.22504839301109314,
-0.2469681203365326,
-0.30490753054618835,
-0.08050007373094559,
-0.35933804512023926,
0.7777606248855591,
0.6996095180511475,
-0.6837188601493835,
-0.9309860467910767,
-0.6198286414146423,
0.228096842765808... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sadmoseby/sample-function-call | sadmoseby | 2023-11-29T01:12:04Z | 31 | 0 | null | [
"region:us"
] | 2023-11-29T01:12:04Z | 2023-11-24T23:26:05.000Z | 2023-11-24T23:26:05 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2259661
num_examples: 1000
- name: validation
num_bytes: 2231397
num_examples: 1000
- name: test
num_bytes: 2233892
num_examples: 1000
- name: val
num_bytes: 2231397
num_examples: 1000
download_size: 3541865
dataset_size: 8956347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gueleilo/enemfull | gueleilo | 2023-11-25T03:16:35Z | 31 | 1 | null | [
"region:us"
] | 2023-11-25T03:16:35Z | 2023-11-25T01:45:48.000Z | 2023-11-25T01:45:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
portafolio/llamadas-celular-1494-es | portafolio | 2023-11-26T06:08:16Z | 31 | 0 | null | [
"region:us"
] | 2023-11-26T06:08:16Z | 2023-11-26T05:31:07.000Z | 2023-11-26T05:31:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luozhouyang/dureader | luozhouyang | 2021-11-29T04:44:53Z | 30 | 3 | null | [
"region:us"
] | 2021-11-29T04:44:53Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # dureader
The data comes from the Qianyan DuReader dataset; the original source is [千言数据集:阅读理解](https://aistudio.baidu.com/aistudio/competition/detail/49/0/task-definition).
> This dataset is for academic research use only. If this repository involves any infringement, it will be removed immediately.
It currently contains the following two subsets:
* DuReader-robust
* DuReader-checklist
```python
from datasets import load_dataset
robust = load_dataset("luozhouyang/dureader", "robust")
checklist = load_dataset("luozhouyang/dureader", "checklist")
``` | [
-0.19390296936035156,
-0.5339222550392151,
-0.012357520870864391,
0.5304158329963684,
-0.6146119832992554,
-0.20069845020771027,
0.11874234676361084,
-0.1618608683347702,
0.10681109875440598,
0.5403172969818115,
-0.3590930700302124,
-0.4004678726196289,
-0.5894290804862976,
0.4033346474170... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
persiannlp/parsinlu_query_paraphrasing | persiannlp | 2022-10-22T15:13:22Z | 30 | 0 | null | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|quora|google",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | 2022-10-22T15:13:22Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|quora|google
task_categories:
- query-paraphrasing
task_ids:
- query-paraphrasing
---
# Dataset Card for PersiNLU (Query Paraphrasing)
## Table of Contents
- [Dataset Card for PersiNLU (Query Paraphrasing)](#dataset-card-for-persi_nlu_query_paraphrasing)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian query paraphrasing task (deciding whether two questions are paraphrases of each other).
The questions are partially generated from Google auto-complete and partially translated from the Quora paraphrasing dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"q1": "اعمال حج تمتع از چه روزی شروع میشود؟",
"q2": "ویار از چه روزی شروع میشود؟",
"label": "0",
"category": "natural"
}
```
### Data Fields
- `q1`: the first question.
- `q2`: the second question.
- `category`: whether the questions are mined from Quora (`qqp`) or they're extracted from Google auto-complete (`natural`).
- `label`: `1` if the questions are paraphrases; `0` otherwise.
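As an illustrative sketch (not part of the original card), the fields above can be consumed as plain dictionaries; the records below are hypothetical stand-ins shaped like the instance shown earlier:

```python
# Minimal sketch of filtering ParsiNLU query-paraphrasing records.
# The records are illustrative stand-ins with the documented fields.
records = [
    {"q1": "q1a", "q2": "q2a", "label": "1", "category": "qqp"},
    {"q1": "q1b", "q2": "q2b", "label": "0", "category": "natural"},
]

def paraphrase_pairs(rows):
    """Keep only question pairs labeled as paraphrases (label == "1")."""
    return [(r["q1"], r["q2"]) for r in rows if r["label"] == "1"]

pairs = paraphrase_pairs(records)
```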
### Data Splits
The train/dev/test splits contain 1830/898/1916 samples.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
| [
-0.41624385118484497,
-0.8086446523666382,
0.25060468912124634,
0.3180920481681824,
-0.32754358649253845,
0.07024025171995163,
-0.4939287304878235,
0.03428135812282562,
0.36149024963378906,
0.4918592870235443,
-0.604773759841919,
-0.8025475144386292,
-0.47521841526031494,
0.341599017381668... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/xquad-ca | projecte-aina | 2023-11-25T05:37:46Z | 30 | 1 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:2107.07903",
"arxiv:1606.05250",
"arxiv:1910.11856",
"regi... | 2023-11-25T05:37:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: xquad-ca
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for XQuAD-Ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/6669801
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
Professional translation into Catalan of [XQuAD dataset](https://github.com/deepmind/xquad).
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 ([Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250)) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Romanian was added later. We added Catalan as the 13th language of the corpus, also using professional native translators.
XQuAD and XQuAD-Ca datasets are released under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
### Supported Tasks and Leaderboards
Cross-lingual-QA, Extractive-QA, Language Model
### Languages
The dataset is in Catalan (`ca-ES`)
## Dataset Structure
### Data Instances
One JSON file containing 1189 examples.
<pre>
{
"data": [
{
"context": "Al llarg de la seva existència, Varsòvia ha estat una ciutat multicultural. Segons el cens del 1901, de 711.988 habitants, el 56,2 % eren catòlics, el 35,7 % jueus, el 5 % cristians ortodoxos grecs i el 2,8 % protestants. Vuit anys després, el 1909, hi havia 281.754 jueus (36,9 %), 18.189 protestants (2,4 %) i 2.818 mariavites (0,4 %). Això va provocar que es construïssin centenars de llocs de culte religiós a totes les parts de la ciutat. La majoria d’ells es van destruir després de la insurrecció de Varsòvia del 1944. Després de la guerra, les noves autoritats comunistes de Polònia van apocar la construcció d’esglésies i només se’n va construir un petit nombre.",
"qas": [
{
"answers": [
{
"text": "711.988",
"answer_start": 104
}
],
"id": "57338007d058e614000b5bdb",
"question": "Quina era la població de Varsòvia l’any 1901?"
},
{
"answers": [
{
"text": "56,2 %",
"answer_start": 126
}
],
"id": "57338007d058e614000b5bdc",
"question": "Dels habitants de Varsòvia l’any 1901, quin percentatge era catòlic?"
},
...
]
}
]
},
...
]
}
</pre>
### Data Fields
Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the Wikipedia article.
- `context` (str): Wikipedia section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
  - `text` (str): Span of text answering the question.
  - `answer_start` (int): Starting character offset of the answer span within the context.
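As a quick illustrative check (not part of the original card), `answer_start` indexes directly into the `context` string; the context prefix and offsets below are taken from the example instance shown above:

```python
# Sketch: answer_start is a character offset into the context string.
# Context prefix and offsets come from the example instance above.
context = ("Al llarg de la seva existència, Varsòvia ha estat una ciutat "
           "multicultural. Segons el cens del 1901, de 711.988 habitants, "
           "el 56,2 % eren catòlics, el 35,7 % jueus, ...")
answers = [
    {"text": "711.988", "answer_start": 104},
    {"text": "56,2 %", "answer_start": 126},
]
for a in answers:
    start = a["answer_start"]
    # Slicing the context at the offset recovers the answer span verbatim.
    assert context[start:start + len(a["text"])] == a["text"]
```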
### Data Splits
- test.json: 1189 examples.
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language, to ensure compatibility with similar datasets in other languages, and to allow inter-lingual comparisons.
### Source Data
- [XQuAD's webpage](https://github.com/deepmind/xquad).
#### Initial Data Collection and Normalization
This dataset is a professional translation of [XQuAD](https://github.com/deepmind/xquad) into Catalan, commissioned by [BSC TeMU](https://temu.bsc.es/) within [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how XQuAD was created, refer to the paper [On the Cross-lingual Transferability of Monolingual Representations](https://arxiv.org/abs/1910.11856), or visit the [XQuAD webpage](https://github.com/deepmind/xquad).
#### Who are the source language producers?
For more information on how XQuAD was created, refer to the paper [On the Cross-lingual Transferability of Monolingual Representations](https://arxiv.org/abs/1910.11856), or visit the [XQuAD webpage](https://github.com/deepmind/xquad).
### Annotations
This is a professional translation of the XQuAD corpus and its annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
Translation was commissioned to a professional translation company.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4526223)
### Contributions
[N/A] | [
-0.4466751515865326,
-0.43814152479171753,
0.06253210455179214,
0.3038616478443146,
-0.09156305342912674,
0.33994260430336,
-0.3260754942893982,
-0.35871192812919617,
0.510927677154541,
0.2915928363800049,
-0.6632285118103027,
-0.8509811758995056,
-0.35841453075408936,
0.0875539779663086,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vasudevgupta/natural-questions-validation | vasudevgupta | 2021-05-04T18:25:07Z | 30 | 0 | null | [
"region:us"
] | 2021-05-04T18:25:07Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Obtained using the following code:
```python
from datasets import load_dataset
dataset = load_dataset("natural_questions", split="validation")
dataset.save_to_disk("natural-questions-validation")
``` | [
-0.6151459813117981,
-0.8827422857284546,
-0.008796020410954952,
0.23453406989574432,
0.08514569699764252,
-0.19770702719688416,
-0.010198275558650494,
0.2192760407924652,
0.22426173090934753,
0.8986152410507202,
-0.7239251732826233,
-0.32447928190231323,
0.15147534012794495,
0.85334360599... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
webis/conclugen | webis | 2022-05-03T06:18:33Z | 30 | 1 | null | [
"region:us"
] | 2022-05-03T06:18:33Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Dataset Card for ConcluGen
## Table of Contents
- [Dataset Card for ConcluGen](#dataset-card-for-conclugen)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4818134
- **Repository:** https://github.com/webis-de/acl21-informative-conclusion-generation
- **Paper:** [Generating Informative Conclusions for Argumentative Texts](https://aclanthology.org/2021.findings-acl.306.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Shahbaz Syed](mailto:shahbaz.syed@uni-leipzig.de)
### Dataset Summary
The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.
The corpus has three variants: topics, aspects, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions.
### Supported Tasks and Leaderboards
Argument Summarization, Conclusion Generation
### Languages
English ('en') as spoken by Reddit users on the [r/changemyview](https://old.reddit.com/r/changemyview/) subreddit.
## Dataset Structure
### Data Instances
An example consists of a unique 'id', an 'argument', and its 'conclusion'.
**base**
Contains only the argument and its conclusion.
```
{'id': 'ee11c116-23df-4795-856e-8b6c6626d5ed',
'argument': "In my opinion, the world would be a better place if alcohol was illegal. I've done a little bit of research to get some numbers, and I was quite shocked at what I found. Source On average, one in three people will be involved in a drunk driving crash in their lifetime. In 2011, 9,878 people died in drunk driving crashes Drunk driving costs each adult in this country almost 500 per year. Drunk driving costs the United States 132 billion a year. Every day in America, another 27 people die as a result of drunk driving crashes. Almost every 90 seconds, a person is injured in a drunk driving crash. These are just the driving related statistics. They would each get reduced by at least 75 if the sale of alcohol was illegal. I just don't see enough positives to outweigh all the deaths and injuries that result from irresponsible drinking. Alcohol is quite literally a drug, and is also extremely addicting. It would already be illegal if not for all these pointless ties with culture. Most people wouldn't even think to live in a world without alcohol, but in my opinion that world would be a better, safer, and more productive one. , or at least defend the fact that it's legal.",
'conclusion': 'I think alcohol should be illegal.'}
```
**topic**
Argument encoded with the discussion topic.
```
{"id":"b22272fd-00d2-4373-b46c-9c1d9d21e6c2","argument":"<|TOPIC|>Should Planned Parenthood Be Defunded?<|ARGUMENT|>Even the best contraceptive methods such as surgical sterilisation can fail, and even with perfect use the pill may not work.<|CONCLUSION|>","conclusion":"Even with the best intentions and preparation, contraceptives can and do fail."}
```
**aspects**
Argument encoded with the discussion topic and argument's aspects.
```
{"id":"adc92826-7892-42d4-9405-855e845bf027","argument":"<|TOPIC|>Gender Neutral Bathrooms: Should They be Standard?<|ARGUMENT|>Men's toilets and women's urine have different odours due to hormone differences in each biological sex. As a result, the urine of one sex may smell much worse to the other sex and vice versa, meaning that it is logical to keep their toilet facilities separate.<|ASPECTS|>hormone differences, urine, separate, facilities, different odours, smell much worse<|CONCLUSION|>","conclusion":"Men and women, because of their different biological characteristics, each need a different type of bathroom. Gender-segregated bathrooms reflect and honour these differences."}
```
**targets**
Argument encoded with the discussion topic and possible conclusion targets.
```
{"id":"c9a87a03-edda-42be-9c0d-1e7d2d311816","argument":"<|TOPIC|>Australian republic vs. monarchy<|ARGUMENT|>The monarchy is a direct reflection of Australia's past as a British colony and continues to symbolize Australia's subservience to the British crown. Such symbolism has a powerfully negative effect on Australians' sense of independence and identity. Ending the monarchy and establishing a republic would constitute a substantial stride in the direction of creating a greater sense of independence and national pride and identity.<|TARGETS|>Such symbolism, The monarchy, Ending the monarchy and establishing a republic<|CONCLUSION|>","conclusion":"Ending the monarchy would foster an independent identity in Australia"}
```
### Data Fields
- `id`: a string identifier for each example.
- `argument`: the argumentative text.
- `conclusion`: the conclusion of the argumentative text.
### Data Splits
The data is split into train, validation, and test splits for each variation of the dataset (including base).
| | Train | Validation | Test |
|--------- |--------- |------------ |------ |
| Base | 116,922 | 12,224 | 1373 |
| Aspects | 120,142 | 12,174 | 1357 |
| Targets | 109,376 | 11,053 | 1237 |
| Topic | 121,588 | 12,335 | 1372 |
## Dataset Creation
### Curation Rationale
ConcluGen was built as a first step towards argument summarization technology. The [rules of the subreddit](https://old.reddit.com/r/changemyview/wiki/rules) ensure high quality data suitable for the task.
### Source Data
#### Initial Data Collection and Normalization
Reddit [ChangeMyView](https://old.reddit.com/r/changemyview/)
#### Who are the source language producers?
Users of the subreddit [r/changemyview](https://old.reddit.com/r/changemyview/). Further demographic information is unavailable from the data source.
### Annotations
The dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Only the argumentative text and its conclusion are provided. No personal information of the posters is included.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.
### Citation Information
```
@inproceedings{syed:2021,
author = {Shahbaz Syed and
Khalid Al Khatib and
Milad Alshomary and
Henning Wachsmuth and
Martin Potthast},
editor = {Chengqing Zong and
Fei Xia and
Wenjie Li and
Roberto Navigli},
title = {Generating Informative Conclusions for Argumentative Texts},
booktitle = {Findings of the Association for Computational Linguistics: {ACL/IJCNLP}
2021, Online Event, August 1-6, 2021},
pages = {3482--3493},
publisher = {Association for Computational Linguistics},
year = {2021},
url = {https://doi.org/10.18653/v1/2021.findings-acl.306},
doi = {10.18653/v1/2021.findings-acl.306}
}
```
| [
-0.637199342250824,
-0.7849510312080383,
0.17175301909446716,
0.18286189436912537,
-0.4327657222747803,
-0.23191267251968384,
-0.19598832726478577,
-0.26635774970054626,
0.4189571738243103,
0.5513384342193604,
-0.5276154279708862,
-0.6531802415847778,
-0.5460163950920105,
0.516000807285308... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggingface/image-classification-test-sample | huggingface | 2022-04-19T08:02:02Z | 30 | 1 | null | [
"region:us"
] | 2022-04-19T08:02:02Z | 2022-04-19T08:02:01.000Z | 2022-04-19T08:02:01 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion-high-resolution | laion | 2022-05-07T12:11:38Z | 30 | 44 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-05-07T12:11:38Z | 2022-05-07T11:02:09.000Z | 2022-05-07T11:02:09 | ---
license: cc-by-4.0
---
LAION high resolution is a >= 1024x1024 subset of LAION-5B. It contains 170M samples.
A good use case is to train a super-resolution model.
Refer to the [img2dataset guide](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/laion-high-resolution.md) for download instructions.
-0.8620005249977112,
-0.04406237602233887,
0.14772315323352814,
0.18958887457847595,
-0.3872067928314209,
-0.19902731478214264,
0.232467383146286,
-0.35082074999809265,
0.4094199240207672,
0.6842384338378906,
-0.3081294596195221,
-0.27401405572891235,
-0.45411747694015503,
-0.2569752633571... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blinoff/medical_institutions_reviews | blinoff | 2022-10-23T16:51:28Z | 30 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | 2022-10-23T16:51:28Z | 2022-05-27T10:09:02.000Z | 2022-05-27T10:09:02 | ---
language:
- ru
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
### Dataset Summary
The dataset contains user reviews about medical institutions.
In total it contains 12,036 reviews. Each review is tagged with the <em>general</em> sentiment and with sentiments on 5 aspects: <em>quality, service, equipment, food, location</em>.
### Data Fields
Each sample contains the following fields:
- **review_id**;
- **content**: review text;
- **general**;
- **quality**;
- **service**;
- **equipment**;
- **food**;
- **location**.
### Python
```python
import pandas as pd
df = pd.read_json('medical_institutions_reviews.jsonl', lines=True)
df.sample(5)
```
| [
-0.26274430751800537,
-0.4317667484283447,
0.4998873174190521,
0.4370332360267639,
-0.2628639042377472,
-0.31617242097854614,
0.2188524305820465,
-0.17695528268814087,
0.6360212564468384,
0.770452082157135,
-0.1891750544309616,
-1.1819448471069336,
-0.435626745223999,
0.7164734601974487,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/mmarco-train | crystina-z | 2023-03-27T05:26:27Z | 30 | 0 | null | [
"region:us"
] | 2023-03-27T05:26:27Z | 2022-06-04T09:19:16.000Z | 2022-06-04T09:19:16 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyKorshuk/fiction-books | AlekseyKorshuk | 2022-06-12T05:29:38Z | 30 | 3 | null | [
"region:us"
] | 2022-06-12T05:29:38Z | 2022-06-12T05:29:30.000Z | 2022-06-12T05:29:30 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arize-ai/xtreme_en | arize-ai | 2022-07-01T17:23:29Z | 30 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|xtreme",
"language:en",
"license:mit",
"region:us"
] | 2022-07-01T17:23:29Z | 2022-06-30T19:48:47.000Z | 2022-06-30T19:48:47 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: named-entity-recognition-en-no-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|xtreme
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | [
-0.628180205821991,
-0.4517955482006073,
0.25332391262054443,
0.13039681315422058,
-0.3790302574634552,
0.17068056762218475,
-0.3452197015285492,
-0.19959135353565216,
0.6291214227676392,
0.6300225853919983,
-1.0287784337997437,
-0.9946329593658447,
-0.5445864200592041,
0.03664374724030495... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arize-ai/xtreme_en_language_drift_es | arize-ai | 2022-07-01T17:25:51Z | 30 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|xtreme",
"language:en",
"license:mit",
"region:us"
] | 2022-07-01T17:25:51Z | 2022-06-30T21:07:38.000Z | 2022-06-30T21:07:38 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: named-entity-recognition-en-no-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|xtreme
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | [
-0.628180205821991,
-0.4517955482006073,
0.25332391262054443,
0.13039681315422058,
-0.3790302574634552,
0.17068056762218475,
-0.3452197015285492,
-0.19959135353565216,
0.6291214227676392,
0.6300225853919983,
-1.0287784337997437,
-0.9946329593658447,
-0.5445864200592041,
0.03664374724030495... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SocialGrep/one-year-of-tsla-on-reddit | SocialGrep | 2022-07-07T18:54:18Z | 30 | 1 | null | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-07T18:54:18Z | 2022-07-07T17:23:17.000Z | 2022-07-07T17:23:17 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
---
# Dataset Card for one-year-of-tsla-on-reddit
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/one-year-of-tsla-on-reddit?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearoftslaonreddit)
- **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearoftslaonreddit)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearoftslaonreddit)
### Dataset Summary
A year's worth of mentions of Tesla Inc. (TSLA) in Reddit posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
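Since posts and comments share most fields, a `type` check is enough to route each row to the right handler. A rough sketch with invented toy rows (field names from the list above; IDs, scores, and sentiment values are made up):

```python
# Toy rows mirroring the fields listed above (values invented).
rows = [
    {"type": "post", "id": "abc123", "subreddit.name": "wallstreetbets",
     "subreddit.nsfw": False, "created_utc": 1625097600, "score": 512,
     "title": "TSLA earnings thread", "selftext": ""},
    {"type": "comment", "id": "def456", "subreddit.name": "stocks",
     "subreddit.nsfw": False, "created_utc": 1625184000, "score": 37,
     "body": "Holding TSLA long term.", "sentiment": 0.8},
]

# Route on `type`, then filter comments by the in-house sentiment score.
comments = [r for r in rows if r["type"] == "comment"]
positive = [c for c in comments if c.get("sentiment", 0) > 0]
```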
## Additional Information
### Licensing Information
CC-BY v4.0
| [
-0.46199706196784973,
-0.7459802627563477,
0.26587727665901184,
0.47357267141342163,
-0.5319382548332214,
0.2113087773323059,
0.04862354323267937,
-0.30751433968544006,
0.7164837718009949,
0.22213208675384521,
-1.02584707736969,
-0.9250645041465759,
-0.5336516499519348,
0.16577543318271637... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
embedding-data/WikiAnswers | embedding-data | 2022-08-02T03:33:01Z | 30 | 1 | embedding-data/WikiAnswers | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"region:us"
] | 2022-08-02T03:33:01Z | 2022-07-09T00:13:25.000Z | 2022-07-09T00:13:25 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/WikiAnswers
pretty_name: WikiAnswers
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "WikiAnswers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/afader/oqa#wikianswers-corpus](https://github.com/afader/oqa#wikianswers-corpus)
- **Repository:** [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
- **Paper:** [More Information Needed](https://doi.org/10.1145/2623330.2623677)
- **Point of Contact:** [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129)
### Dataset Summary
The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases.
Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key `"set"` and a list of the sentences as the value.
```
{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
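As a rough sketch of how the clusters feed such training (pure Python, no training libraries; the cluster below is invented and shorter than the real 25-sentence sets), each `"set"` can be expanded into positive paraphrase pairs:

```python
from itertools import combinations

# A toy cluster in the {"set": [...]} format shown above.
cluster = {"set": ["how tall is the eiffel tower",
                   "what is the height of the eiffel tower",
                   "eiffel tower height"]}

# Every two sentences in a set are paraphrases of each other, so a
# cluster of n sentences yields n*(n-1)/2 positive training pairs.
pairs = list(combinations(cluster["set"], 2))
```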
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/WikiAnswers")
```
The dataset is loaded as a `DatasetDict` and has the format for `N` examples:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: N
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the source language producers?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the annotators?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Personal and Sensitive Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Discussion of Biases
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Other Known Limitations
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Licensing Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Citation Information
```
@inproceedings{Fader14,
author = {Anthony Fader and Luke Zettlemoyer and Oren Etzioni},
title = {{Open Question Answering Over Curated and Extracted
Knowledge Bases}},
booktitle = {KDD},
year = {2014}
}
```
### Contributions
| [
-0.4999684691429138,
-0.6732252240180969,
0.1468072086572647,
-0.1129801869392395,
0.07966147363185883,
-0.17219939827919006,
-0.32778623700141907,
-0.15185216069221497,
0.5374420881271362,
0.5538923740386963,
-0.6270370483398438,
-0.7449226379394531,
-0.8097714185714722,
0.315653324127197... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/atypical_animacy | biglam | 2022-07-22T17:29:12Z | 30 | 3 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:c... | 2022-07-22T17:29:12Z | 2022-07-11T21:33:07.000Z | 2022-07-11T21:33:07 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- machine-generated
license:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: Atypical Animacy
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- intent-classification
---
# Dataset Card for atypical_animacy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://bl.iro.bl.uk/concern/datasets/323177af-6081-4e93-8aaf-7932ca4a390a?locale=en
- **Repository:** https://github.com/Living-with-machines/AtypicalAnimacy
- **Paper:** https://arxiv.org/abs/2005.11140
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Mariona Coll Ardanuy](mailto:mcollardanuy@turing.ac.uk), [Daniel CS Wilson](mailto:dwilson@turing.ac.uk)
### Dataset Summary
Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.
### Supported Tasks and Leaderboards
- `text-classification` - This dataset can be used to determine whether a mention of an entity in a document is humanlike or not
- `entity-recognition` - The dataset can be used to fine-tune large models for NER, albeit for a very specific use case
### Languages
The text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code is `en`.
## Dataset Structure
The dataset has a single configuration
### Data Instances
An example data point
```
{'id': '002757962_01_184_16',
'sentence': '100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.',
'context': 'Fig. 100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue. The effect of this on a long boiler is to cause springing and leakage of the seams from the heat being applied to one side of the boiler only.',
'target': 'boiler',
'animacy': 0.0,
'humanness': 1.0,
'offsets': [20, 26],
'date': '1893'}
```
### Data Fields
- id: sentence identifier according to internal Living with Machines BL books indexing.
- sentence: sentence where target expression occurs.
- context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right.
- target: target expression
- animacy: animacy of the target expression
- humanness: humanness of the target expression
### Data Splits
- Train: 598
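The instance above suggests that `offsets` index into `sentence` (character start/end of the target expression), and that `animacy`/`humanness` are 0/1 labels. A minimal sketch under those assumptions, reusing the example record:

```python
# Example record copied from the data instance above.
example = {
    "sentence": "100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.",
    "target": "boiler",
    "animacy": 0.0,
    "humanness": 1.0,
    "offsets": [20, 26],
}

# Recover the target expression from the character offsets.
start, end = example["offsets"]
span = example["sentence"][start:end]  # expected to equal example["target"]

# Treat the 0.0/1.0 annotation as a boolean animacy label.
is_animate = bool(example["animacy"])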
## Dataset Creation
The dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors,
> "we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love. We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue"
### Curation Rationale
From the paper:
> The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found (folktales, in particular, are richer in typically inanimate entities that become animate), these account for a very small proportion of the data. We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts.
### Source Data
#### Initial Data Collection and Normalization
The dataset was generated by manually annotating books that have been digitized by the British Library
#### Who are the source language producers?
The data was originally produced by British authors in the 19th century. The books were then digitized, which introduces some noise due to the OCR method. The annotators are from The Alan Turing Institute, The British Library, the University of Cambridge, the University of Exeter, and Queen Mary University of London.
### Annotations
#### Annotation process
Annotation was carried out in two parts.
For the intial annotation process, from the paper:
> "For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible. The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. "
For the final annotations, from the paper:
> A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization.
#### Who are the annotators?
Annotations were carried out by the following people
- Giorgia Tolfo
- Ruth Ahnert
- Kaspar Beelen
- Mariona Coll Ardanuy
- Jon Lawrence
- Katherine McDonough
- Federico Nanni
- Daniel CS Wilson
### Personal and Sensitive Information
This dataset does not have any personal information since they are digitizations of books from the 19th century. Some passages might be sensitive, but it is not explicitly mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The curators for this dataset are:
- Kaspar Beelen
- Mariona Coll Ardanuy
- Federico Nanni
- Giorgia Tolfo
### Licensing Information
CC0 1.0 Universal Public Domain
### Citation Information
```
@article{DBLP:journals/corr/abs-2005-11140,
author = {Mariona Coll Ardanuy and
Federico Nanni and
Kaspar Beelen and
Kasra Hosseini and
Ruth Ahnert and
Jon Lawrence and
Katherine McDonough and
Giorgia Tolfo and
Daniel C. S. Wilson and
Barbara McGillivray},
title = {Living Machines: {A} study of atypical animacy},
journal = {CoRR},
volume = {abs/2005.11140},
year = {2020},
url = {https://arxiv.org/abs/2005.11140},
eprinttype = {arXiv},
eprint = {2005.11140},
timestamp = {Sat, 23 Jan 2021 01:12:25 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-11140.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.35164839029312134,
-0.5695189833641052,
0.25799956917762756,
-0.1416952759027481,
-0.2848323881626129,
0.034187667071819305,
-0.04488390311598778,
-0.5975668430328369,
0.6594732403755188,
0.5635270476341248,
-0.6708900928497314,
-0.4849807918071747,
-0.5791553854942322,
0.37017163634300... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/WikiCAT_ca | projecte-aina | 2023-11-25T06:02:26Z | 30 | 1 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:automatically-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-11-25T06:02:26Z | 2022-08-18T14:29:02.000Z | 2022-08-18T14:29:02 | ---
YAML tags:
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_ca
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# WikiCAT_ca: Catalan Text Classification dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
**Repository**
https://github.com/TeMU-BSC/WikiCAT
### Dataset Summary
WikiCAT_ca is a Catalan corpus for thematic Text Classification tasks. It is created automagically from Wikipedia and Wikidata sources, and contains 13201 articles from the Viquipedia classified under 13 different categories.
This dataset was developed by BSC TeMU as part of the AINA project, and intended as an evaluation of LT capabilities to generate useful synthetic corpus.
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a>.
### Supported Tasks and Leaderboards
Text classification, Language Model
### Languages
The dataset is in Catalan (ca-ES).
## Dataset Structure
### Data Instances
Two json files, one for each split.
### Data Fields
We used a simple model with the article text and associated labels, without further metadata.
#### Example:
<pre>
{"version": "1.1.0",
"data":
[
{
'sentence': ' Celsius és conegut com l\'inventor de l\'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com "fred" col·locant-lo (...)',
'label': 'Ciència'
},
.
.
.
]
}
</pre>
#### Labels
'Ciència_i_Tecnologia', 'Dret', 'Economia', 'Enginyeria', 'Entreteniment', 'Esport', 'Filosofia', 'Història', 'Humanitats', 'Matemàtiques', 'Música', 'Política', 'Religió'
### Data Splits
* dev_ca.json: 2484 label-document pairs
* train_ca.json: 9907 label-document pairs
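For a multi-class setup like this, class balance is usually the first sanity check. A minimal sketch over toy records in the `{"sentence": ..., "label": ...}` shape shown above (the sentences are elided and the counts invented):

```python
from collections import Counter

# Toy records in the shape of the example instance above.
data = [
    {"sentence": "...", "label": "Ciència"},
    {"sentence": "...", "label": "Història"},
    {"sentence": "...", "label": "Ciència"},
]

# Count how many documents fall under each thematic label.
label_counts = Counter(d["label"] for d in data)
```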
## Dataset Creation
### Methodology
“Category” starting pages are chosen to represent the topics in each language.
We extract, for each category, the main pages, as well as the subcategories ones, and the individual pages under this first level.
For each page, the "summary" provided by Wikipedia is also extracted as the representative text.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The source data are thematic categories in the different Wikipedias
#### Who are the source language producers?
### Annotations
#### Annotation process
Automatic annotation
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a>.
### Contributions
[N/A]
| [
-0.32270270586013794,
-0.5282717943191528,
0.10872720181941986,
0.27743420004844666,
-0.230820894241333,
0.11466975510120392,
-0.29431936144828796,
-0.27149203419685364,
0.6841784119606018,
0.4815708100795746,
-0.3539355993270874,
-0.9731862545013428,
-0.5959477424621582,
0.175424888730049... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Blaise-MR/celeb-identities | Blaise-MR | 2022-10-07T08:52:20Z | 30 | 0 | null | [
"region:us"
] | 2022-10-07T08:52:20Z | 2022-10-07T08:52:17.000Z | 2022-10-07T08:52:17 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FrancescaFr/celeb-identities | FrancescaFr | 2022-10-12T22:54:03Z | 30 | 0 | null | [
"region:us"
] | 2022-10-12T22:54:03Z | 2022-10-12T22:22:20.000Z | 2022-10-12T22:22:20 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joey234/nan-nli | joey234 | 2022-10-13T23:18:18Z | 30 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"negation",
"regi... | 2022-10-13T23:18:18Z | 2022-10-13T23:16:18.000Z | 2022-10-13T23:16:18 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: nan-nli
size_categories:
- n<1K
source_datasets:
- original
tags:
- negation
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
- Natural Language Inference
- Text Classification
### Languages
en
## Dataset Structure
### Data Instances
### Data Fields
- `premise`: the premise sentence (`string`).
- `hypothesis`: the hypothesis sentence (`string`).
- `label`: the entailment label.
### Data Splits
Evaluation: 258 samples
## Dataset Creation
### Curation Rationale
Extracting samples corresponding to different linguistic constructions of negation.
### Source Data
Geoffrey K. Pullum and Rodney Huddleston. 2002. Negation, chapter 9. Cambridge University Press.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotators are the authors of the paper, one of whom holds a graduate degree in linguistics.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@joey234](https://github.com/joey234) for adding this dataset. | [
-0.5260227918624878,
-0.7101427316665649,
0.0765504539012909,
0.27495211362838745,
-0.08377966284751892,
-0.0026334691792726517,
-0.3817756175994873,
-0.3307355046272278,
0.6519253849983215,
0.6512899994850159,
-0.8284393548965454,
-1.1070008277893066,
-0.7551981806755066,
0.28925886750221... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bburns/celeb-identities | bburns | 2022-10-14T15:20:20Z | 30 | 0 | null | [
"region:us"
] | 2022-10-14T15:20:20Z | 2022-10-14T04:21:48.000Z | 2022-10-14T04:21:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: Geohot
1: Grimes
2: Kanye
3: PG
4: Riva
5: Trump
splits:
- name: train
num_bytes: 4350264.0
num_examples: 18
download_size: 4342420
dataset_size: 4350264.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
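The `class_label` mapping in the metadata above ties integer labels to identity names. A minimal sketch (plain Python, mirroring the YAML mapping above rather than calling any loading library) of converting between label ids and names:

```python
# Label names copied from the dataset_info class_label mapping above.
NAMES = ["Geohot", "Grimes", "Kanye", "PG", "Riva", "Trump"]

def int2str(label_id: int) -> str:
    """Map an integer class label back to its identity name."""
    return NAMES[label_id]

def str2int(name: str) -> int:
    """Map an identity name back to its integer class label."""
    return NAMES.index(name)
```

Libraries such as `datasets` expose equivalent helpers on the feature object itself; the dict above only illustrates the mapping encoded in the card.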
-0.46353304386138916,
-0.2503787577152252,
0.003946621436625719,
0.09495989978313446,
-0.06635984033346176,
0.3351333737373352,
0.2677997946739197,
-0.30673038959503174,
0.9174419641494751,
0.39497241377830505,
-0.8485506176948547,
-0.6416086554527283,
-0.6570073962211609,
-0.2662411034107... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Harsit/xnli2.0_train_hindi | Harsit | 2022-10-15T09:20:03Z | 30 | 0 | null | [
"region:us"
] | 2022-10-15T09:20:03Z | 2022-10-15T09:19:09.000Z | 2022-10-15T09:19:09 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
IIC/sqac_tests | IIC | 2022-10-17T08:42:33Z | 30 | 0 | null | [
"region:us"
] | 2022-10-17T08:42:33Z | 2022-10-17T08:42:18.000Z | 2022-10-17T08:42:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
severo/glue | severo | 2022-10-28T16:35:04Z | 30 | 0 | glue | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monol... | 2022-10-28T16:35:04Z | 2022-10-28T21:00:14.000Z | 2022-10-28T21:00:14 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: sst2
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: mrpc
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: qqp
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question1: text1
question2: text2
label: target
- config: stsb
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: mnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation_matched
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_mismatched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_matched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: qnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question: text1
sentence: text2
label: target
- config: rte
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: wnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
---
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between the two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
  "idx": 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
  "idx": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
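The integer `label` above maps to names in the order listed. A minimal sketch (plain Python; the label order is taken from the field description above, not from any loading library) of decoding a cola example:

```python
# cola label names, in the order given in the data-fields description:
# 0 = unacceptable, 1 = acceptable.
COLA_LABELS = ["unacceptable", "acceptable"]

example = {
    "sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
    "label": 1,
    "idx": 0,
}

# Decode the integer label into its human-readable name.
label_name = COLA_LABELS[example["label"]]
```

The same pattern applies to the other configs; only the name list changes (e.g. three names for the NLI configs).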
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | [
-0.39955243468284607,
-0.7582665681838989,
0.1232885792851448,
0.20188935101032257,
-0.07950395345687866,
-0.056324273347854614,
-0.1609123796224594,
-0.4034503698348999,
0.3546956777572632,
0.4174545109272003,
-0.7718503475189209,
-0.7129193544387817,
-0.47494813799858093,
0.3079266846179... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/mmarco | crystina-z | 2023-02-07T14:21:54Z | 30 | 0 | null | [
"region:us"
] | 2023-02-07T14:21:54Z | 2022-11-09T00:48:48.000Z | 2022-11-09T00:48:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
osanseviero/twitter-airline-sentiment | osanseviero | 2022-11-16T22:31:48Z | 30 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-16T22:31:48Z | 2022-11-16T22:31:43.000Z | 2022-11-16T22:31:43 | ---
license:
- cc-by-nc-sa-4.0
converted_from: kaggle
kaggle_id: crowdflower/twitter-airline-sentiment
---
# Dataset Card for Twitter US Airline Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/crowdflower/twitter-airline-sentiment
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
*This data originally came from [Crowdflower's Data for Everyone library](http://www.crowdflower.com/data-for-everyone).*
As the original source says,
> A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service").
The data we're providing on Kaggle is a slightly reformatted version of the original source. It includes both a CSV file and SQLite database. The code that does these transformations is [available on GitHub](https://github.com/benhamner/crowdflower-airline-twitter-sentiment).
For example, it contains whether the sentiment of the tweets in this set was positive, neutral, or negative for six US airlines:
[](https://www.kaggle.com/benhamner/d/crowdflower/twitter-airline-sentiment/exploring-airline-twitter-sentiment-data)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@crowdflower](https://kaggle.com/crowdflower)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | [
-0.4524151086807251,
-0.4644498825073242,
0.03136780485510826,
0.49152711033821106,
-0.2960156798362732,
0.15591616928577423,
-0.3040861487388611,
-0.34280216693878174,
0.7815086841583252,
0.3272053301334381,
-0.9897688031196594,
-0.7879704236984253,
-0.5557403564453125,
0.0793333873152732... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlxen/squad_v1 | mlxen | 2022-11-28T18:37:58Z | 30 | 0 | null | [
"region:us"
] | 2022-11-28T18:37:58Z | 2022-11-28T18:37:54.000Z | 2022-11-28T18:37:54 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79346108
num_examples: 87599
download_size: 14457366
dataset_size: 79346108
---
# Dataset Card for "squad_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
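The `answers` feature declared above stores parallel `text` and `answer_start` sequences, where each `answer_start` is a character offset into `context`. A minimal sketch (plain Python, with a made-up record for illustration) of recovering answer spans:

```python
# A SQuAD-style record: `answers` holds parallel lists of answer
# strings and their character offsets into `context`.
record = {
    "context": "The Eiffel Tower is located in Paris, France.",
    "question": "Where is the Eiffel Tower located?",
    "answers": {"text": ["Paris"], "answer_start": [31]},
}

def answer_spans(example):
    """Yield (start, end) character spans, checking each offset against the context."""
    for text, start in zip(example["answers"]["text"],
                           example["answers"]["answer_start"]):
        end = start + len(text)
        assert example["context"][start:end] == text, "offset mismatch"
        yield start, end

spans = list(answer_spans(record))
```

This offset check is a common sanity test before training extractive QA models, since misaligned `answer_start` values silently corrupt span labels.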
-0.5650163292884827,
-0.2089657336473465,
0.08392820507287979,
0.454553484916687,
-0.18632519245147705,
0.14322997629642487,
0.5166676044464111,
-0.16716325283050537,
0.9115623235702515,
0.389393150806427,
-1.4751224517822266,
-0.7189956307411194,
-0.4992825984954834,
-0.18716652691364288,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dlproject/nsc_batches | dlproject | 2022-12-14T17:51:02Z | 30 | 0 | null | [
"region:us"
] | 2022-12-14T17:51:02Z | 2022-12-14T17:48:01.000Z | 2022-12-14T17:48:01 | ---
dataset_info:
features:
- name: input_values
sequence:
sequence:
sequence: float32
- name: labels
dtype: string
splits:
- name: train
num_bytes: 2000882211
num_examples: 10000
download_size: 1765862751
dataset_size: 2000882211
---
# Dataset Card for "nsc_batches"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6226891279220581,
-0.13612030446529388,
0.2509412169456482,
0.4981464147567749,
-0.14072421193122864,
-0.007212683092802763,
0.3655467629432678,
0.05269135907292366,
0.9792064428329468,
0.7527628540992737,
-1.0073219537734985,
-0.6331340670585632,
-0.6673365235328674,
-0.091624312102794... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rcds/MultiLegalSBD | rcds | 2023-10-23T06:36:36Z | 30 | 3 | null | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:de",
"language:it",
"language:pt",
"language:fr",
"region:us"
] | 2023-10-23T06:36:36Z | 2023-01-10T15:17:41.000Z | 2023-01-10T15:17:41 | ---
dataset_info:
- config_name: fr_Laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 8773683
num_examples: 2131
download_size: 0
dataset_size: 8773683
- config_name: it_Laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 8130577
num_examples: 2910
download_size: 0
dataset_size: 8130577
- config_name: es_Laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 6260211
num_examples: 677
download_size: 0
dataset_size: 6260211
- config_name: en_Laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
download_size: 0
dataset_size: 0
- config_name: de_Laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 13792836
num_examples: 13
download_size: 0
dataset_size: 13792836
- config_name: fr_Judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 8788244
num_examples: 315
download_size: 0
dataset_size: 8788244
- config_name: fr_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 25977816
num_examples: 2446
download_size: 4782672
dataset_size: 25977816
- config_name: it_Judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 8989061
num_examples: 243
download_size: 0
dataset_size: 8989061
- config_name: it_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 25097560
num_examples: 3153
download_size: 4610540
dataset_size: 25097560
- config_name: es_Judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 9460558
num_examples: 190
download_size: 0
dataset_size: 9460558
- config_name: es_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 23090629
num_examples: 867
download_size: 4438716
dataset_size: 23090629
- config_name: en_Judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 18401754
num_examples: 80
download_size: 0
dataset_size: 18401754
- config_name: en_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 27363914
num_examples: 80
download_size: 5448700
dataset_size: 27363914
- config_name: de_Judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 14082173
num_examples: 131
download_size: 0
dataset_size: 14082173
- config_name: de_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 40429185
num_examples: 144
download_size: 7883640
dataset_size: 40429185
- config_name: fr_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 12924503
num_examples: 2131
download_size: 2201568
dataset_size: 12924503
- config_name: fr_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 13053313
num_examples: 315
download_size: 2581104
dataset_size: 13053313
- config_name: it_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 11869343
num_examples: 2910
download_size: 2048828
dataset_size: 11869343
- config_name: it_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 13228218
num_examples: 243
download_size: 2561712
dataset_size: 13228218
- config_name: es_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 9183057
num_examples: 677
download_size: 1753376
dataset_size: 9183057
- config_name: es_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 13907572
num_examples: 190
download_size: 2685340
dataset_size: 13907572
- config_name: en_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
download_size: 0
dataset_size: 0
- config_name: en_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 27363914
num_examples: 80
download_size: 5448700
dataset_size: 27363914
- config_name: de_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 19935635
num_examples: 13
download_size: 3745480
dataset_size: 19935635
- config_name: de_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 20493550
num_examples: 131
download_size: 4138160
dataset_size: 20493550
- config_name: pt_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 1005902
num_examples: 58
download_size: 209128
dataset_size: 1005902
- config_name: pt_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 812282
num_examples: 10
download_size: 173424
dataset_size: 812282
- config_name: pt_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 1818184
num_examples: 68
download_size: 382552
dataset_size: 1818184
- config_name: all_laws
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 54918438
num_examples: 5789
download_size: 9958380
dataset_size: 54918438
- config_name: all_judgements
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 88858845
num_examples: 969
download_size: 17588440
dataset_size: 88858845
- config_name: all_all
features:
- name: text
dtype: string
- name: spans
list:
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: string
- name: token_start
dtype: int64
- name: token_end
dtype: int64
- name: tokens
list:
- name: text
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: id
dtype: int64
- name: ws
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 143777284
num_examples: 6758
download_size: 27546820
dataset_size: 143777284
task_categories:
- token-classification
language:
- en
- es
- de
- it
- pt
- fr
pretty_name: 'MultiLegalSBD: A Multilingual Legal Sentence Boundary Detection Dataset'
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a multilingual dataset containing ~130k annotated sentence boundaries. It contains laws and court decisions in 6 languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English, French, Italian, German, Portuguese, Spanish
## Dataset Structure
It is structured in the following format: {language}\_{type}\_{shard}.jsonl.xz
type is one of the following:
- laws
- judgements
Use the dataset like this:
```python
from datasets import load_dataset

config = 'fr_laws'  # {language}_{type} | to load all languages and/or all types, use 'all_all'
dataset = load_dataset('rcds/MultiLegalSBD', config)
```
### Data Instances
[More Information Needed]
### Data Fields
- text: the original text
- spans:
- start: offset of the first character
- end: offset of the last character
- label: One label only -> Sentence
- token_start: id of the first token
- token_end: id of the last token
- tokens:
- text: token text
- start: offset of the first character
- end: offset of the last character
- id: token id
- ws: whether the token is followed by whitespace
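
Given these fields, each annotated sentence can be recovered by slicing `text` with a span's character offsets. A minimal sketch on a hand-made record (the record below is invented to mirror the schema, not taken from the dataset):

```python
# Invented record mirroring the schema above (text + character-offset spans).
record = {
    "text": "Art. 1 Le droit est applicable. Art. 2 La loi entre en vigueur.",
    "spans": [
        {"start": 0, "end": 31, "label": "Sentence"},
        {"start": 32, "end": 63, "label": "Sentence"},
    ],
}

def extract_sentences(rec):
    """Slice the raw text with each annotated span's start/end offsets."""
    return [rec["text"][s["start"]:s["end"]] for s in rec["spans"]]

sentences = extract_sentences(record)
print(sentences)
```

The same function applies unchanged to records loaded via `load_dataset`, since `start`/`end` are character offsets into `text`.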
### Data Splits
There is only one split available: `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{10.1145/3594536.3595132,
author = {Brugger, Tobias and St\"{u}rmer, Matthias and Niklaus, Joel},
title = {MultiLegalSBD: A Multilingual Legal Sentence Boundary Detection Dataset},
year = {2023},
isbn = {9798400701979},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3594536.3595132},
doi = {10.1145/3594536.3595132},
abstract = {Sentence Boundary Detection (SBD) is one of the foundational building blocks of Natural Language Processing (NLP), with incorrectly split sentences heavily influencing the output quality of downstream tasks. It is a challenging task for algorithms, especially in the legal domain, considering the complex and different sentence structures used. In this work, we curated a diverse multilingual legal dataset consisting of over 130'000 annotated sentences in 6 languages. Our experimental results indicate that the performance of existing SBD models is subpar on multilingual legal data. We trained and tested monolingual and multilingual models based on CRF, BiLSTM-CRF, and transformers, demonstrating state-of-the-art performance. We also show that our multilingual models outperform all baselines in the zero-shot setting on a Portuguese test set. To encourage further research and development by the community, we have made our dataset, models, and code publicly available.},
booktitle = {Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law},
pages = {42–51},
numpages = {10},
keywords = {Natural Language Processing, Sentence Boundary Detection, Text Annotation, Legal Document Analysis, Multilingual},
location = {Braga, Portugal},
series = {ICAIL '23}
}
```
### Contributions
[More Information Needed] | [
-0.33376559615135193,
-0.9411370754241943,
0.2976232171058655,
0.47208815813064575,
-0.4111798405647278,
-0.1467476189136505,
-0.5308482646942139,
-0.44308701157569885,
0.11295054107904434,
0.7520277500152588,
-0.5790834426879883,
-1.0758336782455444,
-0.7028824090957642,
0.183860152959823... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/openea-d-w-100k-v1 | matchbench | 2023-01-18T11:37:47Z | 30 | 0 | null | [
"region:us"
] | 2023-01-18T11:37:47Z | 2023-01-18T11:34:09.000Z | 2023-01-18T11:34:09 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigcode/the-stack-smol-xs | bigcode | 2023-02-13T09:05:23Z | 30 | 2 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] | 2023-02-13T09:05:23Z | 2023-02-10T11:47:50.000Z | 2023-02-10T11:47:50 | ---
annotations_creators: []
language_creators:
- crowdsourced
language: ["code"]
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
## Dataset Description
A small subset of the [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, covering 87 programming languages, each represented by 100 random samples from the original dataset, for visualization.
## Languages
The dataset contains 87 programming languages:
````
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp','erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell','html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
````
## Dataset Structure
You can specify which language you want to load; `python` is loaded by default:
```python
# to load go:
from datasets import load_dataset
load_dataset("bigcode/the-stack-smol-xs", "go")
DatasetDict({
train: Dataset({
features: ['content', 'lang', 'size', 'ext', 'max_stars_count', 'avg_line_length', 'max_line_length', 'alphanum_fraction'],
num_rows: 100
})
})
```
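
Each sample also carries precomputed statistics such as `avg_line_length`. The sketch below shows one plausible way such a value could be derived from `content`; the sample string is invented, and whether the dataset's pipeline computes the statistic exactly this way is an assumption:

```python
# Recompute an average line length for one invented code sample.
content = "def add(a, b):\n    return a + b\n"

# splitlines() drops the trailing newline, so only real lines are counted.
lines = content.splitlines()
avg_line_length = sum(len(line) for line in lines) / len(lines)
print(avg_line_length)  # 15.0
```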
| [
-0.6187657117843628,
-0.5290568470954895,
0.08203508704900742,
0.3389623761177063,
0.12547503411769867,
0.29584062099456787,
-0.4283028542995453,
-0.22497990727424622,
0.35320204496383667,
0.6456895470619202,
-0.552862286567688,
-1.0719702243804932,
-0.6184027194976807,
0.15810048580169678... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/RSD46-WHU | jonathan-roberts1 | 2023-03-31T14:43:55Z | 30 | 0 | null | [
"license:other",
"region:us"
] | 2023-03-31T14:43:55Z | 2023-02-17T15:41:45.000Z | 2023-02-17T15:41:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': artificial dense forest land
'3': artificial sparse forest land
'4': bare land
'5': basketball court
'6': blue structured factory building
'7': building
'8': construction site
'9': cross river bridge
'10': crossroads
'11': dense tall building
'12': dock
'13': fish pond
'14': footbridge
'15': graff
'16': grassland
'17': irregular farmland
'18': low scattered building
'19': medium density scattered building
'20': medium density structured building
'21': natural dense forest land
'22': natural sparse forest land
'23': oil tank
'24': overpass
'25': parking lot
'26': plastic greenhouse
'27': playground
'28': railway
'29': red structured factory building
'30': refinery
'31': regular farmland
'32': scattered blue roof factory building
'33': scattered red roof factory building
'34': sewage plant-type-one
'35': sewage plant-type-two
'36': ship
'37': solar power station
'38': sparse residential area
'39': square
'40': steelworks
'41': storage land
'42': tennis court
'43': thermal power plant
'44': vegetable plot
'45': water
splits:
- name: train
num_bytes: 1650045051.96
num_examples: 17516
download_size: 2184490825
dataset_size: 1650045051.96
license: other
---
# Dataset Card for "RSD46-WHU"
## Dataset Description
- **Paper** [Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
- **Paper** [High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
- **Split** Validation
## Split Information
This HuggingFace dataset repository contains just the Validation split.
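
The integer labels in this split map back to the 46 scene-category names listed in the YAML above. A minimal offline sketch of that mapping, using only a small subset of the entries for brevity:

```python
# Subset of the id -> class-name mapping from the card's YAML (46 classes total).
id2label = {0: "airplane", 1: "airport", 36: "ship", 45: "water"}

def label_name(idx):
    """Look up the scene-category name for an integer label id."""
    return id2label[idx]

print(label_name(36))  # ship
```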
### Licensing Information
[Free for education, research and commercial use.](https://github.com/RSIA-LIESMARS-WHU/RSD46-WHU)
## Citation Information
[Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
[High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
```
@article{long2017accurate,
title = {Accurate object localization in remote sensing images based on convolutional neural networks},
author = {Long, Yang and Gong, Yiping and Xiao, Zhifeng and Liu, Qing},
year = 2017,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 55,
number = 5,
pages = {2486--2498}
}
@article{xiao2017high,
title = {High-resolution remote sensing image retrieval based on CNNs from a dimensional perspective},
author = {Xiao, Zhifeng and Long, Yang and Li, Deren and Wei, Chunshan and Tang, Gefu and Liu, Junyi},
year = 2017,
journal = {Remote Sensing},
publisher = {MDPI},
volume = 9,
number = 7,
pages = 725
}
``` | [
-0.5428569316864014,
-0.30279427766799927,
0.1775580495595932,
-0.17483758926391602,
-0.34555938839912415,
-0.24121293425559998,
-0.349740594625473,
-0.6599151492118835,
-0.16969838738441467,
0.02895423211157322,
-0.3056187033653259,
-0.7486063838005066,
-0.6730093955993652,
0.077866360545... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ddream-ai/InsuranceCorpus | Ddream-ai | 2023-03-04T02:07:47Z | 30 | 4 | null | [
"license:mit",
"region:us"
] | 2023-03-04T02:07:47Z | 2023-03-04T02:03:46.000Z | 2023-03-04T02:03:46 | ---
license: mit
dataset_info:
features:
- name: 咨询
dtype: string
- name: 回复
dtype: string
splits:
- name: train
num_bytes: 3612350
num_examples: 3599
- name: validation
num_bytes: 186138
num_examples: 189
download_size: 2267366
dataset_size: 3798488
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_sst2_drop_aux_have | liuyanchen1015 | 2023-04-03T19:50:50Z | 30 | 0 | null | [
"region:us"
] | 2023-04-03T19:50:50Z | 2023-04-03T19:50:46.000Z | 2023-04-03T19:50:46 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 4815
num_examples: 34
- name: test
num_bytes: 13587
num_examples: 85
- name: train
num_bytes: 183450
num_examples: 1474
download_size: 102695
dataset_size: 201852
---
# Dataset Card for "MULTI_VALUE_sst2_drop_aux_have"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5623722672462463,
-0.10325571149587631,
0.14685697853565216,
0.05752372741699219,
-0.33483752608299255,
0.2592889368534088,
0.14396324753761292,
-0.1465378999710083,
0.6358417868614197,
0.3436853587627411,
-1.0737271308898926,
-0.5377828478813171,
-0.7243403792381287,
-0.291458874940872... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/fertility | mstz | 2023-04-16T17:28:42Z | 30 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"fertility",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T17:28:42Z | 2023-04-06T09:27:16.000Z | 2023-04-06T09:27:16 | ---
language:
- en
tags:
- fertility
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Fertility
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- encoding
- fertility
license: cc
---
# Fertility
The [Fertility dataset](https://archive.ics.uci.edu/ml/datasets/Fertility) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Classify fertility abnormalities of patients.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| encoding | | Encoding dictionary |
| fertility | Binary classification | Does the patient have fertility issues? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/fertility", "fertility")["train"]
```
# Features
|**Feature** |**Type** |
|----------------------------------------|------------------|
| season_of_sampling | `[string]` |
| age_at_time_of_sampling | `[int8]` |
| has_had_childhood_diseases | `[bool]` |
| has_had_serious_trauma | `[bool]` |
| has_had_surgical_interventions | `[bool]` |
| has_had_high_fevers_in_the_past_year | `[string]` |
| frequency_of_alcohol_consumption | `[float16]` |
| smoking_frequency | `[string]` |
| number_of_sitting_hours_per_day | `[float16]` | | [
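
A predicate over these fields can be applied with `Dataset.filter` once loaded; the snippet below demonstrates the same predicate on plain Python dicts. The sample rows are invented for illustration, not real dataset entries:

```python
# Invented sample rows mirroring the feature schema above.
rows = [
    {"age_at_time_of_sampling": 27, "has_had_serious_trauma": False},
    {"age_at_time_of_sampling": 35, "has_had_serious_trauma": True},
]

# The same callable could be passed to dataset.filter(no_trauma).
def no_trauma(row):
    return not row["has_had_serious_trauma"]

filtered = [r for r in rows if no_trauma(r)]
print(len(filtered))  # 1
```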
-0.1525534987449646,
-0.19041892886161804,
0.20629183948040009,
0.4214459955692291,
-0.22127678990364075,
-0.5424982905387878,
0.0683698058128357,
-0.1390661746263504,
0.22610925137996674,
0.49870219826698303,
-0.43948623538017273,
-0.961808979511261,
-0.7981854677200317,
0.311899125576019... | null | null | null | null | null | null | null | null | null | null | null | null | null |