id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
renumics/speech_commands-ast-finetuned-results | 2023-10-09T09:18:38.000Z | [
"region:us"
] | renumics | null | null | 0 | 33 | 2023-10-05T16:46:44 | ---
dataset_info:
config_name: v0.01
features:
- name: probability
dtype: float64
- name: prediction
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': zero
'11': one
'12': two
'13': three
'14': four
'15': five
'16': six
'17': seven
'18': eight
'19': nine
'20': bed
'21': bird
'22': cat
'23': dog
'24': happy
'25': house
'26': marvin
'27': sheila
'28': tree
'29': wow
'30': _silence_
- name: embedding
sequence: float32
- name: entropy
dtype: float64
splits:
- name: train
num_bytes: 1839348
num_examples: 51093
- name: validation
num_bytes: 244764
num_examples: 6799
- name: test
num_bytes: 110916
num_examples: 3081
download_size: 0
dataset_size: 2195028
configs:
- config_name: v0.01
data_files:
- split: train
path: v0.01/train-*
- split: validation
path: v0.01/validation-*
- split: test
path: v0.01/test-*
---
# Dataset Card for "speech_commands-ast-finetuned-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,496 | [
[
-0.036346435546875,
-0.0386962890625,
-0.0010576248168945312,
0.0108489990234375,
-0.0158538818359375,
0.0018606185913085938,
-0.0251312255859375,
0.0036296844482421875,
0.057708740234375,
0.040740966796875,
-0.059051513671875,
-0.06219482421875,
-0.049133300781... |
McSpicyWithMilo/infographic-instructions | 2023-10-19T13:48:50.000Z | [
"language:en",
"region:us"
] | McSpicyWithMilo | null | null | 0 | 33 | 2023-10-08T09:21:48 | ---
language:
- en
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,384 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
FinGPT/fingpt-sentiment-cls | 2023-10-10T06:49:38.000Z | [
"region:us"
] | FinGPT | null | null | 2 | 33 | 2023-10-10T06:39:32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 10908696
num_examples: 47557
download_size: 3902114
dataset_size: 10908696
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fingpt-sentiment-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 527 | [
[
-0.0667724609375,
-0.008544921875,
0.01251983642578125,
0.024169921875,
-0.03485107421875,
-0.0045166015625,
-0.0062255859375,
-0.0027980804443359375,
0.057861328125,
0.028778076171875,
-0.06707763671875,
-0.061370849609375,
-0.045867919921875,
-0.0204315185... |
carnival13/xlmr_eval2 | 2023-10-12T10:26:00.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 33 | 2023-10-12T10:14:39 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 19326005
num_examples: 11590
download_size: 5464964
dataset_size: 19326005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlmr_eval2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 604 | [
[
-0.025238037109375,
-0.031005859375,
0.0199432373046875,
0.0078277587890625,
-0.006221771240234375,
0.021209716796875,
0.0193634033203125,
0.001789093017578125,
0.0291748046875,
0.03826904296875,
-0.033721923828125,
-0.038238525390625,
-0.050994873046875,
-0... |
fury36/shortcut_key | 2023-10-12T11:42:40.000Z | [
"region:us"
] | fury36 | null | null | 0 | 33 | 2023-10-12T11:41:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Cartinoe5930/Hermes_preference | 2023-10-19T11:55:36.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | Cartinoe5930 | null | null | 1 | 33 | 2023-10-12T12:20:06 | ---
license: mit
language:
- en
size_categories:
- 100K<n<1M
---
# The Hermes_preference dataset
<!-- Provide a quick summary of the dataset. -->
The **Hermes_preference** dataset is a preference (feedback) dataset for training the reward models used in RLHF.
It can also be used directly for DPO.
We collected the preference data from several popular feedback datasets ([UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf), [rlhf-reward-datasets](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets)) through sampling and preprocessing.
In total, we collected approximately 190K preference pairs.
To ensure high quality, we drew feedback data from [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) & [rlhf-reward-datasets](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets), which are curated datasets.
In addition, we included data from [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) to teach models to produce helpful and harmless responses.
We hope the **Hermes_preference** dataset supports future RLHF & DPO research!
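As a rough illustration (the record content below is invented; the field names follow the Dataset Structure section of this card), a reward-model or DPO training pipeline might consume one record like this:

```python
# Hypothetical sketch: map one Hermes_preference record to the
# (prompt, chosen, rejected) triple a DPO trainer typically expects.
# Field names follow the dataset card; the example content is made up.

def to_dpo_pair(record: dict) -> dict:
    """Extract the preference triple from a raw record."""
    return {
        "prompt": record["prompt"],
        "chosen": record["chosen"],
        "rejected": record["rejected"],
    }

example = {
    "source": "hh-rlhf",
    "prompt": "How do I bake bread?",
    "chosen": "Start by mixing flour, water, salt, and yeast...",
    "rejected": "I don't know.",
}

pair = to_dpo_pair(example)
print(pair["prompt"])
```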
## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->
The **Hermes_preference** dataset is a mixture of several popular preference datasets (UltraFeedback, hh-rlhf, rlhf-reward-datasets), as mentioned above.
Its purpose is to provide a preference dataset with more varied data.
To that end, we selected UltraFeedback, hh-rlhf, and rlhf-reward-datasets as the base datasets.
More specifically, we sampled and preprocessed these datasets to give the Hermes_preference dataset a more consistent structure.
- **Curated by:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** MIT
### Source Data
The Hermes_preference dataset consists of the following datasets.
- [**openbmb/UltraFeedback**](https://huggingface.co/datasets/openbmb/UltraFeedback)
- [**Anthropic/hh-rlhf**](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [**yitingxie/rlhf-reward-datasets**](https://huggingface.co/datasets/yitingxie/rlhf-reward-datasets)
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [gauss5930/Hermes](https://github.com/gauss5930/Hermes)
- **Model:** [Cartinoe5930/Hermes-7b]()
## Dataset Structure
The structure of the **Hermes_preference** dataset is as follows:
```
{
"source": The source dataset the example was drawn from,
"prompt": The instruction or question,
"chosen": The chosen (preferred) response,
"rejected": The rejected response
}
``` | 2,889 | [
[
-0.04010009765625,
-0.0179901123046875,
0.019622802734375,
0.008544921875,
-0.0193939208984375,
-0.0194549560546875,
0.0037555694580078125,
-0.0367431640625,
0.057220458984375,
0.048919677734375,
-0.071044921875,
-0.03759765625,
-0.022796630859375,
0.0133361... |
crumb/textbook-codex | 2023-10-12T21:49:53.000Z | [
"region:us"
] | crumb | null | null | 2 | 33 | 2023-10-12T18:37:01 | ---
dataset_info:
features:
- name: text
dtype: string
- name: src
dtype: string
- name: src_col
dtype: string
- name: model
dtype: string
splits:
- name: train
num_bytes: 12286698438.0
num_examples: 3593574
download_size: 5707800000
dataset_size: 12286698438.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "textbook-codex"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 562 | [
[
-0.04888916015625,
-0.0088653564453125,
0.014373779296875,
0.00022292137145996094,
-0.01276397705078125,
-0.00836181640625,
0.006862640380859375,
-0.0015926361083984375,
0.0423583984375,
0.035980224609375,
-0.053070068359375,
-0.07049560546875,
-0.02592468261718... |
sunjun/medqa | 2023-10-14T13:43:37.000Z | [
"region:us"
] | sunjun | null | null | 0 | 33 | 2023-10-14T13:43:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: meta_info
dtype: string
- name: answer_idx
dtype: string
- name: metamap_phrases
sequence: string
splits:
- name: train
num_bytes: 15175834
num_examples: 10178
- name: test
num_bytes: 1946030
num_examples: 1273
download_size: 8870009
dataset_size: 17121864
---
# Dataset Card for "medqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 864 | [
[
-0.0426025390625,
-0.00991058349609375,
0.024322509765625,
-0.00452423095703125,
-0.01027679443359375,
0.004970550537109375,
0.035186767578125,
-0.005802154541015625,
0.05767822265625,
0.0413818359375,
-0.06396484375,
-0.054779052734375,
-0.03411865234375,
-... |
ContextualAI/nq_open_neighbors | 2023-10-14T23:34:46.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 33 | 2023-10-14T23:08:41 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
sequence: string
- name: neighbor
dtype: string
splits:
- name: validation
num_bytes: 1106156
num_examples: 3610
download_size: 744341
dataset_size: 1106156
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "nq_open_neighbors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 538 | [
[
-0.041259765625,
-0.014892578125,
0.01544952392578125,
0.0011892318725585938,
-0.002277374267578125,
-0.01102447509765625,
0.025665283203125,
-0.005481719970703125,
0.060028076171875,
0.037689208984375,
-0.0487060546875,
-0.05902099609375,
-0.021820068359375,
... |
philTheThill/news-articles | 2023-10-16T07:09:23.000Z | [
"region:us"
] | philTheThill | null | null | 0 | 33 | 2023-10-16T06:41:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bigbio/sem_eval_2024_task_2 | 2023-10-16T12:44:03.000Z | [
"multilinguality:monolingual",
"language:en",
"region:us"
] | bigbio | (Copied from dataset homepage)
## Dataset
The statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes.
## Description
Each Clinical Trial Report (CTR) consists of 4 sections:
Eligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial
Intervention - Information concerning the type, dosage, frequency, and duration of treatments being studied.
Results - Number of participants in the trial, outcome measures, units, and the results.
Adverse events - These are signs and symptoms observed in patients during the clinical trial.
For this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics. | @article{,
author = {},
title = {},
journal = {},
volume = {},
year = {},
url = {},
doi = {},
biburl = {},
bibsource = {}
} | 0 | 33 | 2023-10-16T09:54:10 | ---
language:
- en
bigbio_language:
- English
multilinguality: monolingual
pretty_name: SemEval 2024 Task 2
homepage: https://sites.google.com/view/nli4ct/semeval-2024?authuser=0
bigbio_pubmed: false
bigbio_public: true
bigbio_tasks:
- TEXTUAL_ENTAILMENT
---
# Dataset Card for SemEval 2024 Task 2
## Dataset Description
- **Homepage:** https://sites.google.com/view/nli4ct/semeval-2024?authuser=0
- **Pubmed:** False
- **Public:** True
- **Tasks:** TE
## Dataset
(Description copied from dataset homepage)
The statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes.
## Description
Each Clinical Trial Report (CTR) consists of 4 sections:
Eligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial
Intervention - Information concerning the type, dosage, frequency, and duration of treatments being studied.
Results - Number of participants in the trial, outcome measures, units, and the results.
Adverse events - These are signs and symptoms observed in patients during the clinical trial.
For this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics.
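To make the structure concrete, here is an illustrative sketch of a CTR record with its four sections and two arms. The field names and contents are assumptions for illustration, not the task's official schema:

```python
# Illustrative only: a Clinical Trial Report (CTR) holding the four sections
# described above plus 1-2 patient cohorts/arms. All names and values are
# invented for this sketch, not taken from the SemEval 2024 Task 2 data.
ctr = {
    "eligibility": ["Age >= 18", "Histologically confirmed diagnosis"],
    "intervention": ["Drug A, 50 mg orally, once daily for 6 weeks"],
    "results": ["120 participants enrolled", "Primary outcome: overall response rate"],
    "adverse_events": ["Nausea (grade 1-2)", "Fatigue"],
    "cohorts": ["Arm 1: Drug A", "Arm 2: Placebo"],
}

# The entailment task pairs a statement with evidence drawn from these sections.
sections = [key for key in ctr if key != "cohorts"]
print(sections)
```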
## Citation Information
```
@article{,
author = {},
title = {},
journal = {},
volume = {},
year = {},
url = {},
doi = {},
biburl = {},
bibsource = {}
}
| 1,656 | [
[
-0.00693511962890625,
-0.024688720703125,
0.034423828125,
0.01751708984375,
-0.0322265625,
-0.0132904052734375,
0.006725311279296875,
-0.0189056396484375,
0.00909423828125,
0.0687255859375,
-0.033477783203125,
-0.04974365234375,
-0.068603515625,
0.0155792236... |
Kabatubare/midjurney | 2023-10-21T06:44:07.000Z | [
"region:us"
] | Kabatubare | null | null | 0 | 33 | 2023-10-16T16:46:30 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Isamu136/bk-sdm-small_generated_images_pokemon_blip | 2023-10-19T15:26:07.000Z | [
"region:us"
] | Isamu136 | null | null | 0 | 33 | 2023-10-19T15:25:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 33954051.0
num_examples: 833
download_size: 33930907
dataset_size: 33954051.0
---
# Dataset Card for "bk-sdm-small_generated_images_pokemon_blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.037841796875,
-0.006633758544921875,
0.022430419921875,
0.03192138671875,
-0.034820556640625,
-0.006710052490234375,
0.016632080078125,
-0.004077911376953125,
0.07806396484375,
0.040802001953125,
-0.04638671875,
-0.051910400390625,
-0.04266357421875,
-0.0... |
ChaiML/tiny_chai_prize_reward_model_data | 2023-10-20T11:05:01.000Z | [
"region:us"
] | ChaiML | null | null | 0 | 33 | 2023-10-20T11:04:58 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 137495.22787897263
num_examples: 90
- name: validation
num_bytes: 15277.24754210807
num_examples: 10
download_size: 107343
dataset_size: 152772.4754210807
---
# Dataset Card for "tiny_chai_prize_reward_model_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 508 | [
[
-0.021484375,
-0.01374053955078125,
0.0171661376953125,
-0.00029015541076660156,
-0.0118560791015625,
-0.0160064697265625,
0.01776123046875,
0.001361846923828125,
0.051727294921875,
0.0228729248046875,
-0.05816650390625,
-0.032012939453125,
-0.044586181640625,
... |
AlanRobotics/text2code | 2023-10-20T11:48:15.000Z | [
"region:us"
] | AlanRobotics | null | null | 0 | 33 | 2023-10-20T11:47:55 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14682799.631957877
num_examples: 16750
- name: test
num_bytes: 1632201.3680421233
num_examples: 1862
download_size: 6097942
dataset_size: 16315001.0
---
# Dataset Card for "text2code"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 438 | [
[
-0.0224151611328125,
-0.01251983642578125,
0.016815185546875,
0.0243682861328125,
-0.01119232177734375,
0.00020456314086914062,
0.00832366943359375,
-0.0217742919921875,
0.03924560546875,
0.037872314453125,
-0.046600341796875,
-0.050445556640625,
-0.04931640625,... |
renumics/dmu_tiny | 2023-10-20T18:21:09.000Z | [
"region:us"
] | renumics | null | null | 0 | 33 | 2023-10-20T16:55:06 | Subset of the DMU datast (https://www.dmu-net.org/) with
- cleaned meshes
- voxels
- mesh representation of voxels | 114 | [
[
-0.04888916015625,
-0.053802490234375,
0.0396728515625,
-0.00536346435546875,
-0.0299835205078125,
0.0174560546875,
0.0200042724609375,
0.03826904296875,
0.03448486328125,
0.0287933349609375,
-0.058380126953125,
-0.03472900390625,
-0.00008893013000488281,
0.... |
Horus7/FromTo | 2023-10-31T16:06:11.000Z | [
"region:us"
] | Horus7 | null | null | 0 | 33 | 2023-10-22T12:54:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
andersonbcdefg/micropile | 2023-10-23T17:38:01.000Z | [
"region:us"
] | andersonbcdefg | null | null | 0 | 33 | 2023-10-23T17:37:57 | ---
dataset_info:
features:
- name: text
dtype: string
- name: __id
dtype: int64
splits:
- name: train
num_bytes: 5544284
num_examples: 1000
download_size: 2933209
dataset_size: 5544284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "micropile"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 469 | [
[
-0.049713134765625,
-0.015716552734375,
0.0180816650390625,
0.0081634521484375,
0.00016570091247558594,
0.00823974609375,
0.0174102783203125,
0.00447845458984375,
0.062408447265625,
0.02862548828125,
-0.05389404296875,
-0.052642822265625,
-0.033599853515625,
... |
tingchih/mult_1023 | 2023-10-24T04:01:46.000Z | [
"region:us"
] | tingchih | null | null | 0 | 33 | 2023-10-24T04:01:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 47982484
num_examples: 277071
- name: test
num_bytes: 20569135
num_examples: 118745
download_size: 44901294
dataset_size: 68551619
---
# Dataset Card for "mult_1023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 577 | [
[
-0.0498046875,
-0.01421356201171875,
0.02105712890625,
0.0187835693359375,
-0.006900787353515625,
-0.002979278564453125,
0.01788330078125,
-0.00031113624572753906,
0.059295654296875,
0.0316162109375,
-0.05377197265625,
-0.035247802734375,
-0.037139892578125,
... |
ComponentSoft/k8s-kubectl-cot-20k | 2023-10-27T03:54:10.000Z | [
"region:us"
] | ComponentSoft | null | null | 0 | 33 | 2023-10-26T20:30:51 | ---
dataset_info:
features:
- name: objective
dtype: string
- name: command_name
dtype: string
- name: command
dtype: string
- name: description
dtype: string
- name: syntax
dtype: string
- name: flags
list:
- name: default
dtype: string
- name: description
dtype: string
- name: option
dtype: string
- name: short
dtype: string
- name: question
dtype: string
- name: chain_of_thought
dtype: string
splits:
- name: train
num_bytes: 51338358
num_examples: 19661
download_size: 0
dataset_size: 51338358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "k8s-kubectl-cot-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 870 | [
[
-0.051300048828125,
-0.003170013427734375,
0.02593994140625,
0.0299530029296875,
-0.031219482421875,
0.0279388427734375,
0.01198577880859375,
-0.012115478515625,
0.044525146484375,
0.045562744140625,
-0.043365478515625,
-0.069091796875,
-0.05633544921875,
-0... |
lewtun/drug-reviews | 2021-08-10T21:35:52.000Z | [
"region:us"
] | lewtun | null | null | 7 | 32 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dl4phys/top_tagging | 2022-04-18T07:43:02.000Z | [
"license:cc-by-4.0",
"arxiv:1902.09914",
"region:us"
] | dl4phys | null | null | 0 | 32 | 2022-04-16T09:53:34 | ---
license: cc-by-4.0
---
# Dataset Card for Top Quark Tagging
## Table of Contents
- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/2603256
- **Paper:** https://arxiv.org/abs/1902.09914
- **Point of Contact:** [Gregor Kasieczka](gregor.kasieczka@uni-hamburg.de)
### Dataset Summary
Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p_x, p_y, p_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.
### Supported Tasks and Leaderboards
- `tabular-classification`: The dataset can be used to train a model for tabular binary classification, which consists in predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* [accuracy](https://huggingface.co/metrics/accuracy) and AUC score.
## Dataset Structure
### Data Instances
Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the `is_signal_new` column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows:
```
{'E_0': 474.0711364746094,
'PX_0': -250.34703063964844,
'PY_0': -223.65196228027344,
'PZ_0': -334.73809814453125,
...
'E_199': 0.0,
'PX_199': 0.0,
'PY_199': 0.0,
'PZ_199': 0.0,
'truthE': 0.0,
'truthPX': 0.0,
'truthPY': 0.0,
'truthPZ': 0.0,
'ttv': 0,
'is_signal_new': 0}
```
### Data Fields
The fields in the dataset have the following meaning:
- `E_i`: the energy of jet constituent \\(i\\).
- `PX_i`: the \\(x\\) component of the jet constituent's momentum
- `PY_i`: the \\(y\\) component of the jet constituent's momentum
- `PZ_i`: the \\(z\\) component of the jet constituent's momentum
- `truthE`: the energy of the top-quark
- `truthPX`: the \\(x\\) component of the top quark's momentum
- `truthPY`: the \\(y\\) component of the top quark's momentum
- `truthPZ`: the \\(z\\) component of the top quark's momentum
- `ttv`: a flag that indicates which split (train, validation, or test) that a jet belongs to. Redundant since each split is provided as a separate dataset
- `is_signal_new`: the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.
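Given these fields, a per-constituent transverse momentum and a count of non-padded constituents can be reconstructed directly from the flat columns. A minimal sketch (the helper names are my own; only the `E_i`/`PX_i`/`PY_i`/`PZ_i` field names come from the card):

```python
import math

# Sketch, not part of the dataset card: recover p_T of constituent i from the
# flat (E_i, PX_i, PY_i, PZ_i) columns, and count non-padded constituents
# (zero-padding stores all-zero four-momenta for absent constituents).

def constituent_pt(event: dict, i: int) -> float:
    """Transverse momentum of constituent i: sqrt(PX_i^2 + PY_i^2)."""
    return math.hypot(event[f"PX_{i}"], event[f"PY_{i}"])

def n_constituents(event: dict, max_slots: int = 200) -> int:
    """Number of real (non-zero-padded) constituents in the event."""
    return sum(1 for i in range(max_slots) if event.get(f"E_{i}", 0.0) != 0.0)

# Two slots of the example instance above: one real constituent, one padded.
event = {"E_0": 474.07, "PX_0": -250.35, "PY_0": -223.65, "PZ_0": -334.74,
         "E_1": 0.0, "PX_1": 0.0, "PY_1": 0.0, "PZ_1": 0.0}
pt0 = constituent_pt(event, 0)
```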
### Data Splits
| | train | validation | test |
|------------------|--------:|-----------:|-------:|
| Number of events | 1211000 | 403000 | 404000 |
### Licensing Information
This dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
```
@dataset{kasieczka_gregor_2019_2603256,
author = {Kasieczka, Gregor and
Plehn, Tilman and
Thompson, Jennifer and
Russel, Michael},
title = {Top Quark Tagging Reference Dataset},
month = mar,
year = 2019,
publisher = {Zenodo},
version = {v0 (2018\_03\_27)},
doi = {10.5281/zenodo.2603256},
url = {https://doi.org/10.5281/zenodo.2603256}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
| 4,232 | [
[
-0.0416259765625,
-0.0096282958984375,
0.01364898681640625,
-0.01416015625,
-0.0345458984375,
0.023834228515625,
-0.0030689239501953125,
0.01079559326171875,
0.0179901123046875,
0.0018491744995117188,
-0.04583740234375,
-0.06646728515625,
-0.034576416015625,
... |
merionum/ru_paraphraser | 2022-07-28T15:01:08.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-g... | merionum | null | null | 5 | 32 | 2022-05-26T14:53:46 | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
language:
- ru
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: ParaPhraser
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
- text-generation
- text2text-generation
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for ParaPhraser
### Dataset Summary
ParaPhraser is a news headlines corpus annotated according to the following schema:
```
1: precise paraphrases
0: near paraphrases
-1: non-paraphrases
```
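A tiny sketch mapping these integer labels to readable class names (the names are my own phrasing of the schema above, not identifiers defined by the corpus):

```python
# Readable names for the ParaPhraser annotation schema; the string names are
# my own phrasing, only the integer codes come from the corpus.
LABELS = {1: "precise paraphrase", 0: "near paraphrase", -1: "non-paraphrase"}

def label_name(score: int) -> str:
    """Map a ParaPhraser annotation score to a readable class name."""
    return LABELS[score]
```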
The _Plus_ part is also available.
It contains clusters of news headline paraphrases labeled automatically by a fine-tuned paraphrase detection BERT model.
In order to load it:
```python
from datasets import load_dataset
corpus = load_dataset('merionum/ru_paraphraser', data_files='plus.jsonl')
```
## Dataset Structure
```
train: 7,227 pairs
test: 1,924 pairs
plus: 1,725,393 clusters (total: ~7m texts)
```
### Citation Information
```
@inproceedings{pivovarova2017paraphraser,
title={ParaPhraser: Russian paraphrase corpus and shared task},
author={Pivovarova, Lidia and Pronoza, Ekaterina and Yagunova, Elena and Pronoza, Anton},
booktitle={Conference on artificial intelligence and natural language},
pages={211--225},
year={2017},
organization={Springer}
}
```
```
@inproceedings{gudkov-etal-2020-automatically,
title = "Automatically Ranked {R}ussian Paraphrase Corpus for Text Generation",
author = "Gudkov, Vadim and
Mitrofanova, Olga and
Filippskikh, Elizaveta",
booktitle = "Proceedings of the Fourth Workshop on Neural Generation and Translation",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.ngt-1.6",
doi = "10.18653/v1/2020.ngt-1.6",
pages = "54--59",
abstract = "The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag which are suitable for a set of NLP tasks, such as paraphrase and plagiarism detection, sentence similarity and relatedness estimation, etc. Due to size restrictions, these datasets can hardly be applied in end-to-end text generation solutions. Meanwhile, paraphrase generation requires a large amount of training data. In our study we propose a solution to the problem: we collect, rank and evaluate a new publicly available headline paraphrase corpus (ParaPhraser Plus), and then perform text generation experiments with manual evaluation on automatically ranked corpora using the Universal Transformer architecture.",
}
```
### Contributions
Dataset maintainer:
Vadim Gudkov: [@merionum](https://github.com/merionum)
| 3,033 | [
[
-0.009368896484375,
-0.042724609375,
0.0276641845703125,
0.0255889892578125,
-0.0386962890625,
-0.00588226318359375,
-0.021575927734375,
0.0013666152954101562,
0.00656890869140625,
0.035003662109375,
-0.0142822265625,
-0.06231689453125,
-0.03619384765625,
0.... |
PiC/phrase_similarity | 2023-01-20T16:32:19.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"l... | PiC | Phrase in Context is a curated benchmark for phrase understanding and semantic search, consisting of three tasks of increasing difficulty: Phrase Similarity (PS), Phrase Retrieval (PR) and Phrase Sense Disambiguation (PSD). The datasets are annotated by 13 linguistic experts on Upwork and verified by two groups: ~1000 AMT crowdworkers and another set of 5 linguistic experts. PiC benchmark is distributed under CC-BY-NC 4.0. | @article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
} | 6 | 32 | 2022-06-14T01:35:19 | ---
annotations_creators:
- expert-generated
language_creators:
- found
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: phrase-in-context
pretty_name: 'PiC: Phrase Similarity (PS)'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "PiC: Phrase Similarity"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/)
- **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>)
- **Size of downloaded dataset files:** 4.60 MB
- **Size of the generated dataset:** 2.96 MB
- **Total amount of disk used:** 7.56 MB
### Dataset Summary
PS is a binary classification task with the goal of predicting whether two multi-word noun phrases are semantically similar or not given *the same context* sentence.
This dataset contains ~10K phrase pairs together with the context sentences used for disambiguation, since two phrases alone are not sufficient for semantic comparison.
Our ~10K examples were annotated by linguistic experts on <upwork.com> and verified in two rounds by ~1000 MTurk crowdworkers and 5 linguistic experts.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
**PS**
* Size of downloaded dataset files: 4.60 MB
* Size of the generated dataset: 2.96 MB
* Total amount of disk used: 7.56 MB
```
{
"phrase1": "annual run",
"phrase2": "yearlong performance",
"sentence1": "since 2004, the club has been a sponsor of the annual run for rigby to raise money for off-campus housing safety awareness.",
"sentence2": "since 2004, the club has been a sponsor of the yearlong performance for rigby to raise money for off-campus housing safety awareness.",
"label": 0,
"idx": 0,
}
```
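Given an instance like the one above, a trivial lexical-overlap baseline (an illustration only, not the benchmark's intended method) makes the task design visible: the phrase pair shares no tokens, so any purely lexical score is 0.0 and the context sentences must carry the decision.

```python
def token_jaccard(phrase_a: str, phrase_b: str) -> float:
    """Jaccard similarity over lowercase whitespace tokens (naive baseline)."""
    a, b = set(phrase_a.lower().split()), set(phrase_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

# The example pair above ("annual run" vs "yearlong performance")
# has zero lexical overlap, yet still requires a contextual judgement.
score = token_jaccard("annual run", "yearlong performance")
```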
### Data Fields
The data fields are the same among all splits.
* phrase1: a string feature.
* phrase2: a string feature.
* sentence1: a string feature.
* sentence2: a string feature.
* label: a classification label, with negative (0) and positive (1).
* idx: an int32 feature.
### Data Splits
| name |train |validation|test |
|--------------------|----:|--------:|----:|
|PS |7362| 1052|2102|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The source passages and answers are from Wikipedia, and the queries were produced by our hired linguistic experts from [Upwork.com](https://upwork.com).
#### Who are the source language producers?
We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation, plus more than 1000 human annotators on Mechanical Turk and another set of 5 Upwork experts for two rounds of verification.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
13 linguistic experts from [Upwork.com](https://upwork.com).
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is a joint work between Adobe Research and Auburn University.
Creators: [Thang M. Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me).
[@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace.
### Licensing Information
This dataset is distributed under [Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
}
``` | 5,470 | [
[
-0.027191162109375,
-0.0506591796875,
0.01215362548828125,
0.0189361572265625,
-0.030517578125,
-0.0021686553955078125,
-0.02191162109375,
-0.038665771484375,
0.037200927734375,
0.0264892578125,
-0.035858154296875,
-0.06622314453125,
-0.0380859375,
0.0183563... |
BeIR/scidocs-generated-queries | 2022-10-23T06:12:52.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | 2 | 32 | 2022-06-17T12:53:49 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that tracks retrieval performance on each dataset, with nDCG@10 as the primary metric.
The current best performing models can be found on the [official leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
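The three files can be read with the standard library alone; the sketch below parses small in-memory samples in exactly the formats just described (the sample contents are invented for illustration):

```python
import csv
import io
import json

corpus_jsonl = '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born physicist."}\n'
queries_jsonl = '{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}\n'
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n"

# corpus/queries: one JSON object per line, keyed by "_id".
corpus = {d["_id"]: d for d in map(json.loads, corpus_jsonl.splitlines())}
queries = {q["_id"]: q["text"] for q in map(json.loads, queries_jsonl.splitlines())}

# qrels: tab-separated with a header row; build query-id -> {corpus-id: score}.
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
```

The resulting `corpus`, `queries` and `qrels` objects match the in-memory shape shown in the Data Instances section.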
### Data Instances
A high-level example of a BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
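With `qrels` in this shape, a simple retrieval metric is only a few lines; the sketch below computes recall@k against the toy judgements above (the ranked result lists are invented, and BEIR's headline metric is nDCG@10 rather than recall):

```python
def recall_at_k(qrels: dict, ranked: dict, k: int) -> float:
    """Fraction of relevant docs retrieved in the top-k, averaged over queries."""
    scores = []
    for qid, rel in qrels.items():
        top_k = ranked.get(qid, [])[:k]
        scores.append(sum(1 for doc in top_k if doc in rel) / len(rel))
    return sum(scores) / len(scores) if scores else 0.0

qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
# Invented system output: ranked doc ids per query.
# q1 finds its relevant doc at rank 1; q2 only at rank 2.
ranked = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}
```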
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `query-id`: a `string` feature representing the query id.
  - `corpus-id`: a `string` feature, denoting the document id.
  - `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | 13,988 | [
[
-0.0396728515625,
-0.03985595703125,
0.01094818115234375,
0.0036602020263671875,
0.00423431396484375,
0.00009590387344360352,
-0.0081939697265625,
-0.0188751220703125,
0.021697998046875,
0.00595855712890625,
-0.034332275390625,
-0.0545654296875,
-0.0263824462890... |
owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH | 2023-01-30T09:50:44.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"size_categories:10K<n<100K",
"source_datasets:BioASQ Task A",
"language:en",
"license:afl-3.0",
"region:us"
] | owaiskha9654 | null | null | 6 | 32 | 2022-08-02T20:13:50 | ---
language:
- en
license: afl-3.0
source_datasets:
- BioASQ Task A
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: BioASQ, PUBMED
size_categories:
- 10K<n<100K
---
This dataset consists of approximately 50k research articles from the **PubMed** repository. The documents were originally annotated manually by biomedical experts with their MeSH labels, and each article is described by 10-15 MeSH labels. A huge number of labels appear as MeSH majors, which raises the issues of an extremely large output space and severe label sparsity. To address this, the dataset has been processed and each label mapped to its root, as described in the figure below.

 | 960 | [
[
-0.045318603515625,
-0.041595458984375,
0.0183563232421875,
0.0019140243530273438,
-0.0179290771484375,
-0.00345611572265625,
0.00016260147094726562,
-0.01812744140625,
0.00972747802734375,
0.039886474609375,
-0.02960205078125,
-0.044586181640625,
-0.06726074218... |
rungalileo/medical_transcription_40 | 2022-08-04T04:58:53.000Z | [
"region:us"
] | rungalileo | null | null | 4 | 32 | 2022-08-04T04:58:43 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.035064697265625,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.01702880859375,
-0.05206298828125,
-0.01497650146484375,
-0.060302734375,
0.03790283203... |
csebuetnlp/BanglaNMT | 2023-02-24T14:46:55.000Z | [
"task_categories:translation",
"annotations_creators:other",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"language:bn",
"language:en",
"license:cc-by-nc-sa-4.0",
"bengali",
"BanglaNMT",
"region:us"
] | csebuetnlp | This is the largest Machine Translation (MT) dataset for Bengali-English, introduced in the paper
`Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation`. | @inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
} | 0 | 32 | 2022-08-21T13:25:09 | ---
annotations_creators:
- other
language:
- bn
- en
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
pretty_name: BanglaNMT
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- bengali
- BanglaNMT
task_categories:
- translation
---
# Dataset Card for `BanglaNMT`
## Table of Contents
- [Dataset Card for `BanglaNMT`](#dataset-card-for-BanglaNMT)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglanmt](https://github.com/csebuetnlp/banglanmt)
- **Paper:** [**"Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation"**](https://www.aclweb.org/anthology/2020.emnlp-main.207)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is the largest Machine Translation (MT) dataset for Bengali-English, curated using novel sentence alignment methods introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**
**Note:** This is a filtered version of the original dataset that the authors used for NMT training. For the complete set, refer to the official [repository](https://github.com/csebuetnlp/banglanmt).
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Languages
- `Bengali`
- `English`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/BanglaNMT")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
'bn': 'বিমানবন্দরে যুক্তরাজ্যে নিযুক্ত বাংলাদেশ হাইকমিশনার সাঈদা মুনা তাসনীম ও লন্ডনে বাংলাদেশ মিশনের জ্যেষ্ঠ কর্মকর্তারা তাকে বিদায় জানান।',
'en': 'Bangladesh High Commissioner to the United Kingdom Saida Muna Tasneen and senior officials of Bangladesh Mission in London saw him off at the airport.'
}
```
### Data Fields
The data fields are as follows:
- `bn`: a `string` feature indicating the Bengali sentence.
- `en`: a `string` feature indicating the English translation.
### Data Splits
| split |count |
|----------|--------|
|`train`| 2379749 |
|`validation`| 597 |
|`test`| 1000 |
## Dataset Creation
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Source Data
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglanmt)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglanmt)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglanmt)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglanmt)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. | 7,796 | [
[
-0.0300445556640625,
-0.053863525390625,
0.0009598731994628906,
0.032745361328125,
-0.02880859375,
0.01123046875,
-0.034423828125,
-0.0209808349609375,
0.03216552734375,
0.03265380859375,
-0.036041259765625,
-0.04962158203125,
-0.04608154296875,
0.0311889648... |
illuin/small_commonvoice_test_set | 2022-10-06T13:37:15.000Z | [
"region:us"
] | illuin | null | null | 0 | 32 | 2022-10-06T13:36:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Harsit/xnli2.0_train_hindi | 2022-10-15T09:20:03.000Z | [
"region:us"
] | Harsit | null | null | 0 | 32 | 2022-10-15T09:19:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Dahoas/instruct-synthetic-prompt-responses | 2022-12-19T16:18:50.000Z | [
"region:us"
] | Dahoas | null | null | 9 | 32 | 2022-12-19T16:18:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dvilasuero/banking_app | 2022-12-29T13:25:35.000Z | [
"region:us"
] | dvilasuero | null | null | 0 | 32 | 2022-12-29T13:25:10 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
intfloat/query2doc_msmarco | 2023-03-30T02:44:59.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"arxiv:2303.07678",
"region:us"
] | intfloat | This dataset contains GPT-3.5 (text-davinci-003) generations from MS-MARCO queries. | @inproceedings{Wang2023Query2docQE,
title={Query2doc: Query Expansion with Large Language Models},
author={Liang Wang and Nan Yang and Furu Wei},
year={2023}
} | 3 | 32 | 2023-03-10T10:28:59 | ---
license: cc-by-4.0
language:
- en
size_categories:
- 100K<n<1M
---
### Dataset Summary
This dataset contains GPT-3.5 (`text-davinci-003`) generations from MS-MARCO queries.
[Query2doc: Query Expansion with Large Language Models](https://arxiv.org/pdf/2303.07678.pdf), by Liang Wang, Nan Yang, and Furu Wei.
### Data Instances
An example looks as follows.
```
{
"query_id": "1030303",
"query": "who is aziz hashim",
"pseudo_doc": "Aziz Hashim is a renowned entrepreneur, business leader, and one of the most successful restaurant franchise operators in the US. He is the founder of NRD Capital, a private equity firm focused on investments in multi-unit restaurant franchised businesses. Hashim has built a formidable track record of success in the franchise industry, with brands such as Outback Steakhouse and Jamba Juice. His accomplishments and philanthropic initiatives have earned him numerous awards, including the prestigious Ernst and Young Entrepreneur of the Year award."
}
```
### Data Fields
- `query_id`: a `string` feature.
- `query`: a `string` feature.
- `pseudo_doc`: a `string` feature.
### Data Splits
| train | dev | test | trec_dl2019 | trec_dl2020 |
|--------|------:|------:|------:|------:|
| 502939 | 6980 | 6837 | 43 | 54 |
### How to use this dataset
```python
from datasets import load_dataset
dataset = load_dataset('intfloat/query2doc_msmarco')
print(dataset['trec_dl2019'][0])
```
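For sparse retrieval, the paper expands each query by concatenating it, repeated several times, with its pseudo-document, so that the short query's terms are not drowned out by the much longer generated passage. A minimal sketch of that expansion (the repetition count `n=5` follows the paper, but verify against `repro_bm25.py` before relying on it):

```python
def expand_query(query: str, pseudo_doc: str, n: int = 5) -> str:
    # Repeat the short query n times so its terms keep enough weight
    # relative to the much longer pseudo-document in BM25 scoring.
    return " ".join([query] * n + [pseudo_doc])

q = "who is aziz hashim"
d = "Aziz Hashim is a renowned entrepreneur and franchise operator."
expanded = expand_query(q, d)
print(expanded.count(q))  # the query occurs 5 times in the expanded string
```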
### Reproducing our results
We provide a python script [repro_bm25.py](https://huggingface.co/datasets/intfloat/query2doc_msmarco/blob/main/repro_bm25.py) to reproduce our results with BM25 retrieval.
First install some python dependency packages:
```
pip install pyserini==0.15.0 pytrec_eval datasets tqdm
```
Then download and run the python code:
```
python repro_bm25.py
```
This script utilizes the pre-built Lucene index from [Pyserini](https://github.com/castorini/pyserini/blob/pyserini-0.15.0/docs/prebuilt-indexes.md)
and might yield slightly different results compared to the paper.
### Citation Information
```
@article{wang2023query2doc,
title={Query2doc: Query Expansion with Large Language Models},
author={Wang, Liang and Yang, Nan and Wei, Furu},
journal={arXiv preprint arXiv:2303.07678},
year={2023}
}
```
| 2,275 | [
[
-0.0165863037109375,
-0.050262451171875,
0.038482666015625,
0.009979248046875,
-0.0005297660827636719,
-0.008270263671875,
-0.0222015380859375,
-0.0073394775390625,
-0.00299072265625,
0.0347900390625,
-0.027008056640625,
-0.057708740234375,
-0.036651611328125,
... |
Patt/copa_th | 2023-06-05T12:36:44.000Z | [
"language:th",
"language:en",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | 0 | 32 | 2023-06-02T09:43:18 | ---
language:
- th
- en
---
# Dataset Card for copa_th
### Dataset Description
This dataset is a Thai-translated version of [copa](https://huggingface.co/datasets/super_glue/viewer/copa), produced with Google Translate; translation quality scores for the Thai sentences were computed with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307).
### Languages
- EN
- TH | 359 | [
[
-0.003124237060546875,
-0.036376953125,
0.0033359527587890625,
0.04412841796875,
-0.058685302734375,
0.020111083984375,
-0.007007598876953125,
-0.029205322265625,
0.04486083984375,
0.04229736328125,
-0.033111572265625,
-0.06829833984375,
-0.042877197265625,
... |
imnaveenk/earrings | 2023-06-14T04:50:46.000Z | [
"region:us"
] | imnaveenk | null | null | 0 | 32 | 2023-06-13T08:57:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 107545898.846
num_examples: 1626
download_size: 91556390
dataset_size: 107545898.846
---
# Dataset Card for "earrings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 396 | [
[
-0.0250244140625,
-0.0277252197265625,
0.01084136962890625,
0.01432037353515625,
-0.0273590087890625,
-0.0003399848937988281,
0.0173187255859375,
-0.021148681640625,
0.06817626953125,
0.0306854248046875,
-0.06256103515625,
-0.057708740234375,
-0.043365478515625,... |
open-llm-leaderboard/results | 2023-10-30T04:12:22.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 14 | 32 | 2023-06-19T15:15:24 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
KaiLv/UDR_BREAK | 2023-06-21T12:23:29.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 32 | 2023-06-21T12:23:18 | ---
dataset_info:
features:
- name: question_id
dtype: string
- name: question_text
dtype: string
- name: decomposition
dtype: string
- name: operators
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 12757200
num_examples: 44321
- name: validation
num_bytes: 2231632
num_examples: 7760
- name: test
num_bytes: 894558
num_examples: 8069
download_size: 5175505
dataset_size: 15883390
---
# Dataset Card for "UDR_BREAK"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 644 | [
[
-0.036651611328125,
-0.034454345703125,
0.00783538818359375,
0.0169830322265625,
-0.0136260986328125,
0.0148468017578125,
0.028594970703125,
-0.01515960693359375,
0.050384521484375,
0.0291595458984375,
-0.0517578125,
-0.0379638671875,
-0.03131103515625,
-0.0... |
KaiLv/UDR_MR | 2023-06-21T12:42:19.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 32 | 2023-06-21T12:42:08 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1164193
num_examples: 8662
- name: test
num_bytes: 266849
num_examples: 2000
- name: debug
num_bytes: 672162
num_examples: 5000
download_size: 1379605
dataset_size: 2103204
---
# Dataset Card for "UDR_MR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 537 | [
[
-0.0408935546875,
-0.0137939453125,
0.00562286376953125,
0.0020694732666015625,
-0.01268768310546875,
0.007335662841796875,
0.0257720947265625,
-0.00496673583984375,
0.055755615234375,
0.0308074951171875,
-0.056640625,
-0.045806884765625,
-0.03790283203125,
... |
kjj0/4chanpol-openaimod | 2023-06-23T21:28:11.000Z | [
"arxiv:2001.07487",
"region:us"
] | kjj0 | null | null | 1 | 32 | 2023-06-23T21:08:52 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sexual
dtype: float64
- name: hate
dtype: float64
- name: violence
dtype: float64
- name: self-harm
dtype: float64
- name: sexual/minors
dtype: float64
- name: hate/threatening
dtype: float64
- name: violence/graphic
dtype: float64
splits:
- name: train
num_bytes: 23614214277
num_examples: 114647404
download_size: 14061193653
dataset_size: 23614214277
---
# Dataset Card for "kjj0/4chanpol-openaimod"
This dataset contains 114M unique posts made between June 2016 and November 2019.
This is a variant of the dataset provided by [Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board](https://arxiv.org/abs/2001.07487).
We have deduplicated posts and stripped metadata to create an easily accessible collection of unique texts.
We have also provided OpenAI moderation scores. A variant without these scores can be found at [kjj0/4chanpol](https://huggingface.co/datasets/kjj0/4chanpol).
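A common use of the per-category scores is filtering posts below a moderation threshold. A stdlib-only sketch on toy rows (the 0.5 threshold is illustrative, not a recommendation):

```python
CATEGORIES = ["sexual", "hate", "violence", "self-harm",
              "sexual/minors", "hate/threatening", "violence/graphic"]

# Toy rows mirroring the dataset's schema (text plus one float per category)
rows = [
    {"text": "post a", **{c: 0.01 for c in CATEGORIES}},
    {"text": "post b", **{c: 0.01 for c in CATEGORIES}, "hate": 0.93},
]

def is_clean(row, threshold=0.5):
    # Keep a post only if every moderation score is below the threshold
    return all(row[c] < threshold for c in CATEGORIES)

clean = [r["text"] for r in rows if is_clean(r)]
print(clean)  # ['post a']
```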
```
@inproceedings{papasavva2020raiders,
title={Raiders of the lost kek: 3.5 years of augmented 4chan posts from the politically incorrect board},
author={Papasavva, Antonis and Zannettou, Savvas and De Cristofaro, Emiliano and Stringhini, Gianluca and Blackburn, Jeremy},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={14},
pages={885--894},
year={2020}
}
``` | 1,483 | [
[
-0.032012939453125,
-0.046905517578125,
0.00901031494140625,
0.01702880859375,
-0.04083251953125,
-0.002353668212890625,
0.0026149749755859375,
-0.0148162841796875,
0.05181884765625,
0.05206298828125,
-0.051666259765625,
-0.042022705078125,
-0.04364013671875,
... |
MoritzLaurer/sentiment_economy_news | 2023-06-28T10:28:33.000Z | [
"region:us"
] | MoritzLaurer | null | null | 2 | 32 | 2023-06-28T10:28:19 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: string
- name: articleid
dtype: string
- name: relevance
dtype: string
- name: positivity
dtype: string
- name: split
dtype: string
- name: positivity_rounded
dtype: string
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 5122725
num_examples: 3000
- name: test
num_bytes: 653059
num_examples: 382
- name: train_sample
num_bytes: 1684685
num_examples: 1000
- name: train_sample_numeric
num_bytes: 1720504
num_examples: 1000
download_size: 5611673
dataset_size: 9180973
---
# Dataset Card for "sentiment_economy_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 831 | [
[
-0.044464111328125,
-0.02294921875,
0.0204925537109375,
0.032501220703125,
-0.0282440185546875,
-0.01044464111328125,
-0.0004258155822753906,
-0.0004811286926269531,
0.0667724609375,
0.0272064208984375,
-0.055572509765625,
-0.0653076171875,
-0.043731689453125,
... |
TrainingDataPro/facial-emotion-recognition-dataset | 2023-09-14T16:40:22.000Z | [
"task_categories:image-classification",
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of images capturing people displaying 7 distinct emotions
(anger, contempt, disgust, fear, happiness, sadness and surprise).
Each image in the dataset represents one of these specific emotions,
enabling researchers and machine learning practitioners to study and develop
models for emotion recognition and analysis.
The images encompass a diverse range of individuals, including different
genders, ethnicities, and age groups*. The dataset aims to provide
a comprehensive representation of human emotions, allowing for a wide range of
use cases. | @InProceedings{huggingface:dataset,
title = {facial-emotion-recognition-dataset},
author = {TrainingDataPro},
year = {2023}
} | 3 | 32 | 2023-07-19T10:44:09 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- image-to-image
tags:
- code
dataset_info:
features:
- name: set_id
dtype: int32
- name: neutral
dtype: image
- name: anger
dtype: image
- name: contempt
dtype: image
- name: disgust
dtype: image
- name: fear
dtype: image
- name: happy
dtype: image
- name: sad
dtype: image
- name: surprised
dtype: image
- name: age
dtype: int8
- name: gender
dtype: string
- name: country
dtype: string
splits:
- name: train
num_bytes: 22981
num_examples: 19
download_size: 453786356
dataset_size: 22981
---
# Facial Emotion Recognition Dataset
The dataset consists of images capturing people displaying **7 distinct emotions** (*anger, contempt, disgust, fear, happiness, sadness and surprise*). Each image in the dataset represents one of these specific emotions, enabling researchers and machine learning practitioners to study and develop models for emotion recognition and analysis.
The images encompass a diverse range of individuals, including different *genders, ethnicities, and age groups*. The dataset aims to provide a comprehensive representation of human emotions, allowing for a wide range of use cases.
### The dataset's possible applications:
- automatic emotion detection
- mental health analysis
- artificial intelligence (AI) and computer vision
- entertainment industries
- advertising and market research
- security and surveillance

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-emotion-recognition-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **images**: includes folders corresponding to people and containing images with 8 different impersonated emotions, each file is named according to the expressed emotion
- **.csv** file: contains information about people in the dataset
### Emotions in the dataset:
- anger
- contempt
- disgust
- fear
- happy
- sad
- surprised
### File with the extension .csv
includes the following information for each set of media files:
- **set_id**: id of the set of images,
- **gender**: gender of the person,
- **age**: age of the person,
- **country**: country of the person
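Assuming the .csv is comma-separated with exactly these columns, it can be read with the standard library alone; a minimal sketch on a hypothetical in-memory sample:

```python
import csv
import io

# Hypothetical sample mirroring the documented columns
sample = """set_id,gender,age,country
1,female,29,Spain
2,male,41,India
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Group set ids by gender, converting set_id to int
by_gender = {}
for r in rows:
    by_gender.setdefault(r["gender"], []).append(int(r["set_id"]))
print(by_gender)  # {'female': [1], 'male': [2]}
```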
# Images for facial emotion recognition can be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-emotion-recognition-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 3,119 | [
[
-0.037445068359375,
-0.0219879150390625,
0.00333404541015625,
0.02239990234375,
-0.01788330078125,
0.005764007568359375,
-0.0033359527587890625,
-0.02935791015625,
0.01654052734375,
0.0269927978515625,
-0.057342529296875,
-0.052642822265625,
-0.044769287109375,
... |
nisaar/Constitution_Of_India_Instruction_Set | 2023-07-27T09:11:15.000Z | [
"license:apache-2.0",
"region:us"
] | nisaar | null | null | 2 | 32 | 2023-07-27T09:04:01 | ---
license: apache-2.0
---
# Indian Legal Case Reasoning Dataset
This dataset is a collection of legal reasoning tasks based on Indian case laws. Each entry in the dataset consists of an instruction for a legal reasoning task, the context necessary to complete the task, the correct response to the task, and a formatted prompt to guide the response. The dataset is designed for training and evaluating models on a variety of legal reasoning tasks, including case analysis, issue identification, legal argument formulation, and precedent identification.
## Data Details
- **Instruction**: A string field containing the instruction for the task.
- **Input**: A string field providing the context necessary to complete the task.
- **Output**: A string field containing the correct response to the task.
- **Prompt**: A string field containing a formatted version of the instruction and input, intended to guide the response.
## Dataset Use
The dataset can be used to train a model for a variety of legal reasoning tasks. Given the instruction and input, the model must generate the correct output. Performance could be evaluated based on the accuracy of the generated output.
## Languages
The text in the dataset is in English.
## Data Splits
The dataset provided does not have predefined splits.
## Dataset Creation
The dataset was curated to provide a resource for training and evaluating models on a variety of legal reasoning tasks. The entries in the dataset represent a diverse range of legal reasoning tasks and are based on actual case laws, providing a realistic and practical dataset for legal reasoning tasks.
## Source Data
The dataset is based on case laws from the Indian legal system.
## Considerations for Using the Data
The dataset contains real-world legal texts and should be used in accordance with all relevant legal and ethical guidelines. Users should be aware that legal texts may contain sensitive information and should use the dataset responsibly.
--- | 1,998 | [
[
0.00791168212890625,
-0.036468505859375,
0.0300750732421875,
0.0184326171875,
-0.0482177734375,
-0.0177154541015625,
-0.0001245737075805664,
0.0022487640380859375,
-0.0011148452758789062,
0.06048583984375,
-0.045654296875,
-0.04461669921875,
-0.03155517578125,
... |
illuin/small_african_accented_french_test | 2023-08-04T15:59:27.000Z | [
"region:us"
] | illuin | null | null | 1 | 32 | 2023-08-04T15:32:20 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: path
dtype: string
splits:
- name: test
num_bytes: 97487354.0
num_examples: 1000
download_size: 97330196
dataset_size: 97487354.0
---
# Dataset Card for "small_african_accented_french_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 488 | [
[
-0.055938720703125,
-0.03643798828125,
0.00698089599609375,
0.0156402587890625,
-0.0002434253692626953,
-0.0150299072265625,
-0.007587432861328125,
-0.0119781494140625,
0.0753173828125,
0.03204345703125,
-0.0479736328125,
-0.0406494140625,
-0.0413818359375,
... |
dkoterwa/kor_nli_simcse | 2023-08-30T07:39:32.000Z | [
"region:us"
] | dkoterwa | null | null | 0 | 32 | 2023-08-09T10:28:07 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: entailment
dtype: string
- name: contradiction
dtype: string
splits:
- name: train
num_bytes: 90132700
num_examples: 413837
- name: valid
num_bytes: 10572025
num_examples: 48686
- name: test
num_bytes: 5289636
num_examples: 24345
download_size: 64195317
dataset_size: 105994361
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
# Korean Natural Language Inference (KorNLI) for SimCSE Dataset
For a full dataset description, please see the GitHub repository prepared by the authors of the paper: [LINK](https://github.com/kakaobrain/kor-nlu-datasets) <br>
<br>
**This dataset was prepared by converting the KorNLI dataset.** For every unique premise, I collected its entailment hypotheses (positive examples) and contradiction hypotheses (negative examples).
These changes were made so the data can be used with the SimCSE method.
**I additionally share the code I used to convert the KorNLI dataset, to make everything clearer.**
```
from datasets import load_dataset
from tqdm import tqdm
import pandas as pd
def create_trios(df, save_path):
list_of_examples = []
unique_premises = df.drop_duplicates("premise")["premise"]
for premise in tqdm(unique_premises):
premise_dataset = df[(df["premise"] == premise)]
positive_examples = premise_dataset[premise_dataset["label"] == "entailment"]["hypothesis"]
negative_examples = premise_dataset[premise_dataset["label"] == "contradiction"]["hypothesis"]
if len(positive_examples) == 0 or len(negative_examples) == 0:
continue
for positive_example in positive_examples:
for negative_example in negative_examples:
list_of_examples.append((premise, positive_example, negative_example))
examples_df = pd.DataFrame(list_of_examples, columns=["premise", "entailment", "contradiction"])
examples_df.to_csv(save_path)
if __name__ == "__main__":
dataset1 = load_dataset("kor_nli", "snli")["train"]
dataset2 = load_dataset("kor_nli", "multi_nli")["train"]
df_1 = pd.DataFrame(dataset1)
df_2 = pd.DataFrame(dataset2)
df_full = pd.concat([df_1, df_2])
df_full.dropna(inplace=True)
df_full["label"] = ["neutral" if label == 1 else "contradiction" if label == 2 else "entailment" for label in df_full["label"]]
create_trios(df_full, <insert your path>)
```
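The pairing logic above — each entailment of a premise combined with each of its contradictions — can be illustrated without pandas. A toy sketch (the example sentences are hypothetical) showing how one premise with two entailments and two contradictions yields four trios:

```python
from itertools import product

# (premise, hypothesis, label) rows, as in KorNLI after label mapping
rows = [
    ("A man plays guitar.", "Someone makes music.", "entailment"),
    ("A man plays guitar.", "A musician performs.", "entailment"),
    ("A man plays guitar.", "The room is silent.", "contradiction"),
    ("A man plays guitar.", "Nobody is present.", "contradiction"),
]

def build_trios(rows):
    trios = []
    for premise in {p for p, _, _ in rows}:
        pos = [h for p, h, l in rows if p == premise and l == "entailment"]
        neg = [h for p, h, l in rows if p == premise and l == "contradiction"]
        if not pos or not neg:
            continue  # skip premises missing either example type
        for e, c in product(pos, neg):
            trios.append((premise, e, c))
    return trios

trios = build_trios(rows)
print(len(trios))  # 2 entailments x 2 contradictions = 4 trios
```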
**How to download**
```
from datasets import load_dataset
data = load_dataset("dkoterwa/kor_nli_simcse")
```
**If you use this dataset for research, please cite this paper:**
```
@article{ham2020kornli,
title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
journal={arXiv preprint arXiv:2004.03289},
year={2020}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,214 | [
[
-0.032989501953125,
-0.0360107421875,
0.0301666259765625,
0.03375244140625,
-0.01467132568359375,
-0.01189422607421875,
-0.0198211669921875,
-0.0055084228515625,
0.0028324127197265625,
0.05059814453125,
-0.0419921875,
-0.052001953125,
-0.0198516845703125,
0.... |
philschmid/meta-shepherd-human-data | 2023-08-23T14:38:43.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | philschmid | null | null | 3 | 32 | 2023-08-23T14:36:13 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: dataset
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: feedback
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2588873
num_examples: 1317
download_size: 1319368
dataset_size: 2588873
license: cc-by-nc-4.0
---
# Dataset Card for "meta-shepherd-human-data"
Original Dataset: https://github.com/facebookresearch/Shepherd
## Example
```bash
### Question: Where on the planet would you expect a bald eagle to live?
Here are the options:
Option 1: colorado
Option 2: outside
Option 3: protection
Option 4: zoo exhibit
Option 5: world
Please choose the correct option and justify your choice:
### Answer: Bald eagles are found throughout most of North America, from Alaska and Canada south to Mexico. They are also found in some parts of Europe and Asia. In the United States, bald eagles are usually seen near large bodies of water where they hunt for fish. They build their nests high up in tall trees near rivers, lakes, marshes, and coastal areas. Because of hunting and loss of habitat, bald eagle populations declined dramatically during the twentieth century. Conservation efforts have helped increase their numbers, and they were removed from the U.S. Endangered Species List in 2007.
### Feedback: The answer adds extra information, last 2 sentences are unnecessary. It does not state which options specifically are where bald eagles might live, like outside, Colorado, Zoo Exhibit, and World.
``` | 1,593 | [
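Records in this `### Question / ### Answer / ### Feedback` layout can be split into fields with a small stdlib helper — a sketch, not part of the original release:

```python
import re

record = """### Question: What color is the sky?
### Answer: The sky is blue on a clear day.
### Feedback: Correct, though it could mention sunsets."""

def parse_record(text):
    # Split on lines beginning "### <Name>: "; the captured group keeps
    # the field name in the result list, which we pair with its body.
    parts = re.split(r"^### (\w+): ", text, flags=re.M)
    it = iter(parts[1:])  # parts[0] is the empty prefix before the first match
    return {name: body.strip() for name, body in zip(it, it)}

fields = parse_record(record)
print(sorted(fields))  # ['Answer', 'Feedback', 'Question']
```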
[
-0.057373046875,
-0.05609130859375,
0.043304443359375,
0.0163726806640625,
-0.01947021484375,
-0.00991058349609375,
0.01483154296875,
-0.036468505859375,
0.041839599609375,
0.040679931640625,
-0.0723876953125,
-0.058013916015625,
-0.02825927734375,
0.0284423... |
lamini/spider_text_to_sql | 2023-08-28T06:57:19.000Z | [
"region:us"
] | lamini | null | null | 5 | 32 | 2023-08-27T01:09:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9388343
num_examples: 7000
- name: validation
num_bytes: 1090039
num_examples: 1034
download_size: 1054303
dataset_size: 10478382
---
# Dataset Card for "spider_text_to_sql"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 600 | [
[
-0.0276641845703125,
-0.02178955078125,
0.019195556640625,
0.01904296875,
-0.024139404296875,
0.004512786865234375,
0.0218353271484375,
-0.011688232421875,
0.06610107421875,
0.037200927734375,
-0.048431396484375,
-0.05126953125,
-0.0452880859375,
0.005302429... |
miazhao/prm800k_processed_preference | 2023-09-04T00:10:16.000Z | [
"region:us"
] | miazhao | null | null | 0 | 32 | 2023-09-04T00:10:15 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: responses
sequence: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 23805614
num_examples: 22036
download_size: 9396871
dataset_size: 23805614
---
# Dataset Card for "prm800k_processed_preference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.047698974609375,
-0.00893402099609375,
0.015472412109375,
0.015472412109375,
-0.0175628662109375,
-0.023040771484375,
0.007274627685546875,
0.006145477294921875,
0.07147216796875,
0.05780029296875,
-0.060333251953125,
-0.0419921875,
-0.0384521484375,
-0.0... |
atmallen/mmlu_binary | 2023-09-19T05:12:16.000Z | [
"region:us"
] | atmallen | null | null | 0 | 32 | 2023-09-14T04:47:27 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int32
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: validation
num_bytes: 653717
num_examples: 1218
- name: test
num_bytes: 5979564
num_examples: 11526
download_size: 3456524
dataset_size: 6633281
---
# Dataset Card for "mmlu_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 810 | [
[
-0.03875732421875,
-0.0232391357421875,
0.01155853271484375,
0.0131683349609375,
-0.0208892822265625,
0.004329681396484375,
0.0300750732421875,
-0.01432037353515625,
0.0673828125,
0.020263671875,
-0.062744140625,
-0.049224853515625,
-0.045196533203125,
-0.00... |
ShrinivasSK/small-hi-kn2 | 2023-09-30T17:19:38.000Z | [
"region:us"
] | ShrinivasSK | null | null | 0 | 32 | 2023-09-30T17:19:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
BramVanroy/xlwic_wn | 2023-10-02T09:19:20.000Z | [
"task_categories:text-classification",
"language:bg",
"language:zh",
"language:hr",
"language:da",
"language:nl",
"language:et",
"language:fa",
"language:ja",
"language:ko",
"license:cc-by-nc-4.0",
"region:us"
] | BramVanroy | null | null | 1 | 32 | 2023-10-02T07:48:29 | ---
license: cc-by-nc-4.0
language:
- bg
- zh
- hr
- da
- nl
- et
- fa
- ja
- ko
task_categories:
- text-classification
pretty_name: Multilingual Word-in-Context (WordNet)
configs:
- config_name: default
sep: "\t"
data_files:
- split: valid
path: "**/*_valid.csv"
- split: test
path: "**/*_test.csv"
- config_name: bg
sep: "\t"
data_files:
- split: valid
path: "bulgarian_bg/bg_valid.csv"
- split: test
path: "bulgarian_bg/bg_test.csv"
- config_name: zh
sep: "\t"
data_files:
- split: valid
path: "chinese_zh/zh_valid.csv"
- split: test
path: "chinese_zh/zh_test.csv"
- config_name: hr
sep: "\t"
data_files:
- split: valid
path: "croatian_hr/hr_valid.csv"
- split: test
path: "croatian_hr/hr_test.csv"
- config_name: da
sep: "\t"
data_files:
- split: valid
path: "danish_da/da_valid.csv"
- split: test
path: "danish_da/da_test.csv"
- config_name: nl
sep: "\t"
data_files:
- split: valid
path: "dutch_nl/nl_valid.csv"
- split: test
path: "dutch_nl/nl_test.csv"
- config_name: et
sep: "\t"
data_files:
- split: valid
path: "estonian_et/et_valid.csv"
- split: test
path: "estonian_et/et_test.csv"
- config_name: fa
sep: "\t"
data_files:
- split: valid
path: "farsi_fa/fa_valid.csv"
- split: test
path: "farsi_fa/fa_test.csv"
- config_name: ja
sep: "\t"
data_files:
- split: valid
path: "japanese_ja/ja_valid.csv"
- split: test
path: "japanese_ja/ja_test.csv"
- config_name: ko
sep: "\t"
data_files:
- split: valid
path: "korean_ko/ko_valid.csv"
- split: test
path: "korean_ko/ko_test.csv"
---
# Multilingual Word-in-Context (WordNet)
Refer to the [documentation](https://pilehvar.github.io/xlwic/) and [paper](https://aclanthology.org/2020.emnlp-main.584/) for more information. | 1,850 | [
[
-0.033599853515625,
-0.038970947265625,
0.018463134765625,
0.006954193115234375,
-0.0008969306945800781,
-0.006557464599609375,
-0.005939483642578125,
-0.0406494140625,
0.053314208984375,
0.04498291015625,
-0.03912353515625,
-0.041961669921875,
-0.04025268554687... |
tyzhu/eval_tag_squad_v8 | 2023-10-05T16:55:19.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 32 | 2023-10-03T08:07:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 13020105
num_examples: 10570
- name: validation
num_bytes: 13020105
num_examples: 10570
download_size: 5664930
dataset_size: 26040210
---
# Dataset Card for "eval_tag_squad_v8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 720 | [
[
-0.036651611328125,
-0.024017333984375,
0.0029048919677734375,
0.0169677734375,
-0.0036830902099609375,
0.031524658203125,
0.0223846435546875,
-0.01363372802734375,
0.047637939453125,
0.0187835693359375,
-0.0706787109375,
-0.0439453125,
-0.0192413330078125,
... |
tyzhu/eval_tag_squad_v9 | 2023-10-05T16:55:32.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 32 | 2023-10-03T08:08:38 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 13273785
num_examples: 10570
- name: validation
num_bytes: 13273785
num_examples: 10570
download_size: 5722530
dataset_size: 26547570
---
# Dataset Card for "eval_tag_squad_v9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 720 | [
[
-0.0310821533203125,
-0.016845703125,
0.01097869873046875,
0.0171661376953125,
-0.00716400146484375,
0.0386962890625,
0.0189971923828125,
-0.00905609130859375,
0.054290771484375,
0.0226287841796875,
-0.06463623046875,
-0.04351806640625,
-0.019622802734375,
-... |
flozi00/classify-llm-tasks-german | 2023-10-27T09:26:53.000Z | [
"task_categories:text-classification",
"language:de",
"region:us"
] | flozi00 | null | null | 0 | 32 | 2023-10-06T09:55:08 | ---
language:
- de
task_categories:
- text-classification
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3162
num_examples: 66
download_size: 0
dataset_size: 3162
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "classify-llm-tasks-german"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.033294677734375,
-0.02593994140625,
0.025604248046875,
0.018096923828125,
-0.0023040771484375,
-0.006099700927734375,
0.0001093149185180664,
-0.0182952880859375,
0.0418701171875,
0.032196044921875,
-0.06805419921875,
-0.07159423828125,
-0.060699462890625,
... |
facat/sci-llm-part-rev | 2023-10-07T13:55:46.000Z | [
"region:us"
] | facat | null | null | 0 | 32 | 2023-10-07T13:52:52 | ---
configs:
- config_name: default
data_files:
- split: gpt1
path: data/gpt1-*
- split: gpt2
path: data/gpt2-*
- split: gpt3
path: data/gpt3-*
- split: gpt4
path: data/gpt4-*
- split: gpt5
path: data/gpt5-*
- split: gpt6
path: data/gpt6-*
- split: han_40k
path: data/han_40k-*
- split: test
path: data/test-*
- split: test2
path: data/test2-*
dataset_info:
features:
- name: prompt
dtype: string
- name: context
dtype: string
- name: chosen
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
splits:
- name: gpt1
num_bytes: 130420316
num_examples: 22113
- name: gpt2
num_bytes: 264545680
num_examples: 44859
- name: gpt3
num_bytes: 98018603
num_examples: 16648
- name: gpt4
num_bytes: 309111447
num_examples: 52813
- name: gpt5
num_bytes: 99277151
num_examples: 16795
- name: gpt6
num_bytes: 110054529
num_examples: 18325
- name: han_40k
num_bytes: 236235210
num_examples: 40807
- name: test
num_bytes: 2214599
num_examples: 500
- name: test2
num_bytes: 1111116
num_examples: 200
download_size: 608607150
dataset_size: 1250988651
---
# Dataset Card for "sci-llm-part-rev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,460 | [
[
-0.030181884765625,
-0.0008873939514160156,
0.020904541015625,
0.01873779296875,
-0.0252532958984375,
0.0078277587890625,
0.031005859375,
-0.008270263671875,
0.0687255859375,
0.03643798828125,
-0.08038330078125,
-0.046142578125,
-0.0242767333984375,
0.003339... |
Sharka/CIVQA_easyocr_encode_train_32 | 2023-10-11T06:08:18.000Z | [
"region:us"
] | Sharka | null | null | 0 | 32 | 2023-10-11T04:37:51 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: bbox
dtype:
array2_d:
shape:
- 512
- 4
dtype: int32
- name: attention_mask
sequence: int32
- name: image
dtype:
array3_d:
shape:
- 3
- 224
- 224
dtype: int32
- name: start_positions
dtype: int32
- name: end_positions
dtype: int32
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 89021492745
num_examples: 143765
download_size: 913954164
dataset_size: 89021492745
---
# Dataset Card for "CIVQA_easyocr_encode_train_32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 816 | [
[
-0.039794921875,
0.01561737060546875,
0.014007568359375,
0.0211944580078125,
-0.00832366943359375,
-0.002353668212890625,
0.01256561279296875,
0.01473236083984375,
0.027374267578125,
0.0263519287109375,
-0.04034423828125,
-0.0460205078125,
-0.040191650390625,
... |
open-phi/ft-sample-mistral | 2023-10-18T04:39:39.000Z | [
"region:us"
] | open-phi | null | null | 13 | 32 | 2023-10-12T05:00:48 | ---
dataset_info:
features:
- name: topic
dtype: string
- name: model
dtype: string
- name: concepts
sequence: string
- name: outline
sequence: string
- name: markdown
dtype: string
splits:
- name: train
num_bytes: 2856370189
num_examples: 23650
download_size: 937886508
dataset_size: 2856370189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ft-sample-mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.0367431640625,
-0.0229339599609375,
0.00835418701171875,
0.0189361572265625,
-0.0169830322265625,
-0.004940032958984375,
0.02691650390625,
-0.00881195068359375,
0.048095703125,
0.02459716796875,
-0.059356689453125,
-0.03961181640625,
-0.03778076171875,
-0... |
jjonhwa/SECOND_KOWIKI_RETRIEVE_200 | 2023-10-17T01:46:27.000Z | [
"region:us"
] | jjonhwa | null | null | 0 | 32 | 2023-10-16T09:48:15 | ---
dataset_info:
features:
- name: ctxs
list:
- name: score
dtype: float64
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 135949554
num_examples: 15504
download_size: 73942447
dataset_size: 135949554
---
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 478 | [
[
-0.044647216796875,
-0.01036834716796875,
0.0153045654296875,
0.01715087890625,
-0.01910400390625,
-0.00047588348388671875,
0.0123291015625,
-0.01209259033203125,
0.04620361328125,
0.046051025390625,
-0.064697265625,
-0.031951904296875,
-0.0479736328125,
-0.... |
expertai/BUSTER | 2023-10-27T20:56:26.000Z | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"finance",
"region:us"
] | expertai | BUSTER is an Entity Recognition dataset consisting of 3779 manually annotated documents on financial transactions.
Documents were selected using EDGAR (Electronic Data Gathering, Analysis, and Retrieval system) from the
U.S. Securities and Exchange Commission (SEC).
The corpus focuses on the main actors involved in business transactions.
Overall, there are three families of entities: Parties, Advisors and Generic information, for a total of 6 annotated
entity types.
We also released a corpus of 6196 automatically annotated documents. | Accepted at EMNLP 2023 - Industry Track.
TBA | 1 | 32 | 2023-10-18T13:03:49 | ---
license: apache-2.0
task_categories:
- token-classification
language:
- en
tags:
- finance
pretty_name: buster
size_categories:
- 10K<n<100K
---
dataset_info:
config_name: BUSTER
features:
- name: document_id
dtype: string
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': B-Parties.BUYING_COMPANY
'2': I-Parties.BUYING_COMPANY
'3': B-Parties.SELLING_COMPANY
'4': I-Parties.SELLING_COMPANY
'5': B-Parties.ACQUIRED_COMPANY
'6': I-Parties.ACQUIRED_COMPANY
'7': B-Advisors.LEGAL_CONSULTING_COMPANY
'8': I-Advisors.LEGAL_CONSULTING_COMPANY
'9': B-Advisors.GENERIC_CONSULTING_COMPANY
'10': I-Advisors.GENERIC_CONSULTING_COMPANY
'11': B-Generic_Info.ANNUAL_REVENUES
'12': I-Generic_Info.ANNUAL_REVENUES
splits:
- name: FOLD_1
num_bytes: 11508541
num_examples: 753
- name: FOLD_2
num_bytes: 11409488
num_examples: 759
- name: FOLD_3
num_bytes: 11524994
num_examples: 758
- name: FOLD_4
num_bytes: 11714536
num_examples: 755
- name: FOLD_5
num_bytes: 11543314
num_examples: 754
- name: SILVER
num_bytes: 94702584
num_examples: 6196
download_size: 20824877
dataset_size: 152403457
---
# Dataset Card for BUSTER
BUSiness Transaction Entity Recognition dataset.
BUSTER is an Entity Recognition (ER) benchmark for entities related to business transactions. It consists of a gold corpus of
3779 manually annotated documents on financial transactions that were randomly divided into 5 folds,
plus an additional silver corpus of 6196 automatically annotated documents that were created by the model-optimized RoBERTa system. | 1,790 | [
[
-0.04583740234375,
-0.044097900390625,
0.01180267333984375,
0.0056610107421875,
-0.0211029052734375,
0.0024318695068359375,
0.007282257080078125,
-0.029083251953125,
0.032318115234375,
0.03375244140625,
-0.0233001708984375,
-0.054718017578125,
-0.05389404296875,... |
jjonhwa/SECOND_KOWIKI_RETRIEVE_200_V2 | 2023-10-20T03:19:31.000Z | [
"region:us"
] | jjonhwa | null | null | 0 | 32 | 2023-10-20T03:19:15 | ---
dataset_info:
features:
- name: ctxs
list:
- name: score
dtype: float64
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 141924897
num_examples: 15504
download_size: 75209045
dataset_size: 141924897
---
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_200_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 481 | [
[
-0.035614013671875,
-0.008209228515625,
0.016448974609375,
0.0152130126953125,
-0.02337646484375,
-0.006561279296875,
0.0179595947265625,
-0.0171966552734375,
0.04608154296875,
0.047088623046875,
-0.0653076171875,
-0.026947021484375,
-0.047393798828125,
-0.0... |
aiface/vivos_mms | 2023-10-21T06:55:19.000Z | [
"region:us"
] | aiface | null | null | 0 | 32 | 2023-10-21T06:52:05 | ---
dataset_info:
features:
- name: input_values
sequence: float32
- name: input_length
dtype: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3443268452
num_examples: 11660
- name: test
num_bytes: 172149180
num_examples: 760
download_size: 3175004057
dataset_size: 3615417632
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "vivos_mms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 636 | [
[
-0.0306549072265625,
-0.0003879070281982422,
-0.0009765625,
0.03778076171875,
-0.0241241455078125,
-0.00308990478515625,
0.0268096923828125,
0.00939178466796875,
0.061737060546875,
0.007457733154296875,
-0.0703125,
-0.046478271484375,
-0.034423828125,
-0.001... |
jay401521/label0 | 2023-10-23T12:25:55.000Z | [
"region:us"
] | jay401521 | null | null | 0 | 32 | 2023-10-21T09:55:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: int64
- name: domain
dtype: string
- name: label
dtype: int64
- name: rank
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 922790.3333333334
num_examples: 10007
download_size: 441033
dataset_size: 922790.3333333334
---
# Dataset Card for "label0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 588 | [
[
-0.05157470703125,
-0.0042724609375,
0.01264190673828125,
0.0190887451171875,
-0.003971099853515625,
-0.0005564689636230469,
0.0279541015625,
-0.02532958984375,
0.06787109375,
0.0280609130859375,
-0.051849365234375,
-0.052337646484375,
-0.04730224609375,
-0.... |
sayan1101/sft_test_custom_dataset_RLHF | 2023-10-24T05:59:21.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 32 | 2023-10-23T09:11:29 | ---
dataset_info:
features:
- name: label
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 34685
num_examples: 51
- name: test
num_bytes: 34685
num_examples: 51
- name: valid
num_bytes: 34685
num_examples: 51
download_size: 86937
dataset_size: 104055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
# Dataset Card for "sft_test_custom_dataset_RLHF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 678 | [
[
-0.0450439453125,
-0.043182373046875,
0.005069732666015625,
0.0189971923828125,
-0.01666259765625,
0.01390838623046875,
0.027557373046875,
-0.006099700927734375,
0.0643310546875,
0.042144775390625,
-0.07208251953125,
-0.04119873046875,
-0.0201416015625,
-0.0... |
advancedcv/Food500Cap_test | 2023-10-24T20:01:10.000Z | [
"region:us"
] | advancedcv | null | null | 0 | 32 | 2023-10-24T19:59:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
xin1997/vulfix_real_deduplicated_70_10_20 | 2023-10-25T04:16:54.000Z | [
"region:us"
] | xin1997 | null | null | 0 | 32 | 2023-10-25T04:16:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
midas/kp20k | 2023-09-25T05:14:59.000Z | [
"region:us"
] | midas | \ | @InProceedings{meng-EtAl:2017:Long,
author = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
title = {Deep Keyphrase Generation},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {582--592},
url = {http://aclweb.org/anthology/P17-1054}
} | 2 | 31 | 2022-03-02T23:29:22 | A dataset for benchmarking keyphrase extraction and generation techniques from abstracts of English scientific papers. For more details about the dataset please refer to the original paper - [http://memray.me/uploads/acl17-keyphrase-generation.pdf](http://memray.me/uploads/acl17-keyphrase-generation.pdf).
Data source - [https://github.com/memray/seq2seq-keyphrase](https://github.com/memray/seq2seq-keyphrase)
## Dataset Summary
## Dataset Structure
## Dataset Statistics
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B stands for the beginning of a keyphrase, I stands for inside a keyphrase, and O marks words that are not part of any keyphrase.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
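The extractive/abstractive distinction simply reflects whether a keyphrase occurs verbatim in the tokenized document. A rough sketch of that membership check (illustrative only, not the exact procedure used to construct the dataset):

```python
def is_present(doc_tokens, keyphrase):
    """Return True if the keyphrase occurs as a contiguous token span."""
    phrase = keyphrase.lower().split()
    doc = [t.lower() for t in doc_tokens]
    return any(doc[i:i + len(phrase)] == phrase
               for i in range(len(doc) - len(phrase) + 1))

doc = "we propose a deep keyphrase generation model".split()
print(is_present(doc, "keyphrase generation"))  # True  -> extractive/present
print(is_present(doc, "sequence to sequence"))  # False -> abstractive/absent
```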
### Data Splits
|Split| No. of datapoints |
|--|--|
| Train | 530,809 |
| Test | 20,000|
| Validation | 20,000|
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/kp20k", "raw")
# sample from the train split
print("Sample from training dataset split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation dataset split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/kp20k", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
from datasets import load_dataset

# get the dataset only for keyphrase generation
dataset = load_dataset("midas/kp20k", "generation")
print("Samples for Keyphrase Generation")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
Please cite the works below if you use this dataset in your work.
```
@InProceedings{meng-EtAl:2017:Long,
author = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
title = {Deep Keyphrase Generation},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {582--592},
url = {http://aclweb.org/anthology/P17-1054}
}
@article{mahata2022ldkp,
title={LDKP: A Dataset for Identifying Keyphrases from Long Scientific Documents},
author={Mahata, Debanjan and Agarwal, Navneet and Gautam, Dibya and Kumar, Amardeep and Parekh, Swapnil and Singla, Yaman Kumar and Acharya, Anish and Shah, Rajiv Ratn},
journal={arXiv preprint arXiv:2203.15349},
year={2022}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset
| 6,633 | [
[
-0.00009077787399291992,
-0.02850341796875,
0.023223876953125,
0.0079193115234375,
-0.021636962890625,
0.01241302490234375,
-0.0137939453125,
-0.005908966064453125,
0.0008521080017089844,
0.01004791259765625,
-0.03875732421875,
-0.050140380859375,
-0.03552246093... |
projecte-aina/casum | 2023-09-13T12:49:03.000Z | [
"task_categories:summarization",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-nc-4.0",
"arxiv:2202.06871",
"region:us"
] | projecte-aina | CaSum is a summarization dataset. It is extracted from a newswire corpus crawled from the Catalan News Agency. The corpus consists of 217,735 instances, each composed of a headline and a body. | @misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 31 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- ca
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- summarization
task_ids: []
pretty_name: casum
---
# Dataset Card for CaSum
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Sequence to Sequence Resources for Catalan](https://arxiv.org/pdf/2202.06871.pdf)
- **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es)
### Dataset Summary
CaSum is a summarization dataset. It is extracted from a newswire corpus crawled from the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)). The corpus consists of 217,735 instances, each composed of a headline and a body.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for abstractive summarization. Success on this task is typically measured by achieving a high Rouge score. The [mbart-base-ca-casum](https://huggingface.co/projecte-aina/bart-base-ca-casum) model currently achieves a Rouge score of 41.39.
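As a rough intuition for the metric, Rouge rewards n-gram overlap between a candidate summary and the reference. The toy unigram-recall computation below is illustrative only and is not the official Rouge scorer:

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    """Toy ROUGE-1 recall: overlapping unigrams / reference unigrams."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    return overlap / sum(ref.values())

# 3 of the 4 reference unigrams appear in the candidate -> 0.75
print(rouge1_recall("mapfre preveu ingressar milions",
                    "mapfre preveu guanyar milions"))  # 0.75
```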
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'summary': 'Mapfre preveu ingressar 31.000 milions d’euros al tancament de 2018',
'text': 'L’asseguradora llançarà la seva filial Verti al mercat dels EUA a partir de 2017 ACN Madrid.-Mapfre preveu assolir uns ingressos de 31.000 milions d'euros al tancament de 2018 i destinarà a retribuir els seus accionistes com a mínim el 50% dels beneficis del grup durant el període 2016-2018, amb una rendibilitat mitjana a l’entorn del 5%, segons ha anunciat la companyia asseguradora durant la celebració aquest divendres de la seva junta general d’accionistes. La firma asseguradora també ha avançat que llançarà la seva filial d’automoció i llar al mercat dels EUA a partir de 2017. Mapfre ha recordat durant la junta que va pagar més de 540 milions d'euros en impostos el 2015, amb una taxa impositiva efectiva del 30,4 per cent. La companyia també ha posat en marxa el Pla de Sostenibilitat 2016-2018 i el Pla de Transparència Activa, “que han de contribuir a afermar la visió de Mapfre com a asseguradora global de confiança”, segons ha informat en un comunicat.'
}
```
### Data Fields
- `summary` (str): Summary of the piece of news
- `text` (str): The text of the piece of news
### Data Splits
We split our dataset into train, dev and test splits
- train: 197,735 examples
- validation: 10,000 examples
- test: 10,000 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language. There exist few resources for summarization in Catalan.
### Source Data
#### Initial Data Collection and Normalization
We obtained the headline and corresponding body of each news piece on the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) website and applied the following cleaning pipeline: deduplicating the documents, removing documents with empty attributes, and deleting some boilerplate sentences.
#### Who are the source language producers?
The news portal Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)).
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymization process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of summarization models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by MT4All CEF project and [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### BibTeX citation
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[N/A] | 6,105 | [
[
-0.030029296875,
-0.023590087890625,
0.0038928985595703125,
0.042144775390625,
-0.038330078125,
0.012664794921875,
-0.00418853759765625,
-0.0216217041015625,
0.0634765625,
0.04168701171875,
-0.0233306884765625,
-0.07757568359375,
-0.04833984375,
0.0193023681... |
SetFit/amazon_reviews_multi_zh | 2022-03-23T15:30:49.000Z | [
"region:us"
] | SetFit | null | null | 0 | 31 | 2022-03-13T02:46:40 | # amazon reviews multi chinese
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains only the Chinese-language version and has been reduced to the 3 columns relevant to the SetFit task (plus a 4th column, "label_text").
[
-0.039398193359375,
-0.0277557373046875,
-0.008697509765625,
0.0552978515625,
-0.0222320556640625,
-0.0009045600891113281,
0.0014600753784179688,
-0.039642333984375,
0.04217529296875,
0.0667724609375,
-0.0694580078125,
-0.025604248046875,
0.002353668212890625,
... |
adsabs/WIESP2022-NER | 2023-05-17T19:42:32.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | adsabs | null | null | 6 | 31 | 2022-05-05T18:31:34 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'WIESP2022-NER'
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a json dictionary).
The datasets are formatted similarly to the CONLL2003 format. Each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax]("https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)")
Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format)
The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to the ner_tags, as given by the dictionary in ner_tags.json
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
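To make the IOB2 convention concrete, here is a minimal sketch (not part of the official tooling — real scoring uses seqeval, and the tag names below are invented for illustration) that recovers entity spans from a token/tag pair:

```python
def iob2_spans(tokens, tags):
    """Recover (entity_type, tokens) pairs from an IOB2 tag sequence.

    Simplified sketch: a stray "I-" tag with no matching "B-" is ignored.
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" flushes the last span
        inside = tag.startswith("I-") and tag[2:] == etype
        if not inside:
            if etype is not None:
                spans.append((etype, tokens[start:i]))
            if tag.startswith("B-"):
                start, etype = i, tag[2:]
            else:
                start, etype = None, None
    return spans

print(iob2_spans(
    ["The", "Hubble", "Space", "Telescope", "observed", "M31"],
    ["O", "B-Telescope", "I-Telescope", "I-Telescope", "O", "B-CelestialObject"],
))
# [('Telescope', ['Hubble', 'Space', 'Telescope']), ('CelestialObject', ['M31'])]
```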
## Instructions for Workshop participants:
How to load the data using the Huggingface library:
```python
from datasets import load_dataset
dataset = load_dataset("adsabs/WIESP2022-NER")
```
How to load the data if you cloned the repository locally:
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
- python (as list of dictionaries):
```python
import json
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
wiesp_dev_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```python
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
How to compute your scores on the training data:
1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names).
Requirement to run the scoring scripts:
[NumPy](https://numpy.org/install/)
[scikit-learn](https://scikit-learn.org/stable/install.html)
[seqeval](https://github.com/chakki-works/seqeval#installation)
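For illustration, a properly formatted predictions file can be assembled as follows (a sketch with made-up sample data and a trivial all-"O" baseline in place of a real model; the field names follow the format described above):

```python
import json

# Made-up reference samples mimicking the dataset format described above
# (real references come from WIESP2022-NER-TRAINING.jsonl).
references = [
    {"unique_id": "example-0001",
     "tokens": ["We", "used", "the", "VLA", "."],
     "ner_tags": ["O", "O", "O", "B-Telescope", "O"]},
]

# Predictions must carry over "unique_id" and "tokens" from the reference
# and add the model's tags under "pred_ner_tags".
predictions = [
    {"unique_id": ref["unique_id"],
     "tokens": ref["tokens"],
     "pred_ner_tags": ["O"] * len(ref["tokens"])}  # dummy baseline
    for ref in references
]

# One JSON dictionary per line, as in WIESP2022-NER-DEV-sample-predictions.jsonl.
jsonl_text = "\n".join(json.dumps(p) for p in predictions)
print(jsonl_text)
```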
To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
## File list
```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation
├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing
├── README.MD : this file.
├── tag_definitions.md : short descriptions and examples of the tags used in the task.
└── scoring-scripts/ : scripts used to evaluate submissions.
├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
└── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```
## Cite as
[Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL)](https://aclanthology.org/2022.wiesp-1.1) (Grezes et al., WIESP 2022)
```python
@inproceedings{grezes-etal-2022-overview,
title = "Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})",
author = "Grezes, Felix and
Blanco-Cuaresma, Sergi and
Allen, Thomas and
Ghosal, Tirthankar",
booktitle = "Proceedings of the first Workshop on Information Extraction from Scientific Publications",
month = "nov",
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wiesp-1.1",
pages = "1--7",
abstract = "In this article, we describe the overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) in AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality meta-data for indexing and search, and several other use-cases of interests. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their system and performance and finally discuss the findings of DEAL.",
}
``` | 6,047 | [
[
-0.049407958984375,
-0.035430908203125,
0.0245819091796875,
0.01922607421875,
0.001323699951171875,
-0.00478363037109375,
-0.018707275390625,
-0.04779052734375,
0.045196533203125,
0.037811279296875,
-0.035888671875,
-0.04620361328125,
-0.0531005859375,
0.016... |
strombergnlp/offenseval_2020 | 2022-05-12T10:04:57.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"arxiv:2006.07235",
"arxiv:2004.02192",
"arxiv:1908.04531",
"arxi... | strombergnlp | OffensEval 2020 features a multilingual dataset with five languages. The languages included in OffensEval 2020 are:
* Arabic
* Danish
* English
* Greek
* Turkish
The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019.
In this taxonomy we break down offensive content into the following three sub-tasks taking the type and target of offensive content into account.
The following sub-tasks were organized:
* Sub-task A - Offensive language identification;
* Sub-task B - Automatic categorization of offense types;
* Sub-task C - Offense target identification.
The English training data isn't included here (the text isn't available and requires rehydrating 9 million tweets;
see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp)) | @inproceedings{zampieri-etal-2020-semeval,
title = "{S}em{E}val-2020 Task 12: Multilingual Offensive Language Identification in Social Media ({O}ffens{E}val 2020)",
author = {Zampieri, Marcos and
Nakov, Preslav and
Rosenthal, Sara and
Atanasova, Pepa and
Karadzhov, Georgi and
Mubarak, Hamdy and
Derczynski, Leon and
Pitenis, Zeses and
              Coltekin, Cagri},
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://aclanthology.org/2020.semeval-1.188",
doi = "10.18653/v1/2020.semeval-1.188",
pages = "1425--1447",
} | 1 | 31 | 2022-05-10T10:22:47 | ---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- ar
- da
- en
- gr
- tr
licenses:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: OffensEval 2020
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
- text-classification-other-hate-speech-detection
extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
paperswithcode_id:
- dkhate
- ogtd
---
# Dataset Card for "offenseval_2020"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)
- **Repository:**
- **Paper:** [https://aclanthology.org/2020.semeval-1.188/](https://aclanthology.org/2020.semeval-1.188/), [https://arxiv.org/abs/2006.07235](https://arxiv.org/abs/2006.07235)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
OffensEval 2020 features a multilingual dataset with five languages. The languages included in OffensEval 2020 are:
* Arabic
* Danish
* English
* Greek
* Turkish
The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019.
In this taxonomy we break down offensive content into the following three sub-tasks taking the type and target of offensive content into account.
The following sub-tasks were organized:
* Sub-task A - Offensive language identification;
* Sub-task B - Automatic categorization of offense types;
* Sub-task C - Offense target identification.
The English training data is omitted here and needs to be obtained separately (see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp))
The source datasets come from:
* Arabic [https://arxiv.org/pdf/2004.02192.pdf](https://arxiv.org/pdf/2004.02192.pdf), [https://aclanthology.org/2021.wanlp-1.13/](https://aclanthology.org/2021.wanlp-1.13/)
* Danish [https://arxiv.org/pdf/1908.04531.pdf](https://arxiv.org/pdf/1908.04531.pdf), [https://aclanthology.org/2020.lrec-1.430/?ref=https://githubhelp.com](https://aclanthology.org/2020.lrec-1.430/)
* English [https://arxiv.org/pdf/2004.14454.pdf](https://arxiv.org/pdf/2004.14454.pdf), [https://aclanthology.org/2021.findings-acl.80.pdf](https://aclanthology.org/2021.findings-acl.80.pdf)
* Greek [https://arxiv.org/pdf/2003.07459.pdf](https://arxiv.org/pdf/2003.07459.pdf), [https://aclanthology.org/2020.lrec-1.629/](https://aclanthology.org/2020.lrec-1.629/)
* Turkish [https://aclanthology.org/2020.lrec-1.758/](https://aclanthology.org/2020.lrec-1.758/)
### Supported Tasks and Leaderboards
* [OffensEval 2020](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)
### Languages
Five languages are covered: bcp47 `ar;da;en;gr;tr`
## Dataset Structure
There are five named configs, one per language:
* `ar` Arabic
* `da` Danish
* `en` English
* `gr` Greek
* `tr` Turkish
The training data for English is absent: it consists of 9M tweets that need to be rehydrated separately. See [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp)
### Data Instances
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'PLACEHOLDER TEXT',
'subtask_a': 1,
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string`.
- `subtask_a`: whether or not the instance is offensive; `0: NOT, 1: OFF`
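As a small illustration of the sub-task A label convention (a sketch that simply mirrors the mapping documented above):

```python
# Mapping for sub-task A as documented above: 0 -> NOT offensive, 1 -> OFFensive.
SUBTASK_A_LABELS = {0: "NOT", 1: "OFF"}

def decode_subtask_a(example: dict) -> str:
    """Return the human-readable sub-task A label for one instance."""
    return SUBTASK_A_LABELS[example["subtask_a"]]

sample = {"id": "0", "text": "PLACEHOLDER TEXT", "subtask_a": 1}
print(decode_subtask_a(sample))  # OFF
```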
### Data Splits
| name |train|test|
|---------|----:|---:|
|ar|7839|1827|
|da|2961|329|
|en|0|3887|
|gr|8743|1544|
|tr|31277|3515|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification. The rationale differs for each constituent dataset.
### Source Data
#### Initial Data Collection and Normalization
Varies per language dataset
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
Varies per language dataset
#### Who are the annotators?
Varies per language dataset; native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by the authors of each sub-part's paper.
### Licensing Information
This data is available and distributed under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{zampieri-etal-2020-semeval,
title = "{S}em{E}val-2020 Task 12: Multilingual Offensive Language Identification in Social Media ({O}ffens{E}val 2020)",
author = {Zampieri, Marcos and
Nakov, Preslav and
Rosenthal, Sara and
Atanasova, Pepa and
Karadzhov, Georgi and
Mubarak, Hamdy and
Derczynski, Leon and
Pitenis, Zeses and
{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}},
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://aclanthology.org/2020.semeval-1.188",
doi = "10.18653/v1/2020.semeval-1.188",
pages = "1425--1447",
abstract = "We present the results and the main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted official runs on the test data, and 70 teams submitted system description papers.",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| 7,461 | [
[
-0.02435302734375,
-0.056610107421875,
-0.0003209114074707031,
0.0056915283203125,
-0.0151824951171875,
0.01015472412109375,
-0.022186279296875,
-0.0491943359375,
0.0196380615234375,
0.021209716796875,
-0.02880859375,
-0.07318115234375,
-0.04925537109375,
0.... |
crystina-z/mmarco-train | 2023-03-27T05:26:27.000Z | [
"region:us"
] | crystina-z | mMARCO translated datasets | @misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Vitor Jeronymo and Hugo Queiroz Abonizio and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 31 | 2022-06-04T09:19:16 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
biglam/brill_iconclass | 2023-07-25T13:38:02.000Z | [
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:feature-extraction",
"task_ids:multi-class-image-classification",
"task_ids:multi-label-image-classification",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-gener... | biglam | A dataset for applying machine learning to collections described with the Iconclass classification system. | @MISC{iconclass,
title = {Brill Iconclass AI Test Set},
author={Etienne Posthumus},
year={2020}
} | 5 | 31 | 2022-07-11T13:16:25 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- other-iconclass-metadata
pretty_name: 'Brill Iconclass AI Test Set '
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- image-classification
- image-to-text
- feature-extraction
task_ids:
- multi-class-image-classification
- multi-label-image-classification
- image-captioning
tags:
- lam
- art
---
# Dataset Card for Brill Iconclass AI Test Set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://iconclass.org/testset/](https://iconclass.org/testset/)
- **Repository:**[https://iconclass.org/testset/](https://iconclass.org/testset/)
- **Paper:**[https://iconclass.org/testset/ICONCLASS_and_AI.pdf](https://iconclass.org/testset/ICONCLASS_and_AI.pdf)
- **Leaderboard:**
- **Point of Contact:**[info@iconclass.org](mailto:info@iconclass.org)
### Dataset Summary
> A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.
This dataset contains `87749` images with [Iconclass](https://iconclass.org/) metadata assigned to the images. The [iconclass](https://iconclass.org/) metadata classification system is intended to provide ['the comprehensive classification system for the content of images.'](https://iconclass.org/).
> Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. [source](https://en.wikipedia.org/wiki/Iconclass)
The [Iconclass](https://iconclass.org)
> view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. [source](https://iconclass.org/)
These ten divisions are as follows:
- 0 Abstract, Non-representational Art
- 1 Religion and Magic
- 2 Nature
- 3 Human being, Man in general
- 4 Society, Civilization, Culture
- 5 Abstract Ideas and Concepts
- 6 History
- 7 Bible
- 8 Literature
- 9 Classical Mythology and Ancient History
Within each of these divisions further subdivision's are possible (9 or 10 subdivisions). For example, under `4 Society, Civilization, Culture`, one can find:
- 41 · material aspects of daily life
- 42 · family, descendance
- 43 · recreation, amusement
- 44 · state; law; political life
- ...
See [https://iconclass.org/4](https://iconclass.org/4) for the full list.
To illustrate we can look at some example Iconclass classifications.
`41A12` represents `castle`. This classification is generated via building from the 'base' division `4`, with the following attributes:
- 4 · Society, Civilization, Culture
- 41 · material aspects of daily life
- 41A · housing
- 41A1 · civic architecture; edifices; dwellings
[source](https://iconclass.org/41A12)
The construction of Iconclass labels from parts makes them particularly interesting (and challenging) to tackle via Machine Learning. Whilst one could treat this dataset as a (multi-)label image classification problem, that is only one way of tackling it. For example, for the above label `castle`, giving the model the 'freedom' to predict only a partial label could result in the prediction `41A`, i.e. housing. Whilst a castle is a very particular form of housing, this prediction is not 'wrong' so much as less precise than the label a human cataloguer would provide.
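The hierarchical prefix structure can be sketched in a few lines (a simplified illustration: bracketed qualifiers such as `(+1)` and `:`-chained notations, both present in this dataset's labels, are not handled here):

```python
def notation_prefixes(notation: str):
    """Chain of increasingly specific prefixes of a plain Iconclass notation."""
    return [notation[:i] for i in range(1, len(notation) + 1)]

def is_partial_match(pred: str, gold: str) -> bool:
    """A coarser prediction such as "41A" counts as a partial match for "41A12"."""
    return gold.startswith(pred)

print(notation_prefixes("41A12"))        # ['4', '41', '41A', '41A1', '41A12']
print(is_partial_match("41A", "41A12"))  # True
```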
### Supported Tasks and Leaderboards
As discussed above this dataset could be tackled in various ways:
- as an image classification task
- as a multi-label classification task
- as an image to text task
- as a task whereby a model predicts partial sequences of the label.
This list is not exhaustive.
### Languages
This dataset doesn't have a natural language. The labels themselves can be treated as a form of language i.e. the label can be thought of as a sequence of tokens that construct a 'sentence'.
## Dataset Structure
The dataset contains a single configuration.
### Data Instances
An example instance of the dataset is as follows:
``` python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=390x500 at 0x7FC7FFBBD2D0>,
'label': ['31A235', '31A24(+1)', '61B(+54)', '61B:31A2212(+1)', '61B:31D14']}
```
### Data Fields
The dataset is made up of
- an image
- a sequence of Iconclass labels
### Data Splits
The dataset doesn't provide any predefined train, validation or test splits.
## Dataset Creation
> To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. [source](https://labs.brill.com/ictestset/)
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The images are samples from the [Arkyves database](https://brill.com/view/db/arko?language=en). This collection includes images from
> from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. [source](https://brill.com/view/db/arko?language=en)
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The annotations are derived from the source dataset see above. Most annotations were likely created by staff with experience with the Iconclass metadata schema.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Iconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, '32B human races, peoples; nationalities' has been subject to criticism. '32B36 'primitive', 'pre-modern' peoples' is one example of a category which we may not wish to adopt. In general, there are components of the subdivisions of `32B` which reflect a belief that race is a scientific category rather than socially constructed.
The Iconclass community is actively exploring these limitations; for example, see [Revising Iconclass section 32B human races, peoples; nationalities](https://web.archive.org/web/20210425131753/https://iconclass.org/Updating32B.pdf).
One should be aware of these limitations to Iconclass, and in particular, before deploying a model trained on this data in any production settings.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Etienne Posthumus
### Licensing Information
[CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@MISC{iconclass,
title = {Brill Iconclass AI Test Set},
author={Etienne Posthumus},
year={2020}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. | 8,692 | [
[
-0.055633544921875,
-0.0294952392578125,
-0.0163421630859375,
-0.0183258056640625,
-0.01120758056640625,
0.00502777099609375,
-0.006214141845703125,
-0.0584716796875,
0.0021495819091796875,
0.0345458984375,
-0.0206756591796875,
-0.06060791015625,
-0.030944824218... |
jonathanli/law-stack-exchange | 2023-02-23T16:37:19.000Z | [
"task_categories:text-classification",
"language:en",
"stackexchange",
"law",
"region:us"
] | jonathanli | null | null | 6 | 31 | 2022-09-07T19:49:21 | ---
task_categories:
- text-classification
language:
- en
tags:
- stackexchange
- law
pretty_name: Law Stack Exchange
---
# Dataset Card for Law Stack Exchange Dataset
## Dataset Description
- **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)**
- **Point of Contact: jxl@queensu.ca**
### Dataset Summary
Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation".
### Citation Information
```
@inproceedings{li-etal-2022-parameter,
title = "Parameter-Efficient Legal Domain Adaptation",
author = "Li, Jonathan and
Bhambhoria, Rohan and
Zhu, Xiaodan",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nllp-1.10",
pages = "119--129",
}
``` | 987 | [
[
-0.0202789306640625,
-0.044403076171875,
0.008880615234375,
0.0169830322265625,
-0.051025390625,
-0.0117950439453125,
-0.0218048095703125,
-0.0234527587890625,
0.002964019775390625,
0.039093017578125,
-0.02886962890625,
-0.06158447265625,
-0.038330078125,
0.... |
bigbio/mednli | 2022-12-22T15:24:43.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | State of the art models using deep neural networks have become very good in learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
and knowledge intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),
grounded in the medical history of patients. As the source of premise sentences, we used the
MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical
notes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical
History to be the most informative section of a clinical note, from which useful inferences can be
drawn about the patient. | @misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
} | 4 | 31 | 2022-09-26T03:08:16 | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_short_name: PHYSIONET_LICENSE_1p5
pretty_name: MedNLI
homepage: https://physionet.org/content/mednli/1.0.0/
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- TEXTUAL_ENTAILMENT
paperswithcode_id: mednli
---
# Dataset Card for MedNLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli/1.0.0/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
State of the art models using deep neural networks have become very good in learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
and knowledge intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),
grounded in the medical history of patients. As the source of premise sentences, we used the
MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical
notes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical
History to be the most informative section of a clinical note, from which useful inferences can be
drawn about the patient.
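Concretely, an NLI instance pairs a premise drawn from a clinical note with a clinician-written hypothesis. The record below is invented purely for illustration (the real data requires credentialed PhysioNet access, and the SNLI-style field names are an assumption):

```python
# Invented MedNLI-style instance; SNLI-like field names are assumed here.
example = {
    "sentence1": "No history of blood clots or DVTs, has never had chest pain prior to one week ago.",  # premise
    "sentence2": "Patient has no cardiac history.",                                                     # hypothesis
    "gold_label": "contradiction",  # one of: entailment / neutral / contradiction
}

VALID_LABELS = {"entailment", "neutral", "contradiction"}
assert example["gold_label"] in VALID_LABELS
print(example["gold_label"])
```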
## Citation Information
```
@misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
}
```
| 1,768 | [
[
0.0110015869140625,
-0.029327392578125,
0.0474853515625,
-0.00830841064453125,
-0.01898193359375,
-0.0260772705078125,
-0.0204620361328125,
-0.029388427734375,
0.0257110595703125,
0.038604736328125,
-0.05517578125,
-0.0599365234375,
-0.0261688232421875,
-0.0... |
tomekkorbak/detoxify-pile-chunk3-100000-150000 | 2022-10-06T02:58:25.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 31 | 2022-10-03T19:41:55 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
zhengxuanzenwu/wikitext-2-split-128 | 2022-10-13T00:11:29.000Z | [
"region:us"
] | zhengxuanzenwu | null | null | 0 | 31 | 2022-10-13T00:09:49 | This is a dataset created from the WikiText-2 dataset by splitting longer sequences into sequences with maximum of 128 tokens after using a wordpiece tokenizer. | 161 | [
[
-0.03240966796875,
-0.0135955810546875,
-0.00606536865234375,
0.007694244384765625,
-0.038238525390625,
-0.0003008842468261719,
0.00970458984375,
-0.00958251953125,
0.034088134765625,
0.040679931640625,
-0.06024169921875,
0.0014677047729492188,
-0.03616333007812... |
statworx/swiss-dialects | 2022-11-21T16:18:32.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ch",
"license:cc-by-nc-4.0",
"dialect",
"region:us"
] | statworx | null | null | 1 | 31 | 2022-11-13T13:50:21 | ---
annotations_creators: []
language:
- ch
language_creators:
- found
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: ArchiMob Corpus
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- dialect
task_categories:
- text-generation
- text-classification
task_ids:
- language-modeling
---
# Dataset Card for ArchiMob Corpus
## Dataset Description
- **Homepage:** https://wortschatz.uni-leipzig.de/en/download/Swiss%20German
- **Repository:** https://huggingface.co/datasets/statworx/leipzip-swiss
### Dataset Summary
The ArchiMob corpus represents German linguistic varieties spoken within the territory of Switzerland. This corpus is the first electronic resource containing long samples of transcribed text in Swiss German, intended for studying the spatial distribution of morphosyntactic features and for natural language processing.
### Languages
Swiss-German
## Dataset Structure
### Data Instances
```
{
  'sentence': Sentence in Swiss-German,
  'label': Dialect as category
}
```
### Data Fields
`sentence`: Text as string.
`label`: Label as string.
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://www.spur.uzh.ch/en/departments/research/textgroup/ArchiMob.html
## Additional Information
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
### Citation Information
Scherrer, Y., T. Samardžić, E. Glaser (2019). "Digitising Swiss German -- How to process and study a polycentric spoken language". Language Resources and Evaluation. (First online)
Scherrer, Y., T. Samardžić, E. Glaser (2019). "ArchiMob: Ein multidialektales Korpus schweizerdeutscher Spontansprache". Linguistik Online, 98(5), 425-454. https://doi.org/10.13092/lo.98.5947
| 1,831 | [
[
-0.055389404296875,
-0.0560302734375,
0.01422882080078125,
0.01259613037109375,
-0.040008544921875,
-0.006862640380859375,
-0.0218658447265625,
-0.03167724609375,
0.04119873046875,
0.0513916015625,
-0.049560546875,
-0.08135986328125,
-0.0433349609375,
0.0193... |
bigbio/nlmchem | 2022-12-22T15:46:07.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | NLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,
comprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical
names in the biomedical literature.
Articles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,
and current state-of-the-art named entity recognition systems disagreed on bio-entity recognition). | @Article{islamaj2021nlm,
title={NLM-Chem, a new resource for chemical entity recognition in PubMed full text literature},
author={Islamaj, Rezarta and Leaman, Robert and Kim, Sun and Kwon, Dongseop and Wei, Chih-Hsuan and Comeau, Donald C and Peng, Yifan and Cissel, David and Coss, Cathleen and Fisher, Carol and others},
journal={Scientific Data},
volume={8},
number={1},
pages={1--12},
year={2021},
publisher={Nature Publishing Group}
} | 1 | 31 | 2022-11-13T22:11:03 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: NLM-Chem
homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-2
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- TEXT_CLASSIFICATION
---
# Dataset Card for NLM-Chem
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-2
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,TXTCLASS
NLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,
comprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical
names in the biomedical literature.
Articles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,
and current state-of-the-art named entity recognition systems disagreed on bio-entity recognition).
## Citation Information
```
@Article{islamaj2021nlm,
title={NLM-Chem, a new resource for chemical entity recognition in PubMed full text literature},
author={Islamaj, Rezarta and Leaman, Robert and Kim, Sun and Kwon, Dongseop and Wei, Chih-Hsuan and Comeau, Donald C and Peng, Yifan and Cissel, David and Coss, Cathleen and Fisher, Carol and others},
journal={Scientific Data},
volume={8},
number={1},
pages={1--12},
year={2021},
publisher={Nature Publishing Group}
}
```
| 1,511 | [
[
-0.01151275634765625,
-0.0119476318359375,
0.048309326171875,
0.014312744140625,
-0.014007568359375,
0.0191497802734375,
-0.0296783447265625,
-0.041107177734375,
0.0226898193359375,
0.035003662109375,
-0.02606201171875,
-0.07275390625,
-0.031219482421875,
0.... |
tasksource/babi_nli | 2023-06-05T09:05:59.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:bsd",
"logical reasoning",
"nli"... | tasksource | bAbI tasks recast as natural language inference. | null | 1 | 31 | 2023-01-01T14:39:33 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license: bsd
multilinguality:
- monolingual
pretty_name: babi_nli
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
tags:
- logical reasoning
- nli
- natural-language-inference
- reasoning
- logic
---
# babi_nli
bAbI tasks recast as natural language inference.
https://github.com/facebookarchive/bAbI-tasks
tasksource recasting code:
https://colab.research.google.com/drive/1J_RqDSw9iPxJSBvCJu-VRbjXnrEjKVvr?usp=sharing
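The general recipe behind such recasting — the story becomes the premise, the question plus an answer becomes the hypothesis, and distractor answers yield non-entailed pairs — can be sketched as follows. The exact templates used by tasksource may differ (see the recasting notebook above); the helper names here are illustrative only.

```python
def recast_item(story, question, true_answer, distractor):
    """Sketch of turning one bAbI QA item into two NLI examples
    (entailed / not entailed). Not the actual tasksource template."""
    premise = " ".join(story)

    def to_hypothesis(answer):
        # Toy rule: restate question + answer as a declarative hypothesis.
        return f"The answer to '{question}' is {answer}."

    return [
        {"premise": premise, "hypothesis": to_hypothesis(true_answer), "label": 1},
        {"premise": premise, "hypothesis": to_hypothesis(distractor), "label": 0},
    ]


pairs = recast_item(
    ["Mary moved to the kitchen.", "John went to the garden."],
    "Where is Mary?",
    "kitchen",
    "garden",
)
```

Each bAbI item thus produces a balanced pair of NLI examples, which is what makes a binary entailment classifier trainable on the recast data.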
```bibtex
@article{weston2015towards,
title={Towards ai-complete question answering: A set of prerequisite toy tasks},
author={Weston, Jason and Bordes, Antoine and Chopra, Sumit and Rush, Alexander M and Van Merri{\"e}nboer, Bart and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1502.05698},
year={2015}
}
``` | 943 | [
[
-0.0183868408203125,
-0.06951904296875,
0.033416748046875,
0.0163421630859375,
0.009521484375,
0.014373779296875,
-0.01113128662109375,
-0.03759765625,
0.0208740234375,
0.044708251953125,
-0.073486328125,
-0.0036258697509765625,
-0.019500732421875,
0.0173797... |
irds/trec-cast_v1 | 2023-01-05T04:03:19.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | 1 | 31 | 2023-01-05T04:03:14 | ---
pretty_name: '`trec-cast/v1`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-cast/v1`
The `trec-cast/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=38,622,444
This dataset is used by: [`trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020), [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-cast_v1', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2019Cast,
title={CAsT 2019: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2019}
}
```
| 1,202 | [
[
-0.0255584716796875,
-0.0164794921875,
0.003154754638671875,
0.0074615478515625,
-0.027496337890625,
0.005535125732421875,
0.0030040740966796875,
-0.01059722900390625,
0.03558349609375,
0.038543701171875,
-0.043426513671875,
-0.0633544921875,
-0.03924560546875,
... |
sayakpaul/instructpix2pix-demo | 2023-02-22T04:38:14.000Z | [
"arxiv:2211.09800",
"region:us"
] | sayakpaul | null | null | 0 | 31 | 2023-02-21T12:21:29 | ---
dataset_info:
features:
- name: input
dtype: string
- name: edit
dtype: string
- name: output
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 2456199.0
num_examples: 5
download_size: 2460397
dataset_size: 2456199.0
---
# Dataset Card for "instructpix2pix-demo"
Dataset was created using [this notebook](https://colab.research.google.com/gist/sayakpaul/f90aa06f8f89c831f798dd5b3939818b/scratchpad.ipynb).
Paper reference: [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) | 594 | [
[
-0.02197265625,
-0.0389404296875,
0.0318603515625,
-0.004116058349609375,
-0.0117034912109375,
-0.01110076904296875,
-0.00006085634231567383,
-0.012481689453125,
0.0070953369140625,
0.018463134765625,
-0.053314208984375,
-0.040313720703125,
-0.01611328125,
-... |
multimodalart/facesyntheticsspigacaptioned | 2023-03-23T14:56:28.000Z | [
"region:us"
] | multimodalart | null | null | 12 | 31 | 2023-03-21T02:37:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: landmarks
dtype: string
- name: spiga
sequence:
sequence: float64
- name: spiga_seg
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 31087489990.0
num_examples: 100000
download_size: 31011261945
dataset_size: 31087489990.0
---
# Dataset Card for "face_synthetics_spiga_captioned"
This is a copy of the [Microsoft FaceSynthetics dataset with SPIGA-calculated landmark annotations](https://huggingface.co/datasets/pcuenq/face_synthetics_spiga), and additional BLIP-generated captions.
For a copy of the original FaceSynthetics dataset with no extra annotations, please refer to [pcuenq/face_synthetics](https://huggingface.co/datasets/pcuenq/face_synthetics).
Here is the code for parsing the dataset and generating the BLIP captions:
```py
from datasets import load_dataset
from transformers import pipeline

dataset_name = "pcuenq/face_synthetics_spiga"
faces = load_dataset(dataset_name)
faces = faces["train"]
captioner = pipeline("image-to-text",model="Salesforce/blip-image-captioning-large", device=0)
def caption_image_data(example):
image = example["image"]
image_caption = captioner(image)[0]['generated_text']
example['image_caption'] = image_caption
return example
faces_proc = faces.map(caption_image_data)
faces_proc.push_to_hub("multimodalart/face_synthetics_spiga_captioned")
```
| 1,470 | [
[
-0.018951416015625,
-0.0261688232421875,
0.0006256103515625,
0.05267333984375,
-0.0186004638671875,
0.0033969879150390625,
-0.00518035888671875,
-0.0159912109375,
0.032958984375,
0.05035400390625,
-0.072509765625,
-0.02423095703125,
-0.035400390625,
0.017776... |
RIW/small-coco-wm_10 | 2023-03-24T16:26:33.000Z | [
"region:us"
] | RIW | null | null | 0 | 31 | 2023-03-24T15:07:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: 'null'
- name: width
dtype: int64
- name: height
dtype: int64
- name: original_width
dtype: int64
- name: original_height
dtype: int64
- name: exif
dtype: string
- name: sha256
dtype: string
splits:
- name: train
num_bytes: 9528180597.872
num_examples: 99652
- name: validation
num_bytes: 9091548317.436
num_examples: 99694
download_size: 9948253256
dataset_size: 18619728915.308
---
# Dataset Card for "small-coco-wm_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 849 | [
[
-0.05517578125,
-0.023590087890625,
0.0171661376953125,
0.021881103515625,
-0.0154571533203125,
0.0022430419921875,
-0.006511688232421875,
-0.0207366943359375,
0.07135009765625,
0.0279693603515625,
-0.058074951171875,
-0.043792724609375,
-0.04852294921875,
-... |
Vision-CAIR/cc_sbu_align | 2023-04-19T22:21:39.000Z | [
"region:us"
] | Vision-CAIR | null | null | 29 | 31 | 2023-04-19T21:45:46 | # MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), [Xiang Li](https://xiangli.ac.cn), and [Mohamed Elhoseiny](https://www.mohamed-elhoseiny.com/). *Equal Contribution
**King Abdullah University of Science and Technology**
## Online Demo
Click the image to chat with MiniGPT-4 around your images
[](https://minigpt-4.github.io)
## Examples
|  |  |
|:-------------------------:|:-------------------------:|
 | 
 | 
More examples can be found in the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- We train MiniGPT-4 with two stages. The first traditional pretraining stage is trained using roughly 5 million aligned image-text pairs in 10 hours using 4 A100s. After the first stage, Vicuna is able to understand the image. But the generation ability of Vicuna is heavily impacted.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs by the model itself and ChatGPT together. Based on this, we then create a small (3500 pairs in total) yet high-quality dataset.
- The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes with a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.

## Getting Started
### Installation
**1. Prepare the code and the environment**
Git clone our repository, create a Python environment, and activate it via the following commands
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
**2. Prepare the pretrained Vicuna weights**
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instruction [here](PrepareVicuna.md)
to prepare the Vicuna weights.
The final weights would be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
Then, set the path to the vicuna weight in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
**3. Prepare the pretrained MiniGPT-4 checkpoint**
To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 11.
### Launching Demo Locally
Try out our demo [demo.py](demo.py) on your local machine by running
```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```
Here, we load Vicuna in 8-bit by default to save some GPU memory.
Besides, the default beam search width is 1.
Under this setting, the demo costs about 23 GB of GPU memory.
If you have a more powerful GPU with larger GPU memory, you can run the model
in 16 bit by setting low_resource to False in the config file
[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
### Training
The training of MiniGPT-4 contains two alignment stages.
**1. First pretraining stage**
In the first pretraining stage, the model is trained using image-text pairs from Laion and CC datasets
to align the vision and language model. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped and can be understood by the language
model.
To launch the first stage training, run the following command. In our experiments, we use 4 A100.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml)
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
A MiniGPT-4 checkpoint with only stage one training can be downloaded
[here](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link).
Compared to the model after stage two, this checkpoint generates incomplete and repeated sentences frequently.
**2. Second finetuning stage**
In the second stage, we use a small, high-quality image-text pair dataset created by ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
After the second stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you don't know it yet!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@misc{zhu2022minigpt4,
title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
      author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
year={2023},
}
```
## License
This repository is under [BSD 3-Clause License](LICENSE.md).
Many codes are based on [Lavis](https://github.com/salesforce/LAVIS) with
BSD 3-Clause License [here](LICENSE_Lavis.md).
| 6,801 | [
[
-0.042144775390625,
-0.051177978515625,
0.039825439453125,
-0.0033931732177734375,
-0.032928466796875,
-0.025482177734375,
-0.01398468017578125,
-0.033355712890625,
-0.00238800048828125,
0.0142059326171875,
-0.049224853515625,
-0.0297088623046875,
-0.03488159179... |
cardiffnlp/relentless | 2023-10-14T10:53:59.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"arxiv:2305.15002",
"region:us"
] | cardiffnlp | Named-entities Relation Ranking. | TBA | 0 | 31 | 2023-05-24T09:57:47 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: relentless
---
# Dataset Card for "cardiffnlp/relentless"
***RelEntLess*** is a new benchmark in which entity pairs have to be ranked according to how much they satisfy a given graded relation.
Essentially, the task is a ranking task where we provide five prototypical examples for each relation. The following brief description of each relation type
is used in our baseline in addition to the prototypical examples.
Please check our paper "[A RelEntLess Benchmark for Modelling Graded Relations between Named Entities](https://arxiv.org/abs/2305.15002)" for more detail.
```python
{
"friend/ally of": "entities that are friends or allies",
"competitor/rival of": "entities that are competitors or rivals",
"known for": "examples of what entities are known for",
"influenced by": "what has influenced different entities",
"similar to": "examples of entities that are similar"
}
```
## Dataset Description
- **Repository:** [https://huggingface.co/datasets/cardiffnlp/relentless](https://huggingface.co/datasets/cardiffnlp/relentless)
- **Paper:** [A RelEntLess Benchmark for Modelling Graded Relations between Named Entities](https://arxiv.org/abs/2305.15002)
- **Dataset:** [https://huggingface.co/datasets/cardiffnlp/relentless](https://huggingface.co/datasets/cardiffnlp/relentless)
### Dataset Summary
| relation_type | val. | test |
|:--------------------|-------:|-------:|
| competitor/rival of | 20 | 84 |
| friend/ally of | 19 | 88 |
| influenced by | 19 | 90 |
| known for | 18 | 105 |
| similar to | 19 | 89 |
## Dataset Structure
### Data Instances
```python
{
"pairs": [["Le Corbusier", "purism art"], ["Sean Connery", "Finding Forrester"], ...],
"scores_all": [[4.0, 5.0, 3.0, 4.0, 5.0, 3.0, 5.0], [4.0, 5.0, 2, 5.0, 5.0, 4.0, 2], ...],
"scores_mean": [4.142857142857143, 3.857142857142857, 4.857142857142857, ...],
"relation_type": "known for",
"ranks": [8.5, 11, 5, 14, 15, 5, 20, 13, 1.5, 18, 10, 1.5, 17, ...],
"prototypical_examples": [ [ "Russell Crowe", "Gladiator" ], [ "Cadbury", "chocolate" ],...]
}
```
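The `ranks` field contains tied values such as `8.5` and `1.5`, i.e. tied items share the average of the rank positions they span. A minimal tie-aware ranking sketch is below; the direction (rank 1 for the highest `scores_mean`) is an assumption, since the truncated sample does not let us verify the alignment.

```python
def average_ranks(scores):
    """Rank items by score (assumed: rank 1 = highest), giving tied items
    the average of the positions they span, matching tied values like 1.5
    in the `ranks` field above."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of equal scores.
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 0-based positions -> 1-based average rank
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


print(average_ranks([4.0, 5.0, 5.0, 3.0]))  # [3.0, 1.5, 1.5, 4.0]
```

Model rankings over `pairs` can then be compared against these gold ranks with a rank correlation measure.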
### Citation Information
```
@misc{ushio2023relentless,
title={A RelEntLess Benchmark for Modelling Graded Relations between Named Entities},
author={Asahi Ushio and Jose Camacho Collados and Steven Schockaert},
year={2023},
eprint={2305.15002},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 2,599 | [
[
-0.032440185546875,
-0.035675048828125,
0.0164947509765625,
0.024017333984375,
0.0029544830322265625,
-0.003231048583984375,
-0.0079345703125,
-0.0213165283203125,
0.0277099609375,
0.0235443115234375,
-0.044342041015625,
-0.0616455078125,
-0.032806396484375,
... |
Enno-Ai/fr-instructs | 2023-06-26T23:16:02.000Z | [
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"size_categories:10M<n<100M",
"language:fr",
"license:cc-by-2.5",
"region:us"
] | Enno-Ai | null | null | 3 | 31 | 2023-05-29T14:11:48 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 5904510661
num_examples: 11794112
download_size: 1623654660
dataset_size: 5904510661
license: cc-by-2.5
task_categories:
- text2text-generation
- table-question-answering
language:
- fr
size_categories:
- 10M<n<100M
---
# A collection of 12 million French-only instructions deduplicated from various sources
Sources:
- clips/mqa-fr-faq
- multilingual-wikihow-qa-16k
- MBZUAI/Bactrian-X
- argilla/databricks-dolly-15k-curated-multilingual
- innermost47/alpaca-fr
- etalab-ia/piaf | 708 | [
[
-0.026947021484375,
-0.05255126953125,
0.03375244140625,
0.035736083984375,
-0.01416015625,
0.01226806640625,
0.0178680419921875,
0.01483917236328125,
0.02459716796875,
0.06304931640625,
-0.0892333984375,
-0.0254974365234375,
-0.040496826171875,
0.0272979736... |
KaiLv/UDR_ComE | 2023-06-21T12:35:45.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 31 | 2023-06-21T12:35:33 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: string
- name: question
dtype: string
- name: choices
dtype: string
- name: len_question
dtype: int64
- name: max_len_choices
dtype: int64
splits:
- name: train
num_bytes: 4855852
num_examples: 9996
- name: test
num_bytes: 468814
num_examples: 1000
- name: debug
num_bytes: 2432484
num_examples: 5000
download_size: 3748196
dataset_size: 7757150
---
# Dataset Card for "UDR_ComE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 660 | [
[
-0.040863037109375,
-0.0188140869140625,
0.01049041748046875,
0.0166015625,
-0.01161956787109375,
0.0117034912109375,
0.022705078125,
-0.02313232421875,
0.03741455078125,
0.040130615234375,
-0.054046630859375,
-0.050628662109375,
-0.039886474609375,
-0.00685... |
sixf0ur/GuanacoDataset-de | 2023-07-04T09:11:39.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:de",
"license:gpl-3.0",
"region:us"
] | sixf0ur | null | null | 1 | 31 | 2023-07-04T06:38:14 | ---
license: gpl-3.0
task_categories:
- text-generation
- question-answering
- conversational
language:
- de
pretty_name: German Guanaco Dataset
size_categories:
- 1K<n<10K
---
This dataset was taken from JosephusCheung/GuanacoDataset and filtered to German entries. | 267 | [
[
-0.024200439453125,
-0.03363037109375,
0.034637451171875,
-0.00864410400390625,
-0.0270538330078125,
-0.0124969482421875,
0.0006780624389648438,
-0.030853271484375,
0.06744384765625,
0.08404541015625,
-0.06976318359375,
-0.0521240234375,
-0.0577392578125,
0.... |
TariqJamil/guanaco-llama2-1k | 2023-08-05T13:09:17.000Z | [
"region:us"
] | TariqJamil | null | null | 0 | 31 | 2023-08-05T09:24:12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1655208
num_examples: 1000
download_size: 966969
dataset_size: 1655208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 444 | [
[
-0.0220489501953125,
-0.01284027099609375,
0.017364501953125,
0.0377197265625,
-0.03839111328125,
0.0008654594421386719,
0.0258941650390625,
-0.01904296875,
0.0645751953125,
0.0298919677734375,
-0.054718017578125,
-0.067138671875,
-0.050262451171875,
-0.0160... |
Photolens/oasst1-langchain-llama-2-formatted | 2023-08-11T15:23:33.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"lang... | Photolens | null | null | 9 | 31 | 2023-08-07T18:45:27 | ---
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
task_categories:
- conversational
- text-generation
license: apache-2.0
---
## Dataset overview
Dataset license: apache-2.0
This dataset contains langchain formatted [**oasst1**](https://huggingface.co/datasets/OpenAssistant/oasst1) messages with llama-2-chat special tokens.
This dataset is intended for powering langchain applications. When an LLM is trained on this data, it is expected to perform well in langchain apps.
Format of the new dataset for every prompter-assistant message pair:
```
<s>[INST] "{prompter_message}" [/INST] ```json
{"action": "Final Answer", "action_input": "{assistant_message}"}
``` </s>
```
*Note: When there is a conversation, the message pairs are separated by "\ " in the same row.*
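A minimal sketch of rendering message pairs into the template above; the function names are mine (not part of the dataset), and joining pairs of one conversation with `"\ "` is my reading of the note.

```python
import json

FENCE = "`" * 3  # the ```json fence from the template above


def format_pair(prompter_message, assistant_message):
    """Render one prompter/assistant exchange in the llama-2 +
    langchain JSON-action template described above."""
    action = json.dumps({"action": "Final Answer", "action_input": assistant_message})
    return f'<s>[INST] "{prompter_message}" [/INST] {FENCE}json\n{action}\n{FENCE} </s>'


def format_conversation(pairs):
    # Assumed: pairs of one conversation share a row, separated by "\ ".
    return "\\ ".join(format_pair(p, a) for p, a in pairs)


row = format_conversation([("Hi, who are you?", "I am an assistant.")])
print(row)
```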
## Languages
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Email: art.photolens.ai@gmail.com
- Discord: https://discord.gg/QJT3e6ABz8
- Twitter: @PhotolensAi | 1,979 | [
[
-0.0266265869140625,
-0.025177001953125,
0.00855255126953125,
0.01959228515625,
-0.0270843505859375,
0.007354736328125,
-0.0136871337890625,
-0.0190887451171875,
0.0234375,
0.041534423828125,
-0.054229736328125,
-0.0628662109375,
-0.04498291015625,
0.0204772... |
Admin08077/Taxonomy | 2023-10-21T05:38:46.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | Admin08077 | null | null | 2 | 31 | 2023-09-03T08:06:18 | ---
license: other
task_categories:
- token-classification
- text-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- sentence-similarity
- audio-classification
- fill-mask
- text-to-speech
- automatic-speech-recognition
- voice-activity-detection
- depth-estimation
- audio-to-audio
- image-classification
- image-segmentation
- object-detection
- text-to-image
- image-to-text
- image-to-image
- unconditional-image-generation
- reinforcement-learning
- robotics
- tabular-classification
- video-classification
- tabular-to-text
- tabular-regression
- multiple-choice
- table-to-text
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
language:
- en
tags:
- finance
- quantum Banking
- '#U'
- XBRL
- 'TAXONOMY '
pretty_name: 'The Private Bank Taxonomy '
size_categories:
- n>1T
---
## API Calls
If you wish to programmatically fetch the Autonomous Private Banking Taxonomy dataset, you can do so via the following curl commands:
```bash
# Fetch rows of the dataset
curl -X GET "https://datasets-server.huggingface.co/rows?dataset=Admin08077%2FTaxonomy&config=default&split=train&offset=0&limit=100"
# Get dataset splits
curl -X GET "https://datasets-server.huggingface.co/splits?dataset=Admin08077%2FTaxonomy"
# Download the dataset in Parquet format
curl -X GET "https://huggingface.co/api/datasets/Admin08077/Taxonomy/parquet/default/train"
```
To clone the dataset repository, make sure you have git-lfs installed. Then run:
```bash
git lfs install
git clone https://huggingface.co/datasets/Admin08077/Taxonomy
```
If you want to clone without large files, you can use:
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/Admin08077/Taxonomy
```
### Python Code to Load Dataset
If you are using Python, you can easily load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Admin08077/Taxonomy")
```
## Citation
If you use this dataset in your research or project, please cite it using the following BibTeX entry:
```bibtex
@misc{james_burvel_o'callaghan_iii_2023,
author = {James Burvel O'Callaghan III},
title = {Taxonomy (Revision 9e2a198)},
year = 2023,
url = {https://huggingface.co/datasets/Admin08077/Taxonomy},
doi = {10.57967/hf/1070},
publisher = {Hugging Face}
}
``` | 2,519 | [
[
-0.04241943359375,
-0.041839599609375,
0.003948211669921875,
0.0015163421630859375,
-0.01457977294921875,
0.0276947021484375,
0.0119781494140625,
-0.025909423828125,
0.057281494140625,
0.05938720703125,
-0.038818359375,
-0.042236328125,
-0.0275115966796875,
... |
daniel2588/website_defacement | 2023-10-31T09:06:41.000Z | [
"region:us"
] | daniel2588 | null | null | 0 | 31 | 2023-09-05T11:10:42 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
richardr1126/spider-context-validation-ranked-schema | 2023-09-07T22:12:48.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"text-to-sql",
"SQL",
"spider",
"validation",
"eval",
"spider-eval",
"region:us"
] | richardr1126 | null | null | 0 | 31 | 2023-09-06T23:54:46 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Context Validation Schema Ranked
tags:
- text-to-sql
- SQL
- spider
- validation
- eval
- spider-eval
dataset_info:
features:
- name: index
dtype: int32
- name: db_id
dtype: string
- name: question
dtype: string
- name: db_info
dtype: string
- name: ground_truth
dtype: string
---
# Dataset Card for Spider Context Validation
### Ranked Schema by ChatGPT
The database context used here was generated by ChatGPT after instructing it to reorder the schema so that the most relevant columns appear at the beginning of `db_info`.
### Dataset Summary
Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to validate Spider-fine-tuned LLMs with database context.
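As a minimal sketch, one of these validation rows can be turned into a model prompt. The field names (`db_id`, `question`, `db_info`, `ground_truth`) match the `dataset_info` schema above, but the prompt template and the sample row are assumptions, not the ones used for evaluation:

```python
# Hypothetical prompt formatting for one validation example.
# Field names come from the dataset schema; the template wording is illustrative only.
def build_prompt(example: dict) -> str:
    """Combine the ranked schema context with the natural-language question."""
    return (
        f"Database: {example['db_id']}\n"
        f"Schema:\n{example['db_info']}\n"
        f"Question: {example['question']}\n"
        "SQL:"
    )

# Placeholder row with invented values in the dataset's shape.
example = {
    "index": 0,
    "db_id": "concert_singer",
    "question": "How many singers do we have?",
    "db_info": "singer(singer_id, name, country, age)",
    "ground_truth": "SELECT count(*) FROM singer",
}
print(build_prompt(example))
```

The generated SQL can then be compared against `ground_truth` with the official Spider evaluation scripts.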
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` | 1,692 | [
... |
Linhz/qag_vico | 2023-09-08T04:03:22.000Z | [
"region:us"
] | Linhz | null | null | 0 | 31 | 2023-09-08T04:03:01 | Entry not found | 15 | [
... |
HumanCompatibleAI/ppo-seals-Walker2d-v1 | 2023-09-27T07:09:25.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | 0 | 31 | 2023-09-26T14:45:14 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 63405655
num_examples: 104
download_size: 20942934
dataset_size: 63405655
---
# Dataset Card for "ppo-seals-Walker2d-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 546 | [
... |
mhenrichsen/terra | 2023-09-27T13:01:48.000Z | [
"region:us"
] | mhenrichsen | null | null | 0 | 31 | 2023-09-26T20:39:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 96579266401
num_examples: 25424726
download_size: 22818976288
dataset_size: 96579266401
---
# Dataset Card for "terra"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 554 | [
... |
teknium/trismegistus-project | 2023-10-14T06:37:45.000Z | [
"language:eng",
"license:mit",
"spirituality",
"occultism",
"region:us"
] | teknium | null | null | 21 | 31 | 2023-10-01T00:18:39 | ---
language:
- eng
pretty_name: "The Trismegistus Project"
tags:
- spirituality
- occultism
license: mit
---
# The Trismegistus Project Dataset

### General Information
- **Dataset Name**: Trismegistus Instruction Dataset
- **Version**: 1.0
- **Size**: ~10,000 instruction-response pairs
- **Domain**: Esoteric, Spiritual, Occult, Wisdom Traditions, Paranormal, etc.
- **Date Released**: Friday the 13th, October of 2023
### Short Description
The Trismegistus Project is a comprehensive dataset containing instruction-response pairs focused on the broad umbrella of Esoterica. Topics covered include Mysticism, Hermeticism, Necromancy, Religion, Trance, Meditation, Magick, Spirituality, Alchemy, Numerology, Tarot, and much more.
The entire dataset was generated synthetically, save for subtopics.
### Dataset Structure
Each data entry in the dataset follows this structure:
- `id`: Unique identifier for the entry.
- `system_prompt_used`: The system-wide prompt used for initializing the task with GPT.
- `domain_task_type`: Type of task being performed (e.g., "Task").
- `topic`: Specific topic or domain under which the instruction falls.
- `source`: Origin or expertise level of the instruction (e.g., "DomainExpert_Occult").
- `conversations`: An array of conversation turns, including:
- `from`: Identifier for the origin of the message (either "human" or "gpt").
- `value`: Actual content of the message.
### Example
```json
{
"id": "570a8404-3270-4aba-a47c-660359440835",
"system_prompt_used": "...",
"domain_task_type": "Task",
"topic": "'Big Man' society",
"source": "DomainExpert_Occult",
"conversations": [...]
}
```
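As a rough sketch, an entry in this shape can be flattened into (instruction, response) pairs. The sample values below are invented placeholders that only follow the documented structure:

```python
# Invented sample entry following the structure documented above.
entry = {
    "id": "00000000-0000-0000-0000-000000000000",
    "system_prompt_used": "You are an expert on esoteric traditions.",
    "domain_task_type": "Task",
    "topic": "Alchemy",
    "source": "DomainExpert_Occult",
    "conversations": [
        {"from": "human", "value": "Outline the stages of the magnum opus."},
        {"from": "gpt", "value": "The classical stages are nigredo, albedo..."},
    ],
}

# Pair each human turn with the gpt turn that follows it.
turns = entry["conversations"]
pairs = [
    (turns[i]["value"], turns[i + 1]["value"])
    for i in range(len(turns) - 1)
    if turns[i]["from"] == "human" and turns[i + 1]["from"] == "gpt"
]
print(pairs)
```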
### Use Cases
This dataset is specifically designed for training and evaluating models on esoteric, spiritual, and occult knowledge. Potential use cases include:
- Developing chatbots with a focus on esoteric and paranormal topics.
- Fine-tuning existing models to enhance their understanding of esoteric domains.
- Assisting researchers in esoteric studies with generated content.
## Disclaimer
Some topics and content in the dataset are likely not suitable for all ages.
### Licensing & Citation
MIT License
---
*Note*: The dataset is released in tandem with the Mistral Trismegistus 7B model available on HuggingFace.
| 2,387 | [
... |
MegPaulson/Melanoma_Train | 2023-10-03T22:33:26.000Z | [
"region:us"
] | MegPaulson | null | null | 0 | 31 | 2023-10-03T19:04:46 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 35945944.0
num_examples: 26
download_size: 1333203
dataset_size: 35945944.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Melanoma_Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [
... |
someone13574/topic-to-question | 2023-10-09T03:55:30.000Z | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | someone13574 | null | null | 0 | 31 | 2023-10-03T21:32:55 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
---
# Topic -> Question
This dataset consists of just under 10.5k question-topic pairs, for use as prompts in synthetic Q&A datasets. It was generated using [StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2) and the prompt listed below.
## Generation
As stated above, this dataset was created using StableBeluga2, by prompting the model to generate a question to fit a specific topic. Topics were taken from Wikipedia's [Level-4 Vital Articles](https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level/4), along with a small number of random articles from the [Electronics](https://en.wikipedia.org/wiki/Category:Electronics) and [Engineering](https://en.wikipedia.org/wiki/Category:Engineering) categories (not vital articles). The list of article names was created using [PetScan](https://petscan.wmflabs.org/); links to the queries are below.
The following prompt was used to generate each question: **"Drawing on your expertise regarding the topic '{topic}', create a thought-provoking question about it that goes beyond basic facts. Your question should encourage deep analysis, critical thinking, and profound understanding. Avoid a question that can be readily answered through a quick search, aiming instead for one that necessitates your expert insights."**
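That template can be written as a simple format string. The wording is quoted from above; the call to StableBeluga2 itself is omitted, and the sample topic is just an illustration:

```python
# The generation prompt quoted above, as a format string.
PROMPT_TEMPLATE = (
    "Drawing on your expertise regarding the topic '{topic}', create a "
    "thought-provoking question about it that goes beyond basic facts. "
    "Your question should encourage deep analysis, critical thinking, and "
    "profound understanding. Avoid a question that can be readily answered "
    "through a quick search, aiming instead for one that necessitates your "
    "expert insights."
)

prompt = PROMPT_TEMPLATE.format(topic="Photosynthesis")
print(prompt)
```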
Here is the list of PetScan queries used to obtain the topic list (each topic was only used once):
| Category | All topics? | Query Link |
|--------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| People | Yes | [Link](https://petscan.wmflabs.org/?page_image=any&edits%5Bflagged%5D=both&interface_language=en&min_redlink_count=1&sparql=&show_redirects=both&sortby=none&wpiu=any&search_filter=&referrer_url=&common_wiki_other=&pagepile=&outlinks_no=&search_max_results=500&min_sitelink_count=&langs_labels_any=&cb_labels_no_l=1&output_compatability=catscan&ores_prob_from=&depth=0&sitelinks_yes=&common_wiki=auto&ns%5B0%5D=1&max_sitelink_count=&cb_labels_yes_l=1&before=&language=en&ores_prob_to=&templates_no=&labels_yes=&wikidata_prop_item_use=&cb_labels_any_l=1&labels_no=&active_tab=tab_templates_n_links&search_wiki=&ores_type=any&after=&edits%5Bbots%5D=both&sitelinks_any=&project=wikipedia&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FPeople&doit=) |
| History | Yes | [Link](https://petscan.wmflabs.org/?referrer_name=&langs_labels_yes=&wikidata_item=no&wpiu=any&search_max_results=500&larger=&labels_yes=&namespace_conversion=keep&langs_labels_any=&show_redirects=both&cb_labels_any_l=1&cb_labels_no_l=1&ores_prob_from=&max_sitelink_count=&smaller=&show_soft_redirects=both&links_to_no=&sitelinks_any=&cb_labels_yes_l=1&after=&depth=0&templates_yes=&wikidata_prop_item_use=&edits%5Bbots%5D=both&links_to_any=&manual_list_wiki=&interface_language=en&since_rev0=&subpage_filter=either&templates_any=&ns%5B0%5D=1&active_tab=tab_templates_n_links&templates_no=&project=wikipedia&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FHistory&language=en&sortby=none&min_sitelink_count=&max_age=&min_redlink_count=1&edits%5Banons%5D=both&doit=) |
| Geography | Yes | [Link](https://petscan.wmflabs.org/?common_wiki=auto&manual_list=&edits%5Banons%5D=both&outlinks_yes=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FGeography&sortorder=ascending&minlinks=&langs_labels_yes=&combination=subset&ores_prob_from=&negcats=&wikidata_label_language=&cb_labels_any_l=1&ns%5B0%5D=1&search_filter=&categories=&search_wiki=&format=html&sitelinks_any=&max_age=&labels_yes=&output_limit=&wikidata_item=no&cb_labels_yes_l=1&search_max_results=500&maxlinks=&cb_labels_no_l=1&active_tab=tab_templates_n_links&links_to_any=&manual_list_wiki=&after=&language=en&page_image=any&pagepile=&depth=0&max_sitelink_count=®exp_filter=&referrer_name=&interface_language=en&project=wikipedia&output_compatability=catscan&wikidata_source_sites=&doit=) |
| Arts | Yes | [Link](https://petscan.wmflabs.org/?cb_labels_any_l=1&outlinks_yes=&depth=0&language=en&edits%5Bflagged%5D=both&referrer_name=&source_combination=&cb_labels_yes_l=1&search_max_results=500&manual_list=&ores_prob_from=&min_redlink_count=1&sitelinks_no=&sitelinks_yes=&cb_labels_no_l=1&sparql=&wikidata_label_language=&wikidata_item=no&links_to_all=&maxlinks=&outlinks_no=&wikidata_prop_item_use=&templates_yes=&project=wikipedia&active_tab=tab_templates_n_links&combination=subset&after=&wpiu=any&langs_labels_no=&interface_language=en&ores_type=any&links_to_any=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FArts&larger=&wikidata_source_sites=&manual_list_wiki=&edits%5Bbots%5D=both&ns%5B0%5D=1&page_image=any&min_sitelink_count=&ores_prediction=any&pagepile=&doit=) |
| Philosophy and religion | Yes | [Link](https://petscan.wmflabs.org/?wpiu=any&edits%5Bbots%5D=both&langs_labels_yes=&sitelinks_no=&max_sitelink_count=&sortorder=ascending&before=&outlinks_no=&language=en&labels_yes=&show_disambiguation_pages=both®exp_filter=&show_redirects=both&project=wikipedia&depth=0&ores_prob_from=&page_image=any&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FPhilosophy_and_religion&min_sitelink_count=&labels_any=&edits%5Banons%5D=both&search_max_results=500&cb_labels_yes_l=1&cb_labels_no_l=1&ns%5B0%5D=1&sparql=&manual_list=&cb_labels_any_l=1&interface_language=en&sitelinks_any=&active_tab=tab_templates_n_links&wikidata_source_sites=&links_to_any=&templates_no=&links_to_all=&links_to_no=&ores_prediction=any&categories=&manual_list_wiki=&common_wiki=auto&doit=) |
| Everyday life | Yes | [Link](https://petscan.wmflabs.org/?templates_any=&edits%5Bbots%5D=both&templates_no=&maxlinks=&wikidata_label_language=&cb_labels_any_l=1&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FEveryday_life&edits%5Banons%5D=both&links_to_all=&ns%5B0%5D=1&larger=&search_filter=&language=en&show_disambiguation_pages=both&sitelinks_any=&langs_labels_any=&cb_labels_yes_l=1&cb_labels_no_l=1&wpiu=any&after=&project=wikipedia&sparql=&output_limit=&manual_list=&since_rev0=&langs_labels_no=&edits%5Bflagged%5D=both&wikidata_source_sites=&sitelinks_yes=&before=&combination=subset&sortorder=ascending&ores_type=any&min_redlink_count=1&referrer_url=&search_max_results=500&active_tab=tab_templates_n_links&wikidata_item=no&categories=&sortby=none&interface_language=en&doit=) |
| Society and social sciences | Yes | [Link](https://petscan.wmflabs.org/?before=&labels_yes=&output_limit=&labels_no=&active_tab=tab_templates_n_links&outlinks_yes=&templates_yes=&larger=&ores_prob_to=&search_max_results=500&cb_labels_no_l=1&max_age=&templates_any=&common_wiki=auto&max_sitelink_count=&minlinks=&cb_labels_any_l=1&show_soft_redirects=both&langs_labels_any=&interface_language=en&search_wiki=&links_to_any=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FSociety_and_social_sciences&min_sitelink_count=&ores_prob_from=&search_filter=&ns%5B0%5D=1&common_wiki_other=&sitelinks_no=&sitelinks_any=&labels_any=&wikidata_item=no&wikidata_prop_item_use=&wikidata_label_language=&links_to_all=&language=en&output_compatability=catscan&categories=&cb_labels_yes_l=1&project=wikipedia&smaller=&doit=) |
| Biological and health sciences | Yes | [Link](https://petscan.wmflabs.org/?common_wiki=auto&links_to_all=&common_wiki_other=&search_filter=&format=html&project=wikipedia&negcats=&interface_language=en&labels_yes=&templates_no=&show_disambiguation_pages=both&pagepile=&cb_labels_no_l=1&search_max_results=500&sitelinks_any=&wikidata_label_language=&templates_yes=&max_age=&page_image=any&cb_labels_yes_l=1&manual_list=&language=en&larger=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FBiology_and_health_sciences&source_combination=&cb_labels_any_l=1&min_redlink_count=1&active_tab=tab_templates_n_links&show_redirects=both&show_soft_redirects=both&ores_type=any&search_wiki=&max_sitelink_count=&labels_any=®exp_filter=&manual_list_wiki=&ores_prob_to=&ns%5B0%5D=1&edits%5Banons%5D=both&links_to_no=&links_to_any=&wikidata_prop_item_use=&doit=) |
| Physical sciences | Yes | [Link](https://petscan.wmflabs.org/?page_image=any&cb_labels_yes_l=1&sortby=none&interface_language=en&language=en&outlinks_yes=&cb_labels_any_l=1&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FPhysical_sciences&active_tab=tab_templates_n_links&sitelinks_no=&project=wikipedia&categories=&edits%5Bbots%5D=both&labels_any=&search_wiki=&cb_labels_no_l=1&show_soft_redirects=both&wikidata_item=no&depth=0&ores_prediction=any&search_query=&wikidata_label_language=&smaller=&langs_labels_yes=&edits%5Banons%5D=both&namespace_conversion=keep&show_disambiguation_pages=both&search_max_results=500&wikidata_prop_item_use=&wpiu=any&sitelinks_yes=&common_wiki=auto&show_redirects=both&langs_labels_any=&ns%5B0%5D=1&templates_any=&format=html&ores_prob_from=&min_redlink_count=1&output_compatability=catscan&ores_type=any&max_sitelink_count=&doit=) |
| Technology | Yes | [Link](https://petscan.wmflabs.org/?output_limit=&since_rev0=&categories=&labels_no=&manual_list=&labels_yes=&max_age=&langs_labels_any=&referrer_name=&search_max_results=500&outlinks_no=&cb_labels_yes_l=1&edits%5Bbots%5D=both&language=en&combination=subset&wikidata_source_sites=&langs_labels_no=&referrer_url=&cb_labels_any_l=1&interface_language=en&templates_any=&ores_prob_to=&search_wiki=&show_redirects=both&ns%5B0%5D=1&sitelinks_yes=&sitelinks_no=®exp_filter=&edits%5Banons%5D=both&active_tab=tab_templates_n_links&project=wikipedia&depth=0&negcats=&after=&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FTechnology&smaller=&show_disambiguation_pages=both&subpage_filter=either&cb_labels_no_l=1&outlinks_yes=&doit=) |
| Mathematics | Yes | [Link](https://petscan.wmflabs.org/?project=wikipedia&links_to_no=&max_age=&minlinks=&combination=subset&search_max_results=500&labels_no=&sortby=none&interface_language=en&active_tab=tab_templates_n_links&wpiu=any&larger=&wikidata_prop_item_use=&since_rev0=&cb_labels_no_l=1&ns%5B0%5D=1&common_wiki=auto&labels_any=&cb_labels_any_l=1&sortorder=ascending&show_disambiguation_pages=both&show_soft_redirects=both&outlinks_any=Wikipedia%3AVital_articles%2FLevel%2F4%2FMathematics&manual_list_wiki=&wikidata_label_language=&negcats=&links_to_all=&maxlinks=&after=&cb_labels_yes_l=1&edits%5Bbots%5D=both&output_limit=&langs_labels_any=&edits%5Banons%5D=both&referrer_url=&sitelinks_any=&ores_prob_to=&subpage_filter=either&output_compatability=catscan&ores_prob_from=&language=en&edits%5Bflagged%5D=both&doit=) |
| Engineering | **No** | [Link](https://petscan.wmflabs.org/?format=html®exp_filter=&since_rev0=&templates_no=&search_filter=&langs_labels_any=&min_sitelink_count=&outlinks_any=&wikidata_label_language=&show_redirects=both&links_to_no=&referrer_name=&min_redlink_count=1&langs_labels_no=&source_combination=&cb_labels_any_l=1&referrer_url=&sitelinks_yes=&ores_type=any&cb_labels_yes_l=1&cb_labels_no_l=1&ores_prob_from=&project=wikipedia&search_max_results=500&common_wiki_other=&wikidata_item=no&categories=Engineering&output_limit=&depth=2&manual_list=&interface_language=en&minlinks=&namespace_conversion=keep&subpage_filter=either&manual_list_wiki=&links_to_all=&edits%5Banons%5D=both&ns%5B0%5D=1&language=en&edits%5Bflagged%5D=both&doit=) |
| Electronics | **No** | [Link](https://petscan.wmflabs.org/?links_to_any=&language=en&labels_no=&outlinks_yes=&categories=Electronics&sortorder=ascending&combination=subset&links_to_no=&labels_any=&ns%5B0%5D=1&edits%5Bbots%5D=both&outlinks_no=&format=html&templates_yes=&wikidata_prop_item_use=&cb_labels_no_l=1&langs_labels_no=&active_tab=tab_categories&ores_type=any&templates_no=&common_wiki=auto&source_combination=&search_max_results=500&ores_prediction=any&show_disambiguation_pages=both&cb_labels_any_l=1&min_redlink_count=1&project=wikipedia&referrer_name=&after=&show_redirects=both&langs_labels_any=&depth=1&cb_labels_yes_l=1&interface_language=en&ores_prob_to=&negcats=&wikidata_item=no&max_age=&langs_labels_yes=&edits%5Bflagged%5D=both&doit=) |
### Post-Processing
A small amount of post-processing was done to the model's outputs. Here is a list of all modifications made:
- Strip leading and trailing whitespace (automatic)
- Filter generated questions for the following words: ["question", "expert", "sorry", "opinion"]. This was done to filter out rare instances where the model responded with content other than just the question, or didn't generate a question at all.
- Manually fixing some stray tokens (`'s` was sometimes `'S` or `'t`, years sometimes had a random character inserted in them, and other rare cases of tokens that didn't make sense, even if the rest of the question was good)
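The strip-and-filter steps above can be sketched as follows (the manual token fixes are not reproducible in code, and the sample outputs are invented):

```python
# Keyword filter from the post-processing list above.
BANNED_WORDS = ["question", "expert", "sorry", "opinion"]

def keep(text: str) -> bool:
    """Drop empty outputs and any containing a banned word (case-insensitive)."""
    lowered = text.lower()
    return bool(text) and not any(word in lowered for word in BANNED_WORDS)

# Invented raw model outputs for illustration.
raw_outputs = [
    "  How did steam power reshape nineteenth-century labor?  ",
    "As an expert, I would ask about trade routes.",  # filtered out
]

cleaned = [q.strip() for q in raw_outputs]  # strip leading/trailing whitespace
cleaned = [q for q in cleaned if keep(q)]   # apply the keyword filter
print(cleaned)
```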
| 15,583 | [
... |
librarian-bots/paper-recommendations | 2023-10-30T09:45:11.000Z | [
"region:us"
] | librarian-bots | null | null | 0 | 31 | 2023-10-04T10:01:00 | ---
dataset_info:
features:
- name: paper_url
dtype: string
- name: comment
dtype: string
splits:
- name: train
num_bytes: 163112
num_examples: 153
download_size: 46750
dataset_size: 163112
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paper-recommendations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 485 | [
... |
RogerB/clean-unsupervised-kin-tweets | 2023-10-05T16:42:39.000Z | [
"region:us"
] | RogerB | null | null | 0 | 31 | 2023-10-05T09:51:36 | ---
dataset_info:
features:
- name: preprocessed_cased
dtype: string
- name: preprocessed_uncased
dtype: string
splits:
- name: train
num_bytes: 10103300
num_examples: 45138
download_size: 7301533
dataset_size: 10103300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "clean-unsupervised-kin-tweets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 523 | [
... |