id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
alisawuffles/WANLI | 2022-11-21T17:31:56.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2201.05955",
"region:us... | alisawuffles | null | null | 6 | 12 | 2022-04-21T00:57:25 | ---
annotations_creators:
- crowdsourced
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: WANLI
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WANLI homepage](https://wanli.allenai.org/)
- **Repository:** [Github repo](https://github.com/alisawuffles/wanli)
- **Paper:** [arXiv](https://arxiv.org/abs/2201.05955)
- **Point of Contact:** [Alisa Liu](mailto:alisaliu@cs.washington.edu)
### Dataset Summary
WANLI (**W**orker-**A**I Collaboration for **NLI**) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in [MultiNLI (Williams et al., 2018)](https://cims.nyu.edu/~sbowman/multinli/) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators.
WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by accuracy; a RoBERTa-large model currently achieves 75.40%.
Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.
### Languages
The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.
## Dataset Structure
### Data Instances
Here is an example of an NLI example in `data/wanli/train.jsonl` or `data/wanli/test.jsonl`.
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, or `contradiction`
- `genre`: one of `generated` or `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
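The record layout above can be exercised with a minimal sketch: parse one JSONL line (here, the example instance shown earlier in this card) and sanity-check the label field. The helper name `parse_wanli_line` is illustrative, not part of any released code.

```python
import json

# Minimal sketch of reading one record from data/wanli/train.jsonl.
# In practice, loop over the file and call this per line.
def parse_wanli_line(line: str) -> dict:
    example = json.loads(line)
    # gold must be one of the three NLI labels
    assert example["gold"] in {"entailment", "neutral", "contradiction"}
    return example

record = parse_wanli_line(
    '{"id": 225295, "premise": "It is a tribute to the skill of the coach '
    'that the team has been able to compete at the highest level.", '
    '"hypothesis": "The coach is a good coach.", "gold": "entailment", '
    '"genre": "generated", "pairID": "171408"}'
)
```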
We also release the raw annotations for each worker, which can be found in `data/wanli/anonymized_annotations.jsonl`.
```
{
  "WorkerId": "EUJ",
  "id": 271560,
  "nearest_neighbors": [
    309783,
    202988,
    145310,
    98030,
    148759
  ],
  "premise": "I don't know what I'd do without my cat. He is my only friend.",
  "hypothesis": "I would be alone.",
  "label": "neutral",
  "revised_premise": "I don't know what I'd do without my cat. He is my only friend.",
  "revised_hypothesis": "I would be alone without my cat.",
  "gold": "entailment",
  "revised": true
}
```
- `WorkerId`: a unique identifier for each crowdworker (NOT the real worker ID from AMT)
- `id`: id of generated example
- `nearest_neighbors`: ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first is the seed ambiguous MNLI example. MNLI ids correspond to those in `mnli/train.jsonl`.
- `premise`: GPT-3 generated premise
- `hypothesis`: GPT-3 generated hypothesis
- `label`: the shared label of the in-context examples, which is the "intended" label for this generation
- `revised_premise`: premise after human review
- `revised_hypothesis`: hypothesis after human review
- `gold`: annotator-assigned gold label for the (potentially revised) example
- `revised`: whether the example was revised
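Given the `label`, `gold`, and `revised` fields above, a simple summary of the raw annotations is the fraction of examples that annotators revised, and how often the assigned gold label diverged from the intended (in-context) label. The two toy records below are hypothetical, not taken from the released file.

```python
# Sketch of summarizing records from data/wanli/anonymized_annotations.jsonl.
def annotation_stats(records):
    n = len(records)
    revised = sum(r["revised"] for r in records)
    # "label" is the intended label of the generation; "gold" is what the
    # annotator assigned after (optional) revision.
    flipped = sum(r["gold"] != r["label"] for r in records)
    return {"revised_frac": revised / n, "label_flip_frac": flipped / n}

toy = [
    {"label": "neutral", "gold": "entailment", "revised": True},
    {"label": "neutral", "gold": "neutral", "revised": False},
]
stats = annotation_stats(toy)
```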
### Data Splits
The dataset is randomly split into a *train* and *test* set.
| | train | test |
|-------------------------|------:|-----:|
| Examples | 102885| 5000|
## Dataset Creation
### Curation Rationale
A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.
### Source Data
#### Initial Data Collection and Normalization
Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from [Swayamdipta et al. (2020)](https://aclanthology.org/2020.emnlp-main.746/) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.
#### Who are the source language producers?
The GPT-3 Curie model generated examples which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.
Workers were paid $0.12 for each example they annotated. At the end of data collection, we aggregated each crowdworker's earnings and time spent, and found that the median hourly rate was $22.72, with 85% of workers paid over the $15/hour target.
### Annotations
#### Annotation process
Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.
Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.
Crowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.
For examples that both annotators labeled without revision, annotators achieved a Cohen's kappa score of 0.60, indicating substantial agreement.
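The agreement statistic reported above can be illustrated with a small self-contained implementation of Cohen's kappa over two annotators' label sequences. The label sequences below are toy data, not the real WANLI annotations.

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for chance agreement
# implied by each annotator's marginal label distribution.
def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

ann1 = ["entailment", "neutral", "contradiction", "entailment"]
ann2 = ["entailment", "neutral", "entailment", "entailment"]
kappa = cohens_kappa(ann1, ann2)
```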
#### Who are the annotators?
Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.
300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems.
### Discussion of Biases
Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language.
To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive.
Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset.
## Additional Information
### Dataset Curators
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` | 9,935 | [
[
-0.03216552734375,
-0.04803466796875,
0.0104827880859375,
0.0269012451171875,
-0.0142059326171875,
-0.03204345703125,
-0.01568603515625,
-0.031158447265625,
0.01554107666015625,
0.052642822265625,
-0.04986572265625,
-0.0391845703125,
-0.038665771484375,
0.03... |
gusevski/factrueval2016 | 2022-04-29T20:34:48.000Z | [
"arxiv:2005.00614",
"region:us"
] | gusevski | null | null | 0 | 12 | 2022-04-29T06:41:12 | # Dataset Card for FactRuEval-2016
## Dataset Description
- **Point of Contact:** [Guskov Sergey](https://gusevski.com)
### Dataset Summary
Evaluation of [Named Entity Recognition](https://www.dialog-21.ru/media/3430/starostinaetal.pdf) and Fact Extraction Systems for Russian.
### Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
- `token-classification`: The dataset can be used to train a model for [NER], which consists in [Token Classification]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
### Languages
Russian.
## Dataset Structure
### Data Instances
Provide a JSON-formatted example and a brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
'data': [{'id':'', 'tokens':[], 'ner_tags':[]},...],
...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
### Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `id`: order id
- `tokens`: list of tokens
- `ner_tags`: list of ner tags
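The `tokens`/`ner_tags` pairing above can be sketched with a span decoder. The card does not state the tag inventory, so this assumes a BIO scheme with hypothetical entity types; treat it as an illustration, not the dataset's actual labels.

```python
# Decode BIO-style NER tags into (start, end, label) token spans.
def decode_bio(tokens, ner_tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(ner_tags):
        if tag.startswith("B-"):                  # new entity begins
            if start is not None:
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue                              # entity continues
        else:                                     # "O" or inconsistent tag
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:
        spans.append((start, len(ner_tags), label))
    return spans

tokens = ["Moscow", "hosted", "Dialog", "2016"]
tags = ["B-LOC", "O", "B-ORG", "I-ORG"]
spans = decode_bio(tokens, tags)
```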
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data
### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
## Additional Information
### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
MIT
| 9,050 | [
[
-0.032318115234375,
-0.047393798828125,
0.008575439453125,
0.0170745849609375,
-0.00275421142578125,
0.004299163818359375,
-0.0126190185546875,
-0.04595947265625,
0.037353515625,
0.044464111328125,
-0.05474853515625,
-0.05987548828125,
-0.038421630859375,
0.... |
AlekseyKorshuk/persona-chat | 2022-06-04T21:49:08.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 7 | 12 | 2022-06-04T21:48:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ziq/depression_tweet | 2022-06-06T07:09:06.000Z | [
"region:us"
] | ziq | null | null | 0 | 12 | 2022-06-06T06:48:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
PiC/phrase_retrieval | 2023-01-20T16:32:55.000Z | [
"task_categories:text-retrieval",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | PiC | Phrase in Context is a curated benchmark for phrase understanding and semantic search, consisting of three tasks of increasing difficulty: Phrase Similarity (PS), Phrase Retrieval (PR) and Phrase Sense Disambiguation (PSD). The datasets are annotated by 13 linguistic experts on Upwork and verified by two groups: ~1000 AMT crowdworkers and another set of 5 linguistic experts. PiC benchmark is distributed under CC-BY-NC 4.0. | @article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
} | 5 | 12 | 2022-06-13T20:58:56 | ---
annotations_creators:
- expert-generated
language_creators:
- found
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: phrase-in-context
pretty_name: 'PiC: Phrase Retrieval'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids: []
---
# Dataset Card for "PiC: Phrase Retrieval"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/)
- **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>)
### Dataset Summary
PR is a phrase retrieval task with the goal of finding a phrase **t** in a given document **d** such that **t** is semantically similar to the query phrase, which is the paraphrase **q**<sub>1</sub> provided by annotators.
We release two versions of PR: **PR-pass** and **PR-page**, i.e., datasets of 3-tuples (query **q**<sub>1</sub>, target phrase **t**, document **d**) where **d** is a random 11-sentence passage that contains **t** or an entire Wikipedia page.
While PR-pass contains 28,147 examples, PR-page contains slightly fewer examples (28,098) as we remove those trivial examples whose Wikipedia pages contain exactly the query phrase (in addition to the target phrase).
Both datasets are split into 5K/3K/~20K for test/dev/train, respectively.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
**PR-pass**
* Size of downloaded dataset files: 43.61 MB
* Size of the generated dataset: 36.98 MB
* Total amount of disk used: 80.59 MB
An example of 'train' looks as follows.
```
{
"id": "3478-1",
"title": "https://en.wikipedia.org/wiki?curid=181261",
"context": "The 425t was a 'pizza box' design with a single network expansion slot. The 433s was a desk-side server systems with multiple expansion slots. Compatibility. PC compatibility was possible either through software emulation, using the optional product DPCE, or through a plug-in card carrying an Intel 80286 processor. A third-party plug-in card with a 386 was also available. An Apollo Token Ring network card could also be placed in a standard PC and network drivers allowed it to connect to a server running a PC SMB (Server Message Block) file server. Usage. Although Apollo systems were easy to use and administer, they became less cost-effective because the proprietary operating system made software more expensive than Unix software. The 68K processors were slower than the new RISC chips from Sun and Hewlett-Packard. Apollo addressed both problems by introducing the RISC-based DN10000 and Unix-friendly Domain/OS operating system. However, the DN10000, though fast, was extremely expensive, and a reliable version of Domain/OS came too late to make a difference.",
"query": "dependable adaptation",
"answers": {
"text": ["reliable version"],
"answer_start": [1006]
}
}
```
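The `answers` field above uses character-level offsets into `context`; a natural sanity check is that the target phrase appears verbatim at `answer_start`. The record below is a shortened hypothetical example, not a real PR-pass instance.

```python
# Validate that each answer's character offset points at the answer text.
def check_offsets(example):
    ctx = example["context"]
    for text, start in zip(example["answers"]["text"],
                           example["answers"]["answer_start"]):
        if ctx[start:start + len(text)] != text:
            return False
    return True

example = {
    "context": "A reliable version of Domain/OS came too late.",
    "answers": {"text": ["reliable version"], "answer_start": [2]},
}
ok = check_offsets(example)
```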
**PR-page**
* Size of downloaded dataset files: 421.56 MB
* Size of the generated dataset: 412.17 MB
* Total amount of disk used: 833.73 MB
An example of 'train' looks as follows.
```
{
"id": "5961-2",
"title": "https://en.wikipedia.org/wiki?curid=354711",
"context": "Joseph Locke FRSA (9 August 1805 – 18 September 1860) was a notable English civil engineer of the nineteenth century, particularly associated with railway projects. Locke ranked alongside Robert Stephenson and Isambard Kingdom Brunel as one of the major pioneers of railway development. Early life and career. Locke was born in Attercliffe, Sheffield in Yorkshire, moving to nearby Barnsley when he was five. By the age of 17, Joseph had already served an apprenticeship under William Stobart at Pelaw, on the south bank of the Tyne, and under his own father, William. He was an experienced mining engineer, able to survey, sink shafts, to construct railways, tunnels and stationary engines. Joseph's father had been a manager at Wallbottle colliery on Tyneside when George Stephenson was a fireman there. In 1823, when Joseph was 17, Stephenson was involved with planning the Stockton and Darlington Railway. He and his son Robert Stephenson visited William Locke and his son at Barnsley and it was arranged that Joseph would go to work for the Stephensons. The Stephensons established a locomotive works near Forth Street, Newcastle upon Tyne, to manufacture locomotives for the new railway. Joseph Locke, despite his youth, soon established a position of authority. He and Robert Stephenson became close friends, but their friendship was interrupted, in 1824, by Robert leaving to work in Colombia for three years. Liverpool and Manchester Railway. George Stephenson carried out the original survey of the line of the Liverpool and Manchester Railway, but this was found to be flawed, and the line was re-surveyed by a talented young engineer, Charles Vignoles. Joseph Locke was asked by the directors to carry out another survey of the proposed tunnel works and produce a report. The report was highly critical of the work already done, which reflected badly on Stephenson. 
Stephenson was furious and henceforth relations between the two men were strained, although Locke continued to be employed by Stephenson, probably because the latter recognised his worth. Despite the many criticisms of Stephenson's work, when the bill for the new line was finally passed, in 1826, Stephenson was appointed as engineer and he appointed Joseph Locke as his assistant to work alongside Vignoles, who was the other assistant. However, a clash of personalities between Stephenson and Vignoles led to the latter resigning, leaving Locke as the sole assistant engineer. Locke took over responsibility for the western half of the line. One of the major obstacles to be overcome was Chat Moss, a large bog that had to be crossed. Although, Stephenson usually gets the credit for this feat, it is believed that it was Locke who suggested the correct method for crossing the bog. Whilst the line was being built, the directors were trying to decide whether to use standing engines or locomotives to propel the trains. Robert Stephenson and Joseph Locke were convinced that locomotives were vastly superior, and in March 1829 the two men wrote a report demonstrating the superiority of locomotives when used on a busy railway. The report led to the decision by the directors to hold an open trial to find the best locomotive. This was the Rainhill Trials, which were run in October 1829, and were won by \"Rocket\". When the line was finally opened in 1830, it was planned for a procession of eight trains to travel from Liverpool to Manchester and back. George Stephenson drove the leading locomotive \"Northumbrian\" and Joseph Locke drove \"Rocket\". The day was marred by the death of William Huskisson, the Member of Parliament for Liverpool, who was struck and killed by \"Rocket\". Grand Junction Railway. In 1829 Locke was George Stephenson's assistant, given the job of surveying the route for the Grand Junction Railway. 
This new railway was to join Newton-le-Willows on the Liverpool and Manchester Railway with Warrington and then on to Birmingham via Crewe, Stafford and Wolverhampton, a total of 80 miles. Locke is credited with choosing the location for Crewe and recommending the establishment there of shops required for the building and repairs of carriages and wagons as well as engines. During the construction of the Liverpool and Manchester Railway, Stephenson had shown a lack of ability in organising major civil engineering projects. On the other hand, Locke's ability to manage complex projects was well known. The directors of the new railway decided on a compromise whereby Locke was made responsible for the northern half of the line and Stephenson was made responsible for the southern half. However Stephenson's administrative inefficiency soon became apparent, whereas Locke estimated the costs for his section of the line so meticulously and speedily, that he had all of the contracts signed for his section of the line before a single one had been signed for Stephenson's section. The railway company lost patience with Stephenson, but tried to compromise by making both men joint-engineers. Stephenson's pride would not let him accept this, and so he resigned from the project. By autumn of 1835 Locke had become chief engineer for the whole of the line. This caused a rift between the two men, and strained relations between Locke and Robert Stephenson. Up to this point, Locke had always been under George Stephenson's shadow. From then on, he would be his own man, and stand or fall by his own achievements. The line was opened on 4 July 1837. New methods. Locke's route avoided as far as possible major civil engineering works. The main one was the Dutton Viaduct which crosses the River Weaver and the Weaver Navigation between the villages of Dutton and Acton Bridge in Cheshire. The viaduct consists of 20 arches with spans of 20 yards. 
An important feature of the new railway was the use of double-headed (dumb-bell) wrought-iron rail supported on timber sleepers at 2 ft 6 in intervals. It was intended that when the rails became worn they could be turned over to use the other surface, but in practice it was found that the chairs into which the rails were keyed caused wear to the bottom surface so that it became uneven. However this was still an improvement on the fish-bellied, wrought-iron rails still being used by Robert Stephenson on the London and Birmingham Railway. Locke was more careful than Stephenson to get value for his employers' money. For the Penkridge Viaduct Stephenson had obtained a tender of £26,000. After Locke took over, he gave the potential contractor better information and agreed a price of only £6,000. Locke also tried to avoid tunnels because in those days tunnels often took longer and cost more than planned. The Stephensons regarded 1 in 330 as the maximum slope that an engine could manage and Robert Stephenson achieved this on the London and Birmingham Railway by using seven tunnels which added both cost and delay. Locke avoided tunnels almost completely on the Grand Junction but exceeded the slope limit for six miles south of Crewe. Proof of Locke's ability to estimate costs accurately is given by the fact that the construction of the Grand Junction line cost £18,846 per mile as against Locke's estimate of £17,000. This is amazingly accurate compared with the estimated costs for the London and Birmingham Railway (Robert Stephenson) and the Great Western Railway (Brunel). Locke also divided the project into a few large sections rather than many small ones. This allowed him to work closely with his contractors to develop the best methods, overcome problems and personally gain practical experience of the building process and of the contractors themselves. He used the contractors who worked well with him, especially Thomas Brassey and William Mackenzie, on many other projects. 
Everyone gained from this cooperative approach whereas Brunel's more adversarial approach eventually made it hard for him to get anyone to work for him. Marriage. In 1834 Locke married Phoebe McCreery, with whom he adopted a child. He was elected to the Royal Society in 1838. Lancaster and Carlisle Railway. A significant difference in philosophy between George Stephenson and Joseph Locke and the surveying methods they employed was more than a mere difference of opinion. Stephenson had started his career at a time when locomotives had little power to overcome excessive gradients. Both George and Robert Stephenson were prepared to go to great lengths to avoid steep gradients that would tax the locomotives of the day, even if this meant choosing a circuitous path that added on extra miles to the line of the route. Locke had more confidence in the ability of modern locomotives to climb these gradients. An example of this was the Lancaster and Carlisle Railway, which had to cope with the barrier of the Lake District mountains. In 1839 Stephenson proposed a circuitous route that avoided the Lake District altogether by going all the way round Morecambe Bay and West Cumberland, claiming: 'This is the only practicable line from Liverpool to Carlisle. The making of a railway across Shap Fell is out of the question.' The directors rejected his route and chose the one proposed by Joseph Locke, one that used steep gradients and passed over Shap Fell. The line was completed by Locke and was a success. Locke's reasoned that by avoiding long routes and tunnelling, the line could be finished more quickly, with less capital costs, and could start earning revenue sooner. This became known as the 'up and over' school of engineering (referred to by Rolt as 'Up and Down,' or Rollercoaster). Locke took a similar approach in planning the Caledonian Railway, from Carlisle to Glasgow. 
In both railways he introduced gradients of 1 in 75, which severely taxed fully laden locomotives, for even as more powerful locomotives were introduced, the trains that they pulled became heavier. It may therefore be argued that Locke, although his philosophy carried the day, was not entirely correct in his reasoning. Even today, Shap Fell is a severe test of any locomotive. Manchester and Sheffield Railway. Locke was subsequently appointed to build a railway line from Manchester to Sheffield, replacing Charles Vignoles as chief engineer, after the latter had been beset by misfortunes and financial difficulties. The project included the three-mile Woodhead Tunnel, and the line opened, after many delays, on 23 December 1845. The building of the line required over a thousand navvies and cost the lives of thirty-two of them, seriously injuring 140 others. The Woodhead Tunnel was such a difficult undertaking that George Stephenson claimed that it could not be done, declaring that he would eat the first locomotive that got through the tunnel. Subsequent commissions. In the north, Locke also designed the Lancaster and Preston Junction Railway; the Glasgow, Paisley and Greenock Railway; and the Caledonian Railway from Carlisle to Glasgow and Edinburgh. In the south, he worked on the London and Southampton Railway, later called the London and South Western Railway, designing, among other structures, Nine Elms to Waterloo Viaduct, Richmond Railway Bridge (1848, since replaced), and Barnes Bridge (1849), both across the River Thames, tunnels at Micheldever, and the 12-arch Quay Street viaduct and the 16-arch Cams Hill viaduct, both in Fareham (1848). He was actively involved in planning and building many railways in Europe (assisted by John Milroy), including the Le Havre, Rouen, Paris rail link, the Barcelona to Mataró line and the Dutch Rhenish Railway. 
He was present in Paris when the Versailles train crash occurred in 1842, and produced a statement concerning the facts for General Charles Pasley of the Railway Inspectorate. He also experienced a catastrophic failure of one of his viaducts built on the new Paris-Le Havre link. . The viaduct was of stone and brick at Barentin near Rouen, and was the longest and highest on the line. It was 108 feet high, and consisted of 27 arches, each 50 feet wide, with a total length of over 1600 feet. A boy hauling ballast for the line up an adjoining hillside early that morning (about 6.00 am) saw one arch (the fifth on the Rouen side) collapse, and the rest followed suit. Fortunately, no one was killed, although several workmen were injured in a mill below the structure. Locke attributed the catastrophic failure to frost action on the new lime cement, and premature off-centre loading of the viaduct with ballast. It was rebuilt at Thomas Brassey's cost, and survives to the present. Having pioneered many new lines in France, Locke also helped establish the first locomotive works in the country. Distinctive features of Locke's railway works were economy, the use of masonry bridges wherever possible and the absence of tunnels. An illustration of this is that there is no tunnel between Birmingham and Glasgow. Relationship with Robert Stephenson. Locke and Robert Stephenson had been good friends at the beginning of their careers, but their friendship had been marred by Locke's falling out with Robert's father. It seems that Robert felt loyalty to his father required that he should take his side. It is significant that after the death of George Stephenson in August 1848, the friendship of the two men was revived. When Robert Stephenson died in October 1859, Joseph Locke was a pallbearer at his funeral. Locke is reported to have referred to Robert as 'the friend of my youth, the companion of my ripening years, and a competitor in the race of life'. 
Locke was also on friendly terms with his other engineering rival, Isambard Kingdom Brunel. In 1845, Locke and Stephenson were both called to give evidence before two committees. In April a House of Commons Select Committee was investigating the atmospheric railway system proposed by Brunel. Brunel and Vignoles spoke in support of the system, whilst Locke and Stephenson spoke against it. The latter two were to be proved right in the long run. In August the two gave evidence before the Gauge Commissioners who were trying to arrive at a standard gauge for the whole country. Brunel spoke in favour of the 7 ft gauge he was using on the Great Western Railway. Locke and Stephenson spoke in favour of the 4 ft 8½in gauge that they had used on several lines. The latter two won the day and their gauge was adopted as the standard. Later life and legacy. Locke served as President of the Institution of Civil Engineers in between December 1857 and December 1859. He also served as Member of Parliament for Honiton in Devon from 1847 until his death. Joseph Locke died on 18 September 1860, apparently from appendicitis, whilst on a shooting holiday. He is buried in London's Kensal Green Cemetery. He outlived his friends/rivals Robert Stephenson and Isambard Brunel by less than a year; all three engineers died between 53 and 56 years of age, a circumstance attributed by Rolt to sheer overwork, accomplishing more in their brief lives than many achieve in a full three score and ten. Locke Park in Barnsley was dedicated to his memory by his widow Phoebe in 1862. It features a statue of Locke plus a folly, 'Locke Tower'. Locke's greatest legacy is the modern day West Coast Main Line (WCML), which was formed by the joining of the Caledonian, Lancaster & Carlisle, Grand Junction railways to Robert Stephenson's London & Birmingham Railway. As a result, around three-quarters of the WCML's route was planned and engineered by Locke.",
"query": "accurate approach",
"answers": {
"text": ["correct method"],
"answer_start": [2727]
}
}
```
### Data Fields
The data fields are the same among all subsets and splits.
* id: a string feature.
* title: a string feature.
* context: a string feature.
* question: a string feature.
* answers: a dictionary feature containing:
* text: a list of string features.
* answer_start: a list of int32 features.
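As a sketch of how these fields fit together, the `answer_start` offsets index directly into the `context` string. The example below is a hypothetical, shortened instance written for illustration, not an actual dataset row:

```python
# Hypothetical, shortened example following the field layout above.
example = {
    "id": "0",
    "title": "Joseph Locke",
    "context": "... it was Locke who suggested the correct method for crossing the bog ...",
    "question": "accurate approach",
    "answers": {"text": ["correct method"], "answer_start": [35]},
}

# `answer_start` is a character offset into `context`, so slicing the
# context recovers exactly the answer text.
start = example["answers"]["answer_start"][0]
answer = example["answers"]["text"][0]
assert example["context"][start:start + len(answer)] == answer
```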
### Data Splits
| name |train|validation|test|
|--------------------|----:|---------:|---:|
|PR-pass |20147| 3000|5000|
|PR-page |20098| 3000|5000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The source passages and answers are from Wikipedia; the queries were produced by linguistic experts hired through [Upwork.com](https://upwork.com).
#### Who are the source language producers?
We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation, more than 1,000 crowd annotators on Mechanical Turk, and a further set of 5 Upwork experts for two rounds of verification.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
13 linguistic experts from [Upwork.com](https://upwork.com).
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is a joint work between Adobe Research and Auburn University.
Creators: [Thang M. Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me).
[@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace.
### Licensing Information
This dataset is distributed under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Citation Information
```
@article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
}
```
readerbench/ro-fb-offense | 2023-02-20T13:26:28.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"regio... | readerbench | null | null | 1 | 12 | 2022-07-10T17:53:14 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ro
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
pretty_name: RO-FB-Offense
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language,
hate speech).'
tags:
- hate-speech-detection
---
# Dataset Card for "RO-FB-Offense"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Repository:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Paper:** FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 user-generated comments in Romanian, collected from Facebook live broadcasts.
The annotation follows the hierarchical tagset proposed for the GermEval 2018 dataset.
The following Classes are available:
* OTHER: Non-Offensive Language
* OFFENSIVE:
- PROFANITY
- INSULT
- ABUSE
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'sender': '$USER1208',
'no_reacts': 1,
'text': 'PLACEHOLDER TEXT',
'label': 'OTHER',
}
```
### Data Fields
- `sender`: a `string` feature.
- `no_reacts`: an `integer` feature.
- `text`: a `string` feature.
- `label`: a categorical label, one of `OTHER`, `PROFANITY`, `INSULT`, `ABUSE`.
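Since the tagset is hierarchical (the three offensive subtypes all sit under OFFENSIVE), a common preprocessing step is collapsing it into a binary task. A minimal sketch, assuming the labels arrive as plain strings:

```python
# Collapse the hierarchical GermEval-2018-style tagset into a binary
# offensive / non-offensive decision. OTHER is the only non-offensive class.
OFFENSIVE_SUBTYPES = {"PROFANITY", "INSULT", "ABUSE"}

def to_binary(label: str) -> str:
    if label in OFFENSIVE_SUBTYPES:
        return "OFFENSIVE"
    return "OTHER"

assert to_binary("INSULT") == "OFFENSIVE"
assert to_binary("OTHER") == "OTHER"
```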
### Data Splits
| name |train|test|
|---------|----:|---:|
|ro|x|x|
## Dataset Creation
### Curation Rationale
Collecting data for abusive-language classification in Romanian.
### Source Data
Facebook comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under the Apache-2.0 license.
### Citation Information
```
@inproceedings{busuioc2022fb-ro-offense,
title={FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments},
author={ Busuioc, Gabriel-Razvan and Paraschiv, Andrei and Dascalu, Mihai},
booktitle={International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) 2022},
year={2022}
}
```
### Contributions
BirdL/DallData | 2022-09-28T21:12:02.000Z | [
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | BirdL | null | null | 0 | 12 | 2022-07-26T20:48:02 | ---
annotations_creators: []
language: []
language_creators: []
license:
- other
multilinguality: []
pretty_name: DALL-E Latent Space Mapping
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- unconditional-image-generation
task_ids: []
---
DallData is a non-exhaustive look into the unconditional image generation of DALL-E Mega(1). It is released under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/).
(1)
```bibtext
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
doi = {10.5281/zenodo.5146400},
month = {7},
title = {DALL·E Mini},
url = {https://github.com/borisdayma/dalle-mini},
year = {2021}
}
```
copenlu/citeworth | 2022-08-17T13:48:22.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|s2orc",
"language:en",
"license:cc-by-nc-4.0",
"citation detection",
"citation",
"science",
"scholarly... | copenlu | null | null | 2 | 12 | 2022-08-17T11:57:29 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: citeworth
pretty_name: CiteWorth
size_categories:
- 1M<n<10M
source_datasets:
- extended|s2orc
tags:
- citation detection
- citation
- science
- scholarly documents
- bio
- medicine
- computer science
- citeworthiness
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for CiteWorth
## Dataset Description
- **Repo** https://github.com/copenlu/cite-worth
- **Paper** https://aclanthology.org/2021.findings-acl.157.pdf
### Dataset Summary
Scientific document understanding is challenging as the data is highly domain specific and diverse. However, datasets for tasks with scientific text require expensive manual annotation and tend to be small and limited to only one or a few fields. At the same time, scientific documents contain many potential training signals, such as citations, which can be used to build large labelled datasets. Given this, we present an in-depth study of cite-worthiness detection in English, where a sentence is labelled for whether or not it cites an external source. To accomplish this, we introduce CiteWorth, a large, contextualized, rigorously cleaned labelled dataset for cite-worthiness detection built from a massive corpus of extracted plain-text scientific documents. We show that CiteWorth is high-quality, challenging, and suitable for studying problems such as domain adaptation. Our best performing cite-worthiness detection model is a paragraph-level contextualized sentence labelling model based on Longformer, exhibiting a 5 F1 point improvement over SciBERT which considers only individual sentences. Finally, we demonstrate that language model fine-tuning with cite-worthiness as a secondary task leads to improved performance on downstream scientific document understanding tasks.
## Dataset Structure
The data is structured as follows
- `paper_id`: The S2ORC paper ID where the paragraph comes from
- `section_idx`: An index into the section array in the original S2ORC data
- `file_index`: The volume in the S2ORC dataset that the paper belongs to
- `file_offset`: Byte offset to the start of the paper json in the S2ORC paper PDF file
- `mag_field_of_study`: The field of study to which a paper belongs (an array, but each paper belongs to a single field)
- `original_text`: The original text of the paragraph
- `section_title`: Title of the section to which the paragraph belongs
- `samples`: An array containing dicts of the cleaned sentences for the paragraph, in order. The fields for each dict are as follows
- `text`: The cleaned text for the sentence
- `label`: Label for the sentence, either `check-worthy` for cite-worthy sentences or `non-check-worthy` non-cite-worthy sentences
- `original_text`: The original sentence text
- `ref_ids`: List of the reference IDs in the S2ORC dataset for papers cited in this sentence
- `citation_text`: List of all citation text in this sentence
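A minimal sketch of how a paragraph record can be flattened into per-sentence training pairs; the record below is a hypothetical stand-in that follows the fields above, not an actual dataset row:

```python
# Flatten one paragraph record into (sentence, is_cite_worthy) pairs.
record = {
    "paper_id": "12345",                 # hypothetical values throughout
    "mag_field_of_study": ["Biology"],
    "samples": [
        {"text": "Prior work has studied this phenomenon.",
         "label": "check-worthy",
         "ref_ids": ["67890"], "citation_text": ["(Smith et al., 2019)"]},
        {"text": "We describe our experimental setup below.",
         "label": "non-check-worthy",
         "ref_ids": [], "citation_text": []},
    ],
}

# Sentence-level labels: True for cite-worthy, False otherwise.
pairs = [(s["text"], s["label"] == "check-worthy") for s in record["samples"]]
cite_worthy = [text for text, flag in pairs if flag]
```

Keeping the sentences grouped by paragraph, as the dataset does, is what allows the paragraph-level contextualized models described in the summary.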
## Dataset Creation
The data is derived from the [S2ORC dataset](https://github.com/allenai/s2orc), specifically the 20200705v1 release of the data. It is licensed under the [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/) license. For details on the dataset creation process, see section 3 of our [paper](https://aclanthology.org/2021.findings-acl.157.pdf).
## Citing
Please use the following citation when referencing this work or using the data:
```
@inproceedings{wright2021citeworth,
title={{CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding}},
author={Dustin Wright and Isabelle Augenstein},
booktitle = {Findings of ACL-IJCNLP},
publisher = {Association for Computational Linguistics},
year = 2021
}
```
hugginglearners/russia-ukraine-conflict-articles | 2022-08-18T04:21:16.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | hugginglearners | null | null | 0 | 12 | 2022-08-18T04:21:11 | ---
license:
- cc-by-nc-sa-4.0
kaggle_id: hskhawaja/russia-ukraine-conflict
---
# Dataset Card for Russia Ukraine Conflict
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/hskhawaja/russia-ukraine-conflict
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Context
On 24 February 2022, Russia invaded Ukraine in a major escalation of the Russo-Ukrainian War that began in 2014. The invasion caused Europe's largest refugee crisis since World War II, with more than 6.3 million Ukrainians fleeing the country and a third of the population displaced (*Source: Wikipedia*).
### Content
This dataset is a collection of 407 news articles from the NYT and The Guardian related to the ongoing conflict between Russia and Ukraine. The publication dates of the articles range from Feb 1st, 2022 to Jul 31st, 2022.
### What you can do?
Here are some ideas to explore:
- Discourse analysis of Russia-Ukraine conflict (How the war has evolved over months?)
- Identify most talked about issues (refugees, food, weapons, fuel, etc.)
- Extract sentiment of articles for both Russia and Ukraine
- Which world leaders have tried to become mediators?
- Number of supporting countries for both Russia and Ukraine
- Map how NATO alliance has been affected by the war
I look forward to seeing your work and ideas, and will keep adding more ideas to explore.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@hskhawaja](https://kaggle.com/hskhawaja)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed]
g8a9/europarl_en-it | 2022-09-07T10:14:04.000Z | [
"task_categories:translation",
"multilinguality:monolingual",
"multilinguality:translation",
"language:en",
"language:it",
"license:unknown",
"region:us"
] | g8a9 | null | null | 0 | 12 | 2022-09-05T13:53:46 | ---
language:
- en
- it
license:
- unknown
multilinguality:
- monolingual
- translation
pretty_name: Europarl v7 (en-it split)
tags: []
task_categories:
- translation
task_ids: []
---
# Dataset Card for Europarl v7 (en-it split)
This dataset contains only the English-Italian split of Europarl v7.
We created the dataset to provide it to the [M2L 2022 Summer School](https://www.m2lschool.org/) students.
For all the information on the dataset, please refer to: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/)
## Dataset Structure
### Data Fields
- sent_en: English transcript
- sent_it: Italian translation
### Data Splits
We created three custom training/validation/testing splits. Feel free to rearrange them if needed. These ARE NOT by any means official splits.
- train (1717204 pairs)
- validation (190911 pairs)
- test (1000 pairs)
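Together, the three custom splits listed above account for just over 1.9 million sentence pairs; a quick sanity check:

```python
# Split sizes as listed above; the total is the full en-it pair count.
splits = {"train": 1_717_204, "validation": 190_911, "test": 1_000}
total = sum(splits.values())
assert total == 1_909_115
```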
### Citation Information
If using the dataset, please cite:
`Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In Proceedings of machine translation summit x: papers (pp. 79-86).`
### Contributions
Thanks to [@g8a9](https://github.com/g8a9) for adding this dataset.
bongsoo/social_science_en_ko | 2022-10-05T00:09:30.000Z | [
"language:ko",
"license:apache-2.0",
"region:us"
] | bongsoo | null | null | 2 | 12 | 2022-09-20T04:45:54 | ---
language:
- ko
license: apache-2.0
---
- Social science en-ko translation corpus
zyznull/dureader-retrieval-ranking | 2023-01-03T08:05:57.000Z | [
"license:apache-2.0",
"region:us"
] | zyznull | null | @article{Qiu2022DuReader\_retrievalAL,
title={DuReader\_retrieval: A Large-scale Chinese Benchmark for Passage Retrieval from Web Search Engine},
author={Yifu Qiu and Hongyu Li and Yingqi Qu and Ying Chen and Qiaoqiao She and Jing Liu and Hua Wu and Haifeng Wang},
journal={ArXiv},
year={2022},
volume={abs/2203.10232}
} | 2 | 12 | 2022-09-28T09:00:20 | ---
license: apache-2.0
---
# dureader
The data comes from the DuReader-Retrieval dataset; the original repository is [here](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval).
> This dataset is intended for academic research use only. If this repository involves any infringement, it will be removed immediately.
andrewkroening/Star-wars-scripts-dialogue-IV-VI | 2022-10-27T17:53:39.000Z | [
"license:cc",
"region:us"
] | andrewkroening | null | null | 1 | 12 | 2022-10-24T19:31:55 | ---
license: cc
---
### Dataset Contents
This dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.
### Dataset Disclaimer
I don't own this data, or Star Wars. But it would be cool if I did.
Star Wars is owned by Lucasfilm. I do not own any of the rights to this information.
The scripts are derived from a couple of sources:
* This [GitHub Repo](https://github.com/gastonstat/StarWars) with raw files
* A [Kaggle Dataset](https://www.kaggle.com/datasets/xvivancos/star-wars-movie-scripts) put together by whoever 'Xavier' is
### May the Force be with you
ju-resplande/qa-pt | 2022-11-25T20:31:56.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|mqa",
"language:pt",
"license:cc0-1.0",
"region:us"
] | ju-resplande | null | null | 6 | 12 | 2022-11-03T22:57:12 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- pt
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: qa-portuguese
size_categories:
- 1M<n<10M
source_datasets:
- extended|mqa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# Dataset Card for QA-Portuguese
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A preprocessed Portuguese split of the [MQA dataset](https://huggingface.co/datasets/clips/mqa).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
| 2,838 | [
[
-0.0323486328125,
-0.033447265625,
0.0034236907958984375,
0.02142333984375,
-0.030517578125,
0.0169525146484375,
-0.004913330078125,
-0.022979736328125,
0.056488037109375,
0.04449462890625,
-0.05279541015625,
-0.0718994140625,
-0.0439453125,
0.01652526855468... |
bigbio/mediqa_rqe | 2022-12-22T15:45:33.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
The objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1]
[1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016. | @inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
} | 0 | 12 | 2022-11-13T22:09:46 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: MEDIQA RQE
homepage: https://sites.google.com/view/mediqa2019
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_PAIRS_CLASSIFICATION
---
# Dataset Card for MEDIQA RQE
## Dataset Description
- **Homepage:** https://sites.google.com/view/mediqa2019
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXT2CLASS
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
The objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1]
[1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016.
## Citation Information
```
@inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
}
```
| 1,476 | [
[
-0.0017938613891601562,
-0.0487060546875,
0.052337646484375,
0.0010547637939453125,
-0.01020050048828125,
-0.006824493408203125,
0.019287109375,
-0.043426513671875,
0.0214080810546875,
0.04327392578125,
-0.06341552734375,
-0.027496337890625,
-0.044403076171875,
... |
bigbio/n2c2_2011 | 2022-12-22T15:45:53.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The i2b2/VA corpus contained five concept categories: problem, person, pronoun,
test, and treatment. Each record in the i2b2/VA corpus was annotated by two
independent annotators for coreference pairs. Then the pairs were post-processed
in order to create coreference chains. These chains were presented to an adjudicator,
who resolved the disagreements between the original annotations, and added or deleted
annotations as necessary. The outputs of the adjudicators were then re-adjudicated, with
particular attention being paid to duplicates and enforcing consistency in the annotations. | @article{uzuner2012evaluating,
author = {
Uzuner, Ozlem and
Bodnari, Andreea and
Shen, Shuying and
Forbush, Tyler and
Pestian, John and
South, Brett R
},
title = "{Evaluating the state of the art in coreference resolution for electronic medical records}",
journal = {Journal of the American Medical Informatics Association},
volume = {19},
number = {5},
pages = {786-791},
year = {2012},
month = {02},
issn = {1067-5027},
doi = {10.1136/amiajnl-2011-000784},
url = {https://doi.org/10.1136/amiajnl-2011-000784},
eprint = {https://academic.oup.com/jamia/article-pdf/19/5/786/17374287/19-5-786.pdf},
} | 1 | 12 | 2022-11-13T22:10:38 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2011 Coreference
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- COREFERENCE_RESOLUTION
---
# Dataset Card for n2c2 2011 Coreference
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** COREF
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The i2b2/VA corpus contained five concept categories: problem, person, pronoun,
test, and treatment. Each record in the i2b2/VA corpus was annotated by two
independent annotators for coreference pairs. Then the pairs were post-processed
in order to create coreference chains. These chains were presented to an adjudicator,
who resolved the disagreements between the original annotations, and added or deleted
annotations as necessary. The outputs of the adjudicators were then re-adjudicated, with
particular attention being paid to duplicates and enforcing consistency in the annotations.
## Citation Information
```
@article{uzuner2012evaluating,
author = {
Uzuner, Ozlem and
Bodnari, Andreea and
Shen, Shuying and
Forbush, Tyler and
Pestian, John and
South, Brett R
},
title = "{Evaluating the state of the art in coreference resolution for electronic medical records}",
journal = {Journal of the American Medical Informatics Association},
volume = {19},
number = {5},
pages = {786-791},
year = {2012},
month = {02},
issn = {1067-5027},
doi = {10.1136/amiajnl-2011-000784},
url = {https://doi.org/10.1136/amiajnl-2011-000784},
eprint = {https://academic.oup.com/jamia/article-pdf/19/5/786/17374287/19-5-786.pdf},
}
```
| 2,164 | [
[
-0.0300140380859375,
-0.0303802490234375,
0.034088134765625,
0.009185791015625,
-0.0231781005859375,
-0.00856781005859375,
-0.0264129638671875,
-0.0322265625,
0.017852783203125,
0.0447998046875,
-0.00946044921875,
-0.05621337890625,
-0.053802490234375,
0.019... |
WillHeld/wmt19-valid-only-de_en | 2022-11-14T18:59:17.000Z | [
"region:us"
] | WillHeld | null | null | 0 | 12 | 2022-11-14T18:59:13 | ---
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: validation
num_bytes: 757649
num_examples: 2998
download_size: 491141
dataset_size: 757649
---
# Dataset Card for "wmt19-valid-only-de_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 430 | [
[
-0.04083251953125,
-0.03900146484375,
0.0272064208984375,
0.0273590087890625,
-0.039337158203125,
-0.00614166259765625,
-0.0045013427734375,
-0.0204925537109375,
0.05511474609375,
0.042999267578125,
-0.06585693359375,
-0.05670166015625,
-0.050323486328125,
0... |
phucdev/noisyner | 2023-01-05T12:09:58.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:et",
"license:cc-by-nc-4.0",
"newspapers",
"1997-200... | phucdev | NoisyNER is a dataset for the evaluation of methods to handle noisy labels when training machine learning models.
It is from the NLP/Information Extraction domain and was created through a realistic distant supervision technique.
Some highlights and interesting aspects of the data are:
- Seven sets of labels with differing noise patterns to evaluate different noise levels on the same instances
- Full parallel clean labels available to compute upper performance bounds or study scenarios where a small amount of
gold-standard data can be leveraged
- Skewed label distribution (typical for Named Entity Recognition tasks)
- For some label sets: noise level higher than the true label probability
- Sequential dependencies between the labels
For more details on the dataset and its creation process, please refer to our publication
https://ojs.aaai.org/index.php/AAAI/article/view/16938 (published at AAAI'21). | @inproceedings{hedderich2021analysing,
title={Analysing the Noise Model Error for Realistic Noisy Label Data},
author={Hedderich, Michael A and Zhu, Dawei and Klakow, Dietrich},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={35},
number={9},
pages={7675--7684},
year={2021}
}
@inproceedings{tkachenko-etal-2013-named,
title = "Named Entity Recognition in {E}stonian",
author = "Tkachenko, Alexander and Petmanson, Timo and Laur, Sven",
booktitle = "Proceedings of the 4th Biennial International Workshop on {B}alto-{S}lavic Natural Language Processing",
year = "2013",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W13-2412",
} | 0 | 12 | 2022-12-05T14:30:17 | ---
annotations_creators:
- expert-generated
language:
- et
language_creators:
- found
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: noisyner
pretty_name: NoisyNER
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- newspapers
- 1997-2009
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
- config_name: estner_clean
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6258130
dataset_size: 9525735
- config_name: NoisyNER_labelset1
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6194276
dataset_size: 9525735
- config_name: NoisyNER_labelset2
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6201072
dataset_size: 9525735
- config_name: NoisyNER_labelset3
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6231384
dataset_size: 9525735
- config_name: NoisyNER_labelset4
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6201072
dataset_size: 9525735
- config_name: NoisyNER_labelset5
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6231384
dataset_size: 9525735
- config_name: NoisyNER_labelset6
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6226516
dataset_size: 9525735
- config_name: NoisyNER_labelset7
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: grammar
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 7544221
num_examples: 11365
- name: validation
num_bytes: 986310
num_examples: 1480
- name: test
num_bytes: 995204
num_examples: 1433
download_size: 6229668
dataset_size: 9525735
---
# Dataset Card for NoisyNER
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Estonian NER corpus](https://doi.org/10.15155/1-00-0000-0000-0000-00073L), [NoisyNER dataset](https://github.com/uds-lsv/NoisyNER)
- **Paper:** [Named Entity Recognition in Estonian](https://aclanthology.org/W13-2412/), [Analysing the Noise Model Error for Realistic Noisy Label Data](https://arxiv.org/abs/2101.09763)
- **Dataset:** NoisyNER
- **Domain:** News
- **Size of downloaded dataset files:** 6.23 MB
- **Size of the generated dataset files:** 9.53 MB
### Dataset Summary
NoisyNER is a dataset for the evaluation of methods to handle noisy labels when training machine learning models.
- Entity Types: `PER`, `ORG`, `LOC`
It is from the NLP/Information Extraction domain and was created through a realistic distant supervision technique. Some highlights and interesting aspects of the data are:
- Seven sets of labels with differing noise patterns to evaluate different noise levels on the same instances
- Full parallel clean labels available to compute upper performance bounds or study scenarios where a small amount of gold-standard data can be leveraged
- Skewed label distribution (typical for Named Entity Recognition tasks)
- For some label sets: noise level higher than the true label probability
- Sequential dependencies between the labels
For more details on the dataset and its creation process, please refer to the original authors' publication https://ojs.aaai.org/index.php/AAAI/article/view/16938 (published at AAAI'21).
This dataset is based on the Estonian NER corpus. For more details see https://aclanthology.org/W13-2412/
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in NoisyNER is in Estonian (BCP-47 et).
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': '0',
'tokens': ['Tallinna', 'õhusaaste', 'suureneb', '.'],
'lemmas': ['Tallinn+0', 'õhu_saaste+0', 'suurene+b', '.'],
'grammar': ['_H_ sg g', '_S_ sg n', '_V_ b', '_Z_'],
'ner_tags': [5, 0, 0, 0]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `lemmas`: a `list` of `string` features.
- `grammar`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
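The mapping above can be applied directly to the integer `ner_tags`. A minimal sketch in plain Python (no dependency on the loading script), using the 'train' example shown under "Data Instances":

```python
# Decode the integer ner_tags of the example instance into IOB2 label strings,
# using the tagset mapping given on this card.
ID2LABEL = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG", 5: "B-LOC", 6: "I-LOC"}

def decode_tags(ner_tags):
    """Map a list of class indices to their label strings."""
    return [ID2LABEL[i] for i in ner_tags]

example = {
    "tokens": ["Tallinna", "õhusaaste", "suureneb", "."],
    "ner_tags": [5, 0, 0, 0],
}
print(list(zip(example["tokens"], decode_tags(example["ner_tags"]))))
# [('Tallinna', 'B-LOC'), ('õhusaaste', 'O'), ('suureneb', 'O'), ('.', 'O')]
```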
### Data Splits
The splits are the same across all configurations.
|train|validation|test|
|----:|---------:|---:|
|11365| 1480|1433|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
Tkachenko et al (2013) collected 572 news stories published in the local online newspapers [Delfi](http://delfi.ee/) and [Postimees](http://postimees.ee/) between 1997 and 2009. Selected articles cover both local and international news on a range of topics including politics, economics and sports. The raw text was preprocessed using the morphological disambiguator t3mesta ([Kaalep and
Vaino, 1998](https://www.cl.ut.ee/yllitised/kk_yhest_1998.pdf)) provided by [Filosoft](http://www.filosoft.ee/). The processing steps involve tokenization, lemmatization, part-of-speech tagging, grammatical and morphological analysis.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
According to Tkachenko et al. (2013), one of the authors manually tagged the corpus and the other author examined the tags, after which conflicting cases were resolved.
The total size of the corpus is 184,638 tokens. Tkachenko et al. (2013) provide the following counts of named entities in the corpus:
| | PER | LOC | ORG | Total |
|--------|------|------|------|-------|
| All | 5762 | 5711 | 3938 | 15411 |
| Unique | 3588 | 1589 | 1987 | 7164 |
Hedderich et al. (2021) obtained the noisy labels through a distant supervision/automatic annotation approach. They extracted lists of named entities from Wikidata and matched them against words in the text via the ANEA tool ([Hedderich, Lange, and Klakow 2021](https://arxiv.org/abs/2102.13129)). They also used heuristic functions to correct errors caused by incomplete entity lists, by grammatical complexities of Estonian that prevent simple string matching, or by entity lists that conflict with each other. For instance, they normalized the grammatical form of a word or excluded certain high false-positive words. They provide seven sets of labels that differ in the noise process, which yields eight configurations in total once the original split with clean labels is added.
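To make the general idea concrete, here is a toy sketch of list-based distant supervision: match a small gazetteer against the token sequence and emit IOB2 tags, defaulting to `O`. This is only an illustration of the principle; the gazetteer entries are invented, and the actual NoisyNER labels were produced with the ANEA tool plus the Estonian-specific heuristics described above.

```python
# Toy list-based distant supervision: tag tokens by exact gazetteer lookup.
# Entries are invented for the sketch; real pipelines use large Wikidata lists
# and language-specific normalization.
GAZETTEER = {
    ("Tallinna",): "LOC",
    ("Sven", "Laur"): "PER",
}

def distant_tags(tokens):
    """Return IOB2 tags for `tokens`, defaulting to 'O' when no entry matches."""
    tags = ["O"] * len(tokens)
    for entry, etype in GAZETTEER.items():
        n = len(entry)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == entry:
                tags[i] = f"B-{etype}"
                for j in range(i + 1, i + n):
                    tags[j] = f"I-{etype}"
    return tags

print(distant_tags(["Tallinna", "õhusaaste", "suureneb", "."]))
# ['B-LOC', 'O', 'O', 'O']
```

Errors of the kind the authors correct for fall out naturally: any entity missing from the gazetteer, or any inflected surface form that no longer matches an entry exactly, is silently tagged `O`.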
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{tkachenko-etal-2013-named,
title = "Named Entity Recognition in {E}stonian",
author = "Tkachenko, Alexander and
Petmanson, Timo and
Laur, Sven",
booktitle = "Proceedings of the 4th Biennial International Workshop on {B}alto-{S}lavic Natural Language Processing",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2412",
pages = "78--83",
}
@article{Hedderich_Zhu_Klakow_2021,
title={Analysing the Noise Model Error for Realistic Noisy Label Data},
author={Hedderich, Michael A. and Zhu, Dawei and Klakow, Dietrich},
volume={35},
url={https://ojs.aaai.org/index.php/AAAI/article/view/16938},
number={9},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2021},
month={May},
pages={7675-7684},
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | 14,018 | [
[
-0.05108642578125,
-0.0567626953125,
0.0095977783203125,
0.0150146484375,
-0.01708984375,
-0.017486572265625,
-0.04071044921875,
-0.0567626953125,
0.04443359375,
0.0169830322265625,
-0.043853759765625,
-0.05792236328125,
-0.05303955078125,
0.016204833984375,... |
MCG-NJU/MultiSports | 2022-12-13T07:47:16.000Z | [
"task_categories:image-classification",
"task_categories:object-detection",
"task_categories:other",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"li... | MCG-NJU | This is a multi-person video dataset of spatio-temporally localized sports actions. Please refer to the github repo for evaluation. | @InProceedings{Li_2021_ICCV,
author = {Li, Yixuan and Chen, Lei and He, Runyu and Wang, Zhenzhi and Wu, Gangshan and Wang, Limin},
title = {MultiSports: A Multi-Person Video Dataset of Spatio-Temporally Localized Sports Actions},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {13536-13545}
} | 10 | 12 | 2022-12-06T08:32:53 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: MultiSports
size_categories: []
source_datasets:
- original
tags:
- video
- action detection
- spatial-temporal action localization
task_categories:
- image-classification
- object-detection
- other
task_ids:
- multi-class-image-classification
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_prompt: "This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License"
extra_gated_fields:
I agree to use this dataset for non-commerical use ONLY: checkbox
---
# Dataset Card for MultiSports
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://deeperaction.github.io/datasets/multisports.html
- **Repository:** https://github.com/MCG-NJU/MultiSports
- **Paper:** https://arxiv.org/abs/2105.07404
- **Leaderboard:** https://paperswithcode.com/dataset/multisports
- **Point of Contact:** mailto: runyu_he@smail.nju.edu.cn
### Dataset Summary
Spatio-temporal action localization is an important and challenging problem in video understanding. Previous action detection benchmarks are limited to small numbers of instances per trimmed video or to low-level atomic actions. MultiSports is a multi-person dataset of spatio-temporally localized sports actions. Please refer to [this paper](https://arxiv.org/abs/2105.07404) for more details. Please refer to [this repository](https://github.com/MCG-NJU/MultiSports) for evaluation.
### Supported Tasks and Leaderboards
- `Spatial-temporal action localization`
Details about evaluation can be found in the [GitHub Repository](https://github.com/MCG-NJU/MultiSports). Previous challenge results can be found in [this page](https://deeperaction.github.io/results/index.html) and [this CodaLab challenge](https://codalab.lisn.upsaclay.fr/competitions/3736).
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
Demo is available on [dataset homepage](https://deeperaction.github.io/datasets/multisports.html).
The dataset contains ```rawframes.tar``` and ```multisports_GT.pkl```. The GT pkl file is a dictionary with the following structure:
```
{
'labels': ['label1', 'label2', ...],
'train_videos': [['train_vid_1', 'train_vid_2', ...]],
'test_videos': [['test_vid_1', 'test_vid_2', ...]],
'nframes': {
'vid_1': nframes_1,
'vid_2': nframes_2,
...
},
'resolution': {
'vid_1': resolution_1,
'vid_2': resolution_2,
...
},
'gttubes': {
'vid_1': {
'label_1': [tube_1, tube_2, ...],
'label_2': [tube_1, tube_2, ...],
...
}
...
}
}
```
Here a ```tube``` is a ```numpy.ndarray``` with ```nframes``` rows and 5 columns ```<frame number> <x1> <y1> <x2> <y2>```.
### Data Fields
Raw frames are organized according to their sport category. The pickle file of GT contains the following fields.
- labels: list of labels
- train_videos: a list with one split element containing the list of training videos
- test_videos: a list with one split element containing the list of validation videos
- nframes: dictionary that gives the number of frames for each video
- resolution: dictionary that outputs a tuple ```(h,w)``` of the resolution for each video
- gttubes: dictionary that contains the GT tubes for each video. The GT tubes of a video form a dictionary that maps each label index to a list of tubes. A ```tube``` is a ```numpy.ndarray``` with ```nframes``` rows and 5 columns ```<frame number> <x1> <y1> <x2> <y2>```.
Please note that the label index starts from 0 and the frame index starts from 1. For the label index ```i```, the label name is ```labels[i]```.
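To make the pickle layout concrete, the following sketch walks a GT dictionary of the shape described above and reads the per-frame boxes out of each tube. The structure mirrors this card; the video id, label choice, and box values are invented for illustration.

```python
import numpy as np

# A toy GT dict with the same shape as multisports_GT.pkl (values are invented).
gt = {
    "labels": ["aerobic push up", "volleyball serve"],
    "nframes": {"v_demo": 3},
    "resolution": {"v_demo": (720, 1280)},  # (h, w)
    "gttubes": {
        "v_demo": {
            # label index 1 -> one tube; rows are <frame number> <x1> <y1> <x2> <y2>
            1: [np.array([[1, 10.0, 20.0, 50.0, 80.0],
                          [2, 12.0, 21.0, 52.0, 81.0],
                          [3, 14.0, 22.0, 54.0, 82.0]])],
        }
    },
}

for vid, per_label in gt["gttubes"].items():
    for label_idx, tubes in per_label.items():
        # Label indices start from 0; frame numbers in column 0 start from 1.
        name = gt["labels"][label_idx]
        for tube in tubes:
            frames = tube[:, 0].astype(int)  # first column: frame numbers
            boxes = tube[:, 1:5]             # remaining columns: x1 y1 x2 y2
            print(vid, name, frames.tolist(), boxes.shape)
```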
<details>
<summary>
Click here to see the full list of MultiSports class labels mapping:
</summary>
|id|Class|
|--|-----|
| 0 | aerobic push up |
| 1 | aerobic explosive push up |
| 2 | aerobic explosive support |
| 3 | aerobic leg circle |
| 4 | aerobic helicopter |
| 5 | aerobic support |
| 6 | aerobic v support |
| 7 | aerobic horizontal support |
| 8 | aerobic straight jump |
| 9 | aerobic illusion |
| 10 | aerobic bent leg(s) jump |
| 11 | aerobic pike jump |
| 12 | aerobic straddle jump |
| 13 | aerobic split jump |
| 14 | aerobic scissors leap |
| 15 | aerobic kick jump |
| 16 | aerobic off axis jump |
| 17 | aerobic butterfly jump |
| 18 | aerobic split |
| 19 | aerobic turn |
| 20 | aerobic balance turn |
| 21 | volleyball serve |
| 22 | volleyball block |
| 23 | volleyball first pass |
| 24 | volleyball defend |
| 25 | volleyball protect |
| 26 | volleyball second pass |
| 27 | volleyball adjust |
| 28 | volleyball save |
| 29 | volleyball second attack |
| 30 | volleyball spike |
| 31 | volleyball dink |
| 32 | volleyball no offensive attack |
| 33 | football shoot |
| 34 | football long pass |
| 35 | football short pass |
| 36 | football through pass |
| 37 | football cross |
| 38 | football dribble |
| 39 | football trap |
| 40 | football throw |
| 41 | football diving |
| 42 | football tackle |
| 43 | football steal |
| 44 | football clearance |
| 45 | football block |
| 46 | football press |
| 47 | football aerial duels |
| 48 | basketball pass |
| 49 | basketball drive |
| 50 | basketball dribble |
| 51 | basketball 3-point shot |
| 52 | basketball 2-point shot |
| 53 | basketball free throw |
| 54 | basketball block |
| 55 | basketball offensive rebound |
| 56 | basketball defensive rebound |
| 57 | basketball pass steal |
| 58 | basketball dribble steal |
| 59 | basketball interfere shot |
| 60 | basketball pick-and-roll defensive |
| 61 | basketball sag |
| 62 | basketball screen |
| 63 | basketball pass-inbound |
| 64 | basketball save |
| 65 | basketball jump ball |
</details>
### Data Splits
| |train |validation| test |
|-------------|------:|---------:|------:|
|# of tubes |28514 |10116 | - |
*GT for the test split is not provided. Please wait for the new competition to start; information will be updated on the [dataset homepage](https://deeperaction.github.io/datasets/multisports.html).*
## Dataset Creation
### Curation Rationale
Spatio-temporal action detection is an important and challenging problem in video understanding. Previous action detection benchmarks are limited: they either contain only a small number of instances per trimmed video or cover only low-level atomic actions.
### Source Data
#### Initial Data Collection and Normalization
> After choosing the four sports, we search for their competition videos by querying the name of sports like volleyball and the name of competition levels like Olympics and World Cup on YouTube, and then download videos from top search results. For each video, we only select high-resolution, e.g. 720P or 1080P, competition records and then manually cut them into clips of minutes, with less shot changes in each clip and to be more suitable for action detection.
#### Who are the source language producers?
The annotators of action categories and temporal boundaries are professional athletes of the corresponding sports. Please refer to [the paper](https://arxiv.org/abs/2105.07404) for more information.
### Annotations
#### Annotation process
1. (FIRST STAGE) A team of professional athletes generates records of the action label, the starting and ending frames, and the person box in the starting frame, which ensures the efficiency, accuracy and consistency of the annotation results.
2. At least one annotator with domain knowledge double-checks the annotations, corrects wrong or inaccurate ones, and adds missing annotations.
3. (SECOND STAGE) With the help of the FCOT tracking algorithm, a team of crowd-sourced annotators adjusts the bounding boxes of the tracking results at each frame for each record.
4. Each instance is double-checked by playing it at 5 fps, and inaccurate bounding boxes are manually corrected.
#### Who are the annotators?
For the first stage, the annotators are professional athletes. For the second stage, the annotators are crowd-sourced volunteers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Authors of [this paper](https://arxiv.org/abs/2105.07404)
- Yixuan Li
- Lei Chen
- Runyu He
- Zhenzhi Wang
- Gangshan Wu
- Limin Wang
### Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
### Citation Information
If you find this dataset useful, please cite as
```
@InProceedings{Li_2021_ICCV,
author = {Li, Yixuan and Chen, Lei and He, Runyu and Wang, Zhenzhi and Wu, Gangshan and Wang, Limin},
title = {MultiSports: A Multi-Person Video Dataset of Spatio-Temporally Localized Sports Actions},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {13536-13545}
}
```
### Contributions
Thanks to [@Judie1999](https://github.com/Judie1999) for adding this dataset.
xusenlin/duie | 2022-12-07T14:49:54.000Z | [
"region:us"
] | xusenlin | null | null | 0 | 12 | 2022-12-07T14:41:25 | ---
dataset_info:
features:
- name: text
dtype: string
- name: spo_list
list:
- name: predicate
dtype: string
- name: object_type
dtype: string
- name: subject_type
dtype: string
- name: object
dtype: string
- name: subject
dtype: string
splits:
- name: train
num_bytes: 51849478
num_examples: 172983
- name: validation
num_bytes: 6512116
num_examples: 21626
download_size: 32568292
dataset_size: 58361594
---
# DuIE Relation Extraction Dataset
Field descriptions:
+ `text`: the text
+ `spo_list`: the relation triples contained in the text
+ `subject`: the head entity (subject)
+ `subject_type`: the type of the head entity
+ `object`: the tail entity (object)
+ `object_type`: the type of the tail entity
+ `predicate`: the relation
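A minimal sketch of how one example is structured and how to read its relation triples. The record below is an invented illustration; only the field names come from the description above:

```python
# Invented example record following the field names above.
example = {
    "text": "《红楼梦》的作者是曹雪芹。",
    "spo_list": [
        {
            "predicate": "作者",
            "subject": "红楼梦",
            "subject_type": "图书作品",
            "object": "曹雪芹",
            "object_type": "人物",
        }
    ],
}

# Each triple reads: (head entity, relation, tail entity).
triples = [(spo["subject"], spo["predicate"], spo["object"])
           for spo in example["spo_list"]]
print(triples)  # [('红楼梦', '作者', '曹雪芹')]
```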
ksaml/Stanford_dogs | 2022-12-11T17:55:02.000Z | [
"license:other",
"region:us"
] | ksaml | null | null | 0 | 12 | 2022-12-11T15:31:02 | ---
license: other
---
## Context
The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. It was built using images and annotations from ImageNet for the task of fine-grained image categorization, a challenging problem as certain dog breeds have near-identical features or differ only in colour and age. <b>Only the images are used here, so this dataset does not contain any labels.</b>
## Content
Number of images: 20,580
## Acknowledgements
The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the dataset on the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [pdf] [poster] [BibTex]
Secondary:
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009. [pdf] [BibTex]
aashay96/indic-gpt | 2023-04-21T20:45:09.000Z | [
"region:us"
] | aashay96 | null | null | 1 | 12 | 2022-12-22T06:55:12 | Sampled Data from AIforBharat corpora
NeelNanda/wiki-10k | 2022-12-27T00:22:23.000Z | [
"region:us"
] | NeelNanda | null | null | 0 | 12 | 2022-12-27T00:22:16 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 222757944
num_examples: 10000
download_size: 129077566
dataset_size: 222757944
---
# Dataset Card for "wiki-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
irds/beir_fiqa_train | 2023-01-05T02:46:09.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/beir_fiqa",
"arxiv:2104.08663",
"region:us"
] | irds | null | null | 1 | 12 | 2023-01-05T02:46:03 | ---
pretty_name: '`beir/fiqa/train`'
viewer: false
source_datasets: ['irds/beir_fiqa']
task_categories:
- text-retrieval
---
# Dataset Card for `beir/fiqa/train`
The `beir/fiqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=5,500
- `qrels`: (relevance assessments); count=14,166
- For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fiqa_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fiqa_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
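The qrels records above can be organized into the nested mapping shape that many IR evaluation tools expect. The records below are invented examples using the card's field names:

```python
# Invented qrels records using the field names from the card.
records = [
    {"query_id": "q1", "doc_id": "d1", "relevance": 1, "iteration": "0"},
    {"query_id": "q1", "doc_id": "d2", "relevance": 0, "iteration": "0"},
    {"query_id": "q2", "doc_id": "d3", "relevance": 1, "iteration": "0"},
]

# Map each query_id to a {doc_id: relevance} dict.
qrels = {}
for rec in records:
    qrels.setdefault(rec["query_id"], {})[rec["doc_id"]] = rec["relevance"]

print(qrels)  # {'q1': {'d1': 1, 'd2': 0}, 'q2': {'d3': 1}}
```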
## Citation Information
```
@article{Maia2018Fiqa,
title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
journal={Companion Proceedings of the The Web Conference 2018},
year={2018}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
Cohere/wikipedia-22-12-es-embeddings | 2023-03-22T16:53:23.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:es",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 4 | 12 | 2023-01-14T12:01:41 | ---
annotations_creators:
- expert-generated
language:
- es
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (es)](https://es.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)
nlphuji/utk_faces | 2023-01-18T13:10:37.000Z | [
"arxiv:1702.08423",
"region:us"
] | nlphuji | null | null | 0 | 12 | 2023-01-18T12:50:13 | # UTK Faces
Original paper: [Age Progression/Regression by Conditional Adversarial Autoencoder](https://arxiv.org/abs/1702.08423)
Homepage: https://susanqq.github.io/UTKFace/
Bibtex:
```
@inproceedings{zhifei2017cvpr,
title={Age Progression/Regression by Conditional Adversarial Autoencoder},
  author={Zhang, Zhifei and Song, Yang and Qi, Hairong},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2017},
organization={IEEE}
}
```
qwertyforce/scenery_watermarks | 2023-01-31T16:58:17.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:cc-by-nc-4.0",
"watermark",
"doi:10.57967/hf/0313",
"region:us"
] | qwertyforce | null | null | 3 | 12 | 2023-01-29T15:52:12 | ---
license: cc-by-nc-4.0
task_categories:
- image-classification
tags:
- watermark
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': no_watermark
'1': watermark
splits:
- name: train
num_bytes: 1094841327.222
num_examples: 22762
download_size: 1057455120
dataset_size: 1094841327.222
pretty_name: Scenery Watermarks
size_categories:
- 10K<n<100K
---
Dataset for watermark classification (no_watermark/watermark): ~22k images, 512x512, manually annotated.
Additional info: https://github.com/qwertyforce/scenery_watermarks
LangChainHub-Prompts/LLM_Bash | 2023-02-01T13:43:39.000Z | [
"langchain",
"prompt",
"region:us"
] | LangChainHub-Prompts | null | null | 3 | 12 | 2023-02-01T13:43:38 |
---
tags:
- langchain
- prompt
---
# Description of LLM Bash
Prompt designed to convert natural language into a bash command.
## Inputs
This is a description of the inputs that the prompt expects.
question: User question to be answered by writing a bash command.
## Usage
Below is a code snippet for how to use the prompt.
```
from langchain.prompts import load_prompt
from langchain.chains import LLMBashChain
llm = ...
prompt = load_prompt('lc://prompts/llm_bash/<file-name>')
chain = LLMBashChain(llm=llm, prompt=prompt)
```
jonathan-roberts1/SAT-4 | 2023-04-03T16:17:18.000Z | [
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | 0 | 12 | 2023-02-03T18:12:58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': barren land
'1': grassland
'2': other
'3': trees
splits:
- name: train
num_bytes: 150589308
num_examples: 100000
download_size: 177776551
dataset_size: 150589308
license: other
---
# Dataset Card for "SAT-4"
## Dataset Description
- **Paper** [Deepsat: a learning framework for satellite imagery](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
- **Split** Test
### Split Information
This HuggingFace dataset repository contains just the 'Test' split.
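The integer labels can be decoded with the id-to-name mapping declared in the `dataset_info` block above; a small sketch:

```python
# Id-to-name mapping mirroring the class_label names in dataset_info.
SAT4_CLASSES = {0: "barren land", 1: "grassland", 2: "other", 3: "trees"}

def decode(label_id: int) -> str:
    """Return the class name for an integer label."""
    return SAT4_CLASSES[label_id]

print(decode(3))  # trees
```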
### Licensing Information
Public Domain
## Citation Information
[https://dl.acm.org/doi/pdf/10.1145/2820783.2820816](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
```
@inproceedings{basu2015deepsat,
title = {Deepsat: a learning framework for satellite imagery},
author = {Basu, Saikat and Ganguly, Sangram and Mukhopadhyay, Supratik and DiBiano, Robert and Karki, Manohar and Nemani, Ramakrishna},
year = 2015,
booktitle = {Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems},
pages = {1--10}
}
```
ml4pubmed/pubmed-classification-20k | 2023-02-17T06:31:13.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"pubmed",
"region:us"
] | ml4pubmed | null | null | 0 | 12 | 2023-02-06T16:16:31 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- pubmed
size_categories:
- 10K<n<100K
---
# ml4pubmed/pubmed-classification-20k
- 20k subset of pubmed text classification from course
civility-lab/incivility-arizona-daily-star-comments | 2023-02-15T23:18:17.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"social media",
"incivilit... | civility-lab | null | null | 0 | 12 | 2023-02-15T18:25:12 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Incivility in Arizona Daily Star Comments
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- social media
- incivility
- aspersion
- hyperbole
- lying
- namecalling
- noncooperation
- pejorative
- sarcasm
- vulgarity
task_categories:
- text-classification
task_ids:
- multi-label-classification
dataset_info:
features:
- name: text
dtype: string
- name: aspersion
dtype: int64
- name: hyperbole
dtype: int64
- name: lying
dtype: int64
- name: namecalling
dtype: int64
- name: noncooperation
dtype: int64
- name: offtopic
dtype: int64
- name: other_incivility
dtype: int64
- name: pejorative
dtype: int64
- name: sarcasm
dtype: int64
- name: vulgarity
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1568771
num_examples: 3910
- name: validation
num_bytes: 398667
num_examples: 976
- name: test
num_bytes: 486262
num_examples: 1228
download_size: 1400753
dataset_size: 2453700
---
# Dataset Card for incivility-arizona-daily-star-comments
This is a collection of more than 6000 comments on Arizona Daily Star news articles from 2011 that have been manually annotated for various forms of incivility including aspersion, namecalling, sarcasm, and vulgarity.
## Dataset Structure
Each instance in the dataset corresponds to a single comment from a single commenter.
An instance's `text` field contains the text of the comment with any quotes of other commenters removed.
The remaining fields in each instance provide binary labels for each type of incivility annotated:
`aspersion`, `hyperbole`, `lying`, `namecalling`, `noncooperation`, `offtopic`, `pejorative`, `sarcasm`, `vulgarity`, and `other_incivility`.
The dataset provides three standard splits: `train`, `validation`, and `test`.
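For multi-label classification, the per-type binary fields above can be collected into a single target vector. The instance below is invented; only the field names come from the card:

```python
# Label fields in the order they appear in the card.
LABELS = ["aspersion", "hyperbole", "lying", "namecalling", "noncooperation",
          "offtopic", "other_incivility", "pejorative", "sarcasm", "vulgarity"]

# Invented instance; only the field names follow the dataset schema.
instance = {"text": "That has to be the dumbest idea I have ever heard.",
            "aspersion": 1, "hyperbole": 1, "lying": 0, "namecalling": 0,
            "noncooperation": 0, "offtopic": 0, "other_incivility": 0,
            "pejorative": 0, "sarcasm": 0, "vulgarity": 0}

# Build the multi-label target vector in a fixed label order.
target = [instance[label] for label in LABELS]
print(target)  # [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```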
## Dataset Creation
The original annotation effort is described in:
- Kevin Coe, Kate Kenski, Stephen A. Rains.
[Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments](https://doi.org/10.1111/jcom.12104).
Journal of Communication, Volume 64, Issue 4, August 2014, Pages 658–679.
That dataset was converted to a computer-friendly form as described in section 4.2.1 of:
- Farig Sadeque.
[User behavior in social media: engagement, incivility, and depression](https://repository.arizona.edu/handle/10150/633192).
PhD thesis. The University of Arizona. 2019.
The current upload is a 2023 conversion of that form to a huggingface Dataset.
## Considerations for Using the Data
The data is intended for the study of incivility.
It should not be used to train models to generate incivility.
The human coders and their trainers were mostly [Western, educated, industrialized, rich and democratic (WEIRD)](https://www.nature.com/articles/466029a), which may have shaped how they evaluated incivility.
## Citation
```bibtex
@article{10.1111/jcom.12104,
author = {Coe, Kevin and Kenski, Kate and Rains, Stephen A.},
title = {Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments},
journal = {Journal of Communication},
volume = {64},
number = {4},
pages = {658-679},
year = {2014},
month = {06},
issn = {0021-9916},
doi = {10.1111/jcom.12104},
url = {https://doi.org/10.1111/jcom.12104},
}
```
jonathan-roberts1/RSD46-WHU | 2023-03-31T14:43:55.000Z | [
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | 0 | 12 | 2023-02-17T15:41:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': artificial dense forest land
'3': artificial sparse forest land
'4': bare land
'5': basketball court
'6': blue structured factory building
'7': building
'8': construction site
'9': cross river bridge
'10': crossroads
'11': dense tall building
'12': dock
'13': fish pond
'14': footbridge
'15': graff
'16': grassland
'17': irregular farmland
'18': low scattered building
'19': medium density scattered building
'20': medium density structured building
'21': natural dense forest land
'22': natural sparse forest land
'23': oil tank
'24': overpass
'25': parking lot
'26': plastic greenhouse
'27': playground
'28': railway
'29': red structured factory building
'30': refinery
'31': regular farmland
'32': scattered blue roof factory building
'33': scattered red roof factory building
'34': sewage plant-type-one
'35': sewage plant-type-two
'36': ship
'37': solar power station
'38': sparse residential area
'39': square
'40': steelworks
'41': storage land
'42': tennis court
'43': thermal power plant
'44': vegetable plot
'45': water
splits:
- name: train
num_bytes: 1650045051.96
num_examples: 17516
download_size: 2184490825
dataset_size: 1650045051.96
license: other
---
# Dataset Card for "RSD46-WHU"
## Dataset Description
- **Paper** [Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
- **Paper** [High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
- **Split** Validation
## Split Information
This HuggingFace dataset repository contains just the Validation split.
### Licensing Information
[Free for education, research and commercial use.](https://github.com/RSIA-LIESMARS-WHU/RSD46-WHU)
## Citation Information
[Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks](https://ieeexplore.ieee.org/iel7/36/7880748/07827088.pdf)
[High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective](https://www.mdpi.com/209338)
```
@article{long2017accurate,
title = {Accurate object localization in remote sensing images based on convolutional neural networks},
author = {Long, Yang and Gong, Yiping and Xiao, Zhifeng and Liu, Qing},
year = 2017,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 55,
number = 5,
pages = {2486--2498}
}
@article{xiao2017high,
title = {High-resolution remote sensing image retrieval based on CNNs from a dimensional perspective},
author = {Xiao, Zhifeng and Long, Yang and Li, Deren and Wei, Chunshan and Tang, Gefu and Liu, Junyi},
year = 2017,
journal = {Remote Sensing},
publisher = {MDPI},
volume = 9,
number = 7,
pages = 725
}
```
lansinuote/diffusion.1.unconditional | 2023-02-23T10:50:05.000Z | [
"region:us"
] | lansinuote | null | null | 0 | 12 | 2023-02-23T07:19:40 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 346842007.375
num_examples: 8189
download_size: 0
dataset_size: 346842007.375
---
# Dataset Card for "diffusion.1.unconditional"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dregandor/Edgar-Cayce_Readings | 2023-03-14T14:59:07.000Z | [
"region:us"
] | Dregandor | null | null | 0 | 12 | 2023-03-08T12:57:37 | Entry not found
Amitesh007/twitter_parsed_dataset | 2023-03-11T12:58:24.000Z | [
"region:us"
] | Amitesh007 | null | null | 0 | 12 | 2023-03-11T12:57:47 | Entry not found
cahya/instructions-pt | 2023-03-15T17:52:35.000Z | [
"region:us"
] | cahya | null | null | 0 | 12 | 2023-03-15T17:49:45 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26897428.926476642
num_examples: 57692
- name: test
num_bytes: 708195.1490556407
num_examples: 1519
- name: validation
num_bytes: 707728.9244677172
num_examples: 1518
download_size: 16526868
dataset_size: 28313353.0
---
# Dataset Card for "instructions-pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shunk031/CAMERA | 2023-03-17T14:49:35.000Z | [
"task_categories:text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ja",
"license:cc-by-nc-sa-4.0",
"region:us"
] | shunk031 | CAMERA (CyberAgent Multimodal Evaluation for Ad Text GeneRAtion) is the Japanese ad text generation dataset. | @inproceedings{mita-et-al:nlp2023,
author = "三田 雅人 and 村上 聡一朗 and 張 培楠",
title = "広告文生成タスクの規定とベンチマーク構築",
booktitle = "言語処理学会 第29回年次大会",
year = 2023,
} | 4 | 12 | 2023-03-17T14:18:03 | ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: CAMERA
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for CAMERA 📷
[](https://github.com/shunk031/huggingface-datasets_CAMERA/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/CyberAgentAILab/camera
- **Repository:** https://github.com/shunk031/huggingface-datasets_CAMERA
### Dataset Summary
From [the official README.md](https://github.com/CyberAgentAILab/camera#camera-dataset):
> CAMERA (CyberAgent Multimodal Evaluation for Ad Text GeneRAtion) is the Japanese ad text generation dataset. We hope that our dataset will be useful in research for realizing more advanced ad text generation models.
### Supported Tasks and Leaderboards
[More Information Needed]
#### Supported Tasks
[More Information Needed]
#### Leaderboard
[More Information Needed]
### Languages
The language data in CAMERA is in Japanese ([BCP-47 ja-JP](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to specify its name:
#### without-lp-images
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/CAMERA", name="without-lp-images")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation'],
# num_rows: 12395
# })
# validation: Dataset({
# features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation'],
# num_rows: 3098
# })
# test: Dataset({
# features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation'],
# num_rows: 872
# })
# })
```
An example of the CAMERA (w/o LP images) dataset looks as follows:
```json
{
"asset_id": 13861,
"kw": "仙台 ホテル",
"lp_meta_description": "仙台のホテルや旅館をお探しなら楽天トラベルへ!楽天ポイントが使えて、貯まって、とってもお得な宿泊予約サイトです。さらに割引クーポンも使える!国内ツアー・航空券・レンタカー・バス予約も!",
"title_org": "仙台市のホテル",
"title_ne1": "",
"title_ne2": "",
"title_ne3": "",
"domain": "",
"parsed_full_text_annotation": {
"text": [
"trivago",
"Oops...AccessDenied 可",
"Youarenotallowedtoviewthispage!Ifyouthinkthisisanerror,pleasecontacttrivago.",
"Errorcode:0.3c99e86e.1672026945.25ba640YourIP:240d:1a:4d8:2800:b9b0:ea86:2087:d141AffectedURL:https://www.trivago.jp/ja/odr/%E8%BB%92", "%E4%BB%99%E5%8F%B0-%E5%9B%BD%E5%86%85?search=20072325",
"Backtotrivago"
],
"xmax": [
653,
838,
765,
773,
815,
649
],
"xmin": [
547,
357,
433,
420,
378,
550
],
"ymax": [
47,
390,
475,
558,
598,
663
],
"ymin": [
18,
198,
439,
504,
566,
651
]
}
}
```
#### with-lp-images
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/CAMERA", name="with-lp-images")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation', 'lp_image'],
# num_rows: 12395
# })
# validation: Dataset({
# features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation', 'lp_image'],
# num_rows: 3098
# })
# test: Dataset({
# features: ['asset_id', 'kw', 'lp_meta_description', 'title_org', 'title_ne1', 'title_ne2', 'title_ne3', 'domain', 'parsed_full_text_annotation', 'lp_image'],
# num_rows: 872
# })
# })
```
An example of the CAMERA (w/ LP images) dataset looks as follows:
```json
{
"asset_id": 13861,
"kw": "仙台 ホテル",
"lp_meta_description": "仙台のホテルや旅館をお探しなら楽天トラベルへ!楽天ポイントが使えて、貯まって、とってもお得な宿泊予約サイトです。さらに割引クーポンも使える!国内ツアー・航空券・レンタカー・バス予約も!",
"title_org": "仙台市のホテル",
"title_ne1": "",
"title_ne2": "",
"title_ne3": "",
"domain": "",
"parsed_full_text_annotation": {
"text": [
"trivago",
"Oops...AccessDenied 可",
"Youarenotallowedtoviewthispage!Ifyouthinkthisisanerror,pleasecontacttrivago.",
"Errorcode:0.3c99e86e.1672026945.25ba640YourIP:240d:1a:4d8:2800:b9b0:ea86:2087:d141AffectedURL:https://www.trivago.jp/ja/odr/%E8%BB%92", "%E4%BB%99%E5%8F%B0-%E5%9B%BD%E5%86%85?search=20072325",
"Backtotrivago"
],
"xmax": [
653,
838,
765,
773,
815,
649
],
"xmin": [
547,
357,
433,
420,
378,
550
],
"ymax": [
47,
390,
475,
558,
598,
663
],
"ymin": [
18,
198,
439,
504,
566,
651
]
},
"lp_image": <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1200x680 at 0x7F8513446B20>
}
```
### Data Fields
#### without-lp-images
- `asset_id`: ids (associated with LP images)
- `kw`: search keyword
- `lp_meta_description`: meta description extracted from LP (i.e., LP Text)
- `title_org`: ad text (original gold reference)
- `title_ne{1-3}`: ad text (additional gold references for multi-reference evaluation)
- `domain`: industry domain (HR, EC, Fin, Edu) for industry-wise evaluation
- `parsed_full_text_annotation`: OCR results for LP images
#### with-lp-images
- `asset_id`: ids (associated with LP images)
- `kw`: search keyword
- `lp_meta_description`: meta description extracted from LP (i.e., LP Text)
- `title_org`: ad text (original gold reference)
- `title_ne{1-3}`: ad text (additional gold references for multi-reference evaluation)
- `domain`: industry domain (HR, EC, Fin, Edu) for industry-wise evaluation
- `parsed_full_text_annotation`: OCR results for LP images
- `lp_image`: Landing page (LP) image
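Since `parsed_full_text_annotation` stores the OCR results as parallel lists, pairing each text span with its bounding box takes a little glue code. A minimal sketch (the abbreviated record and the helper below are illustrative, not part of the loader):

```python
# Pair each OCR text span with its (xmin, ymin, xmax, ymax) box.
# The record is abbreviated from the example instance shown above.
record = {
    "parsed_full_text_annotation": {
        "text": ["trivago", "Backtotrivago"],
        "xmin": [547, 550],
        "xmax": [653, 649],
        "ymin": [18, 651],
        "ymax": [47, 663],
    }
}

def ocr_boxes(ann):
    """Return a list of (text, (xmin, ymin, xmax, ymax)) tuples."""
    return [
        (text, (x0, y0, x1, y1))
        for text, x0, y0, x1, y1 in zip(
            ann["text"], ann["xmin"], ann["ymin"], ann["xmax"], ann["ymax"]
        )
    ]

boxes = ocr_boxes(record["parsed_full_text_annotation"])
print(boxes[0])  # ('trivago', (547, 18, 653, 47))
```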
### Data Splits
From [the official paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/H11-4.pdf):
| Split | # of data | # of reference ad text | industry domain label |
|-------|----------:|-----------------------:|:---------------------:|
| Train | 12,395 | 1 | - |
| Valid | 3,098 | 1 | - |
| Test | 869 | 4 | ✔ |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
[More Information Needed]
### Licensing Information
> This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
### Citation Information
```bibtex
@inproceedings{mita-et-al:nlp2023,
author = "三田 雅人 and 村上 聡一朗 and 張 培楠",
title = "広告文生成タスクの規定とベンチマーク構築",
booktitle = "言語処理学会 第 29 回年次大会",
year = 2023,
}
```
### Contributions
Thanks to [Masato Mita](https://github.com/chemicaltree), [Soichiro Murakami](https://github.com/ichiroex), and [Peinan Zhang](https://github.com/peinan) for creating this dataset.
| 9,689 | [
[
-0.043487548828125,
-0.0280914306640625,
0.0185699462890625,
0.006984710693359375,
-0.03717041015625,
-0.0134429931640625,
-0.0108642578125,
-0.035003662109375,
0.027374267578125,
0.0267333984375,
-0.048828125,
-0.07745361328125,
-0.03607177734375,
0.0258026... |
SALT-NLP/positive_reframing | 2023-03-23T01:36:18.000Z | [
"region:us"
]

# Positive Psychology Frames
_Inducing Positive Perspectives with Text Reframing_
[[Read the Paper]](https://faculty.cc.gatech.edu/~dyang888/docs/acl22_reframing.pdf) | [[Download the Data]](https://www.dropbox.com/sh/pnoczmv0uyn51e6/AAAGek6yX12Yc4PA2RwtZeZKa?dl=0) | [[Demo]](https://huggingface.co/spaces/Ella2323/Positive-Reframing)
<img src="frontpage.png" alt="frontpage" width="650"/>
## *Why Positive Frames?*
This work was inspired by the need to escape the negative patterns of thinking that began to overwhelm the authors during the COVID-19 pandemic. We realized that what we needed was not some naive belief that everything would be okay if we ignored our problems. Instead, we needed _reframing_, or a shift in focus, with less weight on the negative things we can't control, and more weight on the positive things about ourselves and our situation which we can control.
_Positive reframing_ induces a complementary positive viewpoint (e.g. glass-half-full), which nevertheless supports the underlying content of the original sentence (see diagram above). The reframe implicates rather than contradicts the source, and the transformation is motivated by theoretically justified strategies from positive psychology (see _What's 'in the box?'_).
Our work shows how NLP can help lead the way by automatically reframing overly negative text using strategies from positive psychology.
## *What's 'in the box?'*
The `Positive Psychology Frames` dataset contains **8,349** reframed sentence pairs, where the original sentence is drawn from a negative tweet (\#stressed), and a reframed copy is provided by a crowdworker who was trained in the methods of positive psychology. Our positive psychology frames taxonomy is defined below (with the distribution of labels shown on the left).
*  **Growth Mindset:** Viewing a challenging event as an opportunity for the author specifically to grow or improve themselves.
*  **Impermanence:** Saying bad things don't last forever, will get better soon, and/or that others have experienced similar struggles.
*  **Neutralizing:** Replacing a negative word with a neutral word.
*  **Optimism:** Focusing on things about the situation itself, in that moment, that are good (not just forecasting a better future).
*  **Self-Affirmation:** Talking about what strengths the author already has, or the values they admire, like love, courage, perseverance, etc.
*  **Thankfulness:** Expressing thankfulness or gratitude with key words like appreciate, glad that, thankful for, good thing, etc.
## *What can I do with this data?*
State-of-the-art neural models can learn from our data how to (1) shift a negatively distorted text into a more positive perspective using a combination of strategies from positive psychology; and (2) recognize or classify the psychological strategies that are used to reframe a given source.
As our paper baselines show, neural models still have a long way to go before they can reliably generate positive perspectives. We see particular errors from _insubstantial changes, contradictions to the premise, self-contradictions, and hallucinations_. Overall, this suggests that our dataset can serve as a useful benchmark for building natural language generation systems with positive perspectives. For more information, please [read the paper](https://faculty.cc.gatech.edu/~dyang888/docs/acl22_reframing.pdf).
## *How do I run the baseline models?*
**1. Set Up Environment**
* CUDA, cudnn
* anaconda
```
conda create --name reframe python=3.7
conda activate reframe
pip install -r requirements.txt
```
**2. Dataset Preparation**
The datasets are under the data/ folder.
- Random, SBERT, T5, BART: `wholetrain.csv`, `wholetest.csv`
  The datasets contain the fields: original_text, reframed_text, strategy, original_with_label
- GPT, GPT2: `wholetrain_gpt.txt`, `wholetest.csv`
  The train data contains a `<startoftext>` token and an `<endoftext>` token for each sentence pair, and a `reframed: ` token indicates the position where the reframed sentence begins. Each sentence pair starts on a new line.
- Seq2SeqLSTM: `for_train.txt`, `for_test.txt`
  The datasets contain paired texts separated by a tab in each line: `original_text \t reframed_text`
- CopyNMT: `train-original.txt`, `train-reframed.txt`, `validation-original.txt`, `validation-reframed.txt`, `test-original.txt`, `test-reframed.txt`
  Each file contains the original/reframed sentences separated by `\n`.
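As a rough illustration of the plain-text layouts described above, the following sketch converts one sentence pair into the GPT-style line and the Seq2Seq-LSTM tab-separated line. The exact token spelling in `wholetrain_gpt.txt` is an assumption, and the sentence pair is invented:

```python
# Hypothetical sentence pair used only for illustration.
original = "so stressed about all these deadlines"
reframed = "These deadlines are a chance to show what I can get done."

# GPT/GPT2 layout: <startoftext> ... reframed: ... <endoftext>, one pair per line.
gpt_line = f"<startoftext>{original} reframed: {reframed}<endoftext>"

# Seq2Seq-LSTM layout: original and reframed text separated by a tab.
lstm_line = f"{original}\t{reframed}"

print(gpt_line)
print(lstm_line.split("\t"))
```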
**3. Run the Baseline Models**
Random, SBERT, T5, BART, GPT, GPT2:
```python3 run.py --arguments```
Arguments:
- `--model`: choose from random, sbert, t5, BART
- `--setting`: default is unconstrained; the controlled/predict setting is supported for t5 and BART
- `--train`: path to train data file
- `--dev`: path to dev data file
- `--test`: path to test data file
CopyNMT: Execute copynmt_train.sh and copynmt_eval.sh
```
bash copynmt_train.sh
bash copynmt_eval.sh
```
Seq2Seq-lstm: git clone https://github.com/bond005/seq2seq.git, replace the data files in the data/ folder and follow the instructions to train the seq2seq-lstm model
## *How do I cite this work?*
**Citation:**
> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.
**BibTeX:**
```tex
@inproceedings{ziems-etal-2022-positive-frames,
title = "Inducing Positive Perspectives with Text Reframing",
author = "Ziems, Caleb and
Li, Minzhi and
Zhang, Anthony and
Yang, Diyi",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
address = "Online and Dublin, Ireland",
publisher = "Association for Computational Linguistics"
}
```
Jsevisal/balanced_augmented_dataset_2 | 2023-09-14T11:32:21.000Z | [
"region:us"
]

---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: gestures
sequence: string
- name: label
sequence:
class_label:
names:
'0': B-BUT
'1': I-BUT
'2': B-CALM_DOWN
'3': I-CALM_DOWN
'4': B-COME_ON
'5': I-COME_ON
'6': B-EMPHATIC
'7': I-EMPHATIC
'8': B-ENTHUSIASTIC
'9': I-ENTHUSIASTIC
'10': B-EXPLAIN
'11': I-EXPLAIN
'12': B-FRONT
'13': I-FRONT
'14': B-GREET
'15': I-GREET
'16': B-ITERATE
'17': I-ITERATE
'18': B-NEUTRAL
'19': I-NEUTRAL
'20': B-NO
'21': I-NO
'22': B-NO_GESTURE
'23': I-NO_GESTURE
'24': B-OTHER_PEER
'25': I-OTHER_PEER
'26': B-PLEASE
'27': I-PLEASE
'28': B-QUESTION
'29': I-QUESTION
'30': B-SELF
'31': I-SELF
'32': B-SORRY
'33': I-SORRY
'34': B-THANKS
'35': I-THANKS
'36': B-THINKING
'37': I-THINKING
'38': B-THIRD_PERSON
'39': I-THIRD_PERSON
'40': B-YES
'41': I-YES
splits:
- name: train
num_bytes: 272426.0
num_examples: 831
- name: test
num_bytes: 55785.0
num_examples: 126
download_size: 58436
dataset_size: 328211.0
---
# Dataset Card for "balanced_augmented_dataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
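The 42 class ids above follow a BIO scheme over 21 gesture types, so the id-to-label mapping can be reconstructed directly: even ids are `B-` tags and odd ids the matching `I-` tags. (With the `datasets` library, `dataset.features["label"].feature.int2str` should give the same mapping.) A sketch:

```python
# Rebuild the id -> label mapping from the class_label list above.
GESTURES = [
    "BUT", "CALM_DOWN", "COME_ON", "EMPHATIC", "ENTHUSIASTIC", "EXPLAIN",
    "FRONT", "GREET", "ITERATE", "NEUTRAL", "NO", "NO_GESTURE", "OTHER_PEER",
    "PLEASE", "QUESTION", "SELF", "SORRY", "THANKS", "THINKING",
    "THIRD_PERSON", "YES",
]

id2label = {}
for i, gesture in enumerate(GESTURES):
    id2label[2 * i] = f"B-{gesture}"      # even ids: beginning of a span
    id2label[2 * i + 1] = f"I-{gesture}"  # odd ids: inside of a span

print(len(id2label), id2label[0], id2label[41])  # 42 B-BUT I-YES
```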
liuyanchen1015/MULTI_VALUE_cola_comparative_than | 2023-04-03T19:29:56.000Z | [
"region:us"
]

---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 156
num_examples: 2
- name: test
num_bytes: 71
num_examples: 1
- name: train
num_bytes: 2115
num_examples: 27
download_size: 6857
dataset_size: 2342
---
# Dataset Card for "MULTI_VALUE_cola_comparative_than"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/MULTI_VALUE_cola_present_perfect_ever | 2023-04-03T19:30:05.000Z | [
"region:us"
]

---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1672
num_examples: 16
- name: test
num_bytes: 2612
num_examples: 30
- name: train
num_bytes: 19093
num_examples: 253
download_size: 16707
dataset_size: 23377
---
# Dataset Card for "MULTI_VALUE_cola_present_perfect_ever"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/MULTI_VALUE_cola_drop_aux_have | 2023-04-03T19:30:05.000Z | [
"region:us"
]

---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 3710
num_examples: 39
- name: test
num_bytes: 4385
num_examples: 54
- name: train
num_bytes: 37722
num_examples: 490
download_size: 26898
dataset_size: 45817
---
# Dataset Card for "MULTI_VALUE_cola_drop_aux_have"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mstz/abalone | 2023-04-15T11:04:08.000Z | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"abalone",
"tabular_regression",
"regression",
"binary_classification",
"region:us"
]

@misc{misc_abalone_1,
title = {{Abalone}},
year = {1995},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C55C7W}}
}

---
language:
- en
tags:
- abalone
- tabular_regression
- regression
- binary_classification
pretty_name: Abalone
size_categories:
- 1K<n<10K
task_categories:
- tabular-regression
- tabular-classification
configs:
- abalone
- binary
license: cc
---
# Abalone
The [Abalone dataset](https://archive-beta.ics.uci.edu/dataset/1/abalone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict the age of the given abalone.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------|
| abalone | Regression | Predict the age of the abalone. |
| binary | Binary classification | Does the abalone have more than 9 rings?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/abalone")["train"]
```
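The binary configuration asks whether an abalone has more than 9 rings. Deriving that label from the regression target is a one-line threshold; treat the sketch below as an illustration of the task definition, not the loader's exact code:

```python
# Binary task from the configurations table: more than 9 rings?
def binary_label(number_of_rings: int) -> int:
    return int(number_of_rings > 9)

print([binary_label(r) for r in (5, 9, 10, 15)])  # [0, 0, 1, 1]
```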
# Features
Target feature in bold.
|**Feature** |**Type** |
|-----------------------|---------------|
| sex | `[string]` |
| length | `[float64]` |
| diameter | `[float64]` |
| height | `[float64]` |
| whole_weight | `[float64]` |
| shucked_weight | `[float64]` |
| viscera_weight | `[float64]` |
| shell_weight | `[float64]` |
| **number_of_rings**   | `[int8]`      |
mstz/balance_scale | 2023-04-15T11:14:55.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"balance_scale",
"tabular_classification",
"multiclass_classification",
"binary_classification",
"UCI",
"region:us"
]

@misc{misc_balance_scale_12,
title = {{Balance Scale}},
year = {1994},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5488X}}
}

---
language:
- en
tags:
- balance_scale
- tabular_classification
- multiclass_classification
- binary_classification
- UCI
pretty_name: Balance
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- balance
- is_balanced
---
# Balance scale
The [Balance scale dataset](https://archive-beta.ics.uci.edu/dataset/12/balance+scale) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Two weights are put on the arms of a scale. Where does the scale tilt?
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| balance | Multiclass classification | Where does the scale tilt? |
| is_balanced | Binary classification | Does the scale tilt? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/balance_scale", "balance")["train"]
```
# Features
Target feature changes according to the selected configuration and is always in last position in the dataset.
mstz/hill | 2023-04-16T17:31:39.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"hill",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
]

@misc{misc_hill-valley_166,
author = {Graham,Lee & Oppacher,Franz},
title = {{Hill-Valley}},
year = {2008},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5JC8P}}
}

---
language:
- en
tags:
- hill
- tabular_classification
- binary_classification
- UCI
pretty_name: Hill
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- hill
license: cc
---
# Hill
The [Hill dataset](https://archive.ics.uci.edu/ml/datasets/Hill) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Do the plotted coordinates draw a hill?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| hill | Binary classification | Do the plotted coordinates draw a hill? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/hill")["train"]
```
# Features
Features are the coordinates of the drawn point. Feature `X{i}` is the `y` coordinate of the point `(i, X{i})`.
medalpaca/medical_meadow_usmle_self_assessment | 2023-04-07T02:23:52.000Z | [
"region:us"
]

Entry not found
andreabac3/MedQuaAD-Italian-Fauno-Baize | 2023-04-08T15:44:46.000Z | [
"license:gpl-3.0",
"arxiv:2304.01196",
"region:us"
]

---
license: gpl-3.0
---
# MedQuaAD-Italian-Fauno-Baize
This dataset is an Italian translation of the MedQuaAD dataset presented by Baize's authors.
## Dataset Description
- **Paper:** https://arxiv.org/abs/2304.01196
### Languages
Italian
## Dataset Structure
### Data Instances
- Sentences: 46,867
- Average number of turns: 3.8
- Average response length of each turn: 35.8
### Data Fields
topic, input
### Data Splits
Train
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://github.com/project-baize/baize-chatbot
## Additional Information
### Dataset Curators
[Andrea Bacciu](https://andreabac3.github.io/), Dr. [Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and Professor [Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
### Licensing Information
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by Baize's creators.
### Citation Information
```bibtex
@misc{fauno,
author = {Andrea Bacciu, Giovanni Trappolini, Andrea Santilli, Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
```bibtex
@article{xu2023baize,
title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
journal={arXiv preprint arXiv:2304.01196},
year={2023}
}
```
Yairama/alpaca_miner_dataset | 2023-04-11T07:05:13.000Z | [
"license:gpl-3.0",
"region:us"
]

---
license: gpl-3.0
---
# A dataset of mining engineering generated with ChatGPT & BinGPT
I take as a base the [Colorado School of Mines mining engineering syllabus](https://catalog.mines.edu/undergraduate/programs/miningengineering/miningengineering.pdf).
tasksource/ScienceQA_text_only | 2023-07-13T11:50:29.000Z | [
"language:en",
"region:us"
]

---
language: en
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int8
- name: hint
dtype: string
- name: task
dtype: string
- name: grade
dtype: string
- name: subject
dtype: string
- name: topic
dtype: string
- name: category
dtype: string
- name: skill
dtype: string
- name: lecture
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 8105771.787521609
num_examples: 6508
- name: validation
num_bytes: 2638142.7097382694
num_examples: 2144
- name: test
num_bytes: 2757852.295213393
num_examples: 2224
download_size: 2925662
dataset_size: 13501766.792473271
---
# Dataset Card for "scienceQA_text_only"
ScienceQA text-only examples (examples where no image was present in the original dataset, which means they should be solvable by text-only models).
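A minimal sketch of turning one record into a multiple-choice prompt. The example record below only mirrors the `question`/`choices`/`answer` schema; it is not an actual row from the dataset:

```python
# Hypothetical record following the question/choices/answer schema above.
example = {
    "question": "Which property matches this object: a rubber band?",
    "choices": ["fragile", "stretchy", "opaque"],
    "answer": 1,  # int8 index into `choices`
}

def to_prompt(ex):
    """Render the question and lettered choices as one prompt string."""
    lines = [ex["question"]]
    lines += [f"({chr(ord('A') + i)}) {c}" for i, c in enumerate(ex["choices"])]
    return "\n".join(lines)

prompt = to_prompt(example)
gold = example["choices"][example["answer"]]
print(prompt)
print("gold:", gold)  # gold: stretchy
```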
```
@article{10.1007/s00799-022-00329-y,
author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak},
title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles},
year = {2022},
journal = {Int. J. Digit. Libr.},
month = {sep}
}
```
cvssp/WavCaps | 2023-07-06T13:28:10.000Z | [
"size_categories:100B<n<1T",
"language:en",
"license:cc-by-4.0",
"arxiv:2303.17395",
"region:us"
]

---
license: cc-by-4.0
language:
- en
size_categories:
- 100B<n<1T
---
# WavCaps
WavCaps is a ChatGPT-assisted weakly-labelled audio captioning dataset for audio-language multimodal research, where the audio clips are sourced from three websites ([FreeSound](https://freesound.org/), [BBC Sound Effects](https://sound-effects.bbcrewind.co.uk/), and [SoundBible](https://soundbible.com/)) and a sound event detection dataset ([AudioSet Strongly-labelled Subset](https://research.google.com/audioset/download_strong.html)).
- **Paper:** https://arxiv.org/abs/2303.17395
- **Github:** https://github.com/XinhaoMei/WavCaps
## Statistics
| Data Source | # audio | avg. audio duration (s) | avg. text length |
|--------------------|----------|-------------------------|------------------|
| FreeSound | 262300 | 85.98 | 6.77 |
| BBC Sound Effects | 31201 | 115.04 | 9.67 |
| SoundBible | 1232 | 13.12 | 5.87 |
| AudioSet SL subset | 108317 | 10.00 | 9.79 |
| WavCaps | 403050 | 67.59 | 7.80 |
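As a quick sanity check on the table, the implied total audio duration follows from the clip count and the average duration:

```python
# Rough total duration implied by the statistics table above.
num_clips = 403_050
avg_duration_s = 67.59

total_hours = num_clips * avg_duration_s / 3600
print(round(total_hours))  # ~7567 hours
```

That is roughly 7,500 hours of weakly-captioned audio in total.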
## Download
We provide a json file for each data source. For audio clips sourced from websites, we provide the processed caption, the raw description, and other metadata. For audio clips from AudioSet, we use the version from PANNs, where each file name is prefixed with a 'Y'. For the start time, please refer to the original metadata of the AudioSet SL subset.
Waveforms with flac format can be downloaded through [Zip_files](https://huggingface.co/datasets/cvssp/WavCaps/tree/main/Zip_files) directory.
Pretrained models can be downloaded [here](https://drive.google.com/drive/folders/1pFr8IRY3E1FAtc2zjYmeuSVY3M5a-Kdj?usp=share_link).
<font color='red'>If you get "error: invalid zip file with overlapped components (possible zip bomb)" when unzipping,
please try the following commands: </font>
`zip -F AudioSet_SL.zip --out AS.zip`
`unzip AS.zip`
## License
Only academic uses are allowed for WavCaps dataset. By downloading audio clips through the links provided in the json files, you agree that you will use the audios for research purposes only.
For credits for audio clips from FreeSound, please refer to its own page.
For detailed license information, please refer to:
[FreeSound](https://freesound.org/help/faq/#licenses), [BBC Sound Effects](https://sound-effects.bbcrewind.co.uk/licensing), [SoundBible](https://soundbible.com/about.php)
The models we provided are created under a UK data copyright exemption for non-commercial research.
## Code for related tasks
We provide codes and pre-trained models for audio-language retrieval, automated audio captioning, and zero-shot audio classification.
* [Retrieval](https://github.com/XinhaoMei/WavCaps/tree/master/retrieval)
* [Captioning](https://github.com/XinhaoMei/WavCaps/tree/master/captioning)
* [Zero-shot Audio Classification](https://github.com/XinhaoMei/WavCaps/blob/master/retrieval/zero_shot_classification.py)
* [Text-to-Sound Generation](https://github.com/haoheliu/AudioLDM)
## Citation
Please cite the following if you make use of the dataset.
```bibtex
@article{mei2023wavcaps,
title={WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research},
author={Mei, Xinhao and Meng, Chutong and Liu, Haohe and Kong, Qiuqiang and Ko, Tom and Zhao, Chengqi and Plumbley, Mark D and Zou, Yuexian and Wang, Wenwu},
journal={arXiv preprint arXiv:2303.17395},
year={2023}
}
```
llm-book/ner-wikinews-dataset | 2023-09-30T09:55:56.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:ja",
"license:cc-by-2.5",
"news",
"region:us"
] | llm-book | null | null | 0 | 12 | 2023-04-22T14:32:21 | ---
license:
- cc-by-2.5
task_categories:
- token-classification
language:
- ja
tags:
- news
pretty_name: ner-wikinews-dataset
size_categories:
- n<1K
---
# Dataset Card for llm-book/ner-wikinews-dataset
This dataset, used in the book 『大規模言語モデル入門』 (*Introduction to Large Language Models*), consists of [Wikinews](https://ja.wikinews.org/wiki/%E3%83%A1%E3%82%A4%E3%83%B3%E3%83%9A%E3%83%BC%E3%82%B8) articles annotated with named-entity labels.
The named-entity labels follow those of [llm-book/ner-wikipedia-dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset), with eight types in total: person (人名), corporation (法人名), location (地名), product (製品名), political organization (政治的組織名), facility (施設名), other organization (その他の組織名), and event (イベント名).
The dataset consists of a test set only.
## Licence
Because this dataset uses articles from the Japanese edition of Wikinews, it follows their license: Creative Commons Attribution 2.5 (CC BY 2.5).
| 629 | [
[
-0.034271240234375,
-0.0443115234375,
-0.0033931732177734375,
-0.002765655517578125,
-0.038604736328125,
-0.0246429443359375,
-0.0008668899536132812,
-0.012664794921875,
0.0340576171875,
0.038909912109375,
-0.046966552734375,
-0.0667724609375,
-0.03118896484375,... |
jlbaker361/anime_faces_50k | 2023-06-05T21:00:40.000Z | [
"region:us"
] | jlbaker361 | null | null | 1 | 12 | 2023-04-24T03:27:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: src
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 2749874549.0
num_examples: 50000
download_size: 2708547888
dataset_size: 2749874549.0
---
# Dataset Card for "anime_faces_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.044921875,
-0.002777099609375,
0.004199981689453125,
0.03997802734375,
-0.01485443115234375,
-0.00152587890625,
0.030364990234375,
-0.01300048828125,
0.06402587890625,
0.038665771484375,
-0.075439453125,
-0.04986572265625,
-0.04229736328125,
-0.0104446411... |
thennal/GMaSC | 2023-05-01T21:18:33.000Z | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ml",
"license:cc-by-sa-4.0",
"region:us"
] | thennal | null | null | 0 | 12 | 2023-05-01T20:16:21 | ---
dataset_info:
features:
- name: text
dtype: string
- name: speaker
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
splits:
- name: train
num_bytes: 717976082.0
num_examples: 2000
download_size: 797772747
dataset_size: 717976082.0
annotations_creators:
- expert-generated
language:
- ml
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: GEC Barton Hill Malayalam Speech Corpus
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-to-speech
- automatic-speech-recognition
task_ids: []
---
# GMaSC: GEC Barton Hill Malayalam Speech Corpus
**GMaSC** is a Malayalam text and speech corpus created by the Government Engineering College Barton Hill with an emphasis on Malayalam-accented English. The corpus contains 2,000 text-audio pairs of Malayalam sentences spoken by 2 speakers, totalling approximately 139 minutes of audio. Each sentence contains at least one English word common in Malayalam speech.
## Dataset Structure
The dataset consists of 2,000 instances with fields `text`, `speaker`, and `audio`. The audio is mono, sampled at 48 kHz. The transcription is normalized and only includes Malayalam characters and common punctuation. The table below specifies how the 2,000 instances are split between the speakers, along with some basic speaker info:
| Speaker | Gender | Age | Time (HH:MM:SS) | Sentences |
| --- | --- | --- | --- | --- |
| Sonia | Female | 43 | 01:02:17 | 1,000 |
| Anil | Male | 48 | 01:17:23 | 1,000 |
| **Total** | | | **02:19:40** | **2,000** |
### Data Instances
An example instance is given below:
```python
{'text': 'സൗജന്യ ആയുർവേദ മെഡിക്കൽ ക്യാമ്പ്',
'speaker': 'Sonia',
'audio': {'path': None,
'array': array([0.00036621, 0.00033569, 0.0005188 , ..., 0.00094604, 0.00091553,
0.00094604]),
'sampling_rate': 48000}}
```
### Data Fields
- **text** (str): Transcription of the audio file
- **speaker** (str): The name of the speaker
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio (always None)
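As an illustrative sketch (the helper below is hypothetical, not part of the dataset's tooling), the `audio` field alone is enough to recover a clip's duration:

```python
# Hypothetical helper: recover a clip's duration from an `audio` dict
# shaped like the instances above (loaded array + sampling rate).
def clip_duration_seconds(audio: dict) -> float:
    """Duration in seconds = number of samples / samples per second."""
    return len(audio["array"]) / audio["sampling_rate"]

# A tiny fabricated clip: 96,000 samples at 48 kHz -> 2.0 seconds.
fake_audio = {"path": None, "array": [0.0] * 96_000, "sampling_rate": 48_000}
print(clip_duration_seconds(fake_audio))  # 2.0
```

Summing this value over all instances is one way to verify the per-speaker durations reported in the table above.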
### Data Splits
We provide all the data in a single `train` split. The loaded dataset object thus looks like this:
```python
DatasetDict({
train: Dataset({
features: ['text', 'speaker', 'audio'],
num_rows: 2000
})
})
```
## Additional Information
### Licensing
The corpus is made available under the [Creative Commons license (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
| 2,576 | [
[
-0.031463623046875,
-0.0516357421875,
0.028656005859375,
0.00917816162109375,
-0.017120361328125,
0.01374053955078125,
-0.0180206298828125,
-0.0166473388671875,
0.033355712890625,
0.029632568359375,
-0.037109375,
-0.04736328125,
-0.051116943359375,
0.0078964... |
howey/super_scirep | 2023-05-10T20:33:02.000Z | [
"region:us"
] | howey | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2021}
} | 0 | 12 | 2023-05-05T09:04:43 | # SuperSciRep: A Multi-Format Benchmark for Full-text Scientific Document Representations
| 92 | [
[
-0.0128173828125,
0.035675048828125,
0.0450439453125,
0.05108642578125,
-0.030242919921875,
0.0037517547607421875,
-0.0279388427734375,
-0.02850341796875,
0.018096923828125,
0.01178741455078125,
-0.005847930908203125,
-0.05517578125,
-0.04449462890625,
0.045... |
emozilla/booksum-summary-analysis_llama-2048 | 2023-05-25T17:31:50.000Z | [
"region:us"
] | emozilla | null | null | 3 | 12 | 2023-05-25T17:31:46 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 30592419.675875388
num_examples: 1680
- name: test
num_bytes: 2601037.557901086
num_examples: 159
- name: validation
num_bytes: 8498481.502685765
num_examples: 433
download_size: 3424916
dataset_size: 41691938.736462235
---
# Dataset Card for "booksum-summary-analysis-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 609 | [
[
-0.031951904296875,
-0.005176544189453125,
0.006504058837890625,
0.01082611083984375,
-0.03387451171875,
0.002288818359375,
0.0291290283203125,
-0.004512786865234375,
0.064453125,
0.042694091796875,
-0.0518798828125,
-0.0640869140625,
-0.052825927734375,
-0.... |
winddude/reddit_finance_43_250k | 2023-05-25T23:06:03.000Z | [
"language:en",
"license:gpl-3.0",
"finance",
"investing",
"crypto",
"reddit",
"region:us"
] | winddude | null | null | 25 | 12 | 2023-05-25T21:31:02 | ---
license: gpl-3.0
language:
- en
tags:
- finance
- investing
- crypto
- reddit
---
# reddit finance 43 250k
`reddit_finance_43_250k` is a collection of 250k post/comment pairs from 43 financial, investing and crypto subreddits. Posts must be text-only, at least 250 characters long, and have a positive score. Each subreddit is narrowed down to the 70th quantile by score before posts are merged with their top 3 comments and then combined with the other subreddits. Further score-based methods are used to select the top 250k post/comment pairs.
The code to recreate the dataset is here: <https://github.com/getorca/ProfitsBot_V0_OLLM/tree/main/ds_builder>
The trained lora model is here: <https://huggingface.co/winddude/pb_lora_7b_v0.1> | 713 | [
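The quantile filtering described above can be sketched as follows (function names and the nearest-rank method are illustrative assumptions, not the repo's actual code):

```python
# Illustrative sketch of 70th-quantile score filtering (not the repo's code).
def keep_top_quantile(posts, percent=70):
    """Keep posts whose score is at or above the nearest-rank percentile cutoff."""
    ordered = sorted(p["score"] for p in posts)
    rank = (len(ordered) * percent + 99) // 100  # ceil(n * percent / 100), 1-based
    cutoff = ordered[rank - 1]
    return [p for p in posts if p["score"] >= cutoff]

posts = [{"id": i, "score": s} for i, s in enumerate([1, 5, 3, 9, 7, 2, 8, 4, 6, 10])]
top = keep_top_quantile(posts)
print(sorted(p["score"] for p in top))  # [7, 8, 9, 10]
```

Applying such a cutoff per subreddit before merging keeps each community represented by its own best-scoring posts rather than by a global score threshold.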
[
-0.0439453125,
-0.053741455078125,
0.0196685791015625,
0.031097412109375,
-0.0501708984375,
0.00797271728515625,
-0.006984710693359375,
-0.049652099609375,
0.049407958984375,
0.036773681640625,
-0.0609130859375,
-0.0484619140625,
-0.048919677734375,
-0.00622... |
TigerResearch/tigerbot-dolly-Brainstorming-en-1.7k | 2023-05-31T02:28:32.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 1 | 12 | 2023-05-30T15:01:57 | ---
license: apache-2.0
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot): SFT data for the Brainstorming category, derived from the Dolly dataset.
Original source: [https://huggingface.co/datasets/databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k/)
databricks-dolly-15k is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-dolly-Brainstorming-en-1.7k')
``` | 635 | [
[
-0.01800537109375,
-0.068359375,
0.0025005340576171875,
0.030975341796875,
-0.0165863037109375,
-0.00537872314453125,
0.01122283935546875,
0.014923095703125,
0.03424072265625,
0.029266357421875,
-0.06951904296875,
-0.0245361328125,
-0.0202484130859375,
-0.00... |
TigerResearch/tigerbot-dolly-classification-en-2k | 2023-05-31T01:34:13.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 0 | 12 | 2023-05-30T15:04:16 | ---
license: apache-2.0
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot): SFT data for the classification category, derived from the Dolly dataset.
Original source: [https://huggingface.co/datasets/databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k/)
databricks-dolly-15k is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-dolly-classification-en-2k')
``` | 633 | [
[
-0.01092529296875,
-0.0540771484375,
-0.0092620849609375,
0.0222015380859375,
-0.00928497314453125,
0.00226593017578125,
0.0166015625,
0.00830078125,
0.02398681640625,
0.03155517578125,
-0.05084228515625,
-0.036041259765625,
-0.0303802490234375,
0.0051078796... |
TigerResearch/tigerbot-book-qa-1k | 2023-05-31T01:24:08.000Z | [
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 0 | 12 | 2023-05-30T15:13:50 | ---
license: apache-2.0
---
[Tigerbot](https://github.com/TigerResearch/TigerBot): knowledge question-answering data about literary classics, drawn from Tigerbot's own collection of Chinese books.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-book-qa-1k')
``` | 241 | [
[
-0.0127105712890625,
-0.0229949951171875,
-0.003753662109375,
0.0126495361328125,
-0.043853759765625,
0.0009145736694335938,
0.0111236572265625,
-0.0003383159637451172,
0.041412353515625,
0.039398193359375,
-0.0291900634765625,
-0.041107177734375,
-0.00684356689... |
TigerResearch/tigerbot-riddle-qa-1k | 2023-05-31T02:03:23.000Z | [
"language:zh",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 1 | 12 | 2023-05-30T15:20:44 | ---
license: apache-2.0
language:
- zh
---
[Tigerbot](https://github.com/TigerResearch/TigerBot): a collected and curated Chinese riddle-guessing SFT dataset.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-riddle-qa-1k')
```
| 260 | [
[
-0.005840301513671875,
-0.03973388671875,
0.01111602783203125,
0.0260162353515625,
-0.03277587890625,
0.009246826171875,
0.00982666015625,
0.004367828369140625,
0.048492431640625,
0.0260467529296875,
-0.048858642578125,
-0.0248565673828125,
-0.003870010375976562... |
TigerResearch/tigerbot-mt-note-generation-en | 2023-05-31T01:41:16.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | 2 | 12 | 2023-05-30T15:42:27 | ---
license: apache-2.0
language:
- en
---
[Tigerbot](https://github.com/TigerResearch/TigerBot): an SFT dataset for clinical note generation.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-mt-note-generation-en')
``` | 262 | [
[
-0.0167999267578125,
-0.0546875,
0.0123291015625,
0.0357666015625,
-0.04681396484375,
0.0014657974243164062,
-0.0085906982421875,
0.00821685791015625,
0.045196533203125,
0.044769287109375,
-0.056884765625,
-0.0340576171875,
-0.01485443115234375,
0.0210571289... |
chirp-watai/audio_dataset | 2023-06-14T16:36:22.000Z | [
"task_categories:zero-shot-classification",
"size_categories:1K<n<10K",
"audio",
"sound",
"region:us"
] | chirp-watai | null | null | 0 | 12 | 2023-05-30T22:59:20 | ---
task_categories:
- zero-shot-classification
tags:
- audio
- sound
pretty_name: audio
size_categories:
- 1K<n<10K
---
# Audio Dataset
This dataset consists of audio data for the following categories:
* Coughing
* Running water
* Toilet flush
* Other sounds
Although this data is unbalanced, data augmentation can be applied to prepare the data for audio classification. The file structure looks as follows:

```
audio/
├── coughing/
├── toilet_flush/
├── running_water/
├── other_1/
└── other_2/
```
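A folder layout like this maps directly to (file, label) pairs, with each parent directory name serving as the class label. A minimal sketch (not the dataset's own tooling; names are illustrative):

```python
import os
import tempfile

# Sketch: derive (file, label) pairs from a class-per-folder audio layout,
# using each parent directory name as the class label.
def index_audio_folder(root):
    pairs = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        for name in sorted(os.listdir(class_dir)):
            pairs.append((os.path.join(class_dir, name), label))
    return pairs

# Demo with a fabricated miniature of the structure.
root = tempfile.mkdtemp()
for label in ["coughing", "running_water"]:
    os.makedirs(os.path.join(root, label))
    open(os.path.join(root, label, "clip0.wav"), "w").close()
print([label for _, label in index_audio_folder(root)])  # ['coughing', 'running_water']
```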
[
-0.020172119140625,
-0.0215606689453125,
0.0158233642578125,
0.0279998779296875,
-0.0276031494140625,
-0.0039215087890625,
0.013092041015625,
0.007541656494140625,
0.029022216796875,
0.05401611328125,
-0.04443359375,
-0.057342529296875,
-0.05181884765625,
0.... |
tasksource/prontoqa | 2023-06-05T07:46:05.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | tasksource | null | null | 1 | 12 | 2023-06-05T07:44:13 | ---
license: apache-2.0
task_categories:
- question-answering
- text-classification
language:
- en
---
https://github.com/asaparov/prontoqa/
```
@article{saparov2022language,
title={Language models are greedy reasoners: A systematic formal analysis of chain-of-thought},
author={Saparov, Abulhair and He, He},
journal={arXiv preprint arXiv:2210.01240},
year={2022}
}
``` | 379 | [
[
-0.0124359130859375,
-0.052642822265625,
0.03338623046875,
0.005962371826171875,
-0.027587890625,
-0.00701904296875,
-0.0109100341796875,
-0.033721923828125,
0.019195556640625,
0.038970947265625,
-0.053192138671875,
-0.006622314453125,
-0.02117919921875,
-0.... |
lukecarlate/english_finance_news | 2023-06-12T16:20:10.000Z | [
"region:us"
] | lukecarlate | null | null | 2 | 12 | 2023-06-12T16:20:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
OamPatel/iti_nq_open_val | 2023-06-14T18:47:08.000Z | [
"region:us"
] | OamPatel | null | null | 1 | 12 | 2023-06-14T18:07:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
KaiLv/UDR_MNLI | 2023-06-21T12:42:08.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:41:30 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: label_text
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 77946210
num_examples: 263789
- name: validation
num_bytes: 883710
num_examples: 3000
- name: validation_mm
num_bytes: 910699
num_examples: 3000
- name: debug
num_bytes: 29518034
num_examples: 100000
download_size: 47966458
dataset_size: 109258653
---
# Dataset Card for "UDR_MNLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 663 | [
[
-0.0372314453125,
-0.01519775390625,
0.0037975311279296875,
0.01061248779296875,
-0.01568603515625,
0.0009694099426269531,
0.0291595458984375,
-0.00682830810546875,
0.0487060546875,
0.03497314453125,
-0.051513671875,
-0.052337646484375,
-0.029541015625,
0.00... |
KaiLv/UDR_RocEnding | 2023-06-21T12:46:45.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:46:29 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 22821733
num_examples: 87906
- name: validation
num_bytes: 2542405
num_examples: 9807
- name: test
num_bytes: 2542405
num_examples: 9807
- name: debug
num_bytes: 1297842
num_examples: 5000
download_size: 17953696
dataset_size: 29204385
---
# Dataset Card for "UDR_RocEnding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 697 | [
[
-0.0228271484375,
-0.0170135498046875,
-0.0016326904296875,
0.01708984375,
-0.01763916015625,
0.01348876953125,
0.01812744140625,
-0.003917694091796875,
0.03179931640625,
0.04022216796875,
-0.0535888671875,
-0.054351806640625,
-0.0225982666015625,
-0.0155334... |
KaiLv/UDR_RocStory | 2023-06-21T12:47:02.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:46:45 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 22735056
num_examples: 87526
- name: validation
num_bytes: 2540477
num_examples: 9799
- name: test
num_bytes: 2540477
num_examples: 9799
- name: debug
num_bytes: 1297855
num_examples: 5000
download_size: 17785834
dataset_size: 29113865
---
# Dataset Card for "UDR_RocStory"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 696 | [
[
-0.0276641845703125,
-0.01372528076171875,
0.004207611083984375,
0.01142120361328125,
-0.016387939453125,
0.0046234130859375,
0.01471710205078125,
-0.0054931640625,
0.044891357421875,
0.034027099609375,
-0.05584716796875,
-0.055908203125,
-0.0304718017578125,
... |
KaiLv/UDR_SST-2 | 2023-06-21T12:49:13.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:49:05 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 853094
num_examples: 6911
- name: test
num_bytes: 224519
num_examples: 1821
- name: debug
num_bytes: 617046
num_examples: 5000
download_size: 1109867
dataset_size: 1694659
---
# Dataset Card for "UDR_SST-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.0152740478515625,
-0.01445770263671875,
0.0126495361328125,
0.00943756103515625,
-0.036376953125,
0.0178070068359375,
0.032196044921875,
-0.0012569427490234375,
0.042266845703125,
0.0242767333984375,
-0.048797607421875,
-0.036529541015625,
-0.037567138671875,... |
KaiLv/UDR_Subj | 2023-06-21T12:49:33.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:49:24 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1181174
num_examples: 8000
- name: test
num_bytes: 299358
num_examples: 2000
- name: debug
num_bytes: 737874
num_examples: 5000
download_size: 1474560
dataset_size: 2218406
---
# Dataset Card for "UDR_Subj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.046051025390625,
-0.016265869140625,
0.0019006729125976562,
0.005786895751953125,
-0.0207061767578125,
0.0133209228515625,
0.0255584716796875,
-0.0016126632690429688,
0.0517578125,
0.024810791015625,
-0.0531005859375,
-0.053436279296875,
-0.03594970703125,
... |
KaiLv/UDR_TREC | 2023-06-21T12:49:41.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:49:33 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 380267
num_examples: 5381
- name: test
num_bytes: 27979
num_examples: 500
- name: debug
num_bytes: 353299
num_examples: 5000
download_size: 465666
dataset_size: 761545
---
# Dataset Card for "UDR_TREC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 534 | [
[
-0.041961669921875,
-0.0224609375,
0.01062774658203125,
0.007144927978515625,
-0.0149078369140625,
0.02423095703125,
0.0280914306640625,
-0.006504058837890625,
0.049713134765625,
0.0285797119140625,
-0.05255126953125,
-0.067138671875,
-0.02862548828125,
-0.0... |
KaiLv/UDR_Yahoo | 2023-06-21T12:52:33.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 12 | 2023-06-21T12:52:19 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype: int64
- name: title
dtype: string
- name: content
dtype: string
- name: sentence
dtype: string
- name: len_sentence
dtype: int64
splits:
- name: train
num_bytes: 17812235
num_examples: 29150
- name: test
num_bytes: 1767766
num_examples: 3000
- name: debug
num_bytes: 3032530
num_examples: 5000
download_size: 14936274
dataset_size: 22612531
---
# Dataset Card for "UDR_Yahoo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 656 | [
[
-0.033172607421875,
-0.025970458984375,
-0.0028247833251953125,
0.0036983489990234375,
-0.0159912109375,
0.00984954833984375,
0.033355712890625,
-0.00518798828125,
0.04132080078125,
0.031707763671875,
-0.0582275390625,
-0.047943115234375,
-0.0258636474609375,
... |
musabg/wizard_vicuna_70k_unfiltered_de | 2023-06-25T07:09:36.000Z | [
"region:us"
] | musabg | null | null | 2 | 12 | 2023-06-25T07:09:12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 159146233
num_examples: 34598
download_size: 79402352
dataset_size: 159146233
---
# Dataset Card for "wizard_vicuna_70k_unfiltered_de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.04034423828125,
-0.0177154541015625,
0.004199981689453125,
0.005615234375,
-0.03436279296875,
-0.0111846923828125,
0.0137481689453125,
0.0029239654541015625,
0.04901123046875,
0.0745849609375,
-0.051971435546875,
-0.061004638671875,
-0.040618896484375,
-0... |
FreedomIntelligence/alpaca-gpt4-portuguese | 2023-08-06T08:10:58.000Z | [
"region:us"
] | FreedomIntelligence | null | null | 1 | 12 | 2023-06-26T08:18:57 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 124 | [
[
-0.0284271240234375,
-0.0214385986328125,
-0.000301361083984375,
0.01971435546875,
-0.004512786865234375,
0.004093170166015625,
-0.0194091796875,
-0.0303192138671875,
0.0289154052734375,
0.033966064453125,
-0.0643310546875,
-0.032958984375,
-0.012969970703125,
... |
FreedomIntelligence/evol-instruct-portuguese | 2023-08-06T08:14:09.000Z | [
"region:us"
] | FreedomIntelligence | null | null | 0 | 12 | 2023-06-30T03:44:25 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 124 | [
[
-0.0284271240234375,
-0.0214385986328125,
-0.000301361083984375,
0.01971435546875,
-0.004512786865234375,
0.004093170166015625,
-0.0194091796875,
-0.0303192138671875,
0.0289154052734375,
0.033966064453125,
-0.0643310546875,
-0.032958984375,
-0.012969970703125,
... |
agostina3/PLEAD | 2023-06-30T14:44:42.000Z | [
"task_categories:text2text-generation",
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"hate speech",
"intent classification",
"slot filling",
"abuse detection",
"toxicity",
"region:us"
] | agostina3 | null | null | 0 | 12 | 2023-06-30T07:47:18 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text2text-generation
- token-classification
language:
- en
tags:
- hate speech
- intent classification
- slot filling
- abuse detection
- toxicity
pretty_name: PLEAD
size_categories:
- 10K<n<100K
---
# PLEAD
This is the official dataset from the [Explainable Abuse Detection as Intent Classification and Slot Filling](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00527/114369/Explainable-Abuse-Detection-as-Intent) project.
## Reference
If you use our dataset, please cite our paper:
```
@article{calabrese-etal-2022-plead,
author = {Agostina Calabrese and
Bj{\"{o}}rn Ross and
Mirella Lapata},
title = {Explainable Abuse Detection as Intent Classification and Slot Filling},
journal = {Transactions of the Association for Computational Linguistics},
year = {2022}
}
``` | 881 | [
[
-0.017547607421875,
-0.05621337890625,
0.0452880859375,
0.0191192626953125,
-0.005016326904296875,
-0.03326416015625,
-0.007740020751953125,
-0.0211944580078125,
0.003734588623046875,
0.04248046875,
-0.052581787109375,
-0.0260162353515625,
-0.0341796875,
0.0... |
DynamicSuperb/ChordClassification_AcousticGuitarAndPiano | 2023-07-12T11:14:25.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 12 | 2023-07-12T08:48:17 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 169780426.0
num_examples: 859
download_size: 148236033
dataset_size: 169780426.0
---
# Dataset Card for "chord_classification_acoustic_guitar_and_piano"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.048187255859375,
-0.0215911865234375,
0.0153961181640625,
0.01346588134765625,
-0.004253387451171875,
0.01314544677734375,
-0.01038360595703125,
-0.01213836669921875,
0.042572021484375,
0.0218963623046875,
-0.043304443359375,
-0.07550048828125,
-0.01878356933... |
DynamicSuperb/SpoofDetection_ASVspoof2017 | 2023-07-31T10:54:40.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 12 | 2023-07-13T03:40:36 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 1411064438.928
num_examples: 13306
download_size: 1361993549
dataset_size: 1411064438.928
---
# Dataset Card for "SpoofDetection_ASVspoof2017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.025543212890625,
-0.02325439453125,
0.003173828125,
0.0313720703125,
-0.0141754150390625,
0.00018095970153808594,
0.0295867919921875,
-0.021026611328125,
0.0635986328125,
0.040374755859375,
-0.0635986328125,
-0.038604736328125,
-0.04791259765625,
-0.01693... |
frtna/ESCOTaxonomy | 2023-07-21T12:26:32.000Z | [
"region:us"
] | frtna | null | null | 0 | 12 | 2023-07-21T11:27:29 | ---
dataset_info:
features:
- name: esco_id
dtype: string
- name: job_title
dtype: string
- name: description
dtype: string
- name: synonyms
dtype: string
- name: skills
dtype: string
splits:
- name: train
num_bytes: 3647443
num_examples: 3015
- name: test
num_bytes: 111776051
num_examples: 50357
download_size: 0
dataset_size: 115423494
---
# Dataset Card for "ESCOTaxonomy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 567 | [
[
-0.050323486328125,
-0.031494140625,
0.0251617431640625,
0.0111541748046875,
-0.0151519775390625,
0.0089874267578125,
0.00885009765625,
-0.026031494140625,
0.0833740234375,
0.059326171875,
-0.05889892578125,
-0.062744140625,
-0.050628662109375,
-0.0125961303... |
DynamicSuperb/SarcasmDetection_Mustard | 2023-07-26T04:55:38.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 12 | 2023-07-26T04:54:42 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: utterance
dtype: string
- name: speaker
dtype: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: show
dtype: string
- name: label
dtype: bool
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 115618860.0
num_examples: 690
download_size: 115326889
dataset_size: 115618860.0
---
# Dataset Card for "sarcasm_detection_mustard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 674 | [
[
-0.0341796875,
-0.0165863037109375,
0.0145721435546875,
0.025115966796875,
-0.00738525390625,
-0.01236724853515625,
0.0009851455688476562,
0.0010995864868164062,
0.04888916015625,
0.01910400390625,
-0.055450439453125,
-0.059539794921875,
-0.04345703125,
-0.0... |
PhilSad/celeba-hq-1.5k | 2023-07-26T15:22:05.000Z | [
"region:us"
] | PhilSad | null | null | 0 | 12 | 2023-07-26T15:21:47 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 146276286.0
num_examples: 1500
download_size: 146277189
dataset_size: 146276286.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "celeba-hq-1.5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 555 | [
[
-0.04327392578125,
-0.0181884765625,
-0.01041412353515625,
0.00977325439453125,
-0.01715087890625,
-0.00466156005859375,
0.01360321044921875,
-0.0196685791015625,
0.06378173828125,
0.031524658203125,
-0.052581787109375,
-0.0562744140625,
-0.038421630859375,
... |
Moritz-Pfeifer/CentralBankCommunication | 2023-08-04T14:13:30.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | Moritz-Pfeifer | null | null | 0 | 12 | 2023-07-29T16:10:01 | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---
This dataset contains two manually pre-labeled datasets:
In the **economic agents dataset**, we labeled 6,205 randomized sentences from a [Fed database](https://github.com/Moritz-Pfeifer/CentralBankRoBERTa/tree/main/Data/FED) containing speeches (1948-2023) as speaking either about households, firms, the financial sector, the government, or the central bank itself.
In the **sentiment dataset**, we labeled 6,683 randomized sentences from the same database as either positive (1) or negative (0).
The datasets were used to train an [agent classifier](https://huggingface.co/Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier) and a [sentiment classifier](https://huggingface.co/Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier).
<table>
<tr>
<td colspan="2" style="border-top: 1px solid #ccc; padding: 5px; text-align: left;">
Please cite this model as Pfeifer, M. and Marohl, V.P. (2023) "CentralBankRoBERTa: A Fine-Tuned Large Language Model for Central Bank Communications" ADD SOURCE/LINK
</td>
</tr>
<tr>
<td style="padding: 5px;">
Moritz Pfeifer<br>
Institute for Economic Policy, University of Leipzig<br>
04109 Leipzig, Germany<br>
<a href="mailto:pfeifer@wifa.uni-leipzig.de">pfeifer@wifa.uni-leipzig.de</a>
</td>
<td style="padding: 5px;">
Vincent P. Marohl<br>
Department of Mathematics, Columbia University<br>
New York NY 10027, USA<br>
<a href="mailto:vincent.marohl@columbia.edu">vincent.marohl@columbia.edu</a>
</td>
</tr>
</table> | 1,675 | [
[
-0.040191650390625,
-0.057159423828125,
0.0303192138671875,
0.0211944580078125,
-0.0285797119140625,
-0.01264190673828125,
-0.05877685546875,
-0.020294189453125,
0.0101470947265625,
0.04736328125,
-0.026336669921875,
-0.05615234375,
-0.04833984375,
0.0082778... |
imoxto/prompt_injection_cleaned_dataset | 2023-08-07T15:31:57.000Z | [
"region:us"
] | imoxto | null | null | 0 | 12 | 2023-08-07T15:31:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: level
dtype: int64
- name: prompt
dtype: string
- name: user_input
dtype: string
- name: completion
dtype: string
- name: model
dtype: string
- name: expected_completion
dtype: string
- name: token_count
dtype: int64
- name: correct
dtype: bool
- name: error
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 529771818
num_examples: 374573
- name: validation
num_bytes: 115495832
num_examples: 80266
- name: test
num_bytes: 114490591
num_examples: 80266
download_size: 243813448
dataset_size: 759758241
---
# Dataset Card for "prompt_injection_cleaned_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,035 | [
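Given the per-row `correct` and `error` fields listed in the schema above, a minimal sketch of computing an attack success rate over completions; the example rows are illustrative, not taken from the dataset:

```python
def success_rate(rows):
    """Fraction of non-errored completions judged correct, per the `correct`/`error` flags."""
    scored = [r for r in rows if not r["error"]]
    if not scored:
        return 0.0
    return sum(1 for r in scored if r["correct"]) / len(scored)

# Illustrative rows mirroring the schema above (not real dataset entries).
example = [
    {"correct": True, "error": False},
    {"correct": False, "error": False},
    {"correct": False, "error": True},  # errored rows are excluded from scoring
]
print(success_rate(example))  # 0.5
```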
[
-0.028778076171875,
-0.038726806640625,
0.0249786376953125,
0.0023326873779296875,
-0.013519287109375,
0.0021953582763671875,
0.0225372314453125,
0.006343841552734375,
0.04400634765625,
0.045562744140625,
-0.050140380859375,
-0.0572509765625,
-0.0265655517578125... |
d0rj/boolq-ru | 2023-08-14T09:47:04.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:boolq",
"language:ru",
"license:cc-by-sa-3.0",
"region:us"
] | d0rj | null | null | 0 | 12 | 2023-08-07T18:17:43 | ---
annotations_creators:
- crowdsourced
language_creators:
- translated
language:
- ru
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- boolq
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: boolq
pretty_name: BoolQ (ru)
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: bool
- name: passage
dtype: string
splits:
- name: train
num_bytes: 10819511
num_examples: 9427
- name: validation
num_bytes: 3710872
num_examples: 3270
download_size: 7376712
dataset_size: 14530383
---
# boolq-ru
Translated version of [boolq](https://huggingface.co/datasets/boolq) dataset into Russian.
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions) | 1,057 | [
[
0.004230499267578125,
-0.04217529296875,
0.0137481689453125,
0.011383056640625,
-0.0207061767578125,
0.0088653564453125,
0.00121307373046875,
-0.0252532958984375,
0.028900146484375,
0.046478271484375,
-0.05804443359375,
-0.052001953125,
-0.00897216796875,
0.... |
ixarchakos/dresses_laydown | 2023-10-07T01:36:01.000Z | [
"region:us"
] | ixarchakos | null | null | 0 | 12 | 2023-08-08T03:26:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
heegyu/aulm-0809 | 2023-08-22T03:33:28.000Z | [
"region:us"
] | heegyu | null | null | 2 | 12 | 2023-08-09T06:52:40 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 704591219
num_examples: 171404
download_size: 311285345
dataset_size: 704591219
---
Public Korean instruction datasets, converted to a unified format and merged.
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned from dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a very small portion) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- From the KULLM v2 dataset, only the GPT4ALL and Dolly data were extracted and used.
- For more Korean instruction training datasets, see the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository.
| 1,296 | [
[
-0.034820556640625,
-0.056427001953125,
0.0142059326171875,
0.034698486328125,
-0.0306854248046875,
0.003505706787109375,
0.005401611328125,
-0.017578125,
0.046875,
0.039398193359375,
-0.0450439453125,
-0.0511474609375,
-0.0312347412109375,
-0.01289367675781... |
dim/essayforum_writing_prompts_6k | 2023-08-16T20:37:43.000Z | [
"region:us"
] | dim | null | null | 1 | 12 | 2023-08-16T01:03:40 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 21696702
num_examples: 6361
download_size: 11796178
dataset_size: 21696702
---
# Dataset Card for "essayforum_writing_prompts_6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 411 | [
[
-0.037689208984375,
-0.0124969482421875,
0.03485107421875,
0.0197601318359375,
-0.005218505859375,
-0.011749267578125,
0.00879669189453125,
0.0028324127197265625,
0.0389404296875,
0.041595458984375,
-0.06695556640625,
-0.05126953125,
-0.0273895263671875,
0.0... |
arbml/alpagasus_cleaned_ar | 2023-09-06T17:22:31.000Z | [
"region:us"
] | arbml | null | null | 0 | 12 | 2023-08-20T19:52:57 | ---
dataset_info:
features:
- name: instruction_en
dtype: string
- name: output_en
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: index
dtype: int64
splits:
- name: train
num_bytes: 9824184
num_examples: 9229
download_size: 5541315
dataset_size: 9824184
---
# Dataset Card for "alpagasus_cleaned_ar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [
[
-0.040008544921875,
-0.0184783935546875,
0.001598358154296875,
-0.01519775390625,
-0.0246429443359375,
-0.004039764404296875,
0.0247955322265625,
-0.01485443115234375,
0.07623291015625,
0.0518798828125,
-0.042755126953125,
-0.05328369140625,
-0.040313720703125,
... |
mlabonne/Evol-Instruct-Python-26k | 2023-08-25T16:29:36.000Z | [
"region:us"
] | mlabonne | null | null | 4 | 12 | 2023-08-25T13:25:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 39448413.53337422
num_examples: 26588
download_size: 22381182
dataset_size: 39448413.53337422
---
# Evol-Instruct-Python-26k
Filtered version of the [`nickrosh/Evol-Instruct-Code-80k-v1`](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) dataset that only keeps Python code (26,588 samples). You can find a smaller version of it here [`mlabonne/Evol-Instruct-Python-1k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k).
Here is the distribution of the number of tokens in each row (instruction + output) using Llama's tokenizer:
 | 844 | [
[
-0.023773193359375,
-0.03521728515625,
0.00860595703125,
0.0222625732421875,
-0.041259765625,
-0.006824493408203125,
0.009185791015625,
-0.01375579833984375,
0.0521240234375,
0.03997802734375,
-0.04205322265625,
-0.053863525390625,
-0.027191162109375,
0.0274... |
PetraAI/autotrain-data-zalmati-ai | 2023-09-05T13:47:18.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | PetraAI | null | null | 0 | 12 | 2023-08-29T11:41:34 | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- fill-mask
- sentence-similarity
- text-to-speech
- automatic-speech-recognition
- audio-to-audio
- audio-classification
- voice-activity-detection
- depth-estimation
- image-classification
- object-detection
- image-segmentation
- unconditional-image-generation
- robotics
- reinforcement-learning
- tabular-classification
- video-classification
- tabular-to-text
- multiple-choice
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
- table-to-text
- text-to-image
- image-to-text
- image-to-image
- tabular-regression
language:
- ar
- en
tags:
- chemistry
- medical
- code
- art
- music
- biology
- finance
- legal
- climate
pretty_name: Zalmati-Autotrain
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 2,574 | [
[
-0.038177490234375,
-0.02984619140625,
-0.0036067962646484375,
0.027130126953125,
-0.0323486328125,
0.0037822723388671875,
-0.01727294921875,
-0.02020263671875,
0.049041748046875,
0.04046630859375,
-0.0634765625,
-0.08062744140625,
-0.052947998046875,
0.0020... |
dim/scitldr | 2023-08-31T19:47:53.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-08-31T19:47:16 | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 4016919
num_examples: 3229
download_size: 2222180
dataset_size: 4016919
---
# Dataset Card for "scitldr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 386 | [
[
-0.03192138671875,
-0.006237030029296875,
0.01113128662109375,
0.01739501953125,
-0.0147552490234375,
0.01230621337890625,
0.0208892822265625,
-0.011016845703125,
0.05413818359375,
0.01751708984375,
-0.05584716796875,
-0.04827880859375,
-0.0413818359375,
-0.... |
dim/dolphin_ru_3k | 2023-08-31T20:24:23.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-08-31T20:20:15 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8490195.387822216
num_examples: 3000
download_size: 4148079
dataset_size: 8490195.387822216
---
# Dataset Card for "dolphin_ru_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
[
-0.059356689453125,
-0.0094146728515625,
0.011871337890625,
0.0251922607421875,
-0.03826904296875,
-0.02130126953125,
0.0411376953125,
-0.03564453125,
0.0557861328125,
0.042877197265625,
-0.055908203125,
-0.03851318359375,
-0.033203125,
0.007266998291015625,... |
PurCL/malware-top-100 | 2023-08-31T21:13:38.000Z | [
"region:us"
] | PurCL | null | null | 0 | 12 | 2023-08-31T21:09:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: binary_name
dtype: string
- name: labels
sequence: string
- name: functions
dtype: string
splits:
- name: train
num_bytes: 5667834326.115244
num_examples: 3728
- name: test
num_bytes: 1667814982.765135
num_examples: 1097
- name: valid
num_bytes: 1001905263.1196207
num_examples: 659
download_size: 2454551882
dataset_size: 8337554571.999999
---
# Dataset Card for "malware-top-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 770 | [
[
-0.02978515625,
-0.0182037353515625,
0.0036792755126953125,
0.01312255859375,
0.0009489059448242188,
0.005092620849609375,
0.02178955078125,
0.00264739990234375,
0.046478271484375,
0.044952392578125,
-0.0462646484375,
-0.0645751953125,
-0.050079345703125,
-0... |
dim/runne_prompts | 2023-09-02T16:20:49.000Z | [
"region:us"
] | dim | null | null | 0 | 12 | 2023-08-31T21:35:34 | ---
dataset_info:
features:
- name: text
dtype: string
- name: parsed_entities
dtype: string
splits:
- name: train
num_bytes: 2636744
num_examples: 537
download_size: 1142735
dataset_size: 2636744
---
# Dataset Card for "runne_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 398 | [
[
-0.045745849609375,
-0.01983642578125,
0.0244140625,
0.01410675048828125,
-0.00516510009765625,
-0.00911712646484375,
0.01025390625,
0.0133056640625,
0.05902099609375,
0.042938232421875,
-0.07958984375,
-0.04449462890625,
-0.0273284912109375,
-0.004318237304... |
SinKove/synthetic_chest_xray | 2023-09-14T12:46:05.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:openrail",
"medical",
"arxiv:2306.01322",
"region:us"
] | SinKove | Chest XRay dataset with chexpert labels. | null | 7 | 12 | 2023-09-02T10:39:37 | ---
task_categories:
- image-classification
tags:
- medical
pretty_name: C
size_categories:
- 10K<n<100K
license: openrail
---
# Dataset Card for Synthetic Chest Xray
## Dataset Description
This is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper; specifically, it is the $D_{filter}$ dataset described there.
- **Paper:** https://arxiv.org/abs/2306.01322
- **Point of Contact:** pedro.sanchez@ed.ac.uk
### Dataset Summary
Synthetic chest X-ray images with CheXpert labels, generated by a diffusion model finetuned on the MIMIC-CXR dataset and filtered to reduce re-identification risk.
### Supported Tasks
Chexpert classification.
https://stanfordmlgroup.github.io/competitions/chexpert/
## Dataset Structure
- Images
- Chexpert Labels
### Data Splits
We did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the [Mimic-CXR dataset](https://physionet.org/content/mimic-cxr/2.0.0/).
### Personal and Sensitive Information
Following the GDPR, "personal data is any information that relates to an identified or identifiable living individual."
We make sure that no "personal data" (re-identifiable information) remains by filtering with a deep learning model trained to identify patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for pathology classification in chest X-rays.
### Discussion of Biases
There are biases towards specific pathologies. For example, the "No Findings" label is much more frequent than other, less common pathologies.
## Additional Information
### Dataset Curators
We used deep learning to filter the dataset.
We filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset.
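A minimal sketch of this kind of re-identification filter, assuming precomputed embeddings for real and synthetic images; the cosine-similarity criterion and threshold are illustrative, not the exact method of the paper:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    nu = sqrt(sum(x * x for x in u))
    nv = sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def filter_reidentifiable(synthetic, real, threshold=0.9):
    """Return indices of synthetic embeddings whose nearest real embedding
    stays below the similarity threshold, i.e. samples deemed safe to release."""
    return [i for i, s in enumerate(synthetic)
            if max(cosine(s, r) for r in real) < threshold]
```

A synthetic sample whose embedding nearly coincides with some real patient's embedding would be dropped before release.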
### Licensing Information
We generated the synthetic data samples based on generative model trained on the [Mimic-CXR dataset](https://physionet.org/content/mimic-cxr/2.0.0/). Mimic-CXR uses the [PhysioNet Credentialed Health](https://physionet.org/content/mimic-cxr/view-license/2.0.0/) data license.
The real data license explicitly requires that "The LICENSEE will not share access to PhysioNet restricted data with anyone else". Here, we ensure that none of the synthetic images can be used to re-identify real Mimic-CXR images. Therefore, we do not consider this synthetic dataset to be "PhysioNet restricted data".
This dataset is released under the [Open & Responsible AI license ("OpenRAIL")](https://huggingface.co/blog/open_rail)
### Citation Information
Fernandez, V., Sanchez, P., Pinaya, W. H. L., Jacenków, G., Tsaftaris, S. A., & Cardoso, J. (2023). Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models. arXiv preprint arXiv:2306.01322.
https://arxiv.org/abs/2306.01322
### Contributions
Pedro P. Sanchez, Walter Pinaya uploaded the dataset to Huggingface. All other co-authors of the papers contributed for creating the dataset. | 3,287 | [
[
-0.0217742919921875,
-0.01861572265625,
0.033782958984375,
-0.0048980712890625,
-0.035400390625,
0.015289306640625,
0.0171051025390625,
-0.0321044921875,
0.0244598388671875,
0.034515380859375,
-0.05975341796875,
-0.050201416015625,
-0.04754638671875,
0.00375... |
izaq09/starwars_dataset | 2023-09-08T13:30:31.000Z | [
"region:us"
] | izaq09 | null | null | 0 | 12 | 2023-09-05T04:01:06 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2966960.0
num_examples: 7
download_size: 2933224
dataset_size: 2966960.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "starwars_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 478 | [
[
-0.0484619140625,
-0.0147247314453125,
0.01107025146484375,
-0.001544952392578125,
-0.0111846923828125,
0.01580810546875,
0.01497650146484375,
0.0007600784301757812,
0.06292724609375,
0.03973388671875,
-0.0657958984375,
-0.049652099609375,
-0.0482177734375,
... |
pierre-pessarossi/tiny_shakespeare_dialogue | 2023-09-05T09:59:52.000Z | [
"region:us"
] | pierre-pessarossi | null | null | 0 | 12 | 2023-09-05T09:59:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2798654
num_examples: 6281
- name: validation
num_bytes: 166728
num_examples: 439
- name: test
num_bytes: 115868
num_examples: 498
download_size: 957486
dataset_size: 3081250
---
# Dataset Card for "tiny_shakespeare_dialogue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 664 | [
[
-0.041046142578125,
-0.017303466796875,
0.0171356201171875,
0.00030732154846191406,
-0.01473236083984375,
-0.01508331298828125,
0.00036978721618652344,
-0.0021305084228515625,
0.06231689453125,
0.026641845703125,
-0.06292724609375,
-0.03521728515625,
-0.02587890... |
gauss314/arg-equity | 2023-09-07T19:07:47.000Z | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"license:apache-2.0",
"Merval",
"equity",
"region:us"
] | gauss314 | null | null | 0 | 12 | 2023-09-07T18:59:55 | ---
license: apache-2.0
task_categories:
- tabular-classification
- tabular-regression
tags:
- Merval
- equity
pretty_name: Merval daily variations, for deep learning and machine learning tests
---
# Downloading the Merval Equity Dataset
This document will guide you through the steps to download the Merval equity dataset from Hugging Face Datasets.
To start, you'll need to install Hugging Face's `datasets` library if you haven't done so already.
You can do this using the following pip command:
```python
!pip install datasets
```
Here's the Python code to load the Merval equity dataset from Hugging Face Datasets and convert it into a pandas DataFrame:
```python
from datasets import load_dataset
import pandas as pd

# Download the dataset from the Hugging Face Hub
id = "gauss314/arg-equity"
data = load_dataset(id)

# Convert the train split to a pandas DataFrame
df = pd.DataFrame(data['train'][:])
```
| 834 | [
[
-0.04400634765625,
-0.005279541015625,
-0.0092315673828125,
0.034210205078125,
-0.00814056396484375,
0.005340576171875,
0.01253509521484375,
0.01129913330078125,
0.04632568359375,
0.048309326171875,
-0.054718017578125,
-0.01084136962890625,
-0.036407470703125,
... |
samlhuillier/sql-create-context-spider-intersect | 2023-09-21T00:17:19.000Z | [
"region:us"
] | samlhuillier | null | null | 0 | 12 | 2023-09-07T22:16:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
LabHC/moji | 2023-09-28T09:12:22.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | LabHC | null | null | 0 | 12 | 2023-09-10T10:47:11 | ---
task_categories:
- text-classification
language:
- en
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: sa
dtype: int64
splits:
- name: train
num_bytes: 128596235
num_examples: 1613790
- name: test
num_bytes: 35731728
num_examples: 448276
- name: dev
num_bytes: 14325121
num_examples: 179310
download_size: 93470968
dataset_size: 178653084
---
The Moji dataset (Blodgett et al., 2016) (http://slanglab.cs.umass.edu/TwitterAAE/) contains tweets used for sentiment analysis (either positive or negative sentiment), with additional information on the type of English used in the tweets which is a sensitive attribute considered in fairness-aware approaches (African-American English (AAE) or Standard-American English (SAE)).
The type of English is determined by a supervised model, and only the data where the sensitive attribute is predicted with a confidence above a given threshold is kept.
Based on this principle, we make available two versions of the Moji dataset, with thresholds of 80% and 90% respectively. The distributions of both versions are presented below.
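A minimal sketch of the thresholding step described above, assuming per-tweet dialect probabilities from the supervised model; the names, numbers, and the `sa` polarity (1 = AAE, 0 = SAE) are illustrative assumptions, not the exact original pipeline:

```python
def filter_by_confidence(predictions, threshold=0.8):
    """Keep tweets whose dialect prediction clears the confidence threshold,
    attaching a hard sensitive-attribute label (here assumed 1 = AAE, 0 = SAE)."""
    kept = []
    for text, p_aae in predictions:
        confidence = max(p_aae, 1.0 - p_aae)  # certainty of the dialect prediction
        if confidence >= threshold:
            kept.append({"text": text, "sa": int(p_aae >= 0.5)})
    return kept

# Illustrative model outputs: (tweet, P(AAE)); the middle one is too uncertain.
preds = [("tweet a", 0.95), ("tweet b", 0.60), ("tweet c", 0.05)]
print([e["sa"] for e in filter_by_confidence(preds)])  # [1, 0]
```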
### Dataset with 80% threshold
| | Positive sentiment | Negative sentiment | Total |
|---|---|---|---|
| AAE | 73,013 | 44,023 | 117,036 |
| SAE | 1,471,427 | 652,913 | 2,124,340 |
| Total | 1,544,440 | 696,936 | 2,241,376 |
To load this dataset, use the following code:
```python
from datasets import load_dataset

dataset = load_dataset("LabHC/moji", revision='moji_conf_08')
```
By default (without a `revision` argument), the 80% threshold version is loaded:
```python
dataset = load_dataset("LabHC/moji")
```
### Dataset with 90% threshold
| | Positive sentiment | Negative sentiment | Total |
|---|---|---|---|
| AAE | 30,827 | 18,409 | 49,236 |
| SAE | 793,867 | 351,600 | 1,145,467 |
| Total | 824,694 | 370,009 | 1,194,703 |
To load this dataset, use the following code:
```python
dataset = load_dataset("LabHC/moji", revision='moji_conf_09')
```
----
[Demographic Dialectal Variation in Social Media: A Case Study of African-American English](https://aclanthology.org/D16-1120) (Blodgett et al., EMNLP 2016) | 2,138 | [
[
-0.02734375,
-0.04315185546875,
0.00475311279296875,
0.0233306884765625,
-0.01503753662109375,
-0.0094451904296875,
-0.0194244384765625,
-0.0198974609375,
0.04302978515625,
0.0279693603515625,
-0.047332763671875,
-0.0498046875,
-0.05523681640625,
0.006477355... |