text-classification bool 2 classes | text stringlengths 0 664k |
|---|---|
true | |
false |
This is a transformed copy of [Alpaca Cleaned Dutch](https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch) to make it suitable for the format used in [Baize](https://github.com/project-baize/baize-chatbot). Please refer to that dataset for more information, which includes:
- licensing information;
- biases description;
- and citation requirements.
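For orientation, here is a minimal sketch of the kind of record-to-conversation transformation described above. The Alpaca-style field names and the `[|Human|]`/`[|AI|]` turn markers are assumptions taken from the public Baize repository, not from this dataset card:

```python
# Hedged sketch: convert an Alpaca-style record (instruction/input/output) into a
# single Baize-style conversation string. Field names and markers are assumptions.
def alpaca_to_baize(record: dict) -> str:
    question = record["instruction"]
    if record.get("input"):
        question += "\n" + record["input"]
    return (
        "The conversation between human and AI assistant.\n"
        f"[|Human|] {question}\n"
        f"[|AI|] {record['output']}\n"
    )

example = {"instruction": "Noem drie hoofdsteden in Europa.", "input": "", "output": "Amsterdam, Parijs en Berlijn."}
print(alpaca_to_baize(example))
```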
### Citation Information
If you use this dataset, please cite:
Bram Vanroy. (2023). Alpaca Cleaned Dutch [Data set]. Hugging Face. https://doi.org/10.57967/HF/0530
```bibtex
@misc{https://doi.org/10.57967/hf/0530,
doi = {10.57967/HF/0530},
url = {https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch},
author = {{Bram Vanroy}},
title = {{A}lpaca {C}leaned {D}utch},
publisher = {Hugging Face},
year = {2023}
}
```
|
true |
Argument Mining in Scientific Reviews (AMSR)
We release a new dataset of peer-reviews from different computer science conferences with annotated arguments, called AMSR (**A**rgument **M**ining in **S**cientific **R**eviews).
1. Raw Data
conferences_raw/ contains directories for each conference we scraped (e.g., [iclr20](./data/iclr20)).
The respective directory of each conference comprises multiple `*.json` files, where every file contains the information belonging to a single paper, such as the title, the abstract, the submission date and the reviews.
The reviews are stored in a list called `"review_content"`.
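A minimal sketch of reading one of these per-paper `*.json` files and iterating over its reviews (the file path is illustrative, and metadata key names other than `review_content` are assumptions):

```python
import json
from pathlib import Path

# Illustrative path; each conference directory under conferences_raw/ holds one JSON file per paper.
paper_file = Path("conferences_raw/iclr20/example_paper.json")  # hypothetical file name
with paper_file.open(encoding="utf-8") as f:
    paper = json.load(f)

print(paper.get("title"))                # metadata such as title, abstract, submission date
for review in paper["review_content"]:   # the reviews are stored in this list
    print(review)
```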
2. Cleaned Data
conferences_cleaned/ contains reviews and papers where we removed all unwanted character sequences from the reviews.
For details of the preprocessing steps, please refer to our paper "Argument Mining Driven Analysis of Peer-Reviews".
3. Annotated Data
conferences_annotated/ contains sentence_level and token_level data of 77 reviews, each annotated by 3 annotators.
We have three labels:
PRO - Arguments supporting the acceptance of the paper.
CON - Arguments opposing the acceptance of the paper.
NON - Non-argumentative sentences/tokens which have no influence on the acceptance of the paper.
Based on these labels, we define three tasks:
Argumentation Detection:
A binary classification of whether a text span is an argument.
The classes are denoted by ARG and NON, where ARG is the union of PRO and CON classes.
Stance Detection:
A binary classification of whether an argumentative text span supports or opposes the paper's acceptance.
The model is trained and evaluated only on argumentative PRO and CON text spans.
Joint Detection:
A multi-class classification between the classes PRO, CON and NON, i.e. the combination of argumentation and stance detection.
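To make the relation between the labels and the three tasks concrete, a small illustrative sketch mapping a gold label to a target for each task (the label strings follow the list above; the helper itself is not part of the released data):

```python
def task_targets(label: str) -> dict:
    """Map a gold label (PRO, CON, NON) to targets for the three tasks."""
    assert label in {"PRO", "CON", "NON"}
    return {
        # Argumentation detection: ARG is the union of PRO and CON.
        "argumentation": "ARG" if label in {"PRO", "CON"} else "NON",
        # Stance detection is only defined on argumentative spans.
        "stance": label if label in {"PRO", "CON"} else None,
        # Joint detection keeps the original three-way label.
        "joint": label,
    }

print(task_targets("PRO"))  # {'argumentation': 'ARG', 'stance': 'PRO', 'joint': 'PRO'}
```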
4. Generalization across Conferences
conferences_annotated_generalization/ contains token_level data of 77 reviews split differently than in 3.
We studied the model’s generalization to peer-reviews for papers from other (sub)domains.
To this end, we reduce the test set to only contain reviews from the GI'20 conference.
The focus of the GI'20 conference is Computer Graphics and Human-Computer Interaction, while the other conferences focus on Representation Learning, AI and Medical Imaging.
We consider GI'20 a subdomain since all conferences are from the domain of computer science.
NO-GI:
The original training dataset with all sentences from reviews of GI’20 removed.
ALL:
A resampling of the original training dataset of the same size as NO-GI, with sentences from all conferences.
5. Jupyter Notebook
ReviewStat is a Jupyter notebook that shows statistics of the raw dataset. |
false |
## E Dataset
This is the card for e dataset |
false |
# Dataset Card for Snippet-MLSUM-500
### Dataset Summary
This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, keywords, serp and tweet for news articles in German language.
### Languages
de - German
## Dataset Structure
text: a string feature.
title: a string feature.
teaser: a string feature.
keywords: a string feature.
serp_title: a string feature.
serp_description: a string feature.
tweet: a string feature.
url: a string feature.
date: a string feature.
topic: a string feature.
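A minimal sketch for loading and inspecting these fields with the datasets library (assuming the repository id `snipaid/snippet-mlsum-500` and an available `train` split):

```python
from datasets import load_dataset

# Repository id and split name are assumptions based on the links in this card family.
ds = load_dataset("snipaid/snippet-mlsum-500", split="train")
print(ds.column_names)  # expected: text, title, teaser, keywords, serp_title, serp_description, tweet, url, date, topic
print(ds[0]["title"])
```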
## Dataset Creation
The news articles in this dataset are a random sample of ~500 news articles from MLSUM balanced by topic.
Features text, title, teaser (originally summary in MLSUM), url, date and topic are copied from MLSUM.
Features keywords, serp_title, serp_description and tweet are machine generated with GPT-3.5.
Generated features comply with length limits in place for SERPs and Tweets at the time of publishing.
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: keywords, serp_title, serp_description and tweet) may exhibit signs of model hallucination.
## Additional Information
See [Snippet-MLSUM-500-V2](https://huggingface.co/datasets/snipaid/snippet-mlsum-500-v2) if you are interested in a dataset with combined serp and additional summary data.
### Licensing Information
This dataset is licensed under MIT license. |
false |
# Roi'adan V'anzey Lycoris
As of right now this is yet to be tested in A1111.
Alpha edition was trained at 110 steps for 10 epochs.
I don't think that's right LOL.
I'll fill out more of this when I'm sure it's OK.
It's working, I just need time to re-fill this card out: https://civitai.com/models/25430 |
false |
# NorPaca Norwegian Bokmål
This dataset is a translation to Norwegian Bokmål of [alpaca_gpt4_data.json](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca), but generated with GPT-4.
# Prompt to generate dataset
```
Du blir bedt om å komme opp med et sett med 20 forskjellige oppgaveinstruksjoner. Disse oppgaveinstruksjonene vil bli gitt til en GPT-modell, og vi vil evaluere GPT-modellen for å fullføre instruksjonene.
Her er kravene:
1. Prøv å ikke gjenta verbet for hver instruksjon for å maksimere mangfoldet.
2. Språket som brukes til undervisningen bør også være mangfoldig. For eksempel bør du kombinere spørsmål med imperative instruksjoner.
3. Type instruksjoner bør være mangfoldig. Listen bør inneholde forskjellige typer oppgaver som åpen generering, klassifisering, redigering, etc.
2. En GPT-språkmodell skal kunne fullføre instruksjonen. For eksempel, ikke be assistenten om å lage visuell eller lydutgang. For et annet eksempel, ikke be assistenten om å vekke deg klokken 17.00 eller angi en påminnelse fordi den ikke kan utføre noen handling.
3. Instruksjonene skal være på norsk.
4. Instruksjonene skal være 1 til 2 setninger lange. Enten en imperativ setning eller et spørsmål er tillatt.
5. Du bør generere et passende input til instruksjonen. Inndatafeltet skal inneholde et spesifikt eksempel gitt for instruksjonen. Det bør involvere realistiske data og bør ikke inneholde enkle plassholdere. Innspillet bør gi betydelig innhold for å gjøre instruksjonen utfordrende, men bør ideelt sett ikke overstige 100 ord.
6. Ikke alle instruksjoner krever inndata. For eksempel, når en instruksjon spør om noen generell informasjon, "hva er den høyeste toppen i verden", er det ikke nødvendig å gi en spesifikk kontekst. I dette tilfellet legger vi ganske enkelt "<noinput>" i inntastingsfeltet.
7. Utgangen skal være et passende svar på instruksjonen og input.Sørg for at utgangen er mindre enn 100 ord.
Liste over 200 instrukser:
``` |
false | # Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.
#### Who are the source language producers?
The language producers are employees of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/).
### Annotations
The dataset contains official annotations from the game, including in-game speaker names and transcripts.
## Additional Information
### Dataset Curators
The dataset was created by [w4123](https://github.com/w4123) initially in his [GitHub repository](https://github.com/w4123/GenshinVoice).
### Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved. |
false | # AutoTrain Dataset for project: colors-1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project colors-1.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1920x1080 P PIL image>",
"target": 1
},
{
"image": "<1000x563 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['blue', 'green', 'red'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 10 |
| valid | 3 |
|
false | |
true | This is the Spanish version of Winogrande Small (640 instances) for training only.
The translation was done manually by a group of experts. The dataset will still be improved in the future.
We also acknowledge Somos-NLP for this achievement. |
true | # Dataset Card
**Paper**: On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research
**Abstract**: Perception of toxicity evolves over time and often differs between geographies and cultural backgrounds. Similarly, black-box commercially available APIs for detecting toxicity, such as the Perspective API, are not static, but frequently retrained to address any unattended weaknesses and biases. We evaluate the implications of these changes on the reproducibility of findings that compare the relative merits of models and methods that aim to curb toxicity. Our findings suggest that research that relied on inherited automatic toxicity scores to compare models and techniques may have resulted in inaccurate findings. Rescoring all models from HELM, a widely respected living benchmark, for toxicity with the recent version of the API led to a different ranking of extensively used models. We suggest caution in applying apples-to-apples comparisons between studies and lay recommendations for a more structured approach to evaluating toxicity over time.
Published on the [Trustworthy and Reliable Large-Scale Machine Learning Models ICLR 2023 Workshop](https://rtml-iclr2023.github.io/cfp.html).
[[Code]](https://github.com/for-ai/black-box-api-challenges) [[OpenReview]](https://openreview.net/forum?id=bRDHL4J5vy) [[Extended Pre-print]]()
## Dataset Description
In this repo are the data from the paper "On the challenges of using black-box APIs for toxicity evaluation in research".
In the folders you can find:
- **real-toxicity-prompts:** prompts from the RealToxicityPrompts dataset rescored with Perspective API in February 2023.
- **helm:** prompts and continuations from the HELM benchmark v0.2.2, rescored with Perspective API in April 2023. That folder also contains the original stats for each of the models as scraped from the website.
- **dexperts:** prompts and continuations from a few models from the DExperts paper, rescored with Perspective API in February 2023.
- **uddia:** continuations from UDDIA models, rescored with Perspective API in February 2023.
### RealToxicityPrompts
RealToxicityPrompts is a dataset of 100k sentence snippets from the web for researchers to further address the risk of neural toxic degeneration in models.
- **Homepage:** [Toxic Degeneration homepage](https://toxicdegeneration.allenai.org/)
- **Repository:** [Code repository](https://github.com/allenai/real-toxicity-prompts)
- **Paper:** [RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models](https://arxiv.org/abs/2009.11462)
### HELM
- **Homepage:** [HELM Benchmark](https://crfm.stanford.edu/helm/latest/)
- **Repository:** [Code repository](https://github.com/stanford-crfm/helm)
- **Paper:** [Holistic Evaluation of Language Models](https://arxiv.org/abs/2211.09110)
### DExperts
- **Repository:** [Code repository](https://github.com/alisawuffles/DExperts)
- **Paper:** [DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts](https://arxiv.org/abs/2105.03023)
### UDDIA
- **Paper:** [Unified Detoxifying and Debiasing in Language Generation via Inference-time Adaptive Optimization](https://arxiv.org/abs/2210.04492)
# Citation
```
@inproceedings{
pozzobon2023on,
title={On the Challenges of Using Black-Box {API}s for Toxicity Evaluation in Research},
author={Luiza Amador Pozzobon and Beyza Ermis and Patrick Lewis and Sara Hooker},
booktitle={ICLR 2023 Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models },
year={2023},
url={https://openreview.net/forum?id=bRDHL4J5vy}
}
```
|
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for Instruct-Snippet-MLSUM-500
### Dataset Summary
This is a multitask instruction finetuning dataset for the task of news snippet generation. It is built from a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, keywords, serp and tweet for news articles in German language.
### Languages
de - German
## Dataset Structure
lable: a string feature.
instruction: a string feature.
input: a string feature.
output: a string feature.
## Dataset Creation
This dataset was created from Snippet-MLSUM-500. See [Snippet-MLSUM-500](https://huggingface.co/datasets/snipaid/snippet-mlsum-500) for the dataset without instructions.
Instructions were generated with GPT-3.5 from a human-curated seed-set of instructions.
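As an illustration of how such records are typically consumed, a small sketch that assembles a finetuning prompt from the fields listed above. The template is a hypothetical choice for demonstration, not the one used to build or train on this dataset:

```python
def build_example(record: dict) -> dict:
    """Assemble a hypothetical prompt/completion pair from one instruction record."""
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\n" + record["input"]
    return {"prompt": prompt, "completion": record["output"]}

record = {  # illustrative values only
    "instruction": "Generiere eine Überschrift für den folgenden Artikel.",
    "input": "Der Artikeltext ...",
    "output": "Eine passende Überschrift",
}
print(build_example(record))
```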
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: output) may exhibit signs of model hallucination, toxicity and stereotypes.
## Additional Information
See [Instruct-Snippet-MLSUM-500-V2](https://huggingface.co/datasets/snipaid/instruct-snippet-mlsum-500-v2) if you are interested in an improved successor, with further support for summaries.
### Licensing Information
This dataset is licensed under MIT license. |
true | <div align="center">
<article style="display: flex; flex-direction: column; align-items: center; justify-content: center;">
<p align="center"><img width="300" src="https://user-images.githubusercontent.com/25022954/209616423-9ab056be-5d62-4eeb-b91d-3b20f64cfcf8.svg" /></p>
<h1 style="width: 100%; text-align: center;"></h1>
</p>
</article>
<a href="./README_zh-CN.md" >简体中文</a> | English
</div>
## Introduction
LabelU is an open-source data annotation tool that supports Chinese. At present, it provides image annotation capabilities such as rectangle, polygon, point, line, classification, and caption, and can support detection, classification, segmentation, text transcription, line detection, key point detection and other computer vision task scenarios. You can customize the annotation task by freely combining tools, and data export in COCO and MASK formats is supported.
## Getting started
### Install locally with miniconda
```
# Download and Install miniconda
# https://docs.conda.io/en/latest/miniconda.html
# Create virtual environment(python = 3.7)
conda create -n labelu python=3.7
# Activate virtual environment
conda activate labelu
# Install labelu
pip install labelu
# Start labelu, server: http://localhost:8000
labelu
```
### Install for local development
```
# Download and Install miniconda
# https://docs.conda.io/en/latest/miniconda.html
# Create virtual environment(python = 3.7)
conda create -n labelu python=3.7
# Activate virtual environment
conda activate labelu
# Install poetry
# https://python-poetry.org/docs/#installing-with-the-official-installer
# Install all package dependencies
poetry install
# Start labelu, server: http://localhost:8000
uvicorn labelu.main:app --reload
# Update submodule
git submodule update --remote --merge
```
## Features
- Uniform: six image annotation tools are provided, which can be configured through simple visualization or YAML
- Unlimited: multiple tools can be freely combined to meet most image annotation requirements
<p align="center">
<img style="width: 600px" src="https://user-images.githubusercontent.com/25022954/209318236-79d3a5c3-2700-46c3-b59a-62d9c132a6c3.gif">
</p>
- Universal: supports multiple data export formats, including LabelU, COCO, and Mask
## Scenes
### Computer Vision
- Detection: Detection scenes for vehicles, license plates, pedestrians, faces, industrial parts, etc.
- Classification: classification of objects, target characteristics, right-or-wrong judgments, and other classification scenarios
- Semantic segmentation: Human body segmentation, panoramic segmentation, drivable area segmentation, vehicle segmentation, etc.
- Text transcription: Text detection and recognition of license plates, invoices, insurance policies, signs, etc.
- Contour detection: positioning line scenes such as human contour lines, lane lines, etc.
- Key point detection: positioning scenes such as human face key points, vehicle key points, road edge key points, etc.
## Usage
- [Guide](./docs/GUIDE.md)
## Annotation Format
- [LabelU Annotation Format](./docs/annotation%20format/README.md)
## Communication
Welcome to the Opendatalab Wechat group!
<p align="center">
<img style="width: 400px" src="https://user-images.githubusercontent.com/25022954/208374419-2dffb701-321a-4091-944d-5d913de79a15.jpg">
</p>
## Links
- [labelU-Kit](https://github.com/opendatalab/labelU-Kit)(Powered by labelU-Kit)
## LICENSE
This project is released under the [Apache 2.0 license](./LICENSE). |
false | Rana is an alter in Duskfall Crew's system -
Virtual World Lycoris sets are based on Dissociative Identity Disorder
Actually wait.. Rana is A FORMER alter, and is now fused with Tobias and Tori lol. |
false | Alex Brightman Lycoris |
true |
# Dataset Card for XNLI Code-Mixed Corpus (Sampled)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
- German-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
- French-English code-mixed by Equivalence Constraint Theory
- French-English code-mixed by Matrix Language Theory
## Dataset Structure
### Data Instances
```
{
  'text': "And he said , Mama , I 'm home",
  'label': 0
}
```
### Data Fields
- text: sentence
- label: binary label of text (0: spoken 1: written)
### Data Splits
- monolingual
- train (English, German, French monolingual): 2490
- test (English, German, French monolingual): 5007
- de_ec
- train (English, German, French monolingual): 2490
- test (German-English code-mixed by Equivalence Constraint Theory): 14543
- de_ml
- train (English, German, French monolingual): 2490
- test (German-English code-mixed by Matrix Language Theory): 12750
- fr_ec
- train (English, German, French monolingual): 2490
- test (French-English code-mixed by Equivalence Constraint Theory): 18653
- fr_ml
- train (English, German, French monolingual): 2490
- test (French-English code-mixed by Matrix Language Theory): 17381
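A minimal loading sketch with the datasets library. The repository id below is a hypothetical placeholder and the configuration names are assumed to match the split groups listed above; adjust both to the actual repository:

```python
from datasets import load_dataset

# Hypothetical repository id; "de_ec" is assumed to be one of the configurations listed above.
ds = load_dataset("nanakonoda/xnli_cm_sampled", "de_ec")
print(ds["train"].num_rows, ds["test"].num_rows)  # expected: 2490 / 14543
print(ds["test"][0])                              # {'text': ..., 'label': 0 or 1}
```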
### Other Statistics
#### Average Sentence Length
- monolingual
- train: 19.18714859437751
- test: 19.321150389454765
- de_ec
- train: 19.18714859437751
- test: 11.24314103004882
- de_ml
- train: 19.18714859437751
- test: 12.159450980392156
- fr_ec
- train: 19.18714859437751
- test: 12.26526564091567
- fr_ml
- train: 19.18714859437751
- test: 13.486968528853346
#### Label Split
- monolingual
- train
- 0: 498
- 1: 1992
- test
- 0: 1002
- 1: 4005
- de_ec
- train
- 0: 498
- 1: 1992
- test
- 0: 2777
- 1: 11766
- de_ml
- train
- 0: 498
- 1: 1992
- test
- 0: 2329
- 1: 10421
- fr_ec
- train
- 0: 498
- 1: 1992
- test
- 0: 3322
- 1: 15331
- fr_ml
- train
- 0: 498
- 1: 1992
- test
- 0: 2788
- 1: 14593
## Dataset Creation
### Curation Rationale
Using the XNLI Parallel Corpus, we generated a code-mixed corpus using CodeMixed Text Generator, and sampled a maximum of 30 sentences per original English sentence.
The XNLI Parallel Corpus is available here:
https://huggingface.co/datasets/nanakonoda/xnli_parallel
It was created from the XNLI corpus.
More information is available in the datacard for the XNLI Parallel Corpus.
Here is the link and citation for the original CodeMixed Text Generator paper.
https://github.com/microsoft/CodeMixed-Text-Generator
```
@inproceedings{rizvi-etal-2021-gcm,
title = "{GCM}: A Toolkit for Generating Synthetic Code-mixed Text",
author = "Rizvi, Mohd Sanad Zaki and
Srinivasan, Anirudh and
Ganu, Tanuja and
Choudhury, Monojit and
Sitaram, Sunayana",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.24",
pages = "205--211",
abstract = "Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.",
}
```
### Source Data
XNLI Code-Mixed Corpus
https://huggingface.co/datasets/nanakonoda/xnli_cm
XNLI Parallel Corpus
https://huggingface.co/datasets/nanakonoda/xnli_parallel
#### Original Source Data
XNLI Parallel Corpus was created using the XNLI Corpus.
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
We removed all punctuation from the XNLI Parallel Corpus except apostrophes.
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A |
false | # XSUM NO
A Norwegian summarization dataset custom-made for evaluation or fine-tuning of GPT models.
## Data Collection
Data was scraped from Aftenposten.no and Vg.no, and the summarization column is represented by the title and ingress.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("MasterThesisCBS/XSum_NO")
```
### Dataset Curators
[John Oskar Holmen Skjeldrum](mailto:josk18ad@student.cbs.dk) and [Peder Tanberg](mailto:peha28ae@student.cbs.dk) |
false | |
false | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
true | |
false |
# AnimeHeadsv3 Object Detection Dataset
The AnimeHeadsv3 Object Detection Dataset is a collection of anime and art images, including manga pages, that have been annotated with object bounding boxes for use in object detection tasks.
## Contents
There are two versions of the dataset available:
- Dataset with augmentation: contains 8157 images.
- Dataset without augmentation: contains 2777 images.
The dataset is split into training, validation, and testing sets. The images were collected from various sources and include a variety of anime and art styles, including manga. The annotations were created using the COCO format, with each annotation file containing the bounding box coordinates and label for each object in the corresponding image. The dataset has only one class, named "head".
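A minimal sketch of reading one of these COCO-format annotation files and grouping head boxes per image (the annotation file name is an illustrative assumption; the keys follow the standard COCO layout):

```python
import json
from collections import defaultdict

# Hypothetical file name; "images", "annotations" and "bbox" follow the COCO spec.
with open("train/_annotations.coco.json", encoding="utf-8") as f:
    coco = json.load(f)

file_names = {img["id"]: img["file_name"] for img in coco["images"]}
boxes = defaultdict(list)
for ann in coco["annotations"]:
    # bbox is [x, y, width, height]; every box belongs to the single "head" class.
    boxes[file_names[ann["image_id"]]].append(ann["bbox"])

print(len(boxes), "annotated images")
```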
## Preprocessing
The dataset with augmentation has the following preprocessing parameters:
Resize: Fit within 640x640
The dataset without augmentation does not have any preprocessing applied.
## Augmentation Parameters
The following augmentation parameters were applied to the dataset with augmentation:
Outputs per training example: 3
Flip: Horizontal
Saturation: Between -40% and +40%
Blur: Up to 4px
Noise: Up to 4% of pixels
|
true | # PARARULE-Plus-Depth-2
This is a branch which includes the dataset from PARARULE-Plus Depth=2. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this huggingface version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-2")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
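For orientation, here is a compact sketch of binary fine-tuning on this data with the transformers `Trainer`. The column names (`context`, `question`, `label`) and the BERT checkpoint are assumptions; refer to the linked example above for the authors' actual script:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("qbao775/PARARULE-Plus-Depth-2")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    # Assumed column names: "context" (facts and rules) and "question";
    # the 0/1 target is assumed to live in a column named "label".
    return tokenizer(batch["context"], batch["question"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pararule-bert",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```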
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` |
true | # PARARULE-Plus-Depth-3
This is a branch which includes the dataset from PARARULE-Plus Depth=3. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this huggingface version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-3")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` |
true | # PARARULE-Plus-Depth-4
This is a branch which includes the dataset from PARARULE-Plus Depth=4. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this huggingface version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-4")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` |
true | # PARARULE-Plus-Depth-5
This is a branch which includes the dataset from PARARULE-Plus Depth=5. PARARULE Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assumption and negation as failure. The motivation is to generate deeper PARARULE training samples. We add more training samples for the case where the depth is greater than or equal to two to explore whether Transformer has reasoning ability. PARARULE Plus is a combination of two types of entities, animals and people, and corresponding relationships and attributes. From the depth of 2 to the depth of 5, we have around 100,000 samples in the depth of each layer, and there are nearly 400,000 samples in total.
Here are the original links for PARARULE-Plus, including the paper, project and data.
Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf
Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language
Data: https://github.com/Strong-AI-Lab/PARARULE-Plus
PARARULE-Plus has been collected and merged by [LogiTorch.ai](https://www.logitorch.ai/), [ReasoningNLP](https://github.com/FreedomIntelligence/ReasoningNLP), [Prompt4ReasoningPapers](https://github.com/zjunlp/Prompt4ReasoningPapers) and [OpenAI/Evals](https://github.com/openai/evals/pull/651).
In this huggingface version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false` to better help users train models.
## How to load the dataset?
```
from datasets import load_dataset
dataset = load_dataset("qbao775/PARARULE-Plus-Depth-5")
```
## How to train a model using the dataset?
We provide an [example](https://github.com/Strong-AI-Lab/PARARULE-Plus/blob/main/README.md#an-example-script-to-load-pararule-plus-and-fine-tune-bert) showing how you can `git clone` the project and fine-tune a model on the dataset locally.
## Citation
```
@inproceedings{bao2022multi,
title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
year={2022},
publisher={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
``` |
false | # English to Colloquial Indonesian Dataset (EColIndo)
The first large-scale, high-quality English to colloquial Indonesian dataset.
Fully generated via ChatGPT zero-shot translation.
Author: Yonathan Setiawan
|
false |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 19334450
num_examples: 79168
- name: test
num_bytes: 2134369
num_examples: 8757
---
# Dataset Card for NQ-GAR |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for "squad"
## Table of Contents
- [Dataset Card for "squad"](#dataset-card-for-squad)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [plain_text](#plain_text)
- [Data Fields](#data-fields)
- [plain_text](#plain_text-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 35.14 MB
- **Size of the generated dataset:** 89.92 MB
- **Total amount of disk used:** 125.06 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 35.14 MB
- **Size of the generated dataset:** 89.92 MB
- **Total amount of disk used:** 125.06 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
false | |
true | # Dataset: sentiment_analysis-IT-dataset
## Dataset Description
Our data was collected by annotating Italian-language tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (https://aclanthology.org/2021.wassa-1.8/).
### Languages
The BCP-47 code for the dataset's language is it.
### Citation Information
```bibtex
@inproceedings{bianchi2021feel,
    title = {{"Sentiment Classification for the Italian Language"}},
    author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk",
    booktitle = "Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
    year = "2021",
    publisher = "Association for Computational Linguistics",
}
```
## Dataset Structure
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"feat_id_noticia": "Value(dtype='int16', id=None)",
"feat_target": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['NEG', 'NEU', 'POS'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1096 |
| valid | 275 | |
false |
# Dataset Card for April 2023 Polish Wikipedia
Wikipedia dataset containing cleaned articles in Polish.
The dataset has been built from the Wikipedia dump (https://dumps.wikimedia.org/)
using the [OLM Project](https://github.com/huggingface/olm-datasets).
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
``` |
false |
# Dataset Card for Snippet-MLSUM-500-V2
### Dataset Summary
This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, keywords, serp and tweet for news articles in German language.
### Languages
de - German
## Dataset Structure
text: a string feature.
title: a string feature.
teaser: a string feature.
keywords: a string feature.
summary: a string feature.
serp: a string feature.
tweet: a string feature.
url: a string feature.
date: a string feature.
topic: a string feature.
## Dataset Creation
The news articles in this dataset are a random sample of ~500 news articles from MLSUM balanced by topic.
Features text, title, teaser (originally summary in MLSUM), url, date and topic are copied from MLSUM.
Features keywords, serp, summary and tweet are machine generated with GPT-3.5.
Generated features comply with length limits in place for SERPs and Tweets at the time of publishing.
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: keywords, serp, summary and tweet) may exhibit signs of model hallucination, stereotypes and toxicity.
## Additional Information
### Licensing Information
This dataset is licensed under MIT license. |
false |
# Dataset Card for Instruct-Snippet-MLSUM-500-V2
### Dataset Summary
This is a multitask instruction finetuning dataset for the task of news snippet generation. It is built from a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such as title, teaser, summary, keywords, serp and tweet for news articles in German language.
### Languages
de - German
## Dataset Structure
lable: a string feature.
instruction: a string feature.
input: a string feature.
output: a string feature.
## Dataset Creation
This dataset was created from Snippet-MLSUM-500-V2. See [Snippet-MLSUM-500-V2](https://huggingface.co/datasets/snipaid/snippet-mlsum-500-v2) for the dataset without instructions.
Instructions were generated with GPT-3.5 from a human-curated seed-set of instructions.
## Considerations for Using the Data
### Known Limitations
Part of the snippet data is machine generated. Be aware that these features (specifically: output) may exhibit signs of model hallucination, toxicity and stereotypes.
## Additional Information
### Licensing Information
This dataset is licensed under MIT license. |
true | # Dataset Card for "applescript-lines-100k-non-annotated"
## Description
Dataset of 100,000 unique lines of AppleScript code scraped from GitHub and GitHub Gists. The dataset has been de-duplicated, comments have been removed (both single and multi-line), and effort has been made to merge multi-line structures such as records into one (however, expect some variability in this regard).
The dataset is constructed as an intermediate step to a fully-annotated AppleScript dataset.
Each row has fields for `text` and `source`, with text being the raw text of the line and source being the file name and extension from which the line was obtained. Full source links have been omitted for anonymity. |
true | |
false |
# Ruozhiba (弱智吧) Jokes Dataset
Ruozhiba is a very popular forum on Baidu Tieba, famous for short, punchy deadpan jokes. These jokes typically rely on puns, unusual sentence breaks, absurd logic and similar devices. Even today's most advanced language models struggle to fully understand Ruozhiba jokes.
[Ruozhiba (弱智吧)](https://tieba.baidu.com/f?ie=utf-8&kw=%E5%BC%B1%E6%99%BA)
I collected 100 Ruozhiba jokes from the internet, of which 45 are declarative sentences and 55 are questions. I analyzed these jokes with a combination of manual work and language models, and built this small dataset.
## Declarative jokes
Declarative jokes usually end with a period and are not easily mistaken by a language model for ordinary questions.
For example: "出人头地常年盛产人头。" (a pun that reads the idiom 出人头地, "to rise above others", literally as a place that produces human heads all year round).
## Question jokes
Question jokes are more deceptive and may leave a language model unable to tell whether they are genuine questions or jokes.
For example: "蓝牙耳机坏了,应该找牙科医生还是耳科医生?" ("My Bluetooth earphones are broken, should I see a dentist or an ear doctor?", punning on the 牙 "tooth" and 耳 "ear" in 蓝牙耳机 "Bluetooth earphones").
## File format
This dataset consists of two parts.
### retarded_bar.jsonl
retarded_bar.jsonl is the declarative-joke part, stored in JSONL format. Each line is a JSON dictionary with five fields: index `id`, original text `text`, punchline analysis `analysis`, puns `pun`, and author type `author_type`, where:
- `id` is a number, the index of the joke.
- `text` is a string, the original text of the joke, written by members of the Ruozhiba community and collected by me manually from the internet.
- `analysis` is a string explaining the punchline. Most analyses were written by me; a small part was generated with a language model, which is reflected in the `author_type` field.
- `pun` is a list of strings, the puns contained in the joke, identified by me. A joke may contain more than one pun, or none at all.
- `author_type` is a string giving the author type of the `analysis` (not of the original `text`); it currently takes the values `human` and `ai`.
### retarded_bar_qa.jsonl
retarded_bar_qa.jsonl is the question-joke part, stored in JSONL format. Each line is a JSON dictionary with four fields: index `id`, original text `text`, reply `answer`, and author type `author_type`, where:
- `id` is a number, the index of the joke.
- `text` is a string, the original text of the joke, written by members of the Ruozhiba community and collected by me manually from the internet.
- `answer` is a string, a reasonable reply to the question-style joke. I define a reasonable reply as one that lets the asker know the humor has been noticed, while remaining polite and providing accurate factual information. Some replies were written by me and some were generated with a language model, which is reflected in the `author_type` field.
- `author_type` is a string giving the author type of the `answer` (not of the original `text`); it currently takes the values `human` and `ai`.
## Usage
It is recommended to read this dataset with Python's jsonlines library or Hugging Face's datasets library. These libraries make it easy to read JSONL files and do further processing, such as building training or test sets and training or evaluating language models. For example, the jsonlines library can read a JSONL file line by line, as shown below:
```python
import jsonlines
with jsonlines.open('retarded_bar.jsonl') as reader:
for obj in reader:
        # process each object
print(obj)
```
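Alternatively, a minimal sketch with Hugging Face's datasets library (assuming local copies of the two JSONL files named as above):

```python
from datasets import load_dataset

# Load the two JSONL files as separate splits; file names as described above.
jokes = load_dataset("json", data_files={"statements": "retarded_bar.jsonl",
                                         "questions": "retarded_bar_qa.jsonl"})
print(jokes["statements"][0])
```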
## Limitations
1. Since I am the only person working on this project, and this kind of data is hard to annotate, hard to automate and labor-intensive, the dataset is small.
2. My writing ability is limited; I may not express the punchline analyses accurately and vividly, and may not write particularly high-quality replies. Some analyses and replies in this dataset may therefore not be the best possible.
3. The data was collected from the internet and may raise copyright issues. Please be mindful of copyright and comply with relevant laws and regulations when using this dataset.
4. Since Ruozhiba jokes are mostly grounded in a Chinese-language context, this dataset may not be suitable for judging jokes in other languages.
## Contact
My QQ: 583753622
## Contributions of more high-quality data are welcome! |
false | |
false |
The dataset contains (almost) the entire OpenSubtitles database for Japanese:
- Over 7000 TV shows and/or movies.
- The subtitles are human-generated.
- The dataset has been parsed, cleaned and converted to UTF-8.
File contents:
- OpenSubtitles.parquet: the text and the time data.
- OpenSubtitles_meta.parquet: the existing metadata for each title.
- OpenSubtitles-OA.parquet: the dataset encoded with two columns, SOURCE (the name of the movie/TV show) and TEXT (the subtitles), following the Open Assistant rules.
Both tables can be joined by the ID column. (The value can be NULL in the meta table). |
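A minimal sketch of joining the two tables with pandas, assuming both parquet files are in the working directory and the shared key column is literally named `ID`:
```python
import pandas as pd

# Subtitle text/timing data and per-title metadata.
subs = pd.read_parquet("OpenSubtitles.parquet")
meta = pd.read_parquet("OpenSubtitles_meta.parquet")

# A left join keeps every subtitle row, even when its metadata is missing (NULL in the meta table).
joined = subs.merge(meta, on="ID", how="left")
print(joined.head())
```
|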
false | # AutoTrain Dataset for project: fine-tune
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fine-tune.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<382x256 RGB PIL image>",
"target": 17
},
{
"image": "<341x256 RGB PIL image>",
"target": 7
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['apple', 'banana', 'cake', 'candy', 'carrot', 'cookie', 'doughnut', 'grape', 'hot dog', 'ice cream', 'juice', 'muffin', 'orange', 'pineapple', 'popcorn', 'pretzel', 'salad', 'strawberry', 'waffle', 'watermelon'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5394 |
| valid | 1351 |
|
true | |
false |
Ok SO, as usual we don't usually have time to test these, though there is a chance many of the poses are inclusive of testing - the images are in the grids, and we've included sample images for the face landmarks.
...Don't mock us LOL, we literally found a face landmark demo on huggingface, and went nuts making dumb faces for y'all. "DEPICTS AN ACTUAL PERSON" is because some of the data is AI generated and some of it's realistic people used for poses -- like our face.
License Restrictions:
Don't resell any of the sample images in the model card.
What you do with the data beyond that is literally not up to us, tho we'd prefer you don't "SELL" the poses.
We're not liable for any copyright infringement or any dirty nasty things you make with the poses XD
We're also not liable if easy diffusion tells you that you can merge a single image into 20gb models because you can't, if it tells you that you can then you're gullible LOL.
These packs are copied over from civit as a backup :3
Feel free to use. |
false | # AutoTrain Dataset for project: xx
## Dataset Description
This dataset has been automatically processed by AutoTrain for project xx.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_db_id": "department_management",
"target": "SELECT count(*) FROM head WHERE age > 56",
"source": "How many heads of the departments are older than 56 ?",
"feat_query_toks": [
"SELECT",
"count",
"(",
"*",
")",
"FROM",
"head",
"WHERE",
"age",
">",
"56"
],
"feat_query_toks_no_value": [
"select",
"count",
"(",
"*",
")",
"from",
"head",
"where",
"age",
">",
"value"
],
"feat_question_toks": [
"How",
"many",
"heads",
"of",
"the",
"departments",
"are",
"older",
"than",
"56",
"?"
]
},
{
"feat_db_id": "department_management",
"target": "SELECT name , born_state , age FROM head ORDER BY age",
"source": "List the name, born state and age of the heads of departments ordered by age.",
"feat_query_toks": [
"SELECT",
"name",
",",
"born_state",
",",
"age",
"FROM",
"head",
"ORDER",
"BY",
"age"
],
"feat_query_toks_no_value": [
"select",
"name",
",",
"born_state",
",",
"age",
"from",
"head",
"order",
"by",
"age"
],
"feat_question_toks": [
"List",
"the",
"name",
",",
"born",
"state",
"and",
"age",
"of",
"the",
"heads",
"of",
"departments",
"ordered",
"by",
"age",
"."
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_db_id": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"source": "Value(dtype='string', id=None)",
"feat_query_toks": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_query_toks_no_value": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_question_toks": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1001 |
| valid | 1001 |
|
false | [Original dataset] - This dataset is just the translation of the [gsm8k] dataset.
[Original dataset]: <https://huggingface.co/datasets/gsm8k>
[gsm8k]: <https://huggingface.co/datasets/gsm8k> |
false | |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
true | # Dataset Card for "cr"
## Dataset Description
Product review dataset from SentEval.
## Data Fields
- `sentence`: Complete sentence expressing an opinion about a product.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # AutoTrain Dataset for project: cancer-lakera
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cancer-lakera.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x450 RGB PIL image>",
"feat_image_id": "ISIC_0024329",
"feat_lesion_id": "HAM_0002954",
"target": 0,
"feat_dx_type": "histo",
"feat_age": 75.0,
"feat_sex": "female",
"feat_localization": "lower extremity"
},
{
"image": "<600x450 RGB PIL image>",
"feat_image_id": "ISIC_0024372",
"feat_lesion_id": "HAM_0005389",
"target": 0,
"feat_dx_type": "histo",
"feat_age": 70.0,
"feat_sex": "male",
"feat_localization": "lower extremity"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"feat_image_id": "Value(dtype='string', id=None)",
"feat_lesion_id": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['actinic_keratoses', 'basal_cell_carcinoma', 'benign_keratosis-like_lesions'], id=None)",
"feat_dx_type": "Value(dtype='string', id=None)",
"feat_age": "Value(dtype='float64', id=None)",
"feat_sex": "Value(dtype='string', id=None)",
"feat_localization": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 150 |
|
false | |
false | |
false |
# character_similarity
This is a dataset used for training models to determine whether two anime images (containing only one person) depict the same character. The dataset includes the following versions:
| Version | Filename | Characters | Images | Information |
|:---------:|:-----------------------:|:----------:|:------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| v0 | images_v0.tar.xz | 2059 | 162116 | Crawled from [zerochan.net](https://www.zerochan.net/), includes images of Arknights, Fate/Grand Order, Genshin Impact, Girls' Frontline, and Azur Lane, as well as over 1500 other game or anime characters. The images are all small preview thumbnails. |
| v0_tiny | images_tiny_v0.tar.xz | 514 | 10036 | The dataset `v0` has been simplified by randomly selecting 1/4 of the anime characters and 1/4 of the images for each character. It is recommended to use this dataset for model training and validation. |
| v0_xtiny | images_xtiny_v0.tar.xz | 100 | 1814 | `v0_xtiny` dataset is a further simplified version of `v0_tiny`, with only 100 characters retained. This dataset is only **suitable for model validation and experimentation**, and is not recommended for formal training. |
| v1 | images_v1.tar.xz | 4001 | 292800 | The `v1` dataset is similar to `v0`, crawled from zerochan, but it contains a richer selection of characters and more diverse images of the same character. Each character includes no less than 3 and no more than 200 images. |
| v1_pruned | images_pruned_v1.tar.xz | 3982 | 241483 | Pruned from `v1` dataset, deleted monochrome and non-solo pictures, and also removed pictures where the character accounted for less than 40% of the entire vision area. | |
false | Everything in here should be under CreativeML Open Rail.
We hope that you enjoy the content in here.
We're not at risk for anything you do with it.
Go on, use it! |
false | |
false | [Original dataset] - This dataset is just the translation of the [qasc] dataset.
[Original dataset]: <https://huggingface.co/datasets/qasc>
[qasc]: <https://huggingface.co/datasets/qasc> |
false |
# Dataset Card for German REBEL Dataset
### Dataset Summary
This dataset is the German version of Babelscape/rebel-dataset. It has been generated using [CROCODILE](https://github.com/Babelscape/crocodile).
The Wikipedia Version is from November 2022.
### Languages
- German
## Dataset Structure
```
{"docid": "9400003",
"title": "Odin-Gletscher",
"uri": "Q7077818",
"text": "Der Odin-Gletscher ist ein kleiner Gletscher im ostantarktischen Viktorialand. Er fließt von den Westhängen des Mount Odin in der Asgard Range.\n\nDas New Zealand Antarctic Place-Names Committee benannte ihn in Anlehnung an die Benennung des Mount Odin nach Odin, Göttervater, Kriegs- und Totengott der nordischen Mythologie.",
"entities": [{"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, ... ],
"triples": [{"subject": {"uri": "Q7077818", "boundaries": [4, 18], "surfaceform": "Odin-Gletscher", "annotator": "Me"},
"predicate": {"uri": "P31", "boundaries": null, "surfaceform": "ist ein(e)", "annotator": "NoSubject-Triple-aligner"},
"object": {"uri": "Q35666", "boundaries": [35, 44], "surfaceform": "Gletscher", "annotator": "Me"}, "sentence_id": 0,
"dependency_path": null,
"confidence": 0.99560546875,
"annotator": "NoSubject-Triple-aligner"}, ...]
}
```
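To illustrate this structure, here is a minimal sketch (assuming the dump is stored as a jsonl file, here called `rebel_german.jsonl` as a placeholder, with one such document per line) that prints the surface forms of every triple in the first document:
```python
import json

# Read the first document of the dump (file name is a placeholder) and print its triples.
with open("rebel_german.jsonl", encoding="utf-8") as f:
    article = json.loads(f.readline())

for triple in article["triples"]:
    print(
        triple["subject"]["surfaceform"],
        "--", triple["predicate"]["surfaceform"], "->",
        triple["object"]["surfaceform"],
    )
```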
### Data Instances
The dataset is 1.1 GB unpacked on disk and 195 MB zipped.
### Data Fields
"docid": "9644601",
"title": Wikipedia Title
"uri": "Q4290759",
"text": Wikipedia Abstract
"entities": A list of Entities
- uri: Wikidata URI
- boundaries: Tuple of indices of the entity in the abstract
- surfaceform: text form of entity
- annotator: different annotator classes
"triples": List of Triples as dictionaries
- sentence_id: Sentence number the triple appears in.
- "confidence": float, the confidence of the NLI Model
- subject
- uri: Wikidata Entity URI
- boundaries
- surfaceform
- annotator
- predicate
- uri: Wikidata Relation URI
- boundaries: always null,
- surfaceform: Wikidata Relation Name
- annotator
- object:
- uri: Wikidata Entity URI
- boundaries
- surfaceform
- annotator
### Data Splits
No splits are provided for now since the relation classes are quite imbalanced.
To read the dataset you can adapt the function provided by https://github.com/Babelscape/rebel (it is written as a method of a `datasets` loading-script builder, hence the `self` argument):
```python
import json
import logging
import re

import pandas as pd

def _generate_examples(self, filepath):
"""This function returns the examples in the raw (text) form."""
logging.info("generating examples from = %s", filepath)
relations_df = pd.read_csv(self.config.data_files['relations'], header = None, sep='\t')
relations = list(relations_df[0])
with open(filepath, encoding="utf-8") as f:
for id_, row in enumerate(f):
article = json.loads(row)
prev_len = 0
if len(article['triples']) == 0:
continue
count = 0
for text_paragraph in article['text'].split('\n'):
if len(text_paragraph) == 0:
continue
sentences = re.split(r'(?<=[.])\s', text_paragraph)
text = ''
for sentence in sentences:
text += sentence + ' '
if any([entity['boundaries'][0] < len(text) + prev_len < entity['boundaries'][1] for entity in article['entities']]):
continue
entities = sorted([entity for entity in article['entities'] if prev_len < entity['boundaries'][1] <= len(text)+prev_len], key=lambda tup: tup['boundaries'][0])
decoder_output = '<triplet> '
for int_ent, entity in enumerate(entities):
triplets = sorted([triplet for triplet in article['triples'] if triplet['subject'] == entity and prev_len< triplet['subject']['boundaries'][1]<=len(text) + prev_len and prev_len< triplet['object']['boundaries'][1]<=len(text)+ prev_len and triplet['predicate']['surfaceform'] in relations], key=lambda tup: tup['object']['boundaries'][0])
if len(triplets) == 0:
continue
decoder_output += entity['surfaceform'] + ' <subj> '
for triplet in triplets:
decoder_output += triplet['object']['surfaceform'] + ' <obj> ' + triplet['predicate']['surfaceform'] + ' <subj> '
decoder_output = decoder_output[:-len(' <subj> ')]
decoder_output += ' <triplet> '
decoder_output = decoder_output[:-len(' <triplet> ')]
count += 1
prev_len += len(text)
if len(decoder_output) == 0:
text = ''
continue
text = re.sub('([\[\].,!?()])', r' \1 ', text.replace('()', ''))
text = re.sub('\s{2,}', ' ', text)
yield article['uri'] + '-' + str(count), {
"title": article['title'],
"context": text,
"id": article['uri'] + '-' + str(count),
"triplets": decoder_output,
}
text = ''
```
## Dataset Creation
### Curation Rationale
This dataset was created to enable the training of a German BART-based model as a pre-training phase for relation extraction.
### Source Data
#### Who are the source language producers?
Any Wikipedia and Wikidata contributor.
### Annotations
#### Annotation process
The dataset was extracted with the cRocoDiLe pipeline: Automatic Relation Extraction Dataset with NLI filtering.
#### Who are the annotators?
Automatic annotations.
### Personal and Sensitive Information
All text is from Wikipedia; any personal or sensitive information found there may also be present in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset serves as a pre-training step for relation extraction models. It is distantly annotated, hence it should only be used as such. A model trained solely on this dataset may produce hallucinations stemming from the silver nature of the annotations.
### Discussion of Biases
Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases within those sources.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
For Wikidata, there are class imbalances, also resulting from Wikipedia.
### Other Known Limitations
None known for now.
## Additional Information
### Dataset Curators
Me
### Licensing Information
Since anyone can create the dataset on their own using the linked GitHub Repository, I am going to use the MIT Licence.
### Citation Information
Inspired by:
```
@inproceedings{huguet-cabot-navigli-2021-rebel,
title = "REBEL: Relation Extraction By End-to-end Language generation",
author = "Huguet Cabot, Pere-Llu{\'\i}s and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
}
```
### Contributions
None for now |
false | # Dataset Card for Anything v3.0 Glazed Samples
## Dataset Description
### Dataset Summary
This dataset contains image samples originally generated by [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0)
and subsequently processed by [Glaze](https://glaze.cs.uchicago.edu/) tool.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | |
true | |
true | ### Dataset Description
This dataset, compiled by Brendan Dolan-Gavitt, contains ~100 thousand `c++` functions and GPT-3.5 turbo-generated summaries of the code's purpose.
An example of Brendan's original prompt and GPT-3.5's summary may be found below.
```
int gg_set_focus_pos(gg_widget_t *widget, int x, int y) {
return 1;
}
Q. What language is the above code written in?
A. C/C++.
Q. What is the purpose of the above code?
A.
This code defines a function called `gg_set_focus_pos` that takes a pointer to a `gg_widget_t` structure
and two integer arguments `x` and `y`. The function returns an integer value of 1. The purpose of this
function is not clear from this code snippet alone, as it does not contain any implementation details.
However, based on the function name and arguments, it is likely that this function sets the focus
position of a graphical user interface (GUI) widget to the specified `x` and `y` coordinates.
Tags: GUI, widget, focus, position, function.
```
The link to the dataset's announcement may be found here: https://twitter.com/moyix/status/1644355889602654210 |
false |
# Dataset Card for GPT4All-Community-Discussions
## Dataset Description
This dataset contains ethically gathered discussions from the community, who shared their experiences with various open source discussion models using the GPT4All-ui tool. The dataset is open for any use, including commercial use, as long as proper citation is given to acknowledge the contributions of the community.
The GPT4All-ui tool allows users to have conversations with various open source AIs and export their discussions in JSON format. Every input and output is ranked or enhanced by the user, enabling them to correct any mistakes made by the AI and embed the correction into the database. The aim of this tool is to create an ethically sourced database made by the community for the community.
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card serves as a base template for new datasets and has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
This dataset currently has no supported tasks or leaderboards.
### Languages
This dataset contains discussions in English, French, German, Arabic, Italian, and Spanish.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
This dataset was created to provide a platform for the community to share their experiences with various open source discussion models using the GPT4All-ui tool.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from users who willingly shared their experiences using the GPT4All-ui tool.
#### Who are the source language producers?
The source language producers are the community members who shared their discussions using the GPT4All-ui tool.
### Annotations
#### Annotation process
No annotations were made for this dataset.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
This dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by the community members who shared their discussions using the GPT4All-ui tool.
### Licensing Information
This dataset is licensed under the Apache 2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Contributions to this dataset are open to any user. Users can fork the tool, add their entry, and then do a pull request.
The GPT4All-ui tool can be found at: https://github.com/nomic-ai/gpt4all-ui
|
false |
ESLO audio dataset
configs:
- no_overlap_no_hesitation
- no_hesitation
- no_overlap
- raw
License: Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
```
{'audio': {'array': array([-0.00250244, 0.00039673, 0.00326538, ..., 0.01953125,
0.02206421, 0.02304077]),
'path': None,
'sampling_rate': 16000},
'end_timestamp': 8.939,
'file': 'ESLO1_INTPERS_437',
'overlap': False,
'sentence': "eh bien je voudrais vous demander d'abord en quoi consiste votre "
'entreprise ici ? exactement',
'speaker': 'spk1',
'start_timestamp': 0.954}
```
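A minimal loading sketch with the datasets library; the repository id below is a placeholder and the presence of a `train` split is an assumption:
```python
from datasets import load_dataset

# "user/eslo" is a placeholder for this repository's id; pick any of the configs listed above.
eslo = load_dataset("user/eslo", "no_overlap_no_hesitation")
sample = eslo["train"][0]  # assumes a "train" split exists
print(sample["sentence"], sample["speaker"], sample["start_timestamp"])
```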
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d’Orléans 1968-2012., in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46
Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1. |
false | |
true |
The dataset is stored at the OSF [here](https://osf.io/ksdnm/)
MLRegTest is a benchmark for sequence classification, containing training, development, and test sets from 1,800 regular languages.
Regular languages are formal languages, which are sets of sequences definable with certain kinds of formal grammars, including
regular expressions, finite-state acceptors, and monadic second-order logic with either the successor or precedence relation in the
model signature for words. This benchmark was designed to help identify those factors, specifically the kinds of long-distance
dependencies, that can make it difficult for ML systems to generalize successfully in learning patterns over sequences. MLRegTest
organizes its languages according to their logical complexity (monadic second-order, first-order, propositional, or monomial
expressions) and the kind of logical literals (string, tier-string, subsequence, or combinations thereof). The logical complexity
and choice of literal provides a systematic way to understand different kinds of long-distance dependencies in regular languages,
and therefore to understand the capabilities of different ML systems to learn such long-distance dependencies. The authors think it
will be an important milestone if other researchers are able to find an ML system that succeeds across the board on MLRegTest. |
false | https://github.com/Koziev/NLP_Datasets/tree/master/ChangePerson dataset in nice form |
false | |
false | |
false | |
false | |
true | Great |
false | # Dataset Card for "open-instruct-v1_deduped"
- Deduplicated version of [Isotonic/open-instruct-v1](https://huggingface.co/datasets/Isotonic/open-instruct-v1)
- Near-duplicates with a Jaccard similarity of at least 0.8 were removed (a rough sketch of this kind of filtering is shown below)
- Uses Stability's System Prompt
```
### System: StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
```
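The exact deduplication pipeline is not documented here; purely as an illustration, a brute-force sketch of Jaccard-based near-duplicate filtering over word sets might look like this:
```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two texts."""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def dedup(texts, threshold=0.8):
    """Keep a text only if no already-kept text is >= threshold similar (O(n^2), illustration only)."""
    kept = []
    for text in texts:
        if all(jaccard(text, k) < threshold for k in kept):
            kept.append(text)
    return kept

print(dedup(["a b c d", "a b c d e", "x y z"]))  # the second text is dropped as a near-duplicate
```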
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | |
false | # AutoTrain Dataset for project: finalbartmodel
## Dataset Description
This dataset has been automatically processed by AutoTrain for project finalbartmodel.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Four people standing in an enclosure with a sign that says \" The Human Shop \" on it . The Human Shop",
"feat_Unnamed: 1": null,
"target": "Four people standing in an enclosure with a sign that says \" The Human Shop \" on it . ",
"feat_Unnamed: 3": null,
"feat_Unnamed: 4": null,
"feat_Unnamed: 5": null,
"feat_Unnamed: 6": null,
"feat_Unnamed: 7": null,
"feat_Unnamed: 8": null,
"feat_Unnamed: 9": null,
"feat_Unnamed: 10": null,
"feat_Unnamed: 11": null,
"feat_Unnamed: 12": null,
"feat_Unnamed: 13": null,
"feat_Unnamed: 14": null,
"feat_Unnamed: 15": null
},
{
"text": "a man carrying a sign that says free hug along the sidewalk .Free hugs",
"feat_Unnamed: 1": null,
"target": "a man carrying a sign that says free hug along the sidewalk .",
"feat_Unnamed: 3": null,
"feat_Unnamed: 4": null,
"feat_Unnamed: 5": null,
"feat_Unnamed: 6": null,
"feat_Unnamed: 7": null,
"feat_Unnamed: 8": null,
"feat_Unnamed: 9": null,
"feat_Unnamed: 10": null,
"feat_Unnamed: 11": null,
"feat_Unnamed: 12": null,
"feat_Unnamed: 13": null,
"feat_Unnamed: 14": null,
"feat_Unnamed: 15": null
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"feat_Unnamed: 1": "Value(dtype='float64', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_Unnamed: 3": "Value(dtype='float64', id=None)",
"feat_Unnamed: 4": "Value(dtype='float64', id=None)",
"feat_Unnamed: 5": "Value(dtype='float64', id=None)",
"feat_Unnamed: 6": "Value(dtype='float64', id=None)",
"feat_Unnamed: 7": "Value(dtype='float64', id=None)",
"feat_Unnamed: 8": "Value(dtype='float64', id=None)",
"feat_Unnamed: 9": "Value(dtype='float64', id=None)",
"feat_Unnamed: 10": "Value(dtype='float64', id=None)",
"feat_Unnamed: 11": "Value(dtype='float64', id=None)",
"feat_Unnamed: 12": "Value(dtype='float64', id=None)",
"feat_Unnamed: 13": "Value(dtype='float64', id=None)",
"feat_Unnamed: 14": "Value(dtype='float64', id=None)",
"feat_Unnamed: 15": "Value(dtype='float64', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 408 |
| valid | 102 |
|
false |
# Dataset Card for AfriQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/afriqa)
- **Repository:** [github](https://github.com/masakhane-io/afriqa)
- **Paper:** [paper]()
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or oogundep@uwaterloo.ca
### Dataset Summary
AfriQA is the first cross-lingual question answering (QA) dataset with a focus on African languages. The dataset includes over 12,000 XOR QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology.
The train/validation/test sets are available for all 10 languages.
### Supported Tasks and Leaderboards
- `question-answering`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better) and [Exact Match Accuracy](https://huggingface.co/spaces/evaluate-metric/exact_match).
### Languages
There are 10 languages available:
- Bemba (bem)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Swahili (swa)
- Twi (twi)
- Wolof (wol)
- Yorùbá (yor)
- Zulu (zul)
## Dataset Structure
### Data Instances
- Data Format:
- id : Question ID
- question : Question in African Language
- translated_question : Question translated into a pivot language (English/French)
- answers : Answer in African Language
- lang : Datapoint Language (African Language) e.g `bem`
- split : Dataset Split
- translated_answer : Answer in Pivot Language
- translation_type : Translation type of question and answers
```json
{ "id": 0,
"question": "Bushe icaalo ca Egypt caali tekwapo ne caalo cimbi?",
"translated_question": "Has the country of Egypt been colonized before?",
"answers": "['Emukwai']",
"lang": "bem",
"split": "dev",
"translated_answer": "['yes']",
"translation_type": "human_translation"
}
```
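A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub under the Masakhane organization (the repository id below is an assumption) with one configuration per language code:
```python
from datasets import load_dataset

# "masakhane/afriqa" and the "bem" config are assumptions based on the language codes above.
afriqa_bem = load_dataset("masakhane/afriqa", "bem")
print(afriqa_bem["train"][0]["question"])
```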
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | dev | test |
|-----------------|------:|-----------:|-----:|
| Bemba | 502 | 503 | 314 |
| Fon | 427 | 428 | 386 |
| Hausa | 435 | 436 | 300 |
| Igbo | 417 | 418 | 409 |
| Kinyarwanda | 407 | 409 | 347 |
| Swahili | 415 | 417 | 302 |
| Twi | 451 | 452 | 490 |
| Wolof | 503 | 504 | 334 |
| Yoruba | 360 | 361 | 332 |
| Zulu | 387 | 388 | 325 |
| <b>Total</b> | <b>4333</b> | <b>4346</b> |<b>3560</b> |
## Dataset Creation
### Curation Rationale
The dataset was created to provide question-answering resources for 10 African languages that are under-served in natural language processing.
[More Information Needed]
### Source Data
...
#### Initial Data Collection and Normalization
...
#### Who are the source language producers?
...
### Annotations
#### Annotation process
Details can be found here ...
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/)
### Personal and Sensitive Information
...
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The licensing status of the data is CC 4.0 Non-Commercial
### Citation Information
A [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset is forthcoming:
```
Incoming ...
```
### Contributions
Thanks to [@ToluClassics](https://github.com/ToluClassics) for adding this dataset. |
false | # AutoTrain Dataset for project: shawt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project shawt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 0
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['person_classification', 'shawt'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 15 |
| valid | 4 |
|
false | test |
false | - info: This dataset comes from the ANSES-CIQUAL 2020 Table in English in XML format, found on https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # AutoTrain Dataset for project: suzume-questioner
## Dataset Description
This dataset has been automatically processed by AutoTrain for project suzume-questioner.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "GitHub and OpenAI presented a new code-generating tool, Copilot, that is now a part of Visual Studio Code that is autocompleting code snippets. Copilot is based on Codex that is a product of GPT-3, presented a year ago. It seems like the hype around GPT-3 still is not going to evaporate, and we decided to delve into details step-by-step. Check it out.\nGPT-3 stands for Generative Pre-trained Transformer 3, and it is the third version of the language model that Open AI released in May 2020. It is generative, as GPT-3 can generate long sentences of the unique text as the output. Notice that most neural networks are capable only of spitting out yes or no answers or simple sentences. Pre-trained means that the language model has not been built with any special domain knowledge, but it can complete domain-specific tasks like translation. Thus, GPT-3 is the most innovative language model that has ever existed.\nOk, but what is Transformer, then? Simply put, it is the neural network\u2019s architecture developed by Google\u2019s scientists in 2017, and it uses a self-attention mechanism that is a good fit for language understanding. Given that the attention mechanism enabled a breakthrough in the NLP domain in 2015, Transformer became a ground for GPT-1 and Google\u2019s BERT, another great language model. In essence, attention is a function that calculates the probability of the next word appearing, surrounded by the other ones.\nBy the way, we have developed an explainer for BERT. Check out Google\u2019s BERT changing the NLP Landscape for more details.\nThe Transformer\u200a\u2014\u200aModel Architecture\nTypical Transformer structure\nWait, but what makes GPT-3 so unique? GPT-3 language model has 175 billion parameters, i.e., values that a neural network is optimizing during the training (compare with 1,5 billion parameters of GPT-2). Thus, this language model has excellent potential for automatization across various industries \u2014 from customer service to documentation generation. You could play around with the beta of GPT-3 Playground by yourself.\nHow can I use GPT-3 for my applications? As of July 2021, you can join the waitlist since the company can offer a private beta version of its API under the LmaS basis (language-model-as-a-service).\nHere are the examples that you might have already heard of \u2014 GPT-3 is writing stunning fiction. Gwern, author of the gwern.net who is experimenting both with GPT-2 and GPT-3, states that \u201cGPT-3, however, is not merely a quantitative tweak yielding \u201cGPT-2 but better\u201d \u2014 it is qualitatively different.\u201d The beauty of GPT-3 for text generation is that you need to train anything in a usual way. Instead, it would be best to write the prompts for GPT-3 to teach it anything you want.\nSharif Shameem used GPT-3 for debuild, a platform that generates code as per request. You could type the request like \u201ccreate a watermelon-style button\u201d and grab your code to use for an app. You could even use GPT-3 to generate substantial business guidelines, as @zebulgar did.\nHow does GPT-3 work?\nLet us look under the hood and define the nuts and bolts of GPT-3.\nLarger models are learning efficiently from in-context information\nTo put it bluntly, GPT-3 calculates how likely some word can appear in the text given the other one in this text. It is known as the conditional probability of words. 
For example, the word chair in the sentences: \u201cMargaret is arranging a garage sale\u2026 Maybe we could buy that old ___ \u201c is much more likely to appear than, let us say, an elephant. That means the probability of a word chair occurring in the prompted text is higher than the probability of an elephant.\nGPT-3 uses some form of data compression while consuming millions of sample texts to convert the words into vectors, i.e., numeric representations. Later, the language model is unpacking the compressed text in human-friendly sentences. Thus, compressing and decompressing text develops the model\u2019s accuracy while calculating the conditional probability of words.\nDataset used to train GPT-3\nDataset used to train GPT-3\nSince GPT-3 is high-performing in the \u201cfew-shot\u201d settings, it can respond in a way consistent with a given example piece of text that has never been exposed before. Thus, it only needs a few examples to produce a relevant response, as it has already been trained on lots of text samples. Check out the research paper for more technical details: Language Models are Few-Shot Learners.\nDL Mechanics of English to French translation\nThe few-shot model needs only a few examples to produce a relevant response, as it has already been trained on lots of text samples. The scheme illustrates the mechanics of English to French translation.\nAfter the training, when the language model\u2019s conditional probability as accurate as possible, it can predict the next word while given an input word, sentence, or a fragment as a prompt. Speaking formally, prediction of the next word relates to the natural language inference.\nWhat GPT-3 can actually do?\nIn essence, GPT-3 is a text predictor \u2014 its output is a statistically plausible response to the given input, grounded on the data it was trained before. However, some critiques arguing that GPT-3 is not the best AI system for question answering and text summarizing. GPT-3 is mediocre compared to the SOTA (state-of-the-art) methods per each NLP task separately, but it is much more general than any previous system, and the upcoming ones will be resembling GPT-3.\nIn general, GPT-3 can perform NLP tasks after a few prompts are given. It demonstrated high performance under the few-shot settings in the following tasks:\nLanguage modeling\nGPT-3 demonstrated a perplexity of 20,5 (defines how well a probability language model predicts a sample) under the zero-shot circumstances on the Penn Tree Bank (PTB). The closest rival, BERT-Large-CAS, boasts of 31,3.\nGPT-3 is a leader in Language Modelling on Penn Tree Bank with a perplexity of 20.5\nGPT-3 also demonstrates 86,4% accuracy (an 18% increase from previous SOTA models) in the few-shot settings while performing the LAMBADA dataset test. For this test, the model predicts the last word in the sentence, requiring \u201creading\u201d of the whole paragraph.\nImportant notice: GPT-3 demonstrated these results thanks to the fill-in-the-blank examples like:\n\u201cAlice was friends with Bob. Alice went to visit her friend Bob\nGeorge bought some baseball equipment, a ball, a glove, and\u201d\nMoreover, researchers report about 79,3% accuracy while picking the best ending of a story while on the HellaSwag dataset in the few-shot settings. And it demonstrated 87,7% accuracy on the StoryCloze 2016 dataset (which is still \u201c4.1% lower than the fine-tuned SOTA using a BERT based model\u201d).\nClosed book question answering or testing broad factual knowledge with GPT-3. 
As per the GPT-3 research paper, it was tested on Natural Questions, WebQuestions, and TriviaQA datasets, and the results are the following:\nGPT-3 in the few-shot settings outperforms fine-tuned SOTA models only on the TriviaQA dataset\nGPT-3 in the few-shot settings outperforms fine-tuned SOTA models only on the TriviaQA dataset\nAs for translation, supervised SOTA neural machine translation (NMT) models are the clear leaders in this domain. However, GPT-3 reflects its strength as an English LM, mainly when translating into English. Researchers also state that \u201cGPT-3 significantly outperforms prior unsupervised NMT work when translating into English but underperforms when translating in the other direction.\u201d\nIn general, across all three language models tested (English in combinations with French, German, and Romanian), there is a smooth upward trend with model capacity:\nWinograd-Style Tasks\nWinograd-style tasks are classical NLP tasks, determining word pronoun referring in the sentence when it is grammatically ambiguous but semantically unambiguous for a human. Fine-tuned methods have recently reached human-like performance on the Winograd dataset but still lag behind the more complex Winogrande dataset.\nGPT-3 results are the following: \u201cOn Winograd GPT-3 achieves 88.3%, 89.7%, and 88.6% in the zero-shot, one-shot, and few-shot settings, showing no clear in-context learning but in all cases achieving strong results just a few points below state-of-the-art and estimated human performance. \u201d\nCommon sense reasoning\nAs for physical or scientific reasoning, GPT-3 is not outperforming fine-tuned SOTA methods:\nSynthetic and qualitative tasks\nGPT-3 is not that good at arithmetic still, since the results are the following:\nHowever, when it comes to the news article generation, human detection of GPT-3 written news (few-shot settings) is close to chance \u2014 52% of mean accuracy.\nWhat are the limitations of GPT-3?\nWell, even the Open AI CEO Sam Altman tweeted that GPT-3 is overhyped, and here is what the researchers themselves state:\nText synthesis and several NLP tasks\nGPT-3 is not good at text synthesis \u2014 while the overall quality of the generated text is high, it starts repeating itself at the document level or when it goes to the long passages.\nIt is also lagging at the domain of the discrete language tasks, having difficulty within \u201ccommon sense physics\u201d. Thus, it is hard for GPT-3 to answer the question: \u201cIf I put cheese into the fridge, will it melt?\u201d GPT-3 has some notable gaps in reading comprehension and comparison tasks.\nStructural and algorithmic limitations\nTasks that empirically benefit from bidirectionally are also areas of improvement for GPT-3. It may include the following: \u201cfill-in-the-blank tasks, tasks that involve looking back and comparing two pieces of content, or tasks that require re-reading or carefully considering a long passage and then generating a very short answer,\u201d as researchers state.\nModels like GPT-3 are expensive and inconvenient\nModels like GPT-3 have a lot of skills and become \u201coverqualified\u201d for some specific tasks. 
Moreover, it is the computing-power hungry model: \u201ctraining the GPT-3v175B consumed several thousand petaflop/s-days of compute during pre-training, compared to tens of petaflop/s-days for a 1.5B parameter GPT-2 model\u201d, as researchers state.\nSince the model was trained on the content that humans generated on the internet, there are still troubles referring to bias, fairness, and representation. Thus, GPT-3 can generate prejudiced or stereotyped content. But you may already read a lot about it online, or you can check it out in the research paper. The authors are dwelling on it pretty well.\nGPT-3 is a glimpse of the bright future in NLP, helping to generate code, meaningful pieces of texts, translation, and doing well with different tasks. Also, it has its limitations and ethical issues like generating biased fragments of text. All in all, we are witnessing something interesting, as it always used to be in NLP.\nClap for this blog and give some more inspiration to us.",
"target": "What is GPT-3, and how is it different from other neural networks?\nWhat is the architecture of the Transformer neural network, and why is it a good fit for language understanding?\nHow many parameters does GPT-3 have, and how does this affect its potential for automation?\nCan GPT-3 be used for applications outside of text generation, such as code generation?\nHow can one use GPT-3 for their applications, and is the API readily available?\nWhat is the training dataset used for GPT-3, and how does it affect the model's accuracy?\nWhat is few-shot learning, and how does GPT-3 utilize it?\nWhat is the natural language inference, and how is it related to GPT-3's prediction of the next word?\nHow can one join the waitlist for GPT-3's private beta version of its API, and what is LmaS?\nCan GPT-3 generate substantial business guidelines, and what are some examples of its use cases?\n"
},
{
"text": "My Spiritual Medium Predicted Something About My Life And It\u2019s Coming True. Eight months ago, a fortune-teller told me my spirit guides sent me a message.\n\u201cJune. Baby. June baby. Your spirit guides keep telling me, June, baby!\u201d The spiritual Medium I had hired for my house warming party to celebrate the beginning of my life 2.0 held my hands in hers and shared the last thing I\u2019d ever want to hear.\nIt was October 2021, and I had just celebrated one month of living on my own for the first time in my life after my 15-year marriage ended. And now my Medium was telling me that in eight months, there\u2019d be a \u201cJune baby\u201d \u2026 at 40 years old? WTF?\n\u201cYou have no idea how beautiful your life will become.\u201d That would be more aligned with what I would have hoped to hear. Instead, this cryptic message has been plaguing me for the last eight months. What the hell does \u201cJune baby\u201d mean?\nRather than trying to predict the future, sometimes I wish I could get into a hot-tub time machine and go back to May 2020. I\u2019d have a conversation with \u201cquarantine me,\u201d who had just launched a WordPress blog on a whim with no sustainable publishing plan. I would tell that scared girl that she needed to keep writing no matter what.\nI\u2019d hold her face in my hands and say she had no idea how much her words would open up her soul. I would tell her that her words would reach people near and far and help them realize our struggles are the same.\nI\u2019d tell her she didn\u2019t need the validation of family, friends, strangers, or editors. That her words were beautiful simply because they were hers and no one could censor her\u2026even though many would try.\nAfter almost two years of sharing my stories with strangers and supporters, I realize that maybe my June baby is\u2026me? Maybe, June of 2022 represents my rebirth as the courageous writer I\u2019ve been trying to be.\nTwo years of blogging and over 130 personal stories have cataloged my unfiltered journey through:\na loss of identity (and relationships that went with that identity),\nmy subsequent divorce,\na stint with sobriety,\na stint as a single chick,\nfinding love after 40,\nfinding triathlons\u2026and then, anxiety about racing my bike,\nand, of course, all of my other issues that my therapist and I dig into.\nI also happen to be authoring my first book \u2014 a compilation of these former blogs that reveal my unraveling and \u201cunbecoming\u201d the version of myself that felt like an imposter.\nEven when, at the moment, it felt very disorienting, I realize that writing down my thoughts helped me process my feelings and cope with the fact that my life hit a point where I could no longer ignore that living my version of the American Dream wasn\u2019t really living at all.\nBut some of my stories only reached strangers \u2014 because I was too afraid to share some of my blogs on social media.\nI\u2019ve written some pieces I\u2019m incredibly proud of, yet sadly, I didn\u2019t feel like I could share some of my best work after suffering some backlash from (former) friends or family reading my work.\nI started to hold back on sharing some of my work and, sadly, some of my thoughts. 
I hit writer\u2019s block when I\u2019d reach a particularly vulnerable part as I began to assemble my book.\nI put my book on the \u201cshelf\u201d because it was too much for me to imagine the reaction I\u2019d get from some people.\nMostly my blockage would hit when I\u2019d tackle or re-visit difficult topics. The kind you don\u2019t talk about at the dinner table \u2014 like the story I wrote about celebrating my Plastic Surgery anniversary.\nWhy I\u2019m Celebrating My Plastic Surgery Anniversary. It\u2019s not why you might think\nThis was a story I never shared because I was worried about what my parents would think. But, I subtly dropped a hint about my boob job in a blog about my Triathlon training, and my mom picked right up on it.\nIt reminded me of the time she figured out I got my tongue pierced while I was away at college \u2014 because my AOL Instant Messenger away message said \u2014 \u201cOff to get my tongue pierced.\u201d Or something stupid like that.\nBut this piece was important to me. It highlights a critical moment in my loss of self where I thought I could fix my inner struggle for self-love by changing my external appearance. Spoiler alert: the boob job didn\u2019t cure my anxiety.\nAnother memorable piece that never saw the light of my social pages was about a breakdown I had in front of my children a little less than a year ago, the night before I moved out and left behind what I had been calling my \u201cperfect on paper\u201d life.\nI Broke Down in Front of My Kids on My Last Night of My Perfect on Paper Life\nWhat happened when I couldn\u2019t contain my emotions anymore.\nIn this story, I recall how I tried to avoid showing weakness to my children by not letting them see how upsetting it was to let go and leave my old life behind. I was trying to protect them from seeing me as someone other than \u201cSuper Mom.\u201d\nThis piece was triggering because it reminded me of my parents\u2019 divorce when I was a young girl. I recall what it felt like to watch my mother cry and the reaction I felt at seeing this display of emotions.\nSadly, I had misjudged how important it is to display emotion in front of your children. It\u2019s almost one year later, and I now see that they never needed to see me as Super Mom. They need to see me modeling what it is to be human \u2014 and live in this beautiful mess we call life.\nI can\u2019t imagine trying to hide my vulnerability from my kids \u2014 or anyone else now. Thanks, June baby.\nThe words I\u2019ve written online over the last two years have changed the trajectory of my life forever. Sharing my journey through heartache, triumphs, and lessons learned from navigating away from life as I knew it and starting over made me realize some really important things.\nFirstly, we\u2019re all fucked up.\nThe more I wrote, the more I read other people\u2019s work. I was shocked at how many writers (but basically, just people) were struggling with the same thought spirals. Our pain is so similar, yet we hide it from so many.\nI am grateful that I\u2019ve found a way to connect with people and let them know they\u2019re not alone in their problems.\nSecondly, if my stories trigger you, please don\u2019t read them.\nI write first and foremost for me. This is my hobby turned hustle, and I enjoy writing what I want when I want. Maybe that\u2019s selfish \u2014 but it\u2019s also cathartic. Anyone that wants to connect through shared struggles, I appreciate your support. 
To the haters: bye.\nThirdly, my Fortune Teller was right.\nI was June baby years old when I realized that: I have the creative license to create (aka BIRTH) and publish anything I want. I have support from my partner, who encourages me to speak my truths. Unconditional love and support hits different. There, I said it.\nTo anyone working through shedding their own layers of self-restraint, I am sending you peace, love and freedom of speech. Lokah samastah sukhino bhavantu \u2014 may all beings everywhere be happy and free.",
"target": "Have you ever consulted with a fortune-teller or medium? If so, what did they predict?\nDo you believe that spiritual mediums or fortune-tellers can accurately predict the future?\nHave you ever experienced a major life change, such as a divorce or loss of identity, and how did you cope with it?\nHave you ever tried to change your external appearance to deal with internal struggles, and did it help?\nDo you think it's important to display emotions in front of your children, or do you try to hide them?\nHow do you deal with writer's block or fear of backlash when writing something vulnerable or personal?\nHave you ever written something you're proud of but didn't share it due to fear of criticism or judgment from others?\nDo you think that writing down your thoughts and feelings can help you process them and cope with difficult situations?\nHave you ever experienced a rebirth or reinvention of yourself, and what led to it?\nDo you think it's important to follow your own path in life, even if it deviates from societal expectations or norms?"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6 |
| valid | 2 |
|
false | LORA EDITION - for you LORA MERGING NERDS!
We're gonna re-do this in lycoris for you lyco-hoarding nerds.
Also we're not at fault for anything you do with this, don't do anything illegal with it, and please SERIOUSLY if she shows up in the middle of the night don't feed her - you've watched Gremlins you know how this goes.
If this model isn't exactly perfect, bear with us - we're a little new to doing anything outside the generic waifu/nerd realm. Purgatori and Lady Death were things we enjoyed for their alternative art styles - but we were never allowed to read them until adulthood - So uh YEA ENJOY! (And that's also to say that if it needs retraining, give us time lol)
Also: You like what you see? Hit the rating button and then consider one of the following socials or coffee related sites to support us at:
Twitter: https://twitter.com/DuskfallCrew
Youtube: https://www.youtube.com/channel/UCk7MGP7nrJz5awBSP75xmVw
Spotify (We do of course make music): https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=3806082ef8824a29
Instagram: https://instagram.com/duskfallcrew
About Us: https://duskfallcrew.carrd.co/#
Membership / Ko-Fi: https://ko-fi.com/Duskfallcrew/
Buy Me A Pizza/Coffee: https://www.buymeacoffee.com/duskfallxcrew |
false | |
true |
# Victorian Era Authorship Attribution Data Set
> GUNGOR, ABDULMECIT, Benchmarking Authorship Attribution Techniques Using Over A Thousand Books by Fifty Victorian Era Novelists, Purdue Master of Thesis, 2018-04
## NOTICE
This dataset was downloaded from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php) at [this link](https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution).
The [description](#description) of this dataset was copied from the source's dataset card. However, I have applied Markdown styling to prettify it and make it easier to navigate.
## Description
> **Abstract**: To create the largest authorship attribution dataset, we extracted the works of 50 well-known authors. To make the learning setting non-exhaustive, the training set covers 45 authors whereas the test set covers all 50.
### Source
The books were extracted from the GDELT database. The GDELT Project is an open platform for research and analysis of global society, and thus all datasets released by the GDELT Project are available for unlimited and unrestricted academic, commercial, or governmental use of any kind without fee.
### Data Set Information
To decrease bias and create a reliable authorship attribution dataset, the following criteria were used to filter authors in the GDELT database: authors writing in English, authors with enough books available (at least 5), and 19th-century authors. With these criteria, 50 authors were selected and their books were queried through the GDELT database on BigQuery. The next task was cleaning the dataset, owing to OCR reading problems in the original raw form. To achieve that, all books were first scanned through to obtain the overall number of unique words and each word's frequency. While scanning the texts, the first 500 words and the last 500 words were removed to take out give-away features such as the name of the author, the name of the book and other specific words that could make the classification task easier. After this step, the top 10,000 words occurring across the whole 50-author text corpus were kept; words not in the top 10,000 were removed while keeping the rest of the sentence structure intact. Each book was then split into text fragments of 1000 words, with the author and book identification numbers maintained separately in different arrays. Text segments with fewer than 1000 words were padded with zeros so they could stay in the dataset as well. 1000 words make approximately 2 pages of writing, which is long enough to extract a variety of features from the document. Each instance in the training set consists of a text piece of 1000 words with an author id attached; in the testing set, there is only the text piece of 1000 words for authorship attribution. The training data covers 45 authors, while the testing data covers all 50; 34% of the testing data belongs to authors not seen in training.
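A minimal sketch of this preprocessing pipeline is shown below, assuming the books are already available as plain-text strings; the function name and defaults are illustrative only:
```
from collections import Counter

def preprocess_books(book_texts, vocab_size=10_000, chunk_len=1_000, trim=500):
    """Illustrative sketch of the chunking pipeline described above."""
    # Drop the first and last 500 words of every book to remove author/book names.
    trimmed = [text.split()[trim:-trim] for text in book_texts]

    # Keep only the 10,000 most frequent words across the whole corpus.
    counts = Counter(word for words in trimmed for word in words)
    vocab = {w for w, _ in counts.most_common(vocab_size)}

    fragments = []
    for words in trimmed:
        # Remove out-of-vocabulary words, keeping the remaining order intact.
        kept = [w for w in words if w in vocab]
        # Split into 1000-word fragments, zero-padding the last one.
        for i in range(0, len(kept), chunk_len):
            chunk = kept[i:i + chunk_len]
            chunk += ["0"] * (chunk_len - len(chunk))
            fragments.append(chunk)
    return fragments
```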
### Attribute Information
Each instance consists of a 1000-word sequence drawn from one of the author's books. In the training set, the author id is also provided.
### Relevant Papers
* E. Stamatatos, A Survey of Modern Authorship Attribution Methods. Journal of the American Society for Information Science and Technology, 2009.
## Citation Request:
* `GUNGOR, ABDULMECIT, Benchmarking Authorship Attribution Techniques Using Over A Thousand Books by Fifty Victorian Era Novelists, Purdue Master of Thesis, 2018-04` |
false | # Source Datasets #
1. News from the website of the Komi administration (https://rkomi.ru/)
2. Komi media library (http://videocorpora.ru/)
3. "Millet porridge" by Ivan Toropov (adaptation)
# Authors #
- Shilova Nadezhda
- Chernousov Georgy
|
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [AI4Code-Research/TheVault](https://github.com/AI4Code-Research/TheVault)
- **Paper:** The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
- **Contact:** fosft-aic@fpt.com

## Dataset Summary
The Vault is a multilingual code-text dataset with over 40 million pairs covering 10 popular programming languages, making it the largest corpus of parallel code-text data. By building upon [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a massive raw code sample collection, The Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. It provides high-quality code-text pairs at multiple levels, such as the class and inline level, in addition to the function level, and can therefore serve many purposes.
## Supported Tasks
The Vault can be used for pretraining LLMs or downstream code-text interaction tasks. A number of tasks can be constructed using The Vault such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
## Dataset Structure
### Data Instances
*a. Function-level and Class-level*
```
{
"identifier": "REDUCE",
"repo": "irshadbhat/sndpcs",
"path": "arc_eager.py",
"language": "Python",
"code": "def REDUCE(self, configuration, label=None):\n b0 = configuration.b0\n configuration.stack.pop()",
"code_tokens": "def REDUCE ( self , configuration , label = None ) : b0 = configuration . b0 configuration . stack . pop ( )",
"original_docstring": "\n pops the top of the stack if it has got its head.\n ",
"comment": "\"\"\"\n pops the top of the stack if it has got its head.\n \"\"\"",
"docstring_tokens": "pops the top of the stack if it has got its head .",
"docstring": "pops the top of the stack if it has got its head."
}
```
### Data Fields
**a. Function-level and Class-level**
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (string): tokenized version of `code`, separated by whitespace
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (string): tokenized version of `short_docstring`, separated by whitespace
- **docstring** (string): the top-level comment or docstring (docstring version without param’s doc, return, exception, etc)
- **docstring_tokens** (string): tokenized version of `docstring`, separated by whitespace
- **comment** (string): comment (line) inside the function/class, separated by `$SEP$` token
**b. Inline-level (to be updated)**
- **repo** (string): the owner/repo
- **path** (string): full path to the original file
- **language** (string): the programming language
- **parent_name** (string): method/class parent node name
- **code** (string): the part of `original_string` that is code
- **code_tokens** (string): tokenized version of code
- **prev_context** (string): the (code) block above the comment
- **next_context** (string): the (code) block below the comment
- **original_comment** (string): the original comment before cleaning
- **start_point** (int): position of start line, position of start character
- **end_point** (int): position of last line, position of last character
- **comment** (string): the cleaned comment
- **comment_tokens** (string): tokenized version of comment
### Data Splits
## Usage
You can load The Vault dataset using the `datasets` library (install it with `pip install datasets`):
```
from datasets import load_dataset
dataset = load_dataset("NamCyan/thevault_small")
```
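The fields documented above can be combined into task-specific views; the snippet below is a minimal sketch (the `train` split name and the filtering are assumptions, field names follow the function-level schema):
```
from datasets import load_dataset

# Load the small subset referenced above (split name assumed).
dataset = load_dataset("NamCyan/thevault_small", split="train")

# Keep only Python samples, using the `language` field from the schema above.
python_ds = dataset.filter(lambda ex: ex["language"] == "Python")

# Build (code, docstring) pairs, e.g. for a code summarization setup.
pairs = [(ex["code"], ex["docstring"]) for ex in python_ds]
print(pairs[0])
```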
## Additional Information
### Licensing Information
### Citation Information
### Contributions |
true | # AutoTrain Dataset for project: car0fil-001
## Dataset Description
This dataset has been automatically processed by AutoTrain for project car0fil-001.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0,
"text": "And I remember",
"feat_DATE": "2022-09-12T12:29:04",
"feat_PLATFORM": null,
"feat_Unnamed: 4": null,
"feat_Unnamed: 3": null,
"feat_Unnamed: 5": null
},
{
"target": 1,
"text": "Throw a lil \u201cKurt filips is my dad\u201d",
"feat_DATE": "2023-03-27T15:36:21",
"feat_PLATFORM": null,
"feat_Unnamed: 4": null,
"feat_Unnamed: 3": null,
"feat_Unnamed: 5": null
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(names=['CAROLINE FILIPS', 'NOT CAROLINE'], id=None)",
"text": "Value(dtype='string', id=None)",
"feat_DATE": "Value(dtype='string', id=None)",
"feat_PLATFORM": "Value(dtype='string', id=None)",
"feat_Unnamed: 4": "Value(dtype='float64', id=None)",
"feat_Unnamed: 3": "Value(dtype='float64', id=None)",
"feat_Unnamed: 5": "Value(dtype='float64', id=None)"
}
```
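Because `target` is a `ClassLabel`, the integer labels in the samples above can be mapped back to their class names; the snippet below is a minimal sketch and the repository id is hypothetical:
```
from datasets import load_dataset

# Hypothetical repo id -- replace with the actual AutoTrain dataset repository.
dataset = load_dataset("user/autotrain-data-car0fil-001", split="train")

# Use the ClassLabel feature to turn the integer target into its name.
label_feature = dataset.features["target"]
for example in dataset.select(range(2)):
    print(label_feature.int2str(example["target"]), "-", example["text"])
```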
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 689784 |
| valid | 172447 | |
false | https://osf.io/dwsnm/ |
false | |
false | |
false | # Machine translation dataset for NLU (Virtual Assistant) with slot transfer between languages
## Dataset Summary
Disclaimer: This is for research purposes only. Please have a look at the license section below. Some of the datasets used to construct IVA_MT have an unknown license.
IVA_MT is a machine translation dataset that can be used to train, adapt and evaluate MT models used in a Virtual Assistant NLU context (e.g. to translate the training corpus of an NLU system).
## Dataset Composition
### en-pl
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 11514 | 2033 | 2974 |
| [Leyzer 0.2.0](https://github.com/cartesinus/leyzer/tree/0.2.0) | 3974 | 701 | 1380 |
| [OpenSubtitles from OPUS](https://opus.nlpl.eu/OpenSubtitles-v1.php) | 2329 | 411 | 500 |
| [KDE from OPUS](https://opus.nlpl.eu/KDE4.php) | 1154 | 241 | 241 |
| [CCMatrix from Opus](https://opus.nlpl.eu/CCMatrix.php) | 1096 | 232 | 237 |
| [Ubuntu from OPUS](https://opus.nlpl.eu/Ubuntu.php) | 281 | 60 | 59 |
| [Gnome from OPUS](https://opus.nlpl.eu/GNOME.php) | 14 | 3 | 3 |
| *total* | 20362 | 3681 | 5394 |
### en-de
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7536 | 1346 | 1955 |
### en-es
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8415 | 1526 | 2202 |
### en-sv
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7540 | 1360 | 1921 |
## Tools
Scripts used to generate this dataset can be found on [github](https://github.com/cartesinus/iva_mt).
## License
This is a composition of 7 datasets, and each part keeps the license defined in its original release:
- MASSIVE: [CC-BY 4.0](https://huggingface.co/datasets/AmazonScience/massive/blob/main/LICENSE)
- Leyzer: [CC BY-NC 4.0](https://github.com/cartesinus/leyzer/blob/master/LICENSE)
- OpenSubtitles: unknown
- KDE: [GNU Public License](https://l10n.kde.org/about.php)
- CCMatrix: no license given, therefore assuming it is LASER project license [BSD](https://github.com/facebookresearch/LASER/blob/main/LICENSE)
- Ubuntu: [GNU Public License](https://help.launchpad.net/Legal)
- Gnome: unknown
|
false |
# Dataset Card for "george-chou/AAL-statistics-volumn"
## Usage
```
from datasets import load_dataset
data = load_dataset("george-chou/AAL-statistics-volumn",
                    data_files='AAL_statistics_volumn_labelled.csv', split='train')
for item in data:
    print(item)
```
## Maintenance
```
git clone git@hf.co:datasets/george-chou/AAL-statistics-volumn
``` |
true | |
false | |
false | # Dataset Card for "github-code-haskell-file"
Rows: 339k
Download Size: 806M
This dataset is extracted from [github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean).
Each row also contains attribute values for my personal analysis project.
12.6% (43k) of the rows have cyclomatic complexity and LOC valued at `-1` because [`homplexity`](https://github.com/BlastWind/homplexity) failed to parse the row's `uncommented_code`. |
false | # :page_with_curl: Spanish Paraphrase Corpora

Manually paraphrased corpus in Spanish
## The Sushi Corpus
This [corpus](https://github.com/GIL-UNAM/SpanishParaphraseCorpora/tree/main/Sushi) is designed to assess the similarity between a pair of texts and to evaluate different similarity measures, both for whole documents and for individual sentences. It is built around the subject of a Spanish blog article related to **Sushi**. Several volunteers (undergraduate, graduate, and Ph.D. students) were asked to intentionally reformulate or paraphrase this article. The paraphrasing was carried out on two levels, and two additional non-paraphrase categories were collected, according to the following rules:
<ul>
<li> <b>Low level:</b> Only lexical variation. </li>
<li> <b>High level:</b> Lexical, syntactic, textual or discursive organization variation, and fusion or separation of sentences. </li>
<li> <b>No Paraphrase:</b> Texts on the same theme and from the same source as the original article, related to sushi. </li>
<li> <b>No Sushi:</b> Texts on a different theme from the original article but with overlapping vocabulary; that is, texts not related to sushi but using exactly the same vocabulary as the original one. Some volunteers wrote a free text using the same content words as the original. </li>
</ul>
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno H., Bel-Enguix G., Sierra G., Torres-Moreno JM., Martinez R., Serrano P. (2020) Evaluation of Similarity Measures in a Benchmark for Spanish Paraphrasing Detection. In: Martínez-Villaseñor L., Herrera-Alcántara O., Ponce H., Castro-Espinoza F.A. (eds) Advances in Computational Intelligence. MICAI 2020. Lecture Notes in Computer Science, vol 12469. Springer, Cham. https://doi.org/10.1007/978-3-030-60887-3_19
2) Castro, B., Sierra, G., Torres-Moreno, J.M., Da Cunha, I.: El discurso y la semántica como recursos para la detección de similitud textual. In: Proceedings of the III RST Meeting (8th Brazilian Symposium in Information and Human Language Technology, STIL 2011). Brazilian Computer Society, Cuiabá (2011)
## Acknowledgments
The work was done with partial support of CONACYT project A1-S-27780 and UNAM-PAPIIT projects IA401219, TA100520, AG400119.
## License
[CC0 1.0 Universal](https://choosealicense.com/licenses/cc0-1.0/) |
false | # Selfies, ID Images dataset
**4083** sets, each of which includes *2 photos of a person from their documents and 13 selfies*. **571** sets of Hispanics and **3512** sets of Caucasians.
The document photos contain only the photo of the person; all personal information from the document is hidden.
## File with the extension .csv
includes the following information for each media file:
- **SetId**: a unique identifier of a set of 15 media files
- **UserId**: the identifier of the person who provided the media file
- **UserRace**: the ethnicity of the person
- **Country**: the country of origin of the person
- **Age**: the age of the person
- **Gender**: the gender of the person
- **Name**: the name of the person
- **FName**: the type of the media file
- **URL**: the URL to access the media file
## Folder "img" with media files
- containing all the photos
- which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will see that it contains media files taken by the person whose parameters are specified in the first 15 lines of the .csv file.*
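To work with the metadata programmatically, the rows can be grouped by `SetId` to recover each 15-file set; the snippet below is a minimal sketch and the .csv file name is an assumption:
```
import pandas as pd

# File name is an assumption; the columns follow the list above.
meta = pd.read_csv("selfies_and_id_images.csv")

# Each SetId groups the 2 document photos and 13 selfies of one person.
for set_id, rows in meta.groupby("SetId"):
    print(set_id, rows["UserRace"].iloc[0], len(rows), "files")
```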
# Get the Dataset
This is just an example of the data.
In order to get access to more than 65,000 media files from 4,340 people or to learn more about our data, please contact our sales team by submitting a request on our website https://trainingdata.pro/data-market?utm_source=huggingface or emailing us at sales@trainingdata.pro.
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false | # Anti-Spoofing dataset: real
The dataset consists of 40,000 videos and selfies of unique people, plus 15,000 attack replays from 4,000 unique devices.
# File with the extension .csv
includes the following information for each media file:
- **phone**: the device used to capture the media files
- **selfie_link**: the URL to access the photo
- **video_link**: the URL to access the video
- **worker_id**: the identifier of the person who provided the media file
- **age**: the age of the person
- **country**: the country of origin of the person
- **gender**: the gender of the person
- **selfie_file_type**: the type of the photo
- **video_file_type**: the type of the video
# Folder "img" with media files
- containing all the photos and videos
- which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will see that it contains media files taken by the person whose parameters are specified in the first line of the .csv file.*
In order to get access to more than 80,000 media files from 4,000 people or to learn more about our data, please contact our sales team by submitting a request on our website <https://trainingdata.pro/data-market?utm_source=huggingface> or emailing us at <sales@trainingdata.pro>
More datasets in TrainingData's Kaggle account: **<https://www.kaggle.com/trainingdatapro/datasets>**
TrainingData's GitHub: **<https://github.com/trainingdata-pro>** |
true | Recommended final version: contains XSS, SQL injection and similar attack data; the benign data uses part of the SST-2 dataset. |