| label (bool, 2 classes) | text (string, length 0 to 664k) |
|---|---|
false | |
false |
Dialogue pairs from the Wesnoth add-on campaigns IftU/AtS. |
false | # Dataset Card for "cd45rb_leukocytes_subdataset"
Citation:
Daisuke Komura, Takumi Onoyama, Koki Shinbo, Hiroto Odaka, Minako Hayakawa, Mieko Ochi, Ranny Rahaningrum Herdiantoputri, Haruya Endo, Hiroto Katoh, Tohru Ikeda, Tetsuo Ushiku, Shumpei Ishikawa,
Restaining-based annotation for cancer histology segmentation to overcome annotation-related limitations among pathologists, Patterns, Volume 4, Issue 2, 2023, 100688, https://doi.org/10.1016/j.patter.2023.100688.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true | |
false |
# Dataset Card for Voxpopuli
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Kpage
- **Repository:** Kpage
- **Paper:**
- **Point of Contact:**
### Dataset Summary
SVM is a test dataset
### Example usage
SVM has one language. To load a specific language pass its name as a config name:
```python
from datasets import load_dataset
dataset = load_dataset("KauPage/SVM", "mr-IN")
```
**Note that L2 English subset contains only `test` split.**
### Supported Tasks and Leaderboards
* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
### Languages
SVM contains labelled (transcribed) data for 1 language:
| Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
|:---:|:---:|:---:|:---:|:---:|
| Marathi | mr-IN | 1 | 1 | 4.8M |
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'mrt_gurudev_10Dec22_0001',
'language': 11, # "hr"
'audio': {
'path': '/home/marathi/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/mrt_gurudev_10Dec22_0001.wav',
'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
'sampling_rate': 16000
},
'raw_text': '',
'normalized_text': 'poast genitalnog sakaenja ena u europi tek je jedna od manifestacija takve tetne politike.'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `language` (datasets.ClassLabel) - numerical id of the language of the audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `raw_text` (string) - original (orthographic) audio segment text
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
All configs (languages) except for accented English contain data in three splits: train, validation and test; the accented English config contains only the `test` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home)
#### Initial Data Collection and Normalization
### Dataset Curators
[More Information Needed]
|
false |
## Dataset Summary
This dataset contains 256-dimensional vectors for a 1M sample of Wikipedia for Approximate Nearest Neighbors Search benchmarks.
### Usage
```
git lfs install
git clone https://huggingface.co/datasets/unum-cloud/ann-wiki-1m
```
### Dataset Structure
The dataset contains three matrices:
- base: `base.1M.fbin` with 1M vectors to construct the index.
- query: `query.public.100K.fbin` with 100K vectors to lookup in the index.
- truth: `groundtruth.public.100K.ibin` with 10 ground-truth results for each of the 100K queries.
Use the [ashvardanian/read_matrix.py](https://gist.github.com/ashvardanian/301b0614252941ac8a3137ac72a18892) Gist to parse the files.
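For reference, a minimal reader is sketched below, assuming the standard big-ann-benchmarks binary layout (two leading `int32` values for row count and dimensionality, followed by the row-major data):
```python
import numpy as np

def read_matrix(path, dtype=np.float32):
    # .fbin/.ibin layout: int32 rows, int32 dims, then rows * dims values
    with open(path, "rb") as f:
        rows, dims = np.fromfile(f, dtype=np.int32, count=2)
        return np.fromfile(f, dtype=dtype).reshape(rows, dims)

base = read_matrix("base.1M.fbin")                             # float32 vectors
truth = read_matrix("groundtruth.public.100K.ibin", np.int32)  # int32 neighbor ids
```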
|
false |
## Dataset Summary
This dataset contains 200-dimensional vectors for 1M images indexed by Yandex and produced by the SE-ResNeXt-101 model.
### Usage
```
git lfs install
git clone https://huggingface.co/datasets/unum-cloud/ann-t2i-1m
```
### Dataset Structure
The dataset contains three matrices:
- base: `base.1M.fbin` with 1M vectors to construct the index.
- query: `query.public.100K.fbin` with 100K vectors to lookup in the index.
- truth: `groundtruth.public.100K.ibin` with 10 ground-truth results for each of the 100K queries.
Use the [ashvardanian/read_matrix.py](https://gist.github.com/ashvardanian/301b0614252941ac8a3137ac72a18892) Gist to parse the files. |
false |
### Dataset Summary
This dataset provides data for Sinhala news summarization tasks. It has been generated using the [CNN / Daily Mail dataset](https://huggingface.co/datasets/cnn_dailymail) and Google Translate.
### Data Instances
For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples.
```
{'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62',
'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship\'s doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship\'s doctors said. The Veendam left New York 36 days ago for a South America tour.',
'highlights': 'The elderly woman suffered from diabetes and hypertension, ship\'s doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .',
'article_sinhala': '(CNN) -- බ්රසීලයේ රාජ්ය ප්රවෘත්ති ඒජන්සිය වන ඒජන්සියා බ්රසීල්ට අනුව, මීට පෙර මගීන් 86 දෙනෙකු රෝගාතුර වූ එම නෞකාවම, අඟහරුවාදා රියෝ ද ජැනයිරෝ හි නැංගුරම් ලා තිබූ නෞකාවක සිටි ඇමරිකානු කාන්තාවක් මිය ගියේය. හොලන්ඩ් ඇමරිකා කෲස් මෙහෙයුම්කරුට අයත් MS Veendam නෞකාවේදී ඇමරිකානු සංචාරකයා මිය ගියේය. ෆෙඩරල් පොලිසිය Agencia Brasil වෙත පැවසුවේ අධිකරණ වෛද්යවරුන් ඇයගේ මරණය පිළිබඳව විමර්ශනය කරන බවයි. නෞකාවේ වෛද්යවරුන් පොලිසියට පවසා ඇත්තේ එම කාන්තාව වයෝවෘද්ධ කාන්තාවක් බවත් ඇය දියවැඩියාව හා අධි රුධිර පීඩනයෙන් පෙළෙන බවත්ය. ගමනේ පෙර කොටසකදී ඇයගේ මරණයට පෙර අනෙකුත් මගීන් පාචනය වැළඳී ඇති බව නෞකාවේ වෛද්යවරු පැවසූහ. දකුණු අමෙරිකානු සංචාරයක් සඳහා වීන්ඩම් දින 36කට පෙර නිව්යෝර්ක් නුවරින් පිටත් විය.',
'summary_sinhala':'වයෝවෘද්ධ කාන්තාව දියවැඩියාව සහ අධි රුධිර පීඩනයෙන් පෙළුණු බව නෞකාවේ වෛද්යවරු පවසති.\nමීට පෙර නෞකාවේ සිටි මගීන් 86 දෙනෙකු රෝගාතුර වී ඇති බව Agencia Brasil පවසයි.'}
```
### Data Splits
The dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 6000 |
| Validation | 2000 |
| Test | 2000 |
### Social Impact of Dataset
The purpose of this dataset is to help Sri Lankan NLP developers build models that can summarize long paragraphs of text in one or two sentences.
### Licensing Information
The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{see-etal-2017-get,
title = "Get To The Point: Summarization with Pointer-Generator Networks",
author = "See, Abigail and
Liu, Peter J. and
Manning, Christopher D.",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P17-1099",
doi = "10.18653/v1/P17-1099",
pages = "1073--1083",
abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.",
}
```
```
@inproceedings{DBLP:conf/nips/HermannKGEKSB15,
author={Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom},
title={Teaching Machines to Read and Comprehend},
year={2015},
cdate={1420070400000},
pages={1693-1701},
url={http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend},
booktitle={NIPS},
crossref={conf/nips/2015}
}
```
|
false | |
false | # 📝 BUOD Article Scraper
Authors: [James Esguerra](https://huggingface.co/jamesesguerra), [Julia Avila](), [Hazielle Bugayong](https://huggingface.co/0xhaz)
- Article Scraper for the KAMI-3000 dataset used in the BUOD [distilBART](https://huggingface.co/ateneoscsl/BUOD_distilBART_TM) and [bert2bert](https://huggingface.co/ateneoscsl/BUOD_bert2bert_TM) Transformer models. This was also used for text summarization tasks in the Filipino language.
### Setup
1. Clone the repository.
```sh
# https
git clone https://github.com/avila-bugayong-esguerra/article-scraper.git
# or
# ssh
git clone git@github.com:avila-bugayong-esguerra/article-scraper.git
```
2. Change directory into project folder.
```sh
cd article_scraper
```
3. Create a virtual environment.
```sh
python -m venv venv
```
4. Activate the virtual environment.
```sh
# windows
venv\Scripts\activate
# unix
source venv/bin/activate
```
5. Install the dependencies.
```sh
pip install -r article_scraper/requirements.txt
```
6. Change directory into the Scrapy project.
```sh
cd article_scraper
``` |
false | # Dataset Card for "ms-marco-es"
An asymmetric Spanish QA dataset filtered from the [multilingual version of MS MARCO](https://huggingface.co/datasets/unicamp-dl/mmarco)
```python
import os
import datasets

ms_marco_es = datasets.load_dataset('unicamp-dl/mmarco', name='spanish', split='train')
ms_marco_es.push_to_hub("dariolopez/ms-marco-es", token=os.environ['hg_token'])
``` |
false | This dataset is machine-translated version of [databricks-dolly-15k.jsonl](https://github.com/databrickslabs/dolly/tree/master/data) into Turkish.
Used `googletrans==3.1.0a0` to translation. |
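As a rough sketch (the exact batching and retry logic used for the full dataset is not documented here), a single record could be translated like this:
```python
from googletrans import Translator  # pip install googletrans==3.1.0a0

translator = Translator()
result = translator.translate("How do I start learning to paint?", src="en", dest="tr")
print(result.text)  # the Turkish translation
```
|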
false | # AutoTrain Dataset for project: test-sa-gam
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test-sa-gam.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "It is easy to navigate and update programs",
"target": "[([6, 7], [2]), ([4], [2])]"
},
{
"text": "The big screen allows you to enjoy watching movies , pictures and etc",
"target": "[([2], [1])]"
}
]
```
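Since `target` is stored as a stringified Python literal, it can be parsed back into a list of span pairs, e.g.:
```python
import ast

sample = {"text": "It is easy to navigate and update programs",
          "target": "[([6, 7], [2]), ([4], [2])]"}
spans = ast.literal_eval(sample["target"])  # -> [([6, 7], [2]), ([4], [2])]
```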
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1016 |
| valid | 112 |
|
false |
# GMaSC: GEC Barton Hill Malayalam Speech Corpus
**GMaSC** is a Malayalam text and speech corpus created by the Government Engineering College Barton Hill with an emphasis on Malayalam-accented English. The corpus contains 2,000 text-audio pairs of Malayalam sentences spoken by 2 speakers, totalling approximately 139 minutes of audio. Each sentence has at least one English word common in Malayalam speech.
## Dataset Structure
The dataset consists of 2,000 instances with fields `text`, `speaker`, and `audio`. The audio is mono, sampled at 48 kHz. The transcription is normalized and only includes Malayalam characters and common punctuation. The table below specifies how the 2,000 instances are split between the speakers, along with some basic speaker info:
| Speaker | Gender | Age | Time (HH:MM:SS) | Sentences |
| --- | --- | --- | --- | --- |
| Sonia | Female | 43 | 01:02:17 | 1,000 |
| Anil | Male | 48 | 01:17:23 | 1,000 |
| **Total** | | | **02:19:40** | **2,000** |
### Data Instances
An example instance is given below:
```json
{'text': 'സൗജന്യ ആയുർവേദ മെഡിക്കൽ ക്യാമ്പ്',
'speaker': 'Sonia',
'audio': {'path': None,
'array': array([0.00036621, 0.00033569, 0.0005188 , ..., 0.00094604, 0.00091553,
0.00094604]),
'sampling_rate': 48000}}
```
### Data Fields
- **text** (str): Transcription of the audio file
- **speaker** (str): The name of the speaker
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio (always None)
### Data Splits
We provide all the data in a single `train` split. The loaded dataset object thus looks like this:
```json
DatasetDict({
train: Dataset({
features: ['text', 'speaker', 'audio'],
num_rows: 2000
})
})
```
## Additional Information
### Licensing
The corpus is made available under the [Creative Commons license (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
|
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# abstracts-embeddings
This dataset contains the embeddings of the titles and abstracts of 95 million academic publications taken from the [OpenAlex](https://openalex.org) dataset as of May 5, 2023. The script that generated the embeddings is available on [Github](https://github.com/colonelwatch/abstracts-search/blob/master/build.py), but the general process is as follows:
1. Reconstruct the text of the abstract from the inverted index format
2. Construct a single document string in the format `title + ' ' + abstract` or just `abstract` if there is no title
3. Determine if the document string is in English using [fastText](https://fasttext.cc/docs/en/language-identification.html)
4. If it is in English, compute an embedding using the `all-MiniLM-L6-v2` model provided by [sentence-transformers](https://www.sbert.net/)
Though the OpenAlex dataset records 240 million works, not all of these works have abstracts or are in English. However, the `all-MiniLM-L6-v2` model was only trained on English texts, hence the filtering.
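A sketch of the filtering-and-embedding step described above, assuming the public `lid.176.bin` fastText language-identification model:
```python
import fasttext
from sentence_transformers import SentenceTransformer

lang_model = fasttext.load_model("lid.176.bin")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def embed(title, abstract):
    doc = f"{title} {abstract}" if title else abstract
    labels, _ = lang_model.predict(doc.replace("\n", " "))  # fastText rejects newlines
    if labels[0] != "__label__en":
        return None  # non-English documents are skipped
    return encoder.encode(doc)  # length-384 vector
```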
## Dataset Structure
In the future, this dataset might become a parquet in order to admit all the features offered by Hugging Face Datasets, but it consists only of a text file and a numpy memmap for now. The memmap is an array of many length-384 `np.float16` vectors, and the i-th row vector in this array corresponds with the i-th line in the text file. The text file is just a list of ids that can be used to get more information from the OpenAlex API.
```python
import numpy as np

# the i-th line of openalex_ids.txt corresponds to the i-th embedding row
with open('openalex_ids.txt', 'r') as f:
    idxs = f.read().splitlines()
embeddings = np.memmap('embeddings.memmap', dtype=np.float16, mode='r').reshape(-1, 384)
```
However, the memmap cannot be uploaded to Hugging Face as a single file, so it's split with the command `split -b 3221225472 -d --suffix-length=3 --additional-suffix=.memmap embeddings.memmap embeddings_`. It can be put back together with the command `cat embeddings_*.memmap > embeddings.memmap`.
|
false |
# StyleGAN3 Annotated Images
This dataset consists of a `pandas` table and attached `images.zip` file with these entries:
* seed (`numpy` seed used to generate random vectors)
* path (path to the generated image obtained after unzipping `images.zip`)
* vector (generated numpy "random" vector used to create StyleGAN3 images)
* text (caption of each image, generated using BLIP model: `Salesforce/blip-image-captioning-base`)
## Usage
To avoid loading all images into memory, we load the images and the annotation table separately.
```python
images = load_dataset("balgot/stylegan3-annotated", data_files=["*.zip"])
dataset = load_dataset("balgot/stylegan3-annotated", data_files=["*.csv"])
# TODO: convert "vector" column to numpy/torch
```
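The on-disk format of the `vector` column is not documented here; assuming it is stored as a stringified list in the CSV, a conversion sketch might look like:
```python
import ast
import numpy as np

def to_array(example):
    # assumption: "vector" is a stringified list such as "[0.1, -0.2, ...]"
    example["vector"] = np.array(ast.literal_eval(example["vector"]), dtype=np.float32)
    return example

dataset = dataset.map(to_array)
```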
It was created as a part of the course project for FI:PA228 at Masaryk University. |
true | |
true | |
false | # Dataset Card for ImageIn_annotations_resized_images
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Dataset Card for RoBERTa Pretrain
### Dataset Summary
This is the concatenation of the datasets used to Pretrain RoBERTa.
The dataset is not shuffled and contains raw text. It is packaged for convenience.
Essentially, it is the same as:
```
from datasets import load_dataset, concatenate_datasets

bookcorpus = load_dataset("bookcorpus", split="train")
openweb = load_dataset("openwebtext", split="train")
cc_news = load_dataset("cc_news", split="train")
cc_news = cc_news.remove_columns([col for col in cc_news.column_names if col != "text"])
cc_stories = load_dataset("spacemanidol/cc-stories", split="train")
dataset = concatenate_datasets([bookcorpus, openweb, cc_news, cc_stories])
``` |
false | # bollywood-celebs
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bollywood-celebs.
Credits: https://www.kaggle.com/datasets/sushilyadav1998/bollywood-celeb-localized-face-dataset
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<64x64 RGB PIL image>",
"target": 15
},
{
"image": "<64x64 RGB PIL image>",
"target": 82
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Aamir_Khan', 'Abhay_Deol', 'Abhishek_Bachchan', 'Aftab_Shivdasani', 'Aishwarya_Rai', 'Ajay_Devgn', 'Akshay_Kumar', 'Akshaye_Khanna', 'Alia_Bhatt', 'Ameesha_Patel', 'Amitabh_Bachchan', 'Amrita_Rao', 'Amy_Jackson', 'Anil_Kapoor', 'Anushka_Sharma', 'Anushka_Shetty', 'Arjun_Kapoor', 'Arjun_Rampal', 'Arshad_Warsi', 'Asin', 'Ayushmann_Khurrana', 'Bhumi_Pednekar', 'Bipasha_Basu', 'Bobby_Deol', 'Deepika_Padukone', 'Disha_Patani', 'Emraan_Hashmi', 'Esha_Gupta', 'Farhan_Akhtar', 'Govinda', 'Hrithik_Roshan', 'Huma_Qureshi', 'Ileana_DCruz', 'Irrfan_Khan', 'Jacqueline_Fernandez', 'John_Abraham', 'Juhi_Chawla', 'Kajal_Aggarwal', 'Kajol', 'Kangana_Ranaut', 'Kareena_Kapoor', 'Karisma_Kapoor', 'Kartik_Aaryan', 'Katrina_Kaif', 'Kiara_Advani', 'Kriti_Kharbanda', 'Kriti_Sanon', 'Kunal_Khemu', 'Lara_Dutta', 'Madhuri_Dixit', 'Manoj_Bajpayee', 'Mrunal_Thakur', 'Nana_Patekar', 'Nargis_Fakhri', 'Naseeruddin_Shah', 'Nushrat_Bharucha', 'Paresh_Rawal', 'Parineeti_Chopra', 'Pooja_Hegde', 'Prabhas', 'Prachi_Desai', 'Preity_Zinta', 'Priyanka_Chopra', 'R_Madhavan', 'Rajkummar_Rao', 'Ranbir_Kapoor', 'Randeep_Hooda', 'Rani_Mukerji', 'Ranveer_Singh', 'Richa_Chadda', 'Riteish_Deshmukh', 'Saif_Ali_Khan', 'Salman_Khan', 'Sanjay_Dutt', 'Sara_Ali_Khan', 'Shah_Rukh_Khan', 'Shahid_Kapoor', 'Shilpa_Shetty', 'Shraddha_Kapoor', 'Shreyas_Talpade', 'Shruti_Haasan', 'Sidharth_Malhotra', 'Sonakshi_Sinha', 'Sonam_Kapoor', 'Suniel_Shetty', 'Sunny_Deol', 'Sushant_Singh_Rajput', 'Taapsee_Pannu', 'Tabu', 'Tamannaah_Bhatia', 'Tiger_Shroff', 'Tusshar_Kapoor', 'Uday_Chopra', 'Vaani_Kapoor', 'Varun_Dhawan', 'Vicky_Kaushal', 'Vidya_Balan', 'Vivek_Oberoi', 'Yami_Gautam', 'Zareen_Khan'], id=None)"
}
```
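Because `target` is a `ClassLabel`, integer ids can be mapped back to names once the dataset is loaded (the repo id below is a placeholder):
```python
from datasets import load_dataset

ds = load_dataset("user/bollywood-celebs")  # hypothetical repo id
name = ds["train"].features["target"].int2str(15)  # e.g. 'Anushka_Shetty'
```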
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6863 |
| valid | 1764 | |
true |
Beta Dataset
Generated by GPT-3.5 |
false | |
true |
# Modified Victorian Era Authorship Attribution Dataset
## About
This data set is a modified version of the one that can be found [here](https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution).
The difference being that the training dataset was split into two parts: 80% training, 20% testing with labels.
Splitting was done with a random stratified sample approach.
This differs from the source dataset, which did not have any labels for the testing data.
Additionally, all text has been converted to UTF-8 format and any errors were ignored.
The original testing data is not included with this release.
## Citation
> GUNGOR, ABDULMECIT, Benchmarking Authorship Attribution Techniques Using Over A Thousand Books by Fifty Victorian Era Novelists, Purdue Master of Thesis, 2018-04 |
false | # Dataset Card for "face-celeb-vietnamese"
## Dataset Summary
This dataset contains information on over 8,000 samples of well-known Vietnamese individuals, categorized into three professions: singers, actors, and beauty queens. The dataset includes data on more than 100 celebrities in each of the three job categories.
## Languages
- Vietnamese: The label is used to indicate the name of celebrities in Vietnamese.
## Dataset Structure
- The image and Vietnamese sequences are
## Source Data - Initial Data Collection and Normalization
[Website người nổi tiếng](https://nguoinoitieng.tv)
### Licensing Information
Apache License 2.0
### Contributions
Thanks to [@duongttr](https://github.com/duongttr) and [@pphuc25](https://github.com/pphuc25) for adding this dataset. |
false | |
false | |
false | |
false | |
false | |
false |
# PixAI
[scrape script](https://github.com/hlky/scrape/blob/main/pixai.py)
```
1596472 rows x 31 columns
'id', 'title', 'username', 'displayName', 'userCreatedAt', 'userUpdatedAt', 'followerCount', 'followingCount', 'userInspiredCount', 'prompts', 'createdAt', 'updatedAt', 'isNsfw', 'likedCount', 'views', 'commentCount', 'inspiredCount', 'mediaId', 'width', 'height', 'imageType', 'url', 'porn', 'sexy', 'hentai', 'neutral', 'drawings', 'adult', 'child', 'scenery', 'imageBlurHash'
```
Note: `imageType` should be used as the file extension when downloading images.
```
'porn', 'sexy', 'hentai', 'neutral', 'drawings'
```
are scores from [LAION-SAFETY](https://github.com/LAION-AI/LAION-SAFETY/)
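For example, a local filename for a download could be assembled per row (the field usage here is illustrative):
```python
row = {"mediaId": "abc123", "imageType": "png"}        # illustrative values
filename = f"{row['mediaId']}.{row['imageType']}"      # imageType as the extension
```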
|
false |
# Horde4M
4M+ generation metadata only, too many spicy images
```
4130252 rows x 14 columns
'id', 'prompt', 'width', 'height', 'steps', 'sampler', 'cfg', 'seed', 'model', 'karras', 'gfpgan', 'realesrgan_x4plus', 'codeformer', 'user_type'
```
The majority use the Karras schedule because Stable Horde UIs decided to default to Karras (they were told not to).
The negative prompt, if present, is separated by:
```
###
```
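A row's prompt can therefore be split into positive and negative parts like this:
```python
full_prompt = "a castle on a hill ### blurry, low quality"  # example value
prompt, _, negative_prompt = full_prompt.partition("###")
prompt, negative_prompt = prompt.strip(), negative_prompt.strip()
```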
Model value counts
```
stable_diffusion 1231256
Anything Diffusion 595742
Deliberate 293292
Midjourney Diffusion 179623
Hentai Diffusion 163999
Realistic Vision 123042
Zeipher Female Model 112324
Hassanblend 95332
waifu_diffusion 93205
ProtoGen 79067
Dreamshaper 60041
Lawlas's yiff mix 58444
Dreamlike Photoreal 57028
Dreamlike Diffusion 56125
Yiffy 53644
URPM 48751
HASDX 43148
stable_diffusion_2.1 42271
Protogen Infinity 39055
MoistMix 35663
Liberty 29029
Epic Diffusion 27593
Anygen 24407
Analog Diffusion 23896
3DKX 23845
Poison 22974
Zack3D 22395
Dungeons and Diffusion 21466
Eimis Anime Diffusion 18139
Seek.art MEGA 17997
Ranma Diffusion 15342
ACertainThing 15308
Midjourney PaintArt 13802
Counterfeit 12903
Comic-Diffusion 11958
DucHaiten 11937
Furry Epoch 11460
PPP 10199
Elden Ring Diffusion 9526
Redshift Diffusion 9022
Pastel Mix 8785
Cyberpunk Anime Diffusion 8620
Project Unreal Engine 5 8352
PortraitPlus 7903
Elldreth's Lucid Mix 7793
RPG 7379
Abyss OrangeMix 7070
Vintedois Diffusion 7064
trinart 6977
Inkpunk Diffusion 6682
Synthwave 6619
GTM Ultimate Blend 6357
Darkest Diffusion 6036
DreamLikeSamKuvshinov 5622
AIO Pixel Art 5276
Arcane Diffusion 5264
PFG 5205
Fantasy Card Diffusion 5190
Healy's Anime Blend 5114
Future Diffusion 5056
Classic Animation Diffusion 4786
Mega Merge Diffusion 4683
mo-di-diffusion 4555
GTA5 Artwork Diffusion 4393
Openniji 4184
stable_diffusion_2.0 3997
Sonic Diffusion 3945
vectorartz 3753
Robo-Diffusion 3751
ChromaV5 3714
Trinart Characters 3492
Anything v3 3419
Dark Victorian Diffusion 3412
Moedel 3287
Ghibli Diffusion 3242
BubblyDubbly 3089
DnD Item 3061
Double Exposure Diffusion 3013
colorbook 2946
Microworlds 2927
Unstable Ink Dream 2917
Borderlands 2902
GorynichMix 2897
Sygil-Dev Diffusion 2846
App Icon Diffusion 2839
Valorant Diffusion 2833
stable_diffusion_inpainting 2736
Van Gogh Diffusion 2566
AbyssOrangeMix-AfterDark 2537
CyriousMix 2526
Knollingcase 2496
DGSpitzer Art Diffusion 2487
Ultraskin 2453
VinteProtogenMix 2446
Voxel Art Diffusion 2446
Dawgsmix 2418
Sci-Fi Diffusion 2368
ChilloutMix 2342
Samdoesarts Ultmerge 2153
Papercutcraft 2071
Woop-Woop Photo 2054
Laolei New Berry Protogen Mix 2014
Spider-Verse Diffusion 1994
kurzgesagt 1984
Guohua Diffusion 1963
Marvel Diffusion 1904
Asim Simpsons 1898
Zelda BOTW 1874
Nitro Diffusion 1864
Archer Diffusion 1821
T-Shirt Diffusion 1753
Papercut Diffusion 1736
Concept Sheet 1684
Cheese Daddys Landscape Mix 1671
Smoke Diffusion 1652
Tron Legacy Diffusion 1644
Clazy 1639
Dan Mumford Style 1556
Funko Diffusion 1477
ModernArt Diffusion 1425
Rachel Walker Watercolors 1363
CharHelper 1340
JWST Deep Space Diffusion 1301
Vector Art 1242
Grapefruit Hentai 1203
Vivid Watercolors 1198
Min Illust Background 1115
Eternos 1046
Squishmallow Diffusion 962
Balloon Art 960
Rainbowpatch 935
Xynthii-Diffusion 904
T-Shirt Print Designs 788
stable_diffusion_1.4 728
Microscopic 699
Experience 697
Rodent Diffusion 650
Pulp Vector Art 643
Supermarionation 549
Pokemon3D 533
Elldreths Retro Mix 503
PRMJ 460
Waifu Diffusion Beta 325
Open Journey Beta 308
Microchars 189
Microcasing 183
Microcritters 99
``` |
false | # NorEval
NorEval is a self-curated dataset for evaluating instruction-following LLMs across nine categories, including Language, Code, Mathematics, Classification, Communication & Marketing, Medical, General Knowledge, and Business Operations. |
false | # AutoTrain Dataset for project: rwlv_summarizer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project rwlv_summarizer.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_platform": "Yelp",
"feat_line_of_business": "RWLV",
"text": "I decided to come to Resorts World to grab some sushi on a Sunday afternoon. I was so glad to see the trash gone from the parking garage. The grounds outside the building were so much nicer than my first visit. Planters were finished and the place was clean. It looked good. All the employees that I encountered were just as nice and helpful as my first visit. Bathrooms were clean. Food was great! My only complaint is that I couldn't believe how hard it was to gamble 73 cents left on my ticket! I mean they really stick it to you here. Some of the machines minimum bets were some crazy friggin number like 78 cents. Oh well. Get those pennies Resorts World. I will be back to try more food and maybe next time I'll stick with the tables. Come see Vegas newest Casino if you can.",
"feat_reactions": 0.0,
"feat_ratings": 4,
"feat_sentiment_pys": "POS",
"feat_sentiment_vad": "POS",
"feat_sentiment_tb": "POS",
"feat_sentiment_rat": "POS",
"feat_sentiment_gpt": "POS",
"feat_contextual": "facilities",
"feat_intention": "compliment",
"feat_intention_refined": "compliment",
"feat_refined_gpt": "POS",
"target": "positive review of resorts world with improved parking and grounds, friendly",
"feat_emotion": "others"
},
{
"feat_platform": "Yelp",
"feat_line_of_business": "RWLV",
"text": "The check-in line is extremely long and at the Hilton they seem understaffed. We went to the pool today. Granted it is 103\u00b0 outside however the pool is freezing. There is such thing as too cold. I did however get a Coca-Cola for nine dollars. Yes nine dollars for one can of Coke.",
"feat_reactions": 7.0,
"feat_ratings": 2,
"feat_sentiment_pys": "NEU",
"feat_sentiment_vad": "POS",
"feat_sentiment_tb": "NEG",
"feat_sentiment_rat": "NEG",
"feat_sentiment_gpt": "NEG",
"feat_contextual": "price",
"feat_intention": "complaint",
"feat_intention_refined": "complaint",
"feat_refined_gpt": "NEG",
"target": "long check-in, understaffed, freezing pool, expensive",
"feat_emotion": "others"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_platform": "Value(dtype='string', id=None)",
"feat_line_of_business": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_reactions": "Value(dtype='float64', id=None)",
"feat_ratings": "Value(dtype='int64', id=None)",
"feat_sentiment_pys": "Value(dtype='string', id=None)",
"feat_sentiment_vad": "Value(dtype='string', id=None)",
"feat_sentiment_tb": "Value(dtype='string', id=None)",
"feat_sentiment_rat": "Value(dtype='string', id=None)",
"feat_sentiment_gpt": "Value(dtype='string', id=None)",
"feat_contextual": "Value(dtype='string', id=None)",
"feat_intention": "Value(dtype='string', id=None)",
"feat_intention_refined": "Value(dtype='string', id=None)",
"feat_refined_gpt": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_emotion": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1539 |
| valid | 385 |
|
true |
This is an Indonesian-translated version of the [snli](https://huggingface.co/datasets/snli) dataset
Translated using [Helsinki-NLP/EN-ID](https://huggingface.co/Helsinki-NLP/opus-mt-en-id) |
false | # Negative Embedding / Textual Inversion

NE4Mitsua is a Negative Embedding for Mitsua Diffusion One.
NE4Mitsua is a negative embedding for Mitsua Diffusion One. The Japanese README is at the bottom of this page.
---
# English README
## NE4Mitsua:
With this Embedding I tried to achieve the following two goals.
- Increase realism and complexity of the paintings
- Slightly make it easier to generate anime-style illustrations
## Usage
To use this embedding, download the BIN file and drop it into the "\stable-diffusion-webui\embeddings" folder.
Please put the embedding in the negative prompt to get the right results.
## License
- Mitsua Open RAIL-M License (More restrictive variant of CreativeML Open RAIL-M)
This embedding is open access and available to all, with a Mitsua Open RAIL-M license further specifying rights and usage. The Mitsua Open RAIL-M License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You can't use the model to infringe any rights of others by feeding image sources or model weights to the model (e.g. using another person's copyrighted image for fine-tuning without permission, using another person's copyrighted image as a source for image2image without permission).
4. You can't misrepresent a generated image as not being AI-generated.
[Please read the full license here](https://huggingface.co/Mitsua/mitsua-diffusion-one/blob/main/MODEL-LICENSE)
## Dataset
NE4Mitsua was trained on 400 images generated by Mitsua Diffusion One. This dataset is also available under the Mitsua Open RAIL-M License.
The prompts for the images are as follows:
**A 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple
Negative prompt: best quality painting,beautiful concept art,elegant,atmospheric,color delicate illustration,wallpaper art,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**B 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple,old man
Negative prompt: best quality portrait,beautiful oil painting,color manga character,youth,elegant,ultra detailed illustration,delicate outline,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**C 100 images**
```txt
psychedelic,liquid,text,article,color noise,error,rainbow sand,fluorescent colors,insanely intricated
Negative prompt: detailed portrait
Steps: 20, Sampler: DDIM, CFG scale: 9, Size: 512x512
```
**D 100 images**
```txt
ukiyo-e,photo,3d,detailed mosaic,tile,abstract,fish scale,monster,deformed face,extra face,too long face,extra eyes,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,old man,blur,red lips,red cheeks,simple yellow
Negative prompt: (vector art:0.7),beautiful color sketch,oil painting,diffusion,soft,new
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 9, Size: 512x512
```
## Change History
May 5, 2023 Released NE4Mitsua.
---
# Japanese README
## NE4Mitsua:
This negative embedding was created with the following two goals:
- Increase realism and complexity while preserving the painterly texture
- Make anime-style illustration generation a little easier
## Usage
Download NE4Mitsua.bin and place it in the "embeddings" folder of stable-diffusion-webui.
Specify NE4Mitsua in the negative prompt.
## License
- Mitsua Open RAIL-M License (a derivative of CreativeML Open RAIL-M with stronger restrictions)
The rights and usage of NE4Mitsua are governed by the Mitsua Open RAIL-M License, which includes provisions such as the following (paraphrased):
1. You may not deliberately generate or share illegal or harmful content.
2. Users may freely use the generated outputs as long as they do not violate the license terms; users are responsible for the outputs and their subsequent use.
3. You may not use other people's copyrighted works without permission, or the outputs of other AIs trained on such works without permission, for fine-tuning or image2image.
4. You may not pass off generated images as not being AI-generated.
[Read the full Mitsua Open RAIL-M License here (English)](https://huggingface.co/Mitsua/mitsua-diffusion-one/blob/main/MODEL-LICENSE)
## Dataset
NE4Mitsua was trained on 400 images generated with Mitsua Diffusion One. All of the images are published as a dataset and can be used under the Mitsua Open RAIL-M License.
The prompts for the images are as follows:
**A 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple
Negative prompt: best quality painting,beautiful concept art,elegant,atmospheric,color delicate illustration,wallpaper art,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a CFG scale: 8, Size: 512x512
```
**B 100 images**
```txt
photo,ugly,bad quality,frame,abstract,oversaturated,grain,deformed,low-res,horror,monster,deformed face,extra face,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,blur,simple,old man
Negative prompt: best quality portrait,beautiful oil painting,color manga character,youth,elegant,ultra detailed illustration,delicate outline,new,4k,beautiful
Steps: 20, Sampler: DPM++ 2M Karras and Euler a, CFG scale: 8, Size: 512x512
```
**C 100 images**
```txt
psychedelic,liquid,text,article,color noise,error,rainbow sand,fluorescent colors,insanely intricated
Negative prompt: detailed portrait
Steps: 20, Sampler: DDIM, CFG scale: 9, Size: 512x512
```
**D 100 images**
```txt
ukiyo-e,photo,3d,detailed mosaic,tile,abstract,fish scale,monster,deformed face,extra face,too long face,extra eyes,double head,extra head,ugly,poorly drawn hands,missing limb,floating limbs,disconnected limbs,melting hands,bad anatomy,old man,blur,red lips,red cheeks,simple yellow
Negative prompt: (vector art:0.7),beautiful color sketch,oil painting,diffusion,soft,new
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 9, Size: 512x512
```
## Change History
May 5, 2023: Released NE4Mitsua. |
true | This is a test dataset. |
false |
# Dataset Card for jawiki-20220404-c400
This dataset contains passages, each of which consists of consecutive sentences no longer than 400 characters from Japanese Wikipedia as of 2022-04-04.
This dataset is used in baseline systems for [the AI王 question answering competition](https://sites.google.com/view/project-aio/home), such as [cl-tohoku/AIO3_BPR_baseline](https://github.com/cl-tohoku/AIO3_BPR_baseline).
Please refer to [the original repository](https://github.com/cl-tohoku/quiz-datasets) for further details. |
true |
# Dataset Card for RuFacts
## Dataset Description
RuFacts is a benchmark for internal fact-checking for the Russian language. The dataset contains tagged examples labeled consistent and inconsistent.
For inconsistent examples, ranges containing violations of facts in the source text and the generated text are also collected and presented on the [Kaggle competition page](https://www.kaggle.com/competitions/internal-fact-checking-for-the-russian-language).
Various data sources and approaches for data generation were used to create the training and test datasets for the fact-checking task. We consider data at the sentence level as well as short texts. The average length of texts is 198 symbols, the minimum is 10 symbols, and the maximum is 3,402 symbols.
The final dataset was formed using three main approaches:
* Texts generated by a [paraphrase model](https://habr.com/ru/companies/sberdevices/articles/667106/)
* Translations of the [dataset for fact-checking](https://fever.ai/dataset/fever.html)
* Text augmentation
Translations and generated data were manually labeled via the crowd-sourcing platform Yandex.Toloka. We additionally manually annotated the augmented data for
the test set. The test set consists of examples from all three sources: 26% translations, 6% augmented data, and 68% generated paraphrases.
We require three criteria for the generated text to be factually consistent with the original:
1. facts are correct and not corrupted;
2. no additional facts are introduced in the generated text;
3. all the main facts are included in the generated text.
## Data Structure
### Data Fields
* `idx`: an integer
* `evidence`: a string containing the original text
* `claim`: a string containing text generated by a generative model
* `label`: an integer, either 0 or 1, indicating whether the facts are consistent (0) or inconsistent (1)
An example of `train`/`validation` looks as follows:
```
{'idx': 1,
'evidence': 'Суд в Англии рассмотрит дело советского диссидента Буковского',
'claim': 'Суд в Великобритании рассмотрит дело советского диссидента Буковского',
'label': 0}
```
An example of `test` looks as follows:
```
{'idx': 4,
'evidence': 'Google выплатит штраф в 200 млн долларов за сбор данных детей на YouTube.',
'claim': 'Google заплатит $200 млн за нарушения конфиденциальности детей на YouTube.',
'label': -1}
```
### Data Splits
| |train | validation | test|
|-----|------|------------|-----|
|rows |4677 | 1559 | 500 | |
false | # CC-100 zh-Hant (Traditional Chinese)
From https://data.statmt.org/cc-100/, only zh-Hant - Chinese (Traditional). Broken into lines, with each line as a row.
Estimated to have around 4B tokens when tokenized with the [`bigscience/bloom`](https://huggingface.co/bigscience/bloom) tokenizer.
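A sketch of how such an estimate can be reproduced for a single line with the same tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
n_tokens = len(tokenizer("這是一行繁體中文文字。")["input_ids"])
print(n_tokens)
```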
There's another version that the text is split by paragraphs instead of lines: [`zetavg/CC-100-zh-Hant-merged`](https://huggingface.co/datasets/zetavg/CC-100-zh-Hant-merged).
## References
Please cite the following if you found the resources in the CC-100 corpus useful.
* **Unsupervised Cross-lingual Representation Learning at Scale**, *Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov*, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), p. 8440-8451, July 2020, [pdf](https://www.aclweb.org/anthology/2020.acl-main.747.pdf), [bib](https://www.aclweb.org/anthology/2020.acl-main.747.bib) .
* **CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data**, *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave*, Proceedings of the 12th Language Resources and Evaluation Conference (LREC), p. 4003-4012, May 2020, [pdf](https://www.aclweb.org/anthology/2020.lrec-1.494.pdf), [bib](https://www.aclweb.org/anthology/2020.lrec-1.494.bib). |
false | ### brainly.co.id dataset
### Data Structure
The keys in each JSONL object include:
- "id": An integer value representing the page of task from url (e.g. brainly.co.id/tugas/117).
- "subject": A string indicating the subject of the question (e.g., "Fisika", "Matematika", "Sejarah").
- "author": A string representing the author of the question.
- "instruction": A string providing the instruction or prompt for the question.
- "answerer_1", "answer_2": Strings representing the answerers for the question. The number at the end of the key (1 & 2) signifies the answer's index.
- "answer_1", "answer_2": Strings containing the answers provided by the answerers. The number at the end of the key corresponds to the answerer index.
- "status_1", "status_2": Strings indicating the status of the answers (e.g., "verified", "loved", "generic"). |
false | # Dataset Card for Odia_GPT-Teacher-Instruct-Odia-18K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the Odia-translated version of the GPT-Teacher 18K instruction set. In this dataset both English and Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
english_instruction (string)
input (string)
english_input (string)
output (string)
english_output (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
true | # Dataset Card for "boolq-id"
This dataset is a translated version of the boolq dataset from the [super_glue](https://huggingface.co/datasets/super_glue) benchmark.
# Citing & Authors
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
  author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
``` |
false | # Dataset Card for Russian riddles with answers with 377 entries.
### Dataset Summary
Contains a Parquet file of QnA riddle-and-answer pairs.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (json with language).
### Licensing Information
Data is scraped from several sites. Since most of the riddles and answers are publicly available and popular, any ToS and licensing of the sites themselves is irrelevant. I reserve the right to put a public and permissive license.
Moreover, there was no licensing information on these sites, which makes sense, due to the public availability and prominence of the content they provide.
### Acknowledgements
Thanks Freddie#5762 for providing this data!
He mentioned these URLs:
- https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi
- https://bbf.ru/riddles/ |
true | # Dataset Card for "qnli-id"
This dataset is a translated version of qnli dataset from [glue](https://huggingface.co/datasets/glue) dataset.
# Citing & Authors
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
``` |
false | load_dataset('phongmt184172/mtet')
The dataset is cloned https://github.com/vietai/mTet for machine translation task. |
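A runnable form of the load call above:
```python
from datasets import load_dataset

dataset = load_dataset('phongmt184172/mtet')
```
|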
false |
# Dataset Card for multilingual tatoeba QnA translation with ~120K entries.
### Dataset Summary
Contains a Parquet file with a list of instructions and translated articles in different languages.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE (tatoeba)
* METADATA (json with language, text length, uuid, langs-pair).
### The original dataset is available here:
* https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt |
false |
# Dataset Card for "turkish-nlp-suite/turkish-wikiNER"
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/wiki.png" width="20%" height="20%">
## Dataset Description
- **Repository:** [Turkish-WikiNER](https://github.com/turkish-nlp-suite/Turkish-Wiki-NER-Dataset)
- **Paper:** [ACL link]()
- **Dataset:** Turkish-WikiNER
- **Domain:** Wiki
- **Number of Labels:** 18
### Dataset Summary
Turkish NER dataset from Wikipedia sentences. 20,000 sentences were sampled and re-annotated from the [Kuzgunlar NER dataset](https://data.mendeley.com/datasets/cdcztymf4k/1).
Annotations are done by [Co-one](https://co-one.co/). Many thanks to them for their contributions. This dataset is also used in our brand new spaCy Turkish packages.
### Dataset Instances
An instance of this dataset looks as follows:
```
{
"tokens": ["Çekimler", "5", "Temmuz", "2005", "tarihinde", "Reebok", "Stadyum", ",", "Bolton", ",", "İngiltere'de", "yapılmıştır", "."],
"tags": [O", "B-DATE", "I-DATE", "I-DATE", "O", "B-FAC", "I-FAC", "O", "B-GPE", "O", "B-GPE", "O", "O"]
}
```
or even better:

### Labels
- CARDINAL
- DATE
- EVENT
- FAC
- GPE
- LANGUAGE
- LAW
- LOC
- MONEY
- NORP
- ORDINAL
- ORG
- PERCENT
- PERSON
- PRODUCT
- QUANTITY
- TIME
- TITLE
- WORK_OF_ART
### Data Split
| name |train|validation|test|
|---------|----:|---------:|---:|
|Turkish-WikiNER|18000| 1000|1000|
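The splits can be loaded directly from the Hub (assuming the repo id matches the card title):
```python
from datasets import load_dataset

dataset = load_dataset("turkish-nlp-suite/turkish-wikiNER")
print(dataset["train"][0]["tokens"], dataset["train"][0]["tags"])
```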
### Citation
Coming soon
|
false |
# Dataset Card for turkish-nlp-suite/Corona-mini
## Dataset Description
- **Repository:** [Turkish Corona-mini corpus](https://github.com/turkish-nlp-suite/Corona-mini-dataset)
- **Paper:** [ACL link]()
- **Dataset:** Corona-mini
- **Domain:** Social Media
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/corona-mini.png" width="20%" height="20%">
### Dataset Summary
This is a tiny Turkish corpus consisting of comments about Corona symptoms. The corpus is compiled from two Ekşisözlük headlines "covid-19 belirtileri" and "gün gün koronavirüs belirtileri":
https://eksisozluk.com/covid-19-belirtileri--6416646
https://eksisozluk.com/gun-gun-koronavirus-belirtileri--6757665
This corpus
- contains 178 raw, 175 processed comments
- all comments are in Turkish
- comes in 2 versions, raw and mildly processed.
For the processed version, HTML tags, expressions in brackets, and some other tags are removed.
If you want more information about how this dataset was crafted, you can watch the playlist from my campaign "Turkish NLP with Duygu": [How to compile datasets](https://www.youtube.com/playlist?list=PLJTHlIwB8Vco4ONU_mCNOYIcVyFA9QrBr).
If you want to process this dataset with spaCy Turkish you can watch: [Recipes with spaCy Turkish](https://www.youtube.com/watch?v=w0WCkgCOzzw&list=PLJTHlIwB8VcoWxYHnsZOQCxWOraW42NBj)
### Dataset Instances
An instance of this dataset looks as follows:
```
{
"text": "beni sarsmayan belirtilerdir, 2 doz biontech aşılıyım, 2. doz üzerinden 5 aydan çok geçmişti cuma : ayın 12 si akşamı açık havada az üşümeye maruz kaldım."
}
```
### Data Split
| name |train|
|---------|----:|
|Corona-mini|175|
### Citation
Coming soon
|
true | |
false | |
false | # Dataset Information
## Keywords
Hebrew, handwritten, letters
## Description
HDD_v0 consists of images of isolated Hebrew characters, together with a training and test set subdivision.
The images were collected from hand-filled forms.
For more details, please refer to [1].
When using this dataset in research work, please cite [1].
[1] I. Rabaev, B. Kurar Barakat, A. Churkin and J. El-Sana. The HHD Dataset. The 17th International Conference on Frontiers in Handwriting Recognition, pp. 228-233, 2020.
## Technical Details
The dataset is divided into TRAIN and TEST folders, each containing 27 subfolders.
Each subfolder contains the images of one letter of the alphabet.
The train set contains 3,965 samples; the test set contains 1,134 samples.
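Given this folder layout, the data can plausibly be loaded with the generic `imagefolder` loader (the local path is an assumption):
```python
from datasets import load_dataset

# assumes TRAIN/TEST folders with one subfolder per letter class
dataset = load_dataset("imagefolder", data_dir="path/to/HDD_v0")
```
|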
false |
# Printed Photos Attacks
The dataset includes 3 different types of files of real people: original selfies, original videos, and videos of attacks with printed photos. The dataset addresses anti-spoofing tasks and is useful for business and security systems.
# Get the Dataset
This is just an example of the data. If you need access to the entire dataset, contact us via **[sales@trainingdata.pro](mailto:sales@trainingdata.pro)** or leave a request on [https://trainingdata.pro/data-market](https://trainingdata.pro/data-market?utm_source=huggingface)
# Content
### The dataset consists of three folders:
- **live_selfie** contains the original selfies of people
- **live_video** includes original videos of people
- **attack** contains videos of attacks using the original images from the "live_selfie" folder
### File with the extension .csv
includes the following information for each media file:
- **live_selfie**: the link to access the original selfie
- **live_video**: the link to access the original video
- **attack**: the link to access the video of the attack with the printed photo
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/trainingdata-pro** |
false |
# Dataset Card for odia-qa-98K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
english_instruction (string)
input (string)
english_input (string)
output (string)
english_output (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
false |
# Dataset Card for OdiEnCorp_translation_instructions_25k
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the English-to-Odia translation instruction set. The instruction set is built using the OdiEnCorp 1.0 English-Odia parallel dataset. The instruction set contains input and output strings.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
instruction (string)
output (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
false | |
false | |
true | # Dataset Card for "CsFEVERv2"
## Dataset Description
CsFEVERv2 is a dataset for Czech fact-checking developed as part of a bachelor thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering, Czech Technical University in Prague. The dataset consists of an **original** subset, which is an iteration of CsFEVER with new data and better processing, and of the **f1**, **precision**, and **07** subsets, which were filtered using an NLI model and optimized threshold values. The **wiki_pages** subset is a processed Wikipedia dump from August 2022 with correct revids; it should be used to map evidence from the datasets to Wikipedia texts.
Other filtered datasets can be generated from the original subset by filtering on the `predicted_label` and `predicted_score` fields with different thresholds (see the sketch after the usage example below).
### Languages
Czech
## Dataset Usage Example
```python
from datasets import load_dataset
# path to the dataset: replace with the Hub repository id or a local copy
# (the original card used the author's local path "/home/mlynatom/csfever_v2")
path = "csfever_v2"

# load the default (original) subset
dataset = load_dataset(path)
dataset = load_dataset(path, "original")

# load the f1, precision, and 07 subsets
dataset = load_dataset(path, "f1")
dataset = load_dataset(path, "precision")
dataset = load_dataset(path, "07")

# load the wiki_pages subset
dataset = load_dataset(path, "wiki_pages")
```
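A minimal threshold-filtering sketch (illustrative only; for instance, the **07** subset plausibly corresponds to a 0.7 threshold, though the exact criteria behind the published subsets are not restated here):
```python
from datasets import load_dataset

original = load_dataset("csfever_v2", "original")  # adjust the path as above

threshold = 0.7  # assumed example value
filtered_train = original["train"].filter(
    lambda row: row["predicted_score"] >= threshold
)
```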
## Dataset Structure
### Data Instances
#### original
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'predicted_label': 'SUPPORTS',
'predicted_score': 0.921731,
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### f1, precision, 07
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### wiki_pages
An example of 'wiki_pages' looks as follows.
```json
{'id': 80916,
'revid': 20561555,
'url': "https://cs.wikipedia.org/wiki?curid=80916",
'title': "Altruismus",
'text': "Altruismus (z lat. "alter", druhý, 3. pád "altrui", druhému) je moderní ..."}
```
### Data Fields
#### original
- `id`: an `int32` feature.
- `label`: a `string` feature.
- `predicted_label`: a `string` feature (the label predicted by the NLI model).
- `predicted_score`: a `float32` feature (the confidence of `predicted_label` as predicted by the NLI model).
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### f1, precision, 07
- `id`: an `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### wiki_pages
- `id`: an `int32` feature.
- `revid`: an `int32` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
#### original
| | train | dev | test |
|----------|-------:|-----:|------:|
| original | 118950 | 7458 | 7520 |
#### f1
| | train | dev | test |
|----|------:|-----:|-----:|
| f1 | 83438 | 5445 | 5328 |
#### precision
| | train | dev | test |
|-----------|-------:|-----:|------:|
| precision | 60828 | 4288 | 4236 |
#### 07
| | train | dev | test |
|----|-------:|-----:|------:|
| 07 | 108607 | 6685 | 6623 |
#### wiki_pages
| | wiki_pages |
|------------|-----------:|
| wiki_pages | 825078 |
|
false |
# DivSumm summarization dataset
Dataset introduced in the paper: Analyzing the Dialect Diversity in Multi-document Summaries (COLING 2022)
_Olubusayo Olabisi, Aaron Hudson, Antonie Jetter, Ameeta Agrawal_
DivSumm is a novel dataset consisting of dialect-diverse tweets and human-written extractive and abstractive summaries. It consists of 90 tweets each on 25 topics in multiple English dialects (African-American, Hispanic and White), and two reference summaries per input.
## Directories
input_docs - 90 tweets per topic evenly distributed among 3 dialects; total 25 topics
abstractive - Two annotators were asked to summarize each topic in 5 sentences using their own words.
extractive - Two annotators were asked to select 5 tweets from each topic that summarized the input tweets.
## Paper
You can find our paper [here](https://aclanthology.org/2022.coling-1.542/). If you use this dataset in your work, please cite our paper:
```
@inproceedings{olabisi-etal-2022-analyzing,
    title = "Analyzing the Dialect Diversity in Multi-document Summaries",
    author = "Olabisi, Olubusayo and Hudson, Aaron and Jetter, Antonie and Agrawal, Ameeta",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
}
```
|
false | # so13m
so13m is a dataset containing 13m discussion threads from StackOverflow. The data originates from the StackExchange data dump, covering January 2014 through December 2022. The threads cover a multitude of topics. This dataset serves as a source of natural language and (often) accompanying code in the domain of software engineering. Its inclusion could help downstream tasks that depend on generating or understanding natural language.
---
## so13m file list
- so13m.pkl -- a pickle file holding a dictionary of Stack Overflow posts, with key = post id and value = post
- so13m.json.gz -- a compressed JSON file holding the same dictionary of Stack Overflow posts, with key = post id and value = post
- stackoverflow_txtfiles.pkl -- a pickle file holding a list of Stack Overflow post ids
- train.bin; val.bin -- bin files for training and fine-tuning models
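A minimal sketch of loading the post dictionary (assuming `so13m.pkl` has been downloaded locally):
```python
import pickle

with open("so13m.pkl", "rb") as f:
    posts = pickle.load(f)  # dict: post id -> Stack Overflow post

print(len(posts))                            # number of posts
first_id = next(iter(posts))
print(first_id, str(posts[first_id])[:200])  # preview one post
```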
---
## so13m dataset details
We provide the size of our dataset in the following table:
| Statistic | Value |
| ------- | ------- |
| number of tokens | 10,495,518,108 |
| number of Stack Overflow posts | 13,071,148 |
| megabytes after processing | 16,695 |
We tokenize our data using scripts provided in our [github repository](https://github.com/apcl-research/jam/blob/main/data/jam_so13m/prepare_stackoverflow.py).
|
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Dataset Card for MADBase
## Dataset Description
- **Homepage:**
https://datacenter.aucegypt.edu/shazeem/
- **Repository:**
- **Paper:**
A Two-Stage System for Arabic Handwritten Digit Recognition Tested on a New Large Database.
EA El-Sherif, S Abdelazeem
Artificial intelligence and pattern recognition, 237-242
- **Leaderboard:**
- **Point of Contact:**
Ezzat ezzat.elsherif@gmail.com
### Dataset Summary
MADBase is a large database of Arabic handwritten digits: 28x28 grayscale images of the digits 0-9, with 60,000 training and 10,000 test examples, mirroring the structure of MNIST.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Arabic
## Dataset Structure
### Data Instances
```
{
  'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F5EE5B427A0>,
  'label': 1,
}
```
### Data Fields
- `image`: a `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- `label`: an integer between 0 and 9 representing the digit.
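A short sketch of the recommended access pattern (the repository id is a placeholder; adjust to wherever MADBase is hosted):
```python
from datasets import load_dataset

# placeholder repository id
ds = load_dataset("MADBase", split="train")

sample = ds[0]               # decodes only this row's image
print(sample["image"].size)  # (28, 28)
print(sample["label"])       # an integer between 0 and 9

# avoid ds["image"][0]: it would decode every image in the column first
```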
### Data Splits
The data is split into a training and a test set. As in the MNIST dataset, the training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is publicly available for research. Any work that uses this dataset should cite the paper given under Citation Information.
### Citation Information
```
@inproceedings{el2007two,
title={A Two-Stage System for Arabic Handwritten Digit Recognition Tested on a New Large Database.},
author={El-Sherif, Ezzat Ali and Abdelazeem, Sherif},
booktitle={Artificial intelligence and pattern recognition},
pages={237--242},
year={2007}
}
```
### Contributions
[More Information Needed] |
true | |
true | |
true |
This is the same dataset as [`OxAISH-AL-LLM/pubmed_20k_rct`](https://huggingface.co/datasets/OxAISH-AL-LLM/pubmed_20k_rct).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of the indices, that is, 3 columns with the embeddings of 3 different sentence-transformers
- `all-mpnet-base-v2`
- `multi-qa-mpnet-base-dot-v1`
- `all-MiniLM-L12-v2`
1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library |
true |
This is the same dataset as [`DeveloperOats/DBPedia_Classes`](https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of the indices, that is, 3 columns with the embeddings of 3 different sentence-transformers
- `all-mpnet-base-v2`
- `multi-qa-mpnet-base-dot-v1`
- `all-MiniLM-L12-v2` |
false | # Fine tuning progress validation - RedPajama 3B, StableLM Alpha 7B, Open-LLaMA
This repository contains the progress of fine-tuning models: RedPajama 3B, StableLM Alpha 7B, Open-LLaMA. These models have been fine-tuned on a specific text dataset and the results of the fine-tuning process are provided in the text file included in this repository.
## Fine-Tuning Details
- **Model: RedPajama 3B, size: 3 billion parameters, method: adapter**
- **Model: StableLM Alpha 7B, size: 7 billion parameters, method: adapter**
- **Model: Open-LLaMA 7B 300B, size: 7 billion parameters (300B tokens), method: LoRA**
- **Model: Open-LLaMA 7B 300B, size: 7 billion parameters (300B tokens), method: adapter**
## Dataset
The text source used for fine-tuning these models is 25MB in size and has been split into 174,000 data inputs.
## Fine-Tuning Process
The fine-tuning process was conducted with the following details:
- **Epochs:** 1
- **Validation Frequency:** Every 1% of the training data
- **Training Data:** 174,000 data inputs
## Acknowledgments #1
I would like to acknowledge @stabilityai, @togethercompute and OpenLM Research for providing the base models. Their groundbreaking work in the field of natural language processing has made projects like this possible.
## Acknowledgments #2
I would like to acknowledge @LightningAI for providing the lit-parrot fine-tuning framework.
## Disclaimer
The results may contain NSFW content.
## License
This repository and the fine-tuned models are licensed under the [MIT License](LICENSE). Feel free to modify and use them according to the terms of the license. |
false | # Ukrainian Hypernymy Pairs Dataset
## Background
Hypernymy is the super-subordinate or ISA semantic relation that links more general terms to more specific ones. For example, *rose* is a hyponym of *flower*, and *flower* is a hypernym of *rose*. Words that are hyponyms of the same hypernym are called co-hyponyms, for instance, *rose* and *tulip*. The hyponymy relation is transitive and asymmetric.
Hypernymy is also differentiated by:
* Types — common nouns: *armchair* is a type (hyponym) of *chair*;
* Instances — specific persons, countries, and geographic entities: *Dnipro river* is an instance (instance hyponym) of *river*.
## Project Description
The Ukrainian Hypernymy Pairs Dataset is a collection of noun pairs that express hypernymy relations between words in the Ukrainian language. The dataset contains pairs of words linked by four different types of relations: hypernym-hyponym, co-hyponyms, hypernym-instance, and co-instances.
An example of such a dataset in English is [BLESS](https://sites.google.com/site/geometricalmodels/shared-evaluation). However, its concepts are linked by one of the following six relations: co-hyponyms, hypernyms, meronyms, attributes, events, and random. Moreover, its hypernymy relation is not divided into types and instances.
Ukrainian Hypernymy Pairs were constructed utilizing the linkage between [Princeton WordNet](https://wordnet.princeton.edu/), [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page), and Ukrainian [Wikipedia](https://www.wikipedia.org/). We used the Python [Wn package](https://wn.readthedocs.io/en/latest/) to get the relation, which provides an interface to WordNet data.
All terms in the dataset are Wikipedia article titles, and no preprocessing was applied; therefore, some terms contain additional information in brackets.
## Dataset Statistics
This table presents the number of word pairs obtained for each relation type.
| **Relation Type** | **# of Pairs** |
|-----------------------|----------------|
| **Hypernym-Hyponym** | 6,906 |
| **Co-Hyponyms** | 42,860 |
| **Hypernym-Instance** | 2,971 |
| **Co-Instances** | 22,927 |
| **Total # of Pairs**  | 75,664         |
## Intended Use
The dataset produced can be particularly valuable for the Hypernym Detection task, where the pair of words is presented to a model, and it should classify whether they are in a hypernymy relation. Other lexico-semantic relations can be added to improve the diversity of the dataset.
## License
Copyright: [Nataliia Romanyshyn](https://twitter.com/supersubnat), [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2023 |
false |
Dataset created from Bittensor's subnet1. It will be constantly updated as I add more Q/A.
The dataset is currently in a "raw" format; I would love to have something prettier for loading into `datasets`. |
true | This is the same dataset as [`armanc/pubmed-rct20k`](https://huggingface.co/datasets/armanc/pubmed-rct20k).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of the indices, that is, 3 columns with the embeddings of 3 different sentence-transformers
- `all-mpnet-base-v2`
- `multi-qa-mpnet-base-dot-v1`
- `all-MiniLM-L12-v2`
1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library |
false |
## Kinyarwanda-English Augmented parallel text
This dataset contains 1,400,000 Kinyarwanda-English sentence pairs augmented from a 48,000-pair corpus from the [MbazaNLP dataset](https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset),
obtained by scraping web data from religious sources such as:
[Bible](https://servervideos.hopto.org/XMLBible/EnglishKJBible.xml)
[Quran](https://quranenc.com/en/home/download/csv/kinyarwanda_assoc)
This dataset has not been curated, only cleaned. |
false | AugQ-Wiki is an unsupervised augmented dataset for training retrievers used in AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation. It consists of 22.6M pseudo query-document pairs based on Wikipedia.
It follows the same license as Wikipedia (Creative Commons Attribution-Share-Alike License 3.0).
```
@article{meng2022augtriever,
title={AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation},
author={Meng, Rui and Liu, Ye and Yavuz, Semih and Agarwal, Divyansh and Tu, Lifu and Yu, Ning and Zhang, Jianguo and Bhat, Meghana and Zhou, Yingbo},
journal={arXiv preprint arXiv:2212.08841},
year={2022}
}
``` |
false | > 上述数据集为ABSA(Aspect-Based Sentiment Analysis)领域数据集,基本形式为从句子中抽取:方面术语、方面类别(术语类别)、术语在上下文中情感极性以及针对该术语的观点词,不同数据集抽取不同的信息,这点在jsonl文件的“instruction”键中有分别提到,在此我将其改造为了生成任务,需要模型按照一定格式生成抽取结果。
#### 以acos数据集中抽取的jsonl文件一条数据举例:
```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": "
Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words.
Input: A sentence
Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence.
Example:
Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\"
Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]'
"
}
```
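A minimal sketch of recovering the extracted tuples from a record (this assumes the `output` field is always a stringified Python list, as in the example above):
```python
import ast
import json

# one abbreviated line of the jsonl file (mirrors the record shown above)
line = '{"output": "[[\'computer\', \'laptop usability\', \'negative\', \'difficulty\']]"}'
record = json.loads(line)

# the "output" field is a Python-literal string: parse it into a list of 4-tuples
quads = ast.literal_eval(record["output"])
for term, category, polarity, opinion in quads:
    print(term, category, polarity, opinion)
```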
> The label and extra fields are left unset here. The instruction follows the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) each use the same instruction template per dataset, with slight differences in content; in some datasets, different records within the same dataset use different instruction content.
#### Original dataset
- Data [link](https://github.com/IsakZhang/ABSA-QUAD)
- Paper: [Aspect Sentiment Quad Prediction as Paraphrase Generation](https://aclanthology.org/2021.emnlp-main.726.pdf)
- Note: the original dataset consists of data from the Rest15 and Rest16 folders; in this adaptation the two are merged and split into train, validation, and test
#### Current SOTA
*Data from the [paper](https://arxiv.org/abs/2305.09193)*
- Evaluation metric: F1 score
- SOTA model: E2H-large (F1 score on Rest15: **52.39**, on Rest16: **61.86**)
- Paper: [Easy-to-Hard Learning for Information Extraction](https://arxiv.org/pdf/2305.09193.pdf)
- Note: this paper is one of those citing the original ABSA-QUAD paper, found via [Google Scholar](https://scholar.google.com/scholar?hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=13359676136585163616&scipsc=&q=&scisbd=1); I compared several works from 2023 and selected the best metric and model.
|
false | > 上述数据集为ABSA(Aspect-Based Sentiment Analysis)领域数据集,基本形式为从句子中抽取:方面术语、方面类别(术语类别)、术语在上下文中情感极性以及针对该术语的观点词,不同数据集抽取不同的信息,这点在jsonl文件的“instruction”键中有分别提到,在此我将其改造为了生成任务,需要模型按照一定格式生成抽取结果。
补充:SemEval-2014数据集文件夹中有两个文件夹"laptop"和"restaurant",其实根据数据集文本的主要围绕主题区分的。抽取的元素方面,laptop和restaurant两文件夹中,数据的抽取元素也不同,laptop抽取的是方面类别和情感极性、restaurant抽取的是{(方面术语,情感极性),(方面类别,情感极性)}的元素
#### 以acos数据集中抽取的jsonl文件一条数据举例:
```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": "
Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words.
Input: A sentence
Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence.
Example:
Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\"
Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]'
"
}
```
> The label and extra fields are left unset here. The instruction follows the string template shown above and provides one example for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) each use the same instruction template per dataset, with slight differences in content; in some datasets, different records within the same dataset use different instruction content.
#### Original dataset
- Data [link](https://alt.qcri.org/semeval2014/task4/)
- Paper: [SemEval-2014 Task 4: Aspect Based Sentiment Analysis](https://aclanthology.org/S14-2004/)
- Note: the data covers two topics, Laptop and Restaurant, placed in two separate folders; the extracted elements differ between the two topics
#### Current SOTA
*Data from [PaperWithCode](https://paperswithcode.com/sota)*
- [SemEval2014-Laptop](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-5)
  - Evaluation metric: F1-score
  - Model: InstructABSA (**79.34**)
  - Paper: [InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis](https://paperswithcode.com/paper/instructabsa-instruction-learning-for-aspect)
- [SemEval2014-Restaurant](https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval-5)
  - Evaluation metric: Accuracy (classification accuracy of the extraction)
  - Model: HGCN (**84.09**)
  - Paper: [Learn from Structural Scope: Improving Aspect-Level Sentiment Analysis with Hybrid Graph Convolutional Networks](https://paperswithcode.com/paper/learn-from-structural-scope-improving-aspect) |
true |
# NLP: Sentiment Classification Dataset
This is a bundle dataset for a NLP task of sentiment classification in English.
There is a sample project using this dataset: [GURA-gru-unit-for-recognizing-affect](https://github.com/NatLee/GURA-gru-unit-for-recognizing-affect).
## Content
- `myanimelist-sts`: This dataset is derived from MyAnimeList, a social networking and cataloging service for anime and manga fans. The dataset typically includes user reviews with ratings. We used [skip-thoughts](https://pypi.org/project/skip-thoughts/) to summarize them. You can find the original source of the dataset [myanimelist-comment-dataset](https://www.kaggle.com/datasets/natlee/myanimelist-comment-dataset) and the version is `2023-05-11`.
- `aclImdb`: The ACL IMDB dataset is a large movie review dataset collected for sentiment analysis tasks. It contains 50,000 highly polar movie reviews, divided evenly into 25,000 training and 25,000 test sets. Each set includes an equal number of positive and negative reviews. The source is from [sentiment](https://ai.stanford.edu/~amaas/data/sentiment/)
- `MR`: Movie Review Data (MR) is a dataset that contains 5,331 positive and 5,331 negative processed sentences/lines. This dataset is suitable for binary sentiment classification tasks, and it's a good starting point for text classification models. You can find the source [movie-review-data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) and the section is `Sentiment scale datasets`.
- `MPQA`: The Multi-Perspective Question Answering (MPQA) dataset is a resource for opinion detection and sentiment analysis research. It consists of news articles from a wide variety of sources annotated for opinions and other private states. You can get the source from [MPQA](https://mpqa.cs.pitt.edu/)
- `SST2`: The Stanford Sentiment Treebank version 2 (SST2) is a popular benchmark for sentence-level sentiment analysis. It includes movie review sentences with corresponding sentiment labels (positive or negative). You can obtain the dataset from [SST2](https://huggingface.co/datasets/sst2)
- `SUBJ`: The Subjectivity dataset is used for sentiment analysis research. It consists of 5000 subjective and 5000 objective processed sentences, which can help a model to distinguish between subjective and objective (factual) statements. You can find the source [movie-review-data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) and the section is `Subjectivity datasets`.
# Tokenizer
```python
from pathlib import Path
import pickle

from tensorflow.keras.preprocessing.text import Tokenizer


def check_data_path(file_path: str) -> bool:
    if Path(file_path).exists():
        print(f'[Path][OK] {file_path}')
        return True
    print(f'[Path][FAILED] {file_path}')
    return False


sentences = []

# =====================
# Anime Reviews
# =====================
dataset = './myanimelist-sts.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        X, Y = pickle.load(p)
    sentences.extend(X)
    sentences.extend(Y)

# =====================
# MPQA
# =====================
dataset = './MPQA.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mpqa = pickle.load(p)
    sentences.extend(list(mpqa.sentence))

# =====================
# IMDB
# =====================
dataset = './aclImdb.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        x_test, y_test, x_train, y_train = pickle.load(p)
    sentences.extend(x_train)
    sentences.extend(y_train)

# =====================
# MR
# =====================
dataset = './MR.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mr = pickle.load(p)
    sentences.extend(list(mr.sentence))

# =====================
# SST2
# =====================
dataset = './SST2.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        sst2 = pickle.load(p)
    sentences.extend(list(sst2.sentence))

# =====================
# SUBJ
# =====================
dataset = './SUBJ.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        subj = pickle.load(p)
    sentences.extend(list(subj.sentence))

# coerce every entry to str before tokenizing
sentences = map(str, sentences)

# tokenize the sentences
myTokenizer = Tokenizer(
    num_words=100,
    oov_token="{OOV}"
)
myTokenizer.fit_on_texts(sentences)
print(myTokenizer.word_index)

with open('./big-tokenizer.pkl', 'wb') as p:
    pickle.dump(myTokenizer, p)
```
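Once saved, the tokenizer can be reloaded and applied to new text, for example:
```python
import pickle

with open('./big-tokenizer.pkl', 'rb') as p:
    tok = pickle.load(p)

# words outside the num_words vocabulary map to the {OOV} token index
print(tok.texts_to_sequences(["this movie was great"]))
```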
|
false | # Dataset Card for Dataset Name
### Dataset Summary
The benchmark datasets for document-level machine translation.
### Supported Tasks
Document-level Machine Translation Tasks.
### Languages
English-German
## Dataset Structure
### Data Instances
TED: iwslt17, News: nc2016, Europarl: europarl7
### Data Fields
Plain text, where each line is a sentence and multiple lines separated by a '\<d\>' line form a document.
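As an illustration, a minimal sketch for reading this format (assuming documents are delimited by lines containing only '\<d\>'):
```python
def read_documents(path: str) -> list:
    """Split a sentence-per-line file into documents delimited by '<d>' lines."""
    docs, current = [], []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if line == "<d>":
                if current:
                    docs.append(current)
                current = []
            else:
                current.append(line)
    if current:
        docs.append(current)
    return docs

# each returned document is a list of sentence strings
```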
### Data Splits
train, dev, test
### Data Usage
This dataset was created for convenient use with https://github.com/baoguangsheng/g-transformer
|
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset Card for Dataset Name
### Dataset Summary
Text corpus dataset (FIFA World Cup 2022)
## Additional Information
### Citation Information
```
@misc{ enwiki:1154298520,
author = "{Wikipedia contributors}",
title = "2022 FIFA World Cup --- {Wikipedia}{,} The Free Encyclopedia",
year = "2023",
url = "https://en.wikipedia.org/w/index.php?title=2022_FIFA_World_Cup&oldid=1154298520"
}
``` |
false |
# Overview
SGDD-TST ([Schema-Guided Dialogue Dataset for Text Style Transfer](https://arxiv.org/abs/2206.09676)) is a dataset for evaluating the quality of content-similarity measures for text style transfer in the domain of personal plans. The original texts were obtained from [The Schema-Guided Dialogue Dataset](https://arxiv.org/pdf/1909.05855.pdf) and were paraphrased by a [T5-based model](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis) trained on the [GYAFC formality dataset](https://aclanthology.org/N18-1012/). The results were annotated by crowdsource workers using [Yandex.Toloka](https://toloka.yandex.ru/).
# File description
The file consists of the following columns:
- INPUT:text_first - the original text
- INPUT:text_second - the formality-transferred text
- OUTPUT:result - the automatically assigned annotation label (aggregated with the Dawid-Skene method)
- CONFIDENCE:result - confidence of the annotation
- vote_type -
- vote_different - number of votes for the option "The texts are completely different"
- vote_some_details_lost - number of votes for the option "The texts are similar but have significant differences"
- vote_OK - number of votes for the option "The texts mean the same or have minor differences"
- **average - an averaged score of content similarity. This score can be used for evaluating the quality of content-similarity measures, e.g., by calculating the Spearman rank correlation coefficient between these scores and automatic scores**
# Contact and Citations
If you have any questions feel free to drop a line to [Nikolay](mailto:bbkhse@gmail.com)
If you find this repository helpful, feel free to cite our publication:
```
@InProceedings{10.1007/978-3-031-08473-7_40,
author="Babakov, Nikolay
and Dale, David
and Logacheva, Varvara
and Krotova, Irina
and Panchenko, Alexander",
editor="Rosso, Paolo
and Basile, Valerio
and Mart{\'i}nez, Raquel
and M{\'e}tais, Elisabeth
and Meziane, Farid",
title="Studying the Role of Named Entities for Content Preservation in Text Style Transfer",
booktitle="Natural Language Processing and Information Systems",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="437--448",
abstract="Text style transfer techniques are gaining popularity in Natural Language Processing, finding various applications such as text detoxification, sentiment, or formality transfer. However, the majority of the existing approaches were tested on such domains as online communications on public platforms, music, or entertainment yet none of them were applied to the domains which are typical for task-oriented production systems, such as personal plans arrangements (e.g. booking of flights or reserving a table in a restaurant). We fill this gap by studying formality transfer in this domain.",
isbn="978-3-031-08473-7"
}
``` |
false |
# Dataset Card for Bulgarian QnA reasoning with ~2.7K entries.
### Dataset Summary
Contains a Parquet file with a list of instructions and answers.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE (reasoning_bg)
* METADATA (json with language, url, id).
### Original Dataset is available here:
* https://huggingface.co/datasets/reasoning_bg |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
For this dataset, we selected literary texts in Russian that are closest in style and subject matter to real diary entries, giving priority to texts written in the first person and paying considerable attention to the inner state of the characters. By parsing popular Internet resources with retellings of literary works, we obtained brief summaries for each of the works selected in the previous step and supplemented the dataset with them.
### Supported Tasks and Leaderboards
Summarization
### Languages
Russian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | # Dataset Card for Dataset: OktoberfestFoodDatasetPlus
## Dataset Description
- **Homepage: www.ilass.com**
- **Repository: https://github.com/ilassAG/OktoberfestFoodDataset**
- **Paper: https://arxiv.org/abs/1912.05007**
### Dataset Summary
This dataset comprises three categories: drinkServed, foodServed, person.
Part of it consists of real camera footage annotated by hand, while the rest is synthetically generated and annotated data.
A demo space is available to view results after training on the YOLOv8 platform:
https://huggingface.co/spaces/ilass/yolov8_foodServed_drinkServed_Person
### Annotations
#### Annotation process
1000 images were annotated by hand.
1000 person images were sourced from COCO.
3000 images were synthetically produced and annotated.
|
false | |
false | # KiriTrash Dataset
## Summary
KiriTrash is a collection of trash images taken on the shorelines of Tarawa Atoll, Kiribati.
This is a dataset I used for my own research.
## Dataset Description
+ Dataset format: COCO format
+ Number of images: 650 training, 90 validation, 5 test
+ Preprocessing: auto-oriented, resized to 640x640
+ Classes: 1 class
+ Augmentations: horizontal flip, bounding-box exposure from -17% to +17%
## Cite
I would really appreciate it if you cite my [GitHub homepage](https://github.com/tbensap18) when using this dataset.
## License
odc-by |
false |
# Dataset Card for GSM QnA reasoning with ~8.8K entries.
### Dataset Summary
Contains a Parquet file with a list of instructions and answers.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (json with language).
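A minimal reading sketch (the file name is a placeholder, and whether METADATA needs `json.loads` depends on how the Parquet was written):
```python
import json
import pandas as pd

df = pd.read_parquet("data.parquet")  # placeholder file name

row = df.iloc[0]
meta = row["METADATA"]
if isinstance(meta, str):  # METADATA may already be a dict, depending on serialization
    meta = json.loads(meta)
print(row["INSTRUCTION"], row["RESPONSE"], meta.get("language"))
```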
### Original Datasets are available here:
* https://huggingface.co/datasets/gsm8k
* https://huggingface.co/datasets/reasoning-machines/gsm-hard |
false | # Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on [MBZUAI/LaMini-instruction](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). The dataset was generated with a total of 2.58 million pairs of instructions and responses, which were later used to fine-tune the LaMini-LM model series.
This dataset utilizes GPT-3.5-turbo and is based on several existing resources of prompts, including self-instruct (Wang et al., 2022), P3 (Sanh et al., 2022), FLAN (Longpre et al., 2023), and Alpaca (Taori et al., 2023).
For more information about the process of generating instruction dataset, please refer to [the accompanying paper](https://arxiv.org/abs/2304.14402).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
### Special Thanks:
- Mr. Harris Boonkerd (Data Annotator)
### Languages: Thai
### Version: 1.0
|
false | # Dataset Card for "github-code-haskell-function"
Rows: 3.26M
Download Size: 1.17GB
This dataset is extracted from [github-code-haskell-file](https://huggingface.co/datasets/blastwind/github-code-haskell-file).
Each row has 3 flavors of the same function:
`uncommented_code`: Includes the function and its closest signature.
`function_only_code`: Includes the function only.
`full_code`: Includes the function and its closest [signature](https://wiki.haskell.org/Type_signature) and comment.
The heuristic for finding the closest signature and comment is as follows: if the function's immediately preceding neighbor is neither a signature nor a comment, `full_code` is just the function. If the preceding neighbor is a signature or comment, it is included appropriately, and the search then continues from that neighbor for the other node using the same logic.
Further, each row also contains attribute values for my personal analysis project. The attributes are calculated from the code in column `uncommented_code`.
7% (225k) of the rows have cyclomatic complexity and LOC valued at `-1` because [`homplexity`](https://github.com/BlastWind/homplexity) failed to parse the row's `uncommented_code`.
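A minimal loading sketch (the repository id is assumed from the linked sibling dataset and may differ):
```python
from datasets import load_dataset

# assumed repository id, mirroring the linked github-code-haskell-file dataset
ds = load_dataset("blastwind/github-code-haskell-function", split="train")

row = ds[0]
print(row["full_code"])         # function + closest signature and comment
print(row["uncommented_code"])  # function + closest signature only
```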
|
true | |
true |
# Dataset Card for russe-semantics-sim with ~200K entries. Russian language.
### Dataset Summary
License: MIT. Contains a CSV listing word1, word2, their `connection score` (whether they are synonyms or associations), and the type of connection.
### Original Datasets are available here:
- https://github.com/nlpub/russe-evaluation |
false | # Dataset Card for "code-search-net-php"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-go
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the PHP portion of CodeSearchNet, annotated with a summary column.
The code-search-net dataset includes open-source functions with comments, found on GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are written in PHP.
### Data Splits
Train, test, validation labels are included in the dataset as a column.
## Dataset Creation
May 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This datasets include a summary column including a short description of the function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython.
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries (some may still be present in the dataset).
### Licensing Information
Apache 2.0 |
false | # Dataset Card for "clts"
[original link](https://github.com/lxj5957/CLTS-Dataset)
|
false | |
false |
# Dataset Card for all_combined_odia_171K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is a mix of Odia instruction sets translated from open-source instruction sets.
The Odia instruction sets used are:
* dolly-odia-15k
* OdiEnCorp_translation_instructions_25k
* gpt-teacher-roleplay-odia-3k
* Odia_Alpaca_instructions_52k
* hardcode_odia_qa_105
In this dataset Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
output (string)
data_source (string)
instruction (string)
input (string)
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
false | # Instruction Tuning with GPT 4 RedPajama-Chat
This dataset has been converted from the <a href="https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM" target="_new">Instruction-Tuning-with-GPT-4</a> dataset for the purpose of fine-tuning the <a href="https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1" target="_new">RedPajama-INCITE-Chat-3B-v1</a> model.
## About Instruction-Tuning-with-GPT-4
English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
### Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
|
false | 为https://huggingface.co/TMZN/ChatGLM-wyw 服务的数据集之一。
# ChatGLM-wyw
一个读了文言文的ChatGLM
# 缘起
2023年5月16日,念叨了好久要让AI读文言文正式开工。<br>
# 感谢
一站式整合包(含chatglm模型):链接:https://pan.baidu.com/s/13GePNuh8ZP_DkMVRf5sHqw?pwd=2d2z
一站式整合包(不含模型):链接:https://pan.baidu.com/s/1lMfG34jerHO7aFjfdKTGUw?pwd=6y7j
数据集制作大佬链接:https://github.com/huang1332/finetune_dataset_maker
模型微调大佬链接:https://github.com/mymusise/ChatGLM-Tuning
ChatGLM官方链接:https://github.com/THUDM/ChatGLM-6B
|
false | |
false | |
false |