id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
MasterThesisCBS/Court_Decisions_Lovdata | 2023-04-16T10:31:02.000Z | [
"task_categories:text-generation",
"language:no",
"language:nb",
"license:cc-by-4.0",
"summarization",
"region:us"
] | MasterThesisCBS | null | null | null | 1 | 4 | ---
license: cc-by-4.0
language:
- 'no'
- nb
tags:
- summarization
pretty_name: LovData XSUM
task_categories:
- text-generation
dataset_info:
features:
- name: Summary
dtype: string
- name: KeyWords
dtype: string
- name: Full_Text
dtype: string
- name: length
dtype: int64
- name: Summary_w/o_paragraph
dtype: string
- name: KeyWords_w/o_paragraph
dtype: string
- name: Full_Text_w/o_paragraph
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 397486921
num_examples: 26539
- name: test
num_bytes: 4564831
num_examples: 1397
download_size: 201357343
dataset_size: 402051752
---
# Summarization dataset for Norwegian Court Decisions
This data was scraped from www.lovdata.no in April 2023 and contains about 27k samples.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("MasterThesisCBS/Court_Decisions_Lovdata")
```
### Dataset Curators
[John Oskar Holmen Skjeldrum](mailto:josk18ad@student.cbs.dk) and [Peder Tanberg](mailto:peha28ae@student.cbs.dk) |
RyokoAI/ScribbleHub17K | 2023-04-03T23:21:16.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"novel",
"training",
"story",
"region:us"
] | RyokoAI | null | null | null | 2 | 4 | ---
license: apache-2.0
language:
- en
tags:
- novel
- training
- story
task_categories:
- text-classification
- text-generation
pretty_name: ScribbleHub17K
size_categories:
- 100K<n<1M
---
# Dataset Card for ScribbleHub17K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on the
original story sharing site [Scribble Hub](https://scribblehub.com).
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* English
## Dataset Structure
### Data Instances
```json
{
"text": " \n2082 Planet Earth the Fracture War, after a sudden fracture in our dimension unidentified beings with advance technology and u...",
"meta": {
"subset": "scribblehub",
"series": "3811",
"id": "3812",
"q": 0.91,
"title": "The First - Prologue- The Fracture War",
"author": "RobotLove",
"chapters": 1,
"rating": 5,
"rating_ct": 1,
"genre": [
"Action",
"Martial Arts",
"Romance"
],
"tags": [
"Kingdom Building",
"Loyal Subordinates",
"Male Protagonist",
"Organized Crime",
"Scheming"
]
}
}
{
"text": " For anyone that may see this, thanks for reading. I'm just here to see if a story can spill out of my mind if just start writin...",
"meta": {
"subset": "scribblehub",
"series": "586090",
"id": "586099",
"q": 0.82,
"title": "Just writing to write…i guess? - I’m here now",
"author": "BigOofStudios",
"chapters": 1,
"rating": 4.5,
"rating_ct": 2,
"genre": [
"Action",
"Comedy"
],
"tags": []
}
}
```
### Data Fields
* `text`: the actual chapter text
* `meta`: metadata for chapter and series
* `subset`: data source tag: `scribblehub`
* `series`: series ID
* `id`: chapter ID
* `lang`: always `en` (English)
* `q`: quality score (q-score) between 0.0 (terrible) and 1.0 (perfect); anything with a score `> 0.5` is generally good enough
* `title`: chapter and series title in the format `<chapter title> - <series title>`
* `chapters`: total number of chapters in the series
* `rating`: Scribble Hub rating between 0 and 5 stars
* `rating_ct`: number of ratings
* `author`: author name
* `genre`: array of Scribble Hub genres for the series
* `tags`: array of tags for the series
#### Q-Score Distribution
```
0.00: 0
0.10: 0
0.20: 0
0.30: 84
0.40: 718
0.50: 3775
0.60: 22300
0.70: 72581
0.80: 137982
0.90: 135800
1.00: 59
```
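In practice the q-score above is used as a filter. The snippet below is a minimal, illustrative sketch over plain Python dicts shaped like the instances shown earlier; the 0.5 cutoff and the first two sample records come from this card, while the third low-scoring record is made up for the example.

```python
# Keep only chapters whose q-score clears the 0.5 threshold suggested
# above. Records mirror the layout shown in "Data Instances"; in a real
# run they would be read from the dataset files instead.
records = [
    {"text": "...", "meta": {"id": "3812", "q": 0.91}},
    {"text": "...", "meta": {"id": "586099", "q": 0.82}},
    {"text": "...", "meta": {"id": "000000", "q": 0.42}},  # made-up low-quality record
]

good = [r for r in records if r["meta"]["q"] > 0.5]
print([r["meta"]["id"] for r in good])  # → ['3812', '586099']
```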
### Data Splits
No splitting of the data was performed.
## Dataset Creation
### Curation Rationale
Scribble Hub is a home for original web stories, effectively a smaller, English-language version of Japan's Syosetuka ni Narou. As a
result, it is a good source of reasonably well-written creative content.
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Title, ratings, and other metadata were parsed out using scripts that will be provided in the BigKnow2022 GitHub repository.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect
the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.**
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset. |
emre/stanford-alpaca-cleaned-turkish-translated | 2023-04-08T21:28:43.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:tr",
"license:afl-3.0",
"region:us"
] | emre | null | null | null | 14 | 4 | ---
license: afl-3.0
task_categories:
- text-generation
language:
- tr
size_categories:
- 10K<n<100K
---
09/04/2023 Update:
New instructions added from: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
Original Version: https://github.com/tatsu-lab/stanford_alpaca#data-release
AI-based translation of the Stanford Alpaca dataset from English (EN) to Turkish (TR).
For academic use only; please cite it before use.
Taşar, D. E. T. (2023). stanford-alpaca-cleaned-turkish-translated [Dataset]. In Stanford Alpaca TR (1.0.1.a). https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca-tr-tasar-2023,
author = {Taşar, Davut Emre},
title = {stanford-alpaca-cleaned-turkish-translated},
year = {2023},
publisher = {Huggingface},
journal = {Huggingface repository},
howpublished = {\url{https://huggingface.co/datasets/emre/stanford-alpaca-cleaned-turkish-translated}},
}
``` |
SkyHuReal/DrugBank-Alpaca | 2023-04-03T17:37:30.000Z | [
"license:afl-3.0",
"region:us"
] | SkyHuReal | null | null | null | 0 | 4 | ---
license: afl-3.0
---
|
IES-Rafael-Alberti/letras-carnaval-cadiz | 2023-06-04T11:51:32.000Z | [
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"lyrics",
"carnival",
"cadiz",
"region:us"
] | IES-Rafael-Alberti | This dataset is a comprehensive collection of lyrics from the Carnaval de Cádiz, a significant cultural heritage of the city of Cádiz, Spain. Despite its cultural importance, there has been a lack of a structured database for these lyrics, hindering research and public access to this cultural heritage. This dataset aims to address this gap.
The dataset was created by the Cádiz AI Learning Community, a branch of the non-profit association Spain AI, and was developed by Iván Romero Reyna and Jesús Federico Franco Medinilla, students of the Specialization Course in Artificial Intelligence and Big Data at IES Rafael Alberti during the 2022-2023 academic year. The project is supervised by Jesús Carlos Avecilla de la Herrán, a computational linguist.
Collaboration is encouraged, with individuals able to verify the different records of the dataset at letrascarnavalcadiz.com, ensuring the transcription of the lyrics and all data are correct. New lyrics can also be added to the dataset. Corrections and additions are not immediately reflected in the dataset but are updated periodically.
For more information or to report a problem, you can write to contacto@letrascarnavalcadiz.com. | @misc{letrascarnavalcadiz2023,
author = {Romero Reyna, Iván and Franco Medinilla, Jesús Federico and Avecilla de la Herrán, Jesús Carlos},
title = {letras-carnaval-cadiz},
year = {2023},
url = {https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz}
} | null | 1 | 4 | ---
annotations_creators:
- no-annotation
language:
- es
language_creators:
- machine-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: letrascarnavalcadiz
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- lyrics
- carnival
- cadiz
task_categories: []
task_ids: []
---
# Dataset Card for Letras Carnaval Cádiz

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz/blob/main/README_es.md">Español</a>
<p>
</h4>
## Dataset Description
- **Homepage:** https://letrascarnavalcadiz.com
- **Repository:** https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz
- **Point of Contact:** contacto@letrascarnavalcadiz.com
### Changelog
|Release|Description|
|-|-|
|v1.0| Initial release of the dataset. Includes more than 1K lyrics. The accuracy of the data still needs to be verified, especially the "midaccurate" subset. |
### Dataset Summary
This dataset is a comprehensive collection of lyrics from the Carnaval de Cádiz, a significant cultural heritage of the city of Cádiz, Spain. Despite its cultural importance, there has been a lack of a structured database for these lyrics, hindering research and public access to this cultural heritage. This dataset aims to address this gap.
The dataset was created by the Cádiz AI Learning Community, a branch of the non-profit association Spain AI, and was developed by Iván Romero Reyna and Jesús Federico Franco Medinilla, students of the Specialization Course in Artificial Intelligence and Big Data at IES Rafael Alberti during the 2022-2023 academic year. The project is supervised by Jesús Carlos Avecilla de la Herrán, a computational linguist.
Collaboration is encouraged, with individuals able to verify the different records of the dataset at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com), ensuring the transcription of the lyrics and all data are correct. New lyrics can also be added to the dataset. Corrections and additions are not immediately reflected in the dataset but are updated periodically.
For more information or to report a problem, you can write to [contacto@letrascarnavalcadiz.com](mailto:contacto@letrascarnavalcadiz.com).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Spanish, reflecting the language of the Carnaval de Cádiz.
## Dataset Structure
### Data Instances
A typical instance in the dataset is formatted in JSON and contains the following fields:
```json
{
"id": "9de8647521b728c45ff45c1c11208708d055397fd7781b31cf91b473dff224d5",
"authors": ["Juan Carlos Aragón Becerra"],
"song_type": 2,
"year": "2018",
"group": "Los Mafiosos",
"group_type": 2,
"lyrics": [
"Mujer va llegando el momento",
"de ser la que lleve la rienda",
"el camino ha sido largo y polvoriento",
"pero ya no habrá varón que te detenga",
"gritad larga vida a la reina",
"que va a comenzar tu gobierno",
"ojalá no heredes nada",
"de aquel macho que te odiaba",
"porque en el fondo sabía",
"que ya tú te le acercabas",
"y el contigo no podía",
"ten en cuenta cuando hagas justicia",
"de volver a nivelar la balanza",
"y aguantar aunque tragando saliva",
"el deseo de venganza",
"de ser oh humano fatal",
"de ser o que puedo entender",
"tan solo con una mirada",
"la llaga que baña tu alma y tu piel",
"que te sirva la experiencia",
"del macho de la manada",
"la fuerza no vale nada",
"si no es con la inteligencia",
"y ojalá que tu conciencia",
"a mí me brinde la suerte",
"de nunca volver a verte",
"con los pies en una iglesia",
"que ella fue quien escribió",
"que ella fue quien escribió",
"la historia contra vosotras",
"y encima se la cobró",
"y encima se la cobró",
"con mil millones de devotas",
"ojalá que tu corona y tu bandera",
"abran paso a una vida nueva",
"como un mundo en primavera",
"ojalá que a ti no te envenene el poder",
"y que no dejes nunca de ser la mujer",
"que siempre fue nuestra gran compañera"
]
}
```
The `id` field uniquely identifies each instance in the dataset, providing a way to reference specific entries. The `authors`, `song_type`, `year`, `group`, and `group_type` fields provide context for the lyrics, while the `lyrics` field itself contains the actual text of the song. The relationships between these fields are implicit in the structure of the dataset, with each instance representing a single song from the Carnaval de Cádiz.
### Data Fields
`id`
Unique identifier for each song in the dataset. A SHA-256 hash calculated from the first four verses of the lyrics and the group name, with all spaces removed and converted to lowercase (string).
`authors`
List of authors who have written the song (string array).
`song_type`
The type of song (1: presentación, 2: pasodoble/tango, 3: cuplé, 4: estribillo, 5: popurrí, 6: cuarteta).
`year`
Year the song was written or performed (string).
`group`
Name of the group that performed the song (string).
`group_type`
The type of the group (1: coro, 2: comparsa, 3: chirigota, 4: cuarteto).
`lyrics`
The lyrics of the song, represented as an array of verses (string array).
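The `id` scheme described above can be sketched in a few lines. Note that the exact concatenation order and separator are assumptions not spelled out in this card, so this sketch is not guaranteed to reproduce the ids shipped with the dataset.

```python
import hashlib

def song_id(lyrics, group):
    # Sketch of the id scheme: SHA-256 over the first four verses plus
    # the group name, with all spaces removed and everything lowercased.
    # Concatenation order/separator are assumptions.
    raw = "".join(lyrics[:4]) + group
    normalized = raw.replace(" ", "").lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

verses = [
    "Mujer va llegando el momento",
    "de ser la que lleve la rienda",
    "el camino ha sido largo y polvoriento",
    "pero ya no habrá varón que te detenga",
]
print(song_id(verses, "Los Mafiosos"))  # 64-character hex digest
```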
### Data Splits
This dataset does not have traditional training, validation, and test splits. Instead, it is divided into two subsets: "accurate" and "midaccurate".
The "accurate" subset contains 958 instances. All fields of the first 957 instances in this subset were obtained through web scraping and have undergone at least one human review for accuracy. The rest have been added by users at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com).
The "midaccurate" subset contains 226 instances. The 'group' and 'lyrics' fields in this subset were collected through web scraping, but the remaining fields were filled in by querying language models connected to the Internet. Therefore, the data in these fields may not be accurate.
| Subset | Instances |
|-------------|----------:|
| Accurate | 958 |
| Midaccurate | 226 |
Please note that the division into subsets is based on the method and reliability of data collection, rather than a random or stratified split typically used in machine learning tasks. Users of the dataset should consider this when deciding how to use the data.
## Dataset Creation
### Curation Rationale
The dataset was created to address a significant need in the cultural heritage of the city of Cádiz, Spain. The Carnaval de Cádiz is a major cultural event, yet there was no structured database of its lyrics that could be consulted for research or public access. This lack of a structured database hindered the exploration and appreciation of this cultural heritage. The dataset was curated to respond to this need.
### Source Data
#### Initial Data Collection and Normalization
The initial collection of lyrics was carried out through automatic scraping of various websites and multimedia content on the Internet. To maximize the number of records with minimal effort, all collection is being done using different Artificial Intelligence models.
#### Who are the source language producers?
The source language producers of the dataset are the authors and performers of the songs from the Carnaval de Cádiz. These include a wide range of individuals and groups who have participated in the Carnaval over the years. The dataset does not include self-reported demographic or identity information for these individuals or groups.
The data in the dataset was collected from two websites: https://www.alsondelcarnaval.es and http://letrasdesdeelparaiso.blogspot.com. The first 957 instances of the "accurate" subset were collected from the former, while the "midaccurate" subset was collected from the latter. The data was extracted through automatic web scraping, and in the case of the "midaccurate" subset, some fields were filled in by querying language models connected to the Internet.
The rest of the "accurate" subset has been added by users at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com).
### Personal and Sensitive Information
The only sensitive information in the dataset is the names and surnames of the authors of the lyrics.
## Considerations for Using the Data
### Social Impact of Dataset
The use of this dataset has significant social impact.
Firstly, this dataset can positively contribute to the understanding and preservation of Cadiz's culture and traditions, as the Carnaval de Cádiz is an integral part of the city's cultural identity. By providing an accessible and easily searchable resource for carnival song lyrics, this dataset can assist cultural researchers, linguists, and the general public in better understanding and appreciating the rich tradition of the Carnaval de Cádiz.
Additionally, this dataset can be utilized to enhance natural language processing (NLP) technologies in Spanish, a language that can sometimes be underrepresented in NLP research. By providing a high-quality, culture-specific Spanish text corpus, this dataset can aid in improving the accuracy and cultural relevance of Spanish NLP models.
However, there are also risks associated with the use of this dataset. For instance, if used to train text generation models, these models could generate content that reinforces cultural stereotypes or perpetuates existing biases. Moreover, the automatic interpretation of carnival song lyrics can be challenging due to cultural and linguistic subtleties, and errors in this interpretation could lead to misunderstandings or misrepresentations of Cadiz's culture.
Finally, although this dataset does not contain a low-resource or underrepresented language, it does focus on a specific cultural tradition from a specific region of Spain. Therefore, its use can impact the Cadiz community by helping to preserve and disseminate its unique culture and traditions.
### Discussion of Biases
The dataset is subject to several biases due to the nature of the data collection and the historical context of the Cadiz Carnival.
Firstly, there is a temporal bias in the dataset. More recent lyrics are overrepresented compared to older ones, as there is more information available on the internet about modern groups. This may lead to a skewed understanding of the evolution of the Carnival's themes over time.
Secondly, the dataset exhibits a popularity bias. Lyrics from more popular groups are overrepresented because individuals have chosen to write about them more frequently. This could potentially limit the diversity of styles and themes represented in the dataset.
Thirdly, there is a competition bias. Lyrics from groups that advanced further in the competition stages are overrepresented, resulting in more available lyrics from these groups. This might lead to an overemphasis on the styles and themes that tend to be more successful in the competition.
Lastly, the dataset reflects a gender bias. Given that there have historically been more male authors than female authors in the Cadiz Carnival, the majority of the dataset consists of lyrics written by men. This could potentially limit the representation of diverse perspectives and themes in the lyrics.
To mitigate these biases, we actively encourage the participation of the community. By verifying the different records of the dataset, reviewing the transcription of the lyrics and all the data for accuracy, and adding new lyrics, we hope to broaden the diversity and representation.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Iván Romero Reyna. Student of the Specialisation Course in Artificial Intelligence and Big Data at [IES Rafael Alberti](https://iesrafaelalberti.es) during the academic year 2022-2023.
- Jesús Federico Franco Medinilla. Student of the Specialisation Course in Artificial Intelligence and Big Data at [IES Rafael Alberti](https://iesrafaelalberti.es) during the academic year 2022-2023.
- Jesús Carlos Avecilla de la Herrán. Promoter in [Cádiz AI](https://www.spain-ai.com).
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)
### Citation Information
```
@misc{letrascarnavalcadiz2023,
author = {Romero Reyna, Iván and Franco Medinilla, Jesús Federico and Avecilla de la Herrán, Jesús Carlos},
title = {letras-carnaval-cadiz},
year = {2023},
url = {https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz}
}
```
### Contributions
Thanks to [@ivanro](https://huggingface.co/ivanro), [@jframed281](https://huggingface.co/jframed281) for adding this dataset.
Thanks to all the reviewers and contributors at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com). |
Devika03/Research_Paper_Summarization_Dataset | 2023-04-05T05:59:26.000Z | [
"region:us"
] | Devika03 | null | null | null | 2 | 4 | Entry not found |
harpomaxx/dga-detection | 2023-05-10T13:32:11.000Z | [
"license:cc-by-2.0",
"region:us"
] | harpomaxx | A dataset containing both DGA and normal domain names. The normal domain names were taken from the Alexa top one million domains. An additional 3,161 normal
domains were included in the dataset, provided by the Bambenek Consulting feed. This latter group is particularly interesting since it consists of suspicious domain
names that were not generated by DGA. Therefore, the total number of normal domains in the dataset is 1,003,161. DGA domains were obtained from the repositories
of DGA domains of Andrey Abakumov and John Bambenek. The total number of DGA domains is 1,915,335, and they correspond to 51 different malware families.
About 55% of the DGA portion of the dataset is composed of samples from the Banjori, Post, Timba, Cryptolocker,
Ramdo and Conficker malware. | null | null | 2 | 4 | ---
license: cc-by-2.0
---
A dataset containing both DGA and normal domain names. The normal domain names were taken from the Alexa top one million domains.
An additional 3,161 normal domains were included in the dataset, provided by the Bambenek Consulting feed. This latter group is particularly interesting since it consists
of suspicious domain names that were not generated by DGA. Therefore, the total number of normal domains in the dataset is 1,003,161. DGA domains
were obtained from the repositories of DGA domains of [Andrey Abakumov](https://github.com/andrewaeva/DGA) and [John Bambenek](http://osint.bambenekconsulting.com/feeds/).
The total number of DGA domains is 1,915,335, and they correspond to 51 different malware families.
About 55% of the DGA portion of the dataset is composed of samples from the Banjori, Post, Timba, Cryptolocker, Ramdo and Conficker malware.
The DGA generation schemes followed by the malware families include the simple arithmetic-based (A) scheme and the more recent word-based (W) scheme.
Under the arithmetic scheme, the algorithm usually calculates a sequence of values that have a direct ASCII representation usable for a domain name.
On the other hand, the word-based scheme consists of concatenating a sequence of words from one or more wordlists. |
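The two generation schemes described in this card can be illustrated with toy generators. These are purely illustrative sketches (the recurrence and the wordlist are made up and do not correspond to any real malware family):

```python
import random

def arithmetic_dga(seed, length=12):
    # Arithmetic (A) scheme, toy version: derive a value sequence from a
    # seed and map each value directly to an ASCII letter.
    rng = random.Random(seed)
    name = "".join(chr(ord("a") + rng.randrange(26)) for _ in range(length))
    return name + ".com"

def wordlist_dga(seed, words=("cloud", "secure", "mail", "update")):
    # Word-based (W) scheme, toy version: concatenate words drawn from a
    # wordlist (this wordlist is invented for the example).
    rng = random.Random(seed)
    return rng.choice(words) + rng.choice(words) + ".net"

print(arithmetic_dga(2023))
print(wordlist_dga(2023))
```

Both generators are deterministic given the seed, which is what makes real DGA domains predictable to the malware operator while looking random to everyone else.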
Netruk44/uesp-wiki-content | 2023-04-10T20:20:54.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:other",
"region:us"
] | Netruk44 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: namespace
dtype: int64
- name: page_id
dtype: int64
- name: url
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: revision_id
dtype: int64
- name: timestamp
dtype: string
- name: contributor
dtype: string
- name: content_cleaned
dtype: string
splits:
- name: train
num_bytes: 757966297
num_examples: 324930
download_size: 363485644
dataset_size: 757966297
license: other
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "uesp-wiki-content"
This dataset contains the content of the pages from the [Unofficial Elder Scrolls Pages](https://en.uesp.net/wiki/Main_Page).
**License**: The contents of this dataset are licensed under the [Creative Commons by-sa 2.5 License](http://creativecommons.org/licenses/by-sa/2.5/).
**Source**:
* The content of this dataset was taken from the [dumps](http://dumps.uesp.net/) subdomain of [uesp.net](https://uesp.net)
* The contents of this dataset come from the file named "`uespwiki-2022-02-09-current.xml.bz2`", as that was the most recent version available at the time of dataset creation.
* The archive file was processed by [mediawiki-dump](https://github.com/macbre/mediawiki-dump)
* Using [my own fork](https://github.com/Netruk44/mediawiki-dump/tree/namespace-fix) to fix a bug with cleaning the text.
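As a rough illustration of how such a dump can be consumed (a generic sketch, not the exact mediawiki-dump pipeline used for this dataset), a MediaWiki export XML can be streamed page-by-page with the Python standard library:

```python
import bz2
import io
import xml.etree.ElementTree as ET

def _local(tag):
    # Strip any XML namespace, e.g. "{http://...}page" -> "page".
    return tag.rsplit("}", 1)[-1]

def iter_pages(fileobj):
    # Stream (title, wikitext) pairs from a MediaWiki export XML stream
    # without loading the whole dump into memory.
    for _, elem in ET.iterparse(fileobj):
        if _local(elem.tag) == "page":
            title = text = None
            for child in elem.iter():
                if _local(child.tag) == "title":
                    title = child.text
                elif _local(child.tag) == "text":
                    text = child.text
            yield title, text
            elem.clear()  # free memory for processed pages

# Tiny inline demo; a real run would use something like:
#   iter_pages(bz2.open("uespwiki-2022-02-09-current.xml.bz2", "rb"))
sample = io.BytesIO(
    b"<mediawiki><page><title>Skyrim:Whiterun</title>"
    b"<revision><text>Whiterun is a city...</text></revision>"
    b"</page></mediawiki>"
)
print(list(iter_pages(sample)))  # → [('Skyrim:Whiterun', 'Whiterun is a city...')]
```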
**Caveats**:
* The `content_cleaned` column has some known issues.
* Words may occasionally be missing if they were a special link type in the original content. |
asgaardlab/GameplayCaptions | 2023-04-07T14:38:12.000Z | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"Gameplay",
"region:us"
] | asgaardlab | null | null | null | 3 | 4 | ---
dataset_info:
features:
- name: img_id
dtype: string
- name: game
dtype: string
- name: image
dtype: image
- name: blip2-opt-6.7b_captions.csv
dtype: string
- name: coca_captions.csv
dtype: string
- name: git-large-coco_captions.csv
dtype: string
- name: git-large-r-textcaps_captions.csv
dtype: string
- name: vit-gpt2_captions.csv
dtype: string
splits:
- name: validation
num_bytes: 69110393094.684
num_examples: 75979
download_size: 66660916127
dataset_size: 69110393094.684
license: apache-2.0
task_categories:
- image-to-text
- text-to-image
language:
- en
tags:
- Gameplay
pretty_name: Gameplay Captions
size_categories:
- 10K<n<100K
---
# Dataset Card for "Gameplay Captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
grosenthal/latin_english_parallel | 2023-04-28T02:11:31.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:la",
"language:en",
"license:mit",
"region:us"
] | grosenthal | null | null | null | 3 | 4 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: la
dtype: string
- name: en
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 39252644
num_examples: 99343
- name: test
num_bytes: 405056
num_examples: 1014
- name: valid
num_bytes: 392886
num_examples: 1014
download_size: 25567350
dataset_size: 40050586
license: mit
task_categories:
- translation
language:
- la
- en
pretty_name: Latin to English Translation Pairs
size_categories:
- 10K<n<100K
---
# Dataset Card for "latin_english_parallel"
101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation.
For the pairs gathered from the Loeb Classical Library, alignment between source and target sequences was performed manually. Additionally, the English translations were both (1) copyrighted and (2) outdated. As such, we decided to modernize and transform them into translations that can be used in the public domain, as the original Latin is not copyrighted.
To perform this, we used the gpt3.5-turbo model on OpenAI with the prompt `Translate an old dataset from the 1800s to modern English while preserving the original meaning and exact same sentence structure. Retain extended adjectives, dependent clauses, and punctuation. Output the translation preceded by the text "Modern Translation: ". If a given translation is not a complete sentence, repeat the input sentence. \n'` followed by the source English.
We then manually corrected all outputs that did not conform to the standard.
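A minimal sketch of how the modernization request above could be assembled. The prompt string is the one quoted above; the message layout and the helper function are illustrative, and the actual API call is omitted.

```python
# Prompt quoted from this card; only the payload construction is shown.
MODERNIZE_PROMPT = (
    "Translate an old dataset from the 1800s to modern English while "
    "preserving the original meaning and exact same sentence structure. "
    "Retain extended adjectives, dependent clauses, and punctuation. "
    'Output the translation preceded by the text "Modern Translation: ". '
    "If a given translation is not a complete sentence, repeat the input "
    "sentence. \n"
)

def build_request(source_english):
    # Chat-completion payload for gpt-3.5-turbo: the fixed prompt
    # followed by the source English sentence.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": MODERNIZE_PROMPT + source_english}
        ],
    }

req = build_request("He hath departed the city.")
print(req["messages"][0]["content"][:24])  # prints "Translate an old dataset"
```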
Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them.
 |
andrewsunanda/fast_food_image_classification | 2023-04-08T06:53:22.000Z | [
"task_categories:image-classification",
"language:en",
"region:us"
] | andrewsunanda | null | null | null | 1 | 4 | ---
task_categories:
- image-classification
language:
- en
--- |
0x7194633/value_determinant | 2023-04-09T06:46:02.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 0x7194633 | null | null | null | 0 | 4 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: Value Determinant
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
JayZhang1/Verilogdata4pretrainCODET5 | 2023-04-09T10:17:01.000Z | [
"region:us"
] | JayZhang1 | null | null | null | 0 | 4 | Entry not found |
Yairama/alpaca_miner_dataset | 2023-04-11T07:05:13.000Z | [
"license:gpl-3.0",
"region:us"
] | Yairama | null | null | null | 0 | 4 | ---
license: gpl-3.0
---
# A dataset of mining engineering generated with ChatGPT & BinGPT
I take as base the [colorado school of mines - mining engineering syllabus](https://catalog.mines.edu/undergraduate/programs/miningengineering/miningengineering.pdf) |
rexarski/TCFD_disclosure | 2023-04-25T14:06:34.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"climate",
"region:us"
] | rexarski | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: question
dtype: string
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 191356
num_examples: 593
download_size: 58524
dataset_size: 191356
license: mit
task_categories:
- text-classification
language:
- en
tags:
- climate
pretty_name: Sentence dataset extracted from TCFD recommendations for climate disclosure
category classification.
size_categories:
- n<1K
---
# Dataset Card for "TCFD_disclosure"
### Dataset Summary
This dataset was created to aid our team in developing a model to address two climate-related tasks: Fact Checking and TCFD Classification, both of which are discussed below.
These two tasks are believed to be solvable with a BERT language model, as identified by the [ClimateBERT](https://climatebert.ai/about) team. However, conclusive benchmarks or model weights for these
tasks were never released, leading our team to develop its own approach to these problems.
### Data Fields
Our dataset contains 540 records, each of which is composed of several attributes:
- `question`: a `string` feature, provides additional detail about the particular TCFD category a particular document is labeled as.
- `text`: a `string` feature, contains the raw text of the sentence from the document that best characterizes a particular document.
- `label`: a `string` feature, identifies which of the 11 TCFD categories this document is labeled as.
### Source Data
The reports used as the basis of the dataset were drawn from the Task Force on Climate-Related Financial Disclosures (TCFD) list of [Example Disclosures](https://www.fsb-tcfd.org/example-disclosures/).
These documents were provided by TCFD to highlight climate-related financial disclosures that align with one or more of the TCFD’s 11 recommended categories. With this in mind,
we can think of this list as exemplars for disclosures that display clear alignment and focus throughout the document.
### Methodology
This dataset was curated by our team through a custom processing pipeline, to ensure the creation of a dataset in a way that was reproducible, explainable, and correct. A collection of
financial disclosures was highlighted by the TCFD, as discussed above. These reports served as the foundation of our dataset, giving our team a curated selection of data upon which to build our dataset.
These reports were scraped from the TCFD website via [Selenium](https://www.selenium.dev/), a tool designed to automate the collection of publicly available data from websites. With it we were able to
save the example disclosures as PDF files for processing. The collected documents already contained a label, provided by the TCFD in regards to its 11 identified categories for disclosures (discussed on page 112 of the [following report](https://assets.bbhub.io/company/sites/60/2022/10/2022-TCFD-Status-Report.pdf)).
With these labels in mind, we used a custom Python tool called [`ChitChat`](https://github.com/rexarski/chitchat) to return key sentences from each of the example disclosures. For the purposes of this study we returned five sentences from each report, giving us a total of 540 data points.
Each of the five created sentences shares the original label of the root document they come from. More information about our processing pipeline and further analysis can be found on our project page, or by contacting any of the authors of this project.
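The expansion step described above (five extracted sentences per report, each inheriting that report's TCFD label) can be sketched as follows; the `extract_key_sentences` helper is a hypothetical stand-in for the ChitChat extraction step:

```python
def expand_reports(reports, extract_key_sentences, k=5):
    """Turn labeled disclosure reports into sentence-level records.

    Each of the k sentences extracted from a report inherits the
    TCFD category label of the report it came from.
    """
    records = []
    for report in reports:
        for sentence in extract_key_sentences(report["text"], k=k):
            records.append({"text": sentence, "label": report["label"]})
    return records
```

With 108 example disclosures and k=5, this expansion yields the 540 sentence-level records described above.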
### Languages
The text contained in the dataset is entirely in English, as found in the real-world financial disclosures identified by the TCFD. The associated BCP-47 code is [`en`](https://www.techonthenet.com/js/language_tags.php), to ensure clear labeling of language usage for downstream tasks and other future applications.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
7eu7d7/HCP-Diffusion-datas | 2023-05-12T05:09:23.000Z | [
"license:apache-2.0",
"region:us"
] | 7eu7d7 | null | null | null | 7 | 4 | ---
license: apache-2.0
---
Anime prompt dataset (动漫风格数据集):
+ danbooru-160000.parquet
Natural scenes prompt dataset (真实风格数据集):
+ stable-diffusion-prompts-160000.parquet
+ stable-diffusion-prompts2-320000.parquet
Artistic style dataset (艺术风格数据集):
+ Lexica.art.parquet |
MasterThesisCBS/NorPaca | 2023-04-14T07:09:06.000Z | [
"task_categories:text-generation",
"language:no",
"language:nb",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | MasterThesisCBS | null | null | null | 2 | 4 | ---
license: cc-by-4.0
language:
- 'no'
- nb
tags:
- instruction-finetuning
pretty_name: NB Alpaca Norwegian Bokmål
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 54356020
num_examples: 50961
- name: test
num_bytes: 1113587
num_examples: 1041
download_size: 28514339
dataset_size: 55469607
---
# NorPaca Norwegian Bokmål
This dataset is a translation into Norwegian Bokmål of [alpaca_gpt4_data.json](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca), but generated with GPT-4.
# Prompt to generate dataset
```
Du blir bedt om å komme opp med et sett med 20 forskjellige oppgaveinstruksjoner. Disse oppgaveinstruksjonene vil bli gitt til en GPT-modell, og vi vil evaluere GPT-modellen for å fullføre instruksjonene.
Her er kravene:
1. Prøv å ikke gjenta verbet for hver instruksjon for å maksimere mangfoldet.
2. Språket som brukes til undervisningen bør også være mangfoldig. For eksempel bør du kombinere spørsmål med imperative instruksjoner.
3. Type instruksjoner bør være mangfoldig. Listen bør inneholde forskjellige typer oppgaver som åpen generering, klassifisering, redigering, etc.
2. En GPT-språkmodell skal kunne fullføre instruksjonen. For eksempel, ikke be assistenten om å lage visuell eller lydutgang. For et annet eksempel, ikke be assistenten om å vekke deg klokken 17.00 eller angi en påminnelse fordi den ikke kan utføre noen handling.
3. Instruksjonene skal være på norsk.
4. Instruksjonene skal være 1 til 2 setninger lange. Enten en imperativ setning eller et spørsmål er tillatt.
5. Du bør generere et passende input til instruksjonen. Inndatafeltet skal inneholde et spesifikt eksempel gitt for instruksjonen. Det bør involvere realistiske data og bør ikke inneholde enkle plassholdere. Innspillet bør gi betydelig innhold for å gjøre instruksjonen utfordrende, men bør ideelt sett ikke overstige 100 ord.
6. Ikke alle instruksjoner krever inndata. For eksempel, når en instruksjon spør om noen generell informasjon, "hva er den høyeste toppen i verden", er det ikke nødvendig å gi en spesifikk kontekst. I dette tilfellet legger vi ganske enkelt "<noinput>" i inntastingsfeltet.
7. Utgangen skal være et passende svar på instruksjonen og input.Sørg for at utgangen er mindre enn 100 ord.
Liste over 200 instrukser:
``` |
prashanthpillai/docvqa_train_and_val | 2023-04-13T17:29:28.000Z | [
"region:us"
] | prashanthpillai | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: questionId
dtype: int64
- name: question
dtype: string
- name: image
sequence:
sequence:
sequence: uint8
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
- name: words
sequence: string
- name: boxes
sequence:
sequence: int64
splits:
- name: val
num_bytes: 869361798
num_examples: 5349
- name: train
num_bytes: 6381793673
num_examples: 39454
download_size: 2578887111
dataset_size: 7251155471
---
# Dataset Card for "docvqa_train_and_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KonghaYao/civitai_all_data | 2023-04-14T01:38:02.000Z | [
"language:en",
"license:cc-by-4.0",
"art",
"region:us"
] | KonghaYao | null | null | null | 6 | 4 | ---
license: cc-by-4.0
language:
- en
tags:
- art
---
# Model and Gallery Data in Civitai
### Dataset Summary
This dataset includes model metadata and the gallery data under each model, like:

1. I crawled some data from [Civitai](https://civitai.com/) using GitHub Codespaces and Deno; it took me 6 hours to download it safely 😄.
2. This dataset can be used to create many interesting models, such as an auto-prompting AI or a prompt-improvement AI.
3. This project has a GitHub repo with the code that crawls all the data. [Link](https://github.com/KonghaYao/tinyproxy/tree/main/civitai)
### Dataset
1. /index/index.jsonl: all of the base model metadata!
2. /index/index.filter.jsonl: the filtered models!
3. /index/info.jsonl: all of the gallery post info!
### Notes on some info in the dataset
1. The dataset includes many **NSFW** prompts and image URLs.
2. A `jsonl` file should contain one JSON object per row, but I simply joined an array with '\n' and wrote it to the file, so some bugs could appear.
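Given that caveat, a defensive reader that skips malformed rows is safer than parsing the whole file at once. A minimal sketch (the file path is illustrative):

```python
import json

def read_jsonl(path):
    """Yield one parsed object per line, skipping rows that fail to parse."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # tolerate the occasional malformed row

# records = list(read_jsonl("index/index.jsonl"))
```

Any row that fails to parse is simply skipped, so an occasional malformed line does not abort the whole read.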
|
MasterThesisCBS/XSum_NO | 2023-04-16T10:34:50.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"language:no",
"language:nb",
"license:cc-by-4.0",
"summarization",
"region:us"
] | MasterThesisCBS | null | null | null | 0 | 4 | ---
license: cc-by-4.0
language:
- 'no'
- nb
tags:
- summarization
pretty_name: XSUM Norwegian
task_categories:
- text-generation
- summarization
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: timestamp
dtype: string
- name: body
dtype: string
- name: lead
dtype: string
- name: body_length
dtype: float64
- name: summary
dtype: string
- name: prompt_train
dtype: string
- name: prompt_test
dtype: string
splits:
- name: train
num_bytes: 284661834
num_examples: 64070
- name: test
num_bytes: 14882449
num_examples: 3373
download_size: 186192491
dataset_size: 299544283
---
# XSUM NO
A Norwegian summarization dataset custom-made for the evaluation or fine-tuning of GPT models.
## Data Collection
Data was scraped from Aftenposten.no and Vg.no; the summary column is composed of the title and the ingress (lead paragraph).
## How to Use
```python
from datasets import load_dataset
data = load_dataset("MasterThesisCBS/XSum_NO")
```
### Dataset Curators
[John Oskar Holmen Skjeldrum](mailto:josk18ad@student.cbs.dk) and [Peder Tanberg](mailto:peha28ae@student.cbs.dk) |
vietgpt/databricks_dolly15k_en | 2023-07-15T09:20:16.000Z | [
"language:en",
"region:us"
] | vietgpt | null | null | null | 0 | 4 | ---
language: en
dataset_info:
features:
- name: id
dtype: int64
- name: instruction
dtype: string
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 12208698
num_examples: 15014
download_size: 7936782
dataset_size: 12208698
---
- Format for Instruction task
```python
def preprocess(
    sample,
    instruction_key="### Instruction:",
    input_key="Input:",
    response_key="### Response:",
    end_key="<|endoftext|>"
):
    instruction = sample['instruction']
    input = sample['input']
    response = sample['response']
    if input:
        return {'text': """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
            instruction_key=instruction_key,
            instruction=instruction,
            input_key=input_key,
            input=input,
            response_key=response_key,
            response=response,
            end_key=end_key,
        )}
    else:
        return {'text': """Below is an instruction that describes a task. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{response_key}
{response}
{end_key}""".format(
            instruction_key=instruction_key,
            instruction=instruction,
            response_key=response_key,
            response=response,
            end_key=end_key,
        )}
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
When did Virgin Australia start operating?
Input:
Virgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-based airline. It is the largest airline by fleet size to use the Virgin brand. It commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route.[3] It suddenly found itself as a major airline in Australia's domestic market after the collapse of Ansett Australia in September 2001. The airline has since grown to directly serve 32 cities in Australia, from hubs in Brisbane, Melbourne and Sydney.[4]
### Response:
Virgin Australia commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route.
<|endoftext|>
"""
``` |
semaj83/ctmatch_classification | 2023-05-10T11:05:13.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"license:mit",
"medical",
"region:us"
] | semaj83 | null | null | null | 2 | 4 | ---
license: mit
task_categories:
- text-classification
tags:
- medical
size_categories:
- 10K<n<100K
---
**CTMatch Classification Dataset**
This is a combined set of 2 labelled datasets of:
`topic (patient descriptions), doc (clinical trials documents - selected fields), and label ({0, 1, 2})` triples, in jsonl format.
(This somewhat duplicates the `ir_dataset` version also available on HF.)
These have been processed using ctproc, and in this state can be used by various tokenizers for fine-tuning (see ctmatch for examples).
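As a rough sketch of consuming the triples, the jsonl lines can be parsed into `(topic, doc, label)` tuples before tokenization; note that the exact key names in the files are an assumption here:

```python
import json

def load_triples(jsonl_lines):
    """Parse jsonl lines of topic/doc/label objects into plain tuples.

    Assumed row shape: {"topic": <patient description>,
                        "doc": <trial document fields>,
                        "label": <0, 1, or 2>}
    """
    triples = []
    for line in jsonl_lines:
        obj = json.loads(line)
        triples.append((obj["topic"], obj["doc"], int(obj["label"])))
    return triples
```

The resulting `(topic, doc)` pairs can then be fed to any sequence-pair tokenizer for relevance classification.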
These 2 datasets contain no patient-identifying information and are openly available in raw form:
#### TREC: http://www.trec-cds.org/2021.html
#### CSIRO: https://data.csiro.au/collection/csiro:17152
---
**see repo for more information**:
https://github.com/semajyllek/ctmatch |
diffusers/cat_toy_example | 2023-04-18T14:24:58.000Z | [
"region:us"
] | diffusers | null | null | null | 3 | 4 | Entry not found |
gustawdaniel/ngram-google-2012 | 2023-04-21T04:48:47.000Z | [
"license:cc-by-3.0",
"region:us"
] | gustawdaniel | null | null | null | 0 | 4 | ---
license: cc-by-3.0
---
```
python -m spacy download en_core_web_sm
```
Titles:
```
jq -s '.[].title' raw/dict.jsonl
```
returns
- [x] "English"
- [ ] "English One Million"
- [x] "American English"
- [x] "British English"
- [x] "English Fiction"
- [ ] "Chinese (simplified)"
- [x] "French"
- [x] "German"
- [ ] "Hebrew"
- [ ] "Italian"
- [x] "Russian"
- [x] "Spanish"
Spellcheck:
https://pypi.org/project/pyspellchecker/
```
English - ‘en’
Spanish - ‘es’
French - ‘fr’
Portuguese - ‘pt’
German - ‘de’
Russian - ‘ru’
Arabic - ‘ar’
```
Sets now:
- [x] "English" - en
- [x] "Spanish" - es
- [x] "French" - fr
- [x] "German" - de
- [x] "Russian" - ru
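The checklist above amounts to intersecting the Google corpus titles with the languages pyspellchecker supports. As a plain mapping (a sketch; only exact title matches get a code):

```python
# Languages supported by pyspellchecker, per the list above.
SPELLCHECK_CODES = {
    "English": "en",
    "Spanish": "es",
    "French": "fr",
    "Portuguese": "pt",
    "German": "de",
    "Russian": "ru",
    "Arabic": "ar",
}

# Corpus titles from raw/dict.jsonl.
CORPUS_TITLES = [
    "English", "English One Million", "American English", "British English",
    "English Fiction", "Chinese (simplified)", "French", "German",
    "Hebrew", "Italian", "Russian", "Spanish",
]

def spellcheckable(titles, codes):
    """Return the corpus titles that map directly onto a supported code."""
    return {t: codes[t] for t in titles if t in codes}
```

Running `spellcheckable(CORPUS_TITLES, SPELLCHECK_CODES)` yields exactly the five languages in the "Sets now" list.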
|
sam-mosaic/vicuna_alpaca_hc3_chatml | 2023-07-18T00:29:05.000Z | [
"language:en",
"region:us"
] | sam-mosaic | null | null | null | 20 | 4 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 387859366
num_examples: 170637
download_size: 146603814
dataset_size: 387859366
---
# Dataset Card for "vicuna_alpaca_hc3_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
biglab/webui-7k | 2023-05-05T02:25:39.000Z | [
"license:other",
"region:us"
] | biglab | null | null | null | 0 | 4 | ---
license: other
---
This data accompanies the WebUI project (https://dl.acm.org/doi/abs/10.1145/3544548.3581158)
For more information, check out the project website: https://uimodeling.github.io/
To download this dataset, you need to install the huggingface-hub package
```
pip install huggingface-hub
```
Use snapshot_download
```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="biglab/webui-7k", repo_type="dataset")
```
IMPORTANT
* Before downloading and using, please review the copyright info here: https://github.com/js0nwu/webui/blob/main/COPYRIGHT.txt
* Not all data samples have the same number of files (e.g., same number of device screenshots) because the crawler used a timeout during collection
* The dataset released on HuggingFace was filtered using a list of explicit words and therefore contains fewer samples than the experiments originally used in the paper. The raw dataset is currently available (https://drive.google.com/drive/folders/1hcO75W2FjsZoibsj2TIbKz67hy9JkOBz?usp=share_link) but may be removed in the future. |
sander-wood/wikimusictext | 2023-04-26T07:33:25.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"music",
"arxiv:2304.11029",
"region:us"
] | sander-wood | null | null | null | 4 | 4 | ---
license: mit
task_categories:
- text-classification
- text2text-generation
pretty_name: wikimt
size_categories:
- 1K<n<10K
language:
- en
tags:
- music
---
## Dataset Summary
In [CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval](https://ai-muzic.github.io/clamp/), we introduce WikiMusicText (WikiMT), a new dataset for the evaluation of semantic search and music classification. It includes 1010 lead sheets in ABC notation sourced from Wikifonia.org, each accompanied by a title, artist, genre, and description. The title and artist information is extracted from the score, whereas the genre labels are obtained by matching keywords from the Wikipedia entries and assigned to one of the 8 classes (Jazz, Country, Folk, R&B, Pop, Rock, Dance, and Latin) that loosely mimic the GTZAN genres. The description is obtained by utilizing BART-large to summarize and clean the corresponding Wikipedia entry. Additionally, the natural language information within the ABC notation is removed.
WikiMT is a unique resource to support the evaluation of semantic search and music classification. However, it is important to acknowledge that the dataset was curated from publicly available sources, and there may be limitations concerning the accuracy and completeness of the genre and description information. Further research is needed to explore the potential biases and limitations of the dataset and to develop strategies to address them. Therefore, to support additional investigations, we also provide the [source files](https://github.com/microsoft/muzic/blob/main/clamp/wikimusictext/source_files.zip) of WikiMT, including the MusicXML files from Wikifonia and the original entries from Wikipedia.
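The keyword-matching step described above (assigning one of the 8 genre classes from a Wikipedia entry) can be sketched as a first-match lookup; the keyword lists below are illustrative, not the ones actually used:

```python
# Illustrative keywords per genre; the real lists used for WikiMT may differ.
GENRE_KEYWORDS = {
    "Jazz": ["jazz", "swing", "bebop"],
    "Country": ["country"],
    "Folk": ["folk", "traditional"],
    "R&B": ["r&b", "rhythm and blues", "soul"],
    "Pop": ["pop"],
    "Rock": ["rock"],
    "Dance": ["dance", "disco"],
    "Latin": ["latin", "salsa"],
}

def assign_genre(wiki_text):
    """Return the first genre whose keywords appear in the Wikipedia entry."""
    text = wiki_text.lower()
    for genre, keywords in GENRE_KEYWORDS.items():
        if any(k in text for k in keywords):
            return genre
    return None  # no keyword matched; entry stays unlabeled
```

Entries matching no keyword would need manual review rather than a forced label.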
## Copyright Disclaimer
WikiMT was curated from publicly available sources and is believed to be in the public domain. However, it is important to acknowledge that copyright issues cannot be entirely ruled out. Therefore, users of the dataset should exercise caution when using it. The authors of WikiMT do not assume any legal responsibility for the use of the dataset. If you have any questions or concerns regarding the dataset's copyright status, please contact the authors at shangda@mail.ccom.edu.cn.
## BibTeX entry and citation info
```
@misc{wu2023clamp,
title={CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval},
author={Shangda Wu and Dingyao Yu and Xu Tan and Maosong Sun},
year={2023},
eprint={2304.11029},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
``` |
JosephusCheung/GuanacoVQA-mini19K | 2023-04-22T03:46:39.000Z | [
"task_categories:visual-question-answering",
"language:zh",
"language:ja",
"language:de",
"license:gpl-3.0",
"llama",
"minigpt-4",
"region:us"
] | JosephusCheung | null | null | null | 24 | 4 | ---
license: gpl-3.0
task_categories:
- visual-question-answering
language:
- zh
- ja
- de
tags:
- llama
- minigpt-4
---
19K Multilingual VQA Alignment Dataset, in the format of the MiniGPT-4 dataset.
With 1.1K images from COCO-2017, resized.
|
haiyan1/qizhikejihaha | 2023-05-17T08:37:19.000Z | [
"task_categories:image-classification",
"task_categories:text-classification",
"size_categories:n<1K",
"language:zh",
"license:apache-2.0",
"那你",
"medical",
"chemistry",
"biology",
"finance",
"music",
"art",
"legal",
"code",
"climate",
"not-for-all-audiences",
"xx",
"ssss",
"xxss... | haiyan1 | null | null | null | 0 | 4 | ---
license: apache-2.0
task_categories:
- image-classification
- text-classification
language:
- zh
tags:
- 那你
- medical
- chemistry
- biology
- finance
- music
- art
- legal
- code
- climate
- not-for-all-audiences
- xx
- ssss
- xxss
- sss
- swwww
- wwwww
- wwww
- 我1
- '11'
- '22'
- '333'
- '444'
- '555'
- '666'
- '777'
- '6777'
- '7777'
size_categories:
- n<1K
pretty_name: 很好
---
很棒 |
bprateek/amazon_product_description | 2023-05-17T20:12:35.000Z | [
"license:apache-2.0",
"region:us"
] | bprateek | null | null | null | 1 | 4 | ---
license: apache-2.0
---
|
alpayariyak/MATH_Instruct_no_input | 2023-04-24T06:33:59.000Z | [
"region:us"
] | alpayariyak | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9423883
num_examples: 12500
download_size: 4856922
dataset_size: 9423883
---
# Dataset Card for "MATH_Instruct_no_input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JosephusCheung/GuanacoVQADataset | 2023-04-24T15:34:17.000Z | [
"language:zh",
"language:ja",
"language:de",
"license:gpl-3.0",
"region:us"
] | JosephusCheung | null | null | null | 29 | 4 | ---
license: gpl-3.0
language:
- zh
- ja
- de
---
93.9K in ZH / JA / DE
Multilingual VQA Alignment Dataset, in the format of the MiniGPT-4 dataset.
With images from COCO-2017, resized.
Larger and updating version of [JosephusCheung/GuanacoVQA-mini19K](https://huggingface.co/datasets/JosephusCheung/GuanacoVQA-mini19K) |
nomic-ai/cohere-wiki-sbert | 2023-04-25T01:01:36.000Z | [
"region:us"
] | nomic-ai | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: wiki_id
dtype: int32
- name: views
dtype: float32
- name: paragraph_id
dtype: int32
- name: langs
dtype: int32
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 72128274660
num_examples: 35167920
download_size: 85878901052
dataset_size: 72128274660
---
# Dataset Card for "cohere-wiki-sbert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vmalperovich/QC | 2023-04-24T23:50:00.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | vmalperovich | This data collection contains all the data used in our learning question classification experiments(see [1]), which has question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts and examples of semantically related word features.
This work has been done by Xin Li and Dan Roth and supported by [2]. | """
_TRAIN_DOWNLOAD_URL = "https://huggingface.co/datasets/vmalperovich/QC/raw/main/train.csv"
_TEST_DOWNLOAD_URL = "https://huggingface.co/datasets/vmalperovich/QC/raw/main/test.csv"
CATEGORY_MAPPING = {'ENTY_cremat': 0,
'DESC_manner': 1,
'ENTY_animal': 2,
'ABBR_exp': 3,
'HUM_ind': 4,
'HUM_gr': 5,
'HUM_title': 6,
'DESC_def': 7,
'NUM_date': 8,
'DESC_reason': 9,
'ENTY_event': 10,
'LOC_state': 11,
'DESC_desc': 12,
'NUM_count': 13,
'ENTY_other': 14,
'ENTY_letter': 15,
'LOC_other': 16,
'ENTY_religion': 17,
'ENTY_food': 18,
'LOC_country': 19,
'ENTY_color': 20,
'ENTY_termeq': 21,
'LOC_city': 22,
'ENTY_body': 23,
'ENTY_dismed': 24,
'LOC_mount': 25,
'NUM_money': 26,
'ENTY_product': 27,
'NUM_period': 28,
'ENTY_substance': 29,
'ENTY_sport': 30,
'ENTY_plant': 31,
'ENTY_techmeth': 32,
'NUM_volsize': 33,
'HUM_desc': 34,
'ENTY_instru': 35,
'ABBR_abb': 36,
'NUM_other': 37,
'NUM_speed': 38,
'ENTY_word': 39,
'ENTY_lang': 40,
'NUM_perc': 41,
'NUM_code': 42,
'NUM_dist': 43,
'NUM_temp': 44,
'ENTY_symbol': 45,
'NUM_ord': 46,
'ENTY_veh': 47,
'NUM_weight': 48,
'ENTY_currency': 49}
class AGNews(datasets.GeneratorBasedBuilder): | null | 0 | 4 | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
pretty_name: uiuc-qc
---
# Question Classification dataset
**Fixed version** (some examples were added to the test split so that train and test share the same label set)
This data collection contains all the data used in our learning question classification experiments (see [1]), which has question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts and examples of semantically related word features. This work has been done by Xin Li and Dan Roth.
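The fine-grained labels in this dataset follow a `COARSE_fine` naming convention (e.g. `ENTY_cremat`), so the coarse TREC classes can be recovered by splitting on the first underscore. A sketch:

```python
# A few of the 50 fine-grained labels, for illustration.
FINE_LABELS = ["ENTY_cremat", "DESC_manner", "ABBR_exp",
               "HUM_ind", "NUM_date", "LOC_state"]

def to_coarse(fine_label):
    """Map a fine-grained label like 'ENTY_cremat' to its coarse class 'ENTY'."""
    return fine_label.split("_", 1)[0]

coarse = sorted({to_coarse(label) for label in FINE_LABELS})
# coarse -> ['ABBR', 'DESC', 'ENTY', 'HUM', 'LOC', 'NUM']
```

This recovers the six coarse question classes without needing a separate mapping table.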
Source: https://cogcomp.seas.upenn.edu/Data/QA/QC/ |
iamketan25/open-assistant-instructions | 2023-04-25T17:42:38.000Z | [
"region:us"
] | iamketan25 | null | null | null | 6 | 4 | Entry not found |
recastai/LAION-art-EN-improved-captions | 2023-06-24T04:19:50.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | recastai | null | null | null | 6 | 4 | ---
license: cc-by-4.0
dataset_info:
features:
- name: orig_caption
dtype: string
- name: generated_caption
dtype: string
- name: key
dtype: string
- name: url
dtype: string
- name: index
dtype: int64
splits:
- name: train
num_bytes: 681710086
num_examples: 2684160
download_size: 441945582
dataset_size: 681710086
language:
- en
---
# Dataset Card for LAION-art-EN-improved-captions
### Dataset Summary
This dataset was created by **Re:cast AI** to improve the semantic relationship of image-caption pairs. The `generated_caption` field was created in a semi-supervised fashion using the **Salesforce/blip2-flan-t5-xxl** model.
### Supported Tasks
Fine-tuning text-to-image generators (e.g. Stable Diffusion), or building a searchable prompt database (requires a FAISS index).
## Dataset Structure
### Data Fields
- orig_caption
- generated_caption
- key
- index
- url
### Data Splits
- train
### Source Data
LAION-Art |
tasksource/oasst1_dense_flat | 2023-05-31T08:49:36.000Z | [
"license:apache-2.0",
"region:us"
] | tasksource | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int32
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: parent_text
dtype: string
- name: spam
dtype: float64
- name: fails_task
dtype: float64
- name: lang_mismatch
dtype: float64
- name: pii
dtype: float64
- name: not_appropriate
dtype: float64
- name: hate_speech
dtype: float64
- name: sexual_content
dtype: float64
- name: quality
dtype: float64
- name: toxicity
dtype: float64
- name: humor
dtype: float64
- name: helpfulness
dtype: float64
- name: creativity
dtype: float64
- name: violence
dtype: float64
splits:
- name: train
num_bytes: 59657796
num_examples: 34059
- name: validation
num_bytes: 3164029
num_examples: 1816
download_size: 25173939
dataset_size: 62821825
license: apache-2.0
---
# Dataset Card for "oasst1_dense_flat"
[OASST1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1)
The same data, but with each message's `parent_text` retrieved, keeping only messages with dense annotations (every label rated by more than two annotators)
```python
from datasets import load_dataset, Dataset, DatasetDict

d = {}
for split in ['train', 'validation']:
    df = load_dataset("OpenAssistant/oasst1")[split].to_pandas()
    m2t = df.set_index("message_id")['text'].to_dict()
    df['parent_text'] = df.parent_id.map(lambda x: m2t.get(x, ''))
    df = df[df.labels.map(lambda x: x is not None)]
    df = df[df.labels.map(lambda x: x['count'].min() > 2)]
    labels = df.labels.map(lambda x: list(x['name'])).value_counts().index[0]
    df = df[df.labels.map(lambda x: list(x['name']) == labels)]
    for label in labels:
        df[label] = df.labels.map(lambda x: x['value'][list(x['name']).index(label)])
    d[split] = Dataset.from_pandas(df, preserve_index=False)
DatasetDict(d).push_to_hub('oasst1_dense_flat')
```
https://github.com/LAION-AI/Open-Assistant
```
@article{kopf2023openassistant,
title={OpenAssistant Conversations--Democratizing Large Language Model Alignment},
author={K{\"o}pf, Andreas and Kilcher, Yannic and von R{\"u}tte, Dimitri and Anagnostidis, Sotiris and Tam, Zhi-Rui and Stevens, Keith and Barhoum, Abdullah and Duc, Nguyen Minh and Stanley, Oliver and Nagyfi, Rich{\'a}rd and others},
journal={arXiv preprint arXiv:2304.07327},
year={2023}
}
``` |
james-burton/wine_reviews | 2023-04-27T15:56:36.000Z | [
"region:us"
] | james-burton | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: country
dtype: string
- name: description
dtype: string
- name: points
dtype: int64
- name: price
dtype: float64
- name: province
dtype: string
- name: variety
dtype:
class_label:
names:
'0': Bordeaux-style Red Blend
'1': Bordeaux-style White Blend
'2': Cabernet Franc
'3': Cabernet Sauvignon
'4': Champagne Blend
'5': Chardonnay
'6': Gamay
'7': Gewürztraminer
'8': Grüner Veltliner
'9': Malbec
'10': Merlot
'11': Nebbiolo
'12': Pinot Grigio
'13': Pinot Gris
'14': Pinot Noir
'15': Portuguese Red
'16': Portuguese White
'17': Red Blend
'18': Rhône-style Red Blend
'19': Riesling
'20': Rosé
'21': Sangiovese
'22': Sauvignon Blanc
'23': Shiraz
'24': Sparkling Blend
'25': Syrah
'26': Tempranillo
'27': Viognier
'28': White Blend
'29': Zinfandel
splits:
- name: train
num_bytes: 21014061.962412182
num_examples: 71504
- name: validation
num_bytes: 3708554.0375878178
num_examples: 12619
- name: test
num_bytes: 6181444
num_examples: 21031
download_size: 16227253
dataset_size: 30904060.0
---
# Dataset Card for "wine_reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/selfies_and_id | 2023-09-14T16:41:46.000Z | [
"task_categories:image-to-image",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | 4083 sets, which includes 2 photos of a person from his documents and
13 selfies. 571 sets of Hispanics and 3512 sets of Caucasians.
Photo documents contains only a photo of a person.
All personal information from the document is hidden. | @InProceedings{huggingface:dataset,
title = {selfies_and_id},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
tags:
- code
dataset_info:
features:
- name: id_1
dtype: image
- name: id_2
dtype: image
- name: selfie_1
dtype: image
- name: selfie_2
dtype: image
- name: selfie_3
dtype: image
- name: selfie_4
dtype: image
- name: selfie_5
dtype: image
- name: selfie_6
dtype: image
- name: selfie_7
dtype: image
- name: selfie_8
dtype: image
- name: selfie_9
dtype: image
- name: selfie_10
dtype: image
- name: selfie_11
dtype: image
- name: selfie_12
dtype: image
- name: selfie_13
dtype: image
- name: user_id
dtype: string
- name: set_id
dtype: string
- name: user_race
dtype: string
- name: name
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
- name: gender
dtype: string
splits:
- name: train
num_bytes: 376371811
num_examples: 10
download_size: 374658409
dataset_size: 376371811
---
# Selfies, ID Images dataset
**4083** sets, each of which includes *2 photos of a person from their documents and 13 selfies*. **571** sets of Hispanics and **3512** sets of Caucasians.
Photo documents contain only a photo of a person. All personal information from the document is hidden.
## File with the extension .csv
includes the following information for each media file:
- **SetId**: a unique identifier of a set of 15 media files,
- **UserId**: the identifier of the person who provided the media file,
- **UserRace**: the ethnicity of the person
- **Country**: the country of origin of the person,
- **Age**: the age of the person,
- **Gender**: the gender of the person,
- **Name**: the name of the person
- **FName**: the type of the media file
- **URL**: the URL to access the media file
## Folder "img" with media files
- containing all the photos
- which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will make sure that it contains media files taken by a person whose parameters are specified in the first 15 lines of the .csv file.*
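A sketch of reading that .csv with pandas might look like this. The sample rows and all values below are invented for illustration; only the column names come from the field list above:

```python
import io
import pandas as pd

# Hypothetical rows mirroring the documented columns; the real .csv
# shipped with the dataset is assumed to follow the same layout.
sample_csv = io.StringIO(
    "SetId,UserId,UserRace,Country,Age,Gender,Name,FName,URL\n"
    "set01,u01,Caucasian,US,34,Male,John,id_1,https://example.com/id_1.jpg\n"
    "set01,u01,Caucasian,US,34,Male,John,selfie_1,https://example.com/selfie_1.jpg\n"
    "set02,u02,Hispanic,MX,27,Female,Maria,id_1,https://example.com/id_2.jpg\n"
)

df = pd.read_csv(sample_csv)

# Each SetId groups the media files (2 document photos + 13 selfies)
# belonging to one person.
for set_id, rows in df.groupby("SetId"):
    print(set_id, rows["FName"].tolist())
```

With the real file, replacing `sample_csv` with its path should give one group of 15 rows per set.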
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=selfies_and_id) to discuss your requirements, learn about the price and buy the dataset.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=selfies_and_id) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TrainingDataPro/selfie_and_video | 2023-09-14T16:46:47.000Z | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | TrainingDataPro | 4000 people in this dataset. Each person took a selfie on a webcam
and a selfie on a mobile phone. In addition, people recorded videos from
the phone and from the webcam, in which they pronounced a given set of numbers.
Includes folders corresponding to people in the dataset. Each folder includes
8 files (4 images and 4 videos). | @InProceedings{huggingface:dataset,
title = {selfie_and_video},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
dataset_info:
features:
- name: photo_1
dtype: image
- name: photo_2
dtype: image
- name: video_3
dtype: string
- name: video_4
dtype: string
- name: photo_5
dtype: image
- name: photo_6
dtype: image
- name: video_7
dtype: string
- name: video_8
dtype: string
- name: set_id
dtype: string
- name: worker_id
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
- name: gender
dtype: string
splits:
- name: train
num_bytes: 49771508
num_examples: 10
download_size: 829589647
dataset_size: 49771508
---
# Selfies and video dataset
4000 people in this dataset. Each person took a selfie on a webcam and a selfie on a mobile phone. In addition, people recorded videos from the phone and from the webcam, in which they pronounced a given set of numbers.
Includes folders corresponding to people in the dataset. Each folder includes 8 files (4 images and 4 videos).
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=selfie_and_video) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .csv
includes the following information for each media file:
- **SetId**: a unique identifier of a set of 8 media files,
- **WorkerId**: the identifier of the person who provided the media file,
- **Country**: the country of origin of the person,
- **Age**: the age of the person,
- **Gender**: the gender of the person,
- **Type**: the type of media file
- **Link**: the URL to access the media file
# Folder "img" with media files
- containing all the photos and videos
- which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will make sure that it contains media files taken by a person whose parameters are specified in the first 8 lines of the .csv file.*
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=selfie_and_video) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TrainingDataPro/portrait_and_26_photos | 2023-09-14T16:43:13.000Z | [
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"code",
"region:us"
] | TrainingDataPro | Each set includes 27 photos of people. Each person provided
two types of photos: one photo in profile (portrait_1),
and 26 photos from their life (photo_1, photo_2, …, photo_26). | @InProceedings{huggingface:dataset,
title = {portrait_and_26_photos},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
language:
- en
tags:
- finance
- code
dataset_info:
features:
- name: portrait_1
dtype: image
- name: photo_1
dtype: image
- name: photo_2
dtype: image
- name: photo_3
dtype: image
- name: photo_4
dtype: image
- name: photo_5
dtype: image
- name: photo_6
dtype: image
- name: photo_7
dtype: image
- name: photo_8
dtype: image
- name: photo_9
dtype: image
- name: photo_10
dtype: image
- name: photo_11
dtype: image
- name: photo_12
dtype: image
- name: photo_13
dtype: image
- name: photo_14
dtype: image
- name: photo_15
dtype: image
- name: photo_16
dtype: image
- name: photo_17
dtype: image
- name: photo_18
dtype: image
- name: photo_19
dtype: image
- name: photo_20
dtype: image
- name: photo_21
dtype: image
- name: photo_22
dtype: image
- name: photo_23
dtype: image
- name: photo_24
dtype: image
- name: photo_25
dtype: image
- name: photo_26
dtype: image
- name: worker_id
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
- name: gender
dtype: string
splits:
- name: train
num_bytes: 927211725
num_examples: 14
download_size: 923699881
dataset_size: 927211725
---
# The Portrait and 26 Photos (272 people)
Each set includes 27 photos of people. Each person provided two types of photos: one photo in profile (portrait_1), and 26 photos from their life (photo_1, photo_2, …, photo_26).
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=portrait_and_26_photos) to discuss your requirements, learn about the price and buy the dataset.
# The Portrait
The portrait photo is a photo that shows a person in profile. Mandatory conditions for the photo are:
- The person is pictured alone;
- Shoulder-length photo;
- No sunglasses or medical mask on the face;
- The face is calm, with no smiling or gesturing.
# 26 Photos
The rest of the photos are completely different, with one constant: each shows the person from The Portrait. They may include other people and may be taken at different times of life and in different locations. The person may be laughing, wearing a mask, or surrounded by friends.
# File with the extension .csv
includes the following information for each media file:
- **WorkerId**: the identifier of the person who provided the media file,
- **Age**: the age of the person,
- **Country**: the country of origin of the person,
- **Gender**: the gender of the person,
- **Type**: a unique identifier of a set of 26 media files,
- **Link**: the URL to access the media file
# Folder "img" with media files
- containing all the photos
- which correspond to the data in the .csv file
**How it works**: *go to the folder “0ff4d24098b3110ecfc0a7198e080a4b” and you will make sure that it contains media files taken by a person whose parameters are specified in the first 27 lines of the .csv file.*
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=portrait_and_26_photos) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
Riot186/M1_EURUSD_candles | 2023-04-29T01:21:11.000Z | [
"size_categories:100K<n<1M",
"license:afl-3.0",
"finance",
"EURUSD",
"region:us"
] | Riot186 | null | null | null | 1 | 4 | ---
license: afl-3.0
tags:
- finance
- EURUSD
size_categories:
- 100K<n<1M
---
### All chunks have more than 4000 rows of data in chronological order in a pandas DataFrame
### CSV files contain the same data in chronological order; some may have fewer than 4000 rows
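As a brief illustration of working with candle data like this, the sketch below resamples synthetic M1 candles into M5 candles with pandas. The OHLC column names and values here are an assumption for the example, not a guarantee about the files in this dataset:

```python
import numpy as np
import pandas as pd

# Synthetic M1 EURUSD candles; the real files are assumed to carry
# similar OHLC columns in chronological order.
idx = pd.date_range("2023-01-02 00:00", periods=10, freq="min")
m1 = pd.DataFrame({
    "open": np.linspace(1.0500, 1.0590, 10),
    "high": np.linspace(1.0510, 1.0600, 10),
    "low": np.linspace(1.0490, 1.0580, 10),
    "close": np.linspace(1.0505, 1.0595, 10),
}, index=idx)

# Aggregate 1-minute candles into 5-minute candles.
m5 = m1.resample("5min").agg(
    {"open": "first", "high": "max", "low": "min", "close": "last"}
)
print(m5)
```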
|
crcb/crdflower | 2023-04-29T12:40:09.000Z | [
"license:apache-2.0",
"region:us"
] | crcb | null | null | null | 0 | 4 | ---
license: apache-2.0
---
|
sidovic/LearningQ-qg | 2023-08-31T14:23:06.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:unknown",
"question generation",
"region:us"
] | sidovic | null | null | null | 0 | 4 | ---
license: unknown
task_categories:
- text-generation
language:
- en
tags:
- question generation
pretty_name: LeaningQ-qg
size_categories:
- 100K<n<1M
train-eval-index:
- config: plain_text
task: question-generation
task_id: extractive_question_generation
splits:
train_split: train
eval_split: validation
test_split: test
col_mapping:
context: context
questionsrc: question source
question: question
metrics:
- type: squad
name: SQuAD
dataset_info:
features:
- name: context
dtype: string
- name: questionsrc
dtype: string
- name: question
dtype: string
config_name: plain_text
splits:
- name: train
num_examples: 188660
- name: validation
num_examples: 20630
- name: test
num_examples: 18227
---
# Dataset Card for LearningQ-qg
## Dataset Description
- **Repository:** [GitHub](https://github.com/AngusGLChen/LearningQ#readme)
- **Paper:** [LearningQ: A Large-scale Dataset for Educational Question Generation](https://ojs.aaai.org/index.php/ICWSM/article/view/14987/14837)
- **Point of Contact:** s.lamri@univ-bouira.dz
### Dataset Summary
LearningQ is a challenging educational question generation dataset containing over 230K document-question pairs, created by Guanliang Chen, Jie Yang, Claudia Hauff and Geert-Jan Houben. It includes 7K instructor-designed questions assessing knowledge concepts being taught and 223K learner-generated questions seeking in-depth understanding of the taught concepts. This new version was collected and corrected by [Sidali Lamri](https://dz.linkedin.com/in/sidali-lamri), fixing more than 50,000 errors of over 1,500 different types.
### Use the dataset
```python
from datasets import load_dataset
lq_dataset = load_dataset("sidovic/LearningQ-qg")
lq_dataset["train"][1]
len(lq_dataset["train"]),len(lq_dataset["validation"]),len(lq_dataset["test"])
```
### Supported Tasks and Leaderboards
[Question generation]
### Languages
[English]
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"context": "This is a test context.",
"questionsrc": "test context",
"question": "Is this a test?"
}
```
### Data Fields
The data fields are the same among all splits.
- `context`: a `string` feature.
- `questionsrc`: a `string` feature.
- `question`: a `string` feature.
### Data Splits
| name |train |validation|test |
|----------|-----:|---------:|----:|
|LearningQ |188660| 20630|18227|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{lamri2023learningq,
  author = {Sidali Lamri},
  title = {New LearningQ version for question generation in transformers},
  year = {2023}
}
@inproceedings{ICWSM18LearningQ,
author = {Guanliang Chen, Jie Yang, Claudia Hauff and Geert-Jan Houben},
title = {LearningQ: A Large-scale Dataset for Educational Question Generation},
conference = {International AAAI Conference on Web and Social Media},
year = {2018}
}
```
### Contributions
[More Information Needed] |
Hyeon2/riffusion_musiccaps_datasets_768 | 2023-05-04T00:12:40.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"riffusion",
"region:us"
] | Hyeon2 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 453812212.472
num_examples: 5464
download_size: 451327913
dataset_size: 453812212.472
license: cc-by-4.0
task_categories:
- text-to-image
language:
- en
tags:
- riffusion
pretty_name: r
size_categories:
- 1K<n<10K
---
# Dataset Card for "riffusion-musiccaps-datasets-768"
Converted google/musicCaps to spectrograms with audio_to_spectrum from the riffusion CLI.
A random 7.68-second clip was taken from each track in MusicCaps.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
colonelwatch/abstracts-embeddings | 2023-05-15T02:03:52.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"size_categories:10M<n<100M",
"language:en",
"license:cc0-1.0",
"region:us"
] | colonelwatch | null | null | null | 1 | 4 | ---
language:
- en
license: cc0-1.0
size_categories:
- 10M<n<100M
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# abstracts-embeddings
This is the embeddings of the titles and abstracts of 95 million academic publications taken from the [OpenAlex](https://openalex.org) dataset as of May 5, 2023. The script that generated the embeddings is available on [Github](https://github.com/colonelwatch/abstracts-search/blob/master/build.py), but the general process is as follows:
1. Reconstruct the text of the abstract from the inverted index format
2. Construct a single document string in the format `title + ' ' + abstract` or just `abstract` if there is no title
3. Determine if the document string is in English using [fastText](https://fasttext.cc/docs/en/language-identification.html)
4. If it is in English, compute an embedding using the `all-MiniLM-L6-v2` model provided by [sentence-transformers](https://www.sbert.net/)
Though the OpenAlex dataset records 240 million works, not all of these works have abstracts or are in English. However, the `all-MiniLM-L6-v2` model was only trained on English texts, hence the filtering.
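Step 1 above, rebuilding the abstract text from OpenAlex's inverted-index format (the `abstract_inverted_index` field in the API), can be sketched as:

```python
def reconstruct_abstract(inverted_index: dict[str, list[int]]) -> str:
    """Rebuild abstract text from a {word: [positions]} mapping."""
    positions = [
        (pos, word)
        for word, poss in inverted_index.items()
        for pos in poss
    ]
    # Sort by position and join the words back into running text.
    return " ".join(word for _, word in sorted(positions))

example = {"embeddings": [0, 3], "of": [1], "abstracts": [2]}
print(reconstruct_abstract(example))  # "embeddings of abstracts embeddings"
```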
## Dataset Structure
In the future, this dataset might become a parquet in order to admit all the features offered by Hugging Face Datasets, but it consists only of a text file and a numpy memmap for now. The memmap is an array of many length-384 `np.float16` vectors, and the i-th row vector in this array corresponds with the i-th line in the text file. The text file is just a list of ids that can be used to get more information from the OpenAlex API.
```python
import numpy as np
with open('openalex_ids.txt', 'r') as f:
idxs = f.read().splitlines()
embeddings = np.memmap('embeddings.memmap', dtype=np.float16, mode='r').reshape(-1, 384)
```
However, the memmap cannot be uploaded to Hugging Face as a single file, so it's split with the command `split -b 3221225472 -d --suffix-length=3 --additional-suffix=.memmap embeddings.memmap embeddings_`. It can be put back together with the command `cat embeddings_*.memmap > embeddings.memmap`.
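Because the vectors come from a sentence-transformers model intended for cosine similarity, a minimal retrieval sketch could look like the following. It runs on synthetic stand-ins; with the real data you would substitute the `embeddings` array and `idxs` list loaded as shown above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real embeddings.memmap and openalex_ids.txt contents.
idxs = [f"W{i}" for i in range(1000)]
embeddings = rng.standard_normal((1000, 384)).astype(np.float16)

def top_k(query: np.ndarray, k: int = 5) -> list[str]:
    # Cosine similarity = dot product of L2-normalized vectors.
    emb = embeddings.astype(np.float32)
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)
    q = query.astype(np.float32)
    q /= np.linalg.norm(q)
    scores = emb @ q
    best = np.argsort(-scores)[:k]
    return [idxs[i] for i in best]

# Querying with a stored vector returns that vector's own id first.
print(top_k(embeddings[42])[0])  # "W42"
```

For the full 95M-vector array, an approximate index (e.g. FAISS) would be a more practical choice than this brute-force scan.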
|
sam-mosaic/hhrlhf_evol_chatml | 2023-07-18T00:28:37.000Z | [
"language:en",
"region:us"
] | sam-mosaic | null | null | null | 18 | 4 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 302247789
num_examples: 217107
- name: test
num_bytes: 17609162
num_examples: 16555
download_size: 139692649
dataset_size: 319856951
---
# Dataset Card for "hhrlhf_evol_chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-book/jawiki-20220404-c400 | 2023-05-05T07:43:52.000Z | [
"task_categories:question-answering",
"size_categories:10M<n<100M",
"language:ja",
"license:mit",
"region:us"
] | llm-book | This dataset is used for AIO (AI王), a competition to promote research on question answering systems for the Japanese language. This dataset contains passages, each of which consists of consecutive sentences
no longer than 400 characters from Japanese Wikipedia as of 2022-04-04. | null | null | 0 | 4 | ---
license: mit
task_categories:
- question-answering
language:
- ja
size_categories:
- 10M<n<100M
---
# Dataset Card for jawiki-20220404-c400
This dataset contains passages, each of which consists of consecutive sentences no longer than 400 characters from Japanese Wikipedia as of 2022-04-04.
This dataset is used in baseline systems for [the AI王 question answering competition](https://sites.google.com/view/project-aio/home), such as [cl-tohoku/AIO3_BPR_baseline](https://github.com/cl-tohoku/AIO3_BPR_baseline).
Please refer to [the original repository](https://github.com/cl-tohoku/quiz-datasets) for further details. |
paul-ww/ei-abstract-significance | 2023-10-09T13:37:05.000Z | [
"region:us"
] | paul-ww | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: pmcid
dtype: int32
- name: pmid
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': no significant effect
'1': significant effect
splits:
- name: train
num_bytes: 1930106
num_examples: 1028
- name: validation
num_bytes: 229838
num_examples: 118
- name: test
num_bytes: 230635
num_examples: 123
download_size: 0
dataset_size: 2390579
---
# Dataset Card for "ei-abstract-significance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
intm/codet5_go-generation | 2023-05-06T01:06:07.000Z | [
"license:apache-2.0",
"region:us"
] | intm | null | null | null | 0 | 4 | ---
license: apache-2.0
---
`max_src_len = 512`, `max_trg_len = 256`
|
phongmt184172/mtet | 2023-05-08T07:41:53.000Z | [
"task_categories:translation",
"size_categories:100M<n<1B",
"language:en",
"language:vi",
"region:us"
] | phongmt184172 | null | null | null | 4 | 4 | ---
task_categories:
- translation
language:
- en
- vi
size_categories:
- 100M<n<1B
---
Load the dataset with `load_dataset('phongmt184172/mtet')`.
The dataset is cloned from https://github.com/vietai/mTet for the machine translation task. |
turkish-nlp-suite/Corona-mini | 2023-09-20T15:04:26.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:tr",
"license:cc-by-sa-4.0",
"region:us"
] | turkish-nlp-suite | null | null | null | 0 | 4 | ---
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
task_categories:
- summarization
pretty_name: Corona-mini
---
# Dataset Card for turkish-nlp-suite/Corona-mini
## Dataset Description
- **Repository:** [Turkish Corona-mini corpus](https://github.com/turkish-nlp-suite/Corona-mini-dataset)
- **Paper:** [ACL link]()
- **Dataset:** Corona-mini
- **Domain:** Social Media
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/corona-mini.png" width="20%" height="20%">
### Dataset Summary
This is a tiny Turkish corpus consisting of comments about Corona symptoms. The corpus is compiled from two Ekşisözlük headlines "covid-19 belirtileri" and "gün gün koronavirüs belirtileri":
https://eksisozluk.com/covid-19-belirtileri--6416646
https://eksisozluk.com/gun-gun-koronavirus-belirtileri--6757665
This corpus
- contains 178 raw and 175 processed comments
- contains only Turkish comments
- comes in 2 versions, raw and mildly processed.
For the processed version, HTML tags, expressions in brackets, and some other tags are removed.
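The mild processing described above could be approximated with a sketch like this; the exact cleaning rules used for the dataset are an assumption here:

```python
import re

def clean_comment(text: str) -> str:
    text = re.sub(r"<[^>]+>", "", text)    # drop HTML tags
    text = re.sub(r"\([^)]*\)", "", text)  # drop expressions in brackets
    return re.sub(r"\s+", " ", text).strip()  # normalize whitespace

print(clean_comment("merhaba <i>dünya</i> (bkz: koronavirüs)"))  # "merhaba dünya"
```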
If you want more information about how this dataset was crafted, you can watch the playlist of my campaign "Turkish NLP with Duygu": [How to compile datasets](https://www.youtube.com/playlist?list=PLJTHlIwB8Vco4ONU_mCNOYIcVyFA9QrBr).
If you want to process this dataset with spaCy Turkish you can watch: [Recipes with spaCy Turkish](https://www.youtube.com/watch?v=w0WCkgCOzzw&list=PLJTHlIwB8VcoWxYHnsZOQCxWOraW42NBj)
### Dataset Instances
An instance of this dataset looks as follows:
```
{
"text": "beni sarsmayan belirtilerdir, 2 doz biontech aşılıyım, 2. doz üzerinden 5 aydan çok geçmişti cuma : ayın 12 si akşamı açık havada az üşümeye maruz kaldım."
}
```
### Data Split
| name |train|
|---------|----:|
|Corona-mini|175|
### Citation
This work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu"/ "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/) :
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
TrainingDataPro/printed_photos_attacks | 2023-09-14T16:49:56.000Z | [
"task_categories:image-to-image",
"task_categories:video-classification",
"language:en",
"license:cc-by-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | The dataset consists of 40,000 videos and selfies with unique people. 15,000
attack replays from 4,000 unique devices. 10,000 attacks with A4 printouts and
10,000 attacks with cut-out printouts. | @InProceedings{huggingface:dataset,
title = {printed_photos_attacks},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nd-4.0
task_categories:
- image-to-image
- video-classification
language:
- en
tags:
- code
- finance
---
# Printed Photos Attacks
The dataset includes 3 different types of files of real people: original selfies, original videos, and videos of attacks with printed photos. The dataset supports tasks in the field of anti-spoofing and is useful for business and safety systems.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=printed_photos_attacks) to discuss your requirements, learn about the price and buy the dataset.
# Content
### The dataset consists of three folders:
- **live_selfie** contains the original selfies of people
- **live_video** includes original videos of people
- **attack** contains video of the attack with the original images from "live_selfie" folder
### File with the extension .csv
includes the following information for each media file:
- **live_selfie**: the link to access the original selfie
- **live_video**: the link to access the original video
- **attack**: the link to access the video of the attack with the printed photo
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=printed_photos_attacks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
vaclavpechtor/rvl_cdip-small-200 | 2023-05-10T07:36:15.000Z | [
"region:us"
] | vaclavpechtor | null | null | null | 0 | 4 | # RVL-CDIP Small-200 Dataset
## Dataset Summary
This is a subset of the RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset, containing 200 samples per class for a total of 3,200 samples. The dataset consists of scanned document images in TIFF format, collected from various sources. The documents belong to 16 different categories, such as letter, memo, email, and more. The purpose of this dataset is to facilitate document classification tasks using NLP and computer vision techniques.
## Supported Tasks and Leaderboards
- **Document Classification**: This dataset can be used for document classification tasks where the goal is to predict the correct category for each document image. No specific leaderboard is associated with this dataset.
## Languages
The dataset contains documents in English.
## Dataset Structure
### Data Instances
A data instance consists of a TIFF image file representing a scanned document and its corresponding label indicating the document category.
### Data Fields
- `image`: A TIFF image file representing a scanned document.
- `label`: A string representing the category of the document (e.g., "letter", "memo", "email", etc.).
### Data Splits
The dataset is split into two subsets:
- Training set: Contains 200 samples per class, totaling 3,200 samples.
- Validation set: Contains a smaller number of samples per class.
## Dataset Creation
### Curation Rationale
This subset of the RVL-CDIP dataset was created to provide a smaller and more manageable dataset for researchers and practitioners who want to experiment with document classification tasks without the computational overhead of the full dataset.
### Source Data
The dataset is a subset of the [RVL-CDIP dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/), which contains 400,000 grayscale images in 16 classes, with 25,000 images per class.
### Annotations
The dataset labels were derived from the original RVL-CDIP dataset. Each image file is associated with a label indicating its document category.
## Personal and Sensitive Information
The dataset may contain personal or sensitive information, such as names, addresses, phone numbers, or email addresses. Users should take this into consideration when using the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset can be used to develop models for document classification tasks, which can benefit a wide range of applications, such as document management systems, content analysis, and information retrieval.
### Discussion of Biases
The dataset may contain biases due to the limited number of samples per class and the fact that the documents are sourced from different domains. These biases may affect the generalizability of models trained on this dataset.
### Other Known Limitations
As this dataset is a small subset of the RVL-CDIP dataset, it may not be as representative or diverse as the full dataset. Additionally, the dataset only contains English documents, which may limit its applicability to other languages.
## Additional Information
### Licensing
Please refer to the [RVL-CDIP dataset website](https://www.cs.cmu.edu/~aharley/rvl-cdip/) for information on licensing and usage restrictions.
### Citation Information
If you use this dataset, please cite the following paper:
@inproceedings{harley2015evaluation,
title={An evaluation of deep learning techniques for document image classification},
author={Harley, Adam W and Ufkes, Alex and Derpanis, Konstantinos G},
booktitle={2015 13th International Conference on Document Analysis and Recognition (ICDAR)},
pages={991--995},
year={2015},
organization={IEEE}
}
### Contact Information
For questions regarding the dataset, please refer to the [RVL-CDIP dataset website](https://www.cs.cmu.edu/~aharley/rvl-cdip/) for contact information.
### Acknowledgements
This dataset is a subset of the RVL-CDIP dataset created by Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis at the Ryerson Vision Lab (RVL), Ryerson University. The dataset creation was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
|
PORTULAN/parlamento-pt | 2023-05-12T06:34:53.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:pt",
"license:other",
"parlame... | PORTULAN | null | null | null | 2 | 4 | ---
annotations_creators:
- no-annotation
language:
- pt
license:
- other
multilinguality:
- monolingual
pretty_name: ParlamentoPT
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
tags:
- parlamentopt
- parlamento
- parlamento-pt
- albertina-pt*
- albertina-ptpt
- albertina-ptbr
- fill-mask
- bert
- deberta
- portuguese
- encoder
- foundation model
---
# Dataset Card for ParlamentoPT
### Dataset Summary
ParlamentoPT is a **Portuguese** language dataset obtained by collecting publicly available documents containing transcriptions of debates in the Portuguese Parliament.
The data was collected from the Portuguese Parliament portal in accordance with its [open data policy](https://www.parlamento.pt/Cidadania/Paginas/DadosAbertos.aspx).
This dataset was collected with the purpose of creating the [Albertina-PT*](https://huggingface.co/PORTULAN/albertina-ptpt) language model, and it serves as training data for model development.
The development of the model is a collaborative effort between the University of Lisbon and the University of Porto in Portugal.
</br>
# Citation
When using or citing this data set, kindly cite the following [publication](https://arxiv.org/abs/2305.06721):
``` latex
@misc{albertina-pt,
title={Advancing Neural Encoding of Portuguese
with Transformer Albertina PT-*},
author={João Rodrigues and Luís Gomes and João Silva and
António Branco and Rodrigo Santos and
Henrique Lopes Cardoso and Tomás Osório},
year={2023},
eprint={2305.06721},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<br>
# Acknowledgments
The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language,
funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016; research project ALBERTINA - Foundation Encoder Model for Portuguese and AI, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478394/2022; innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização; and LIACC - Laboratory for AI and Computer Science, funded by FCT—Fundação para a Ciência e Tecnologia under the grant FCT/UID/CEC/0027/2020. |
AravindVadlapudi02/UA_speech_high | 2023-05-10T14:45:29.000Z | [
"region:us"
] | AravindVadlapudi02 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': control
'1': pathology
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 768265600
num_examples: 800
- name: test
num_bytes: 4599029948
num_examples: 4789
download_size: 619976569
dataset_size: 5367295548
---
# Dataset Card for "UA_speech_high"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aneeshas/imsdb-500tokendrama-movie-scripts | 2023-05-10T19:37:26.000Z | [
"region:us"
] | aneeshas | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: Drama
dtype: string
splits:
- name: train
num_bytes: 307903
num_examples: 652
download_size: 189402
dataset_size: 307903
---
# Dataset Card for "imsdb-500tokendrama-movie-scripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dspoka/sdg-single | 2023-05-15T05:14:42.000Z | [
"region:us"
] | dspoka | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: iso3
dtype: string
- name: country
dtype: string
- name: goal
dtype: string
- name: target
dtype: string
- name: text
dtype: string
- name: status
dtype: string
- name: sector
dtype: string
- name: response
dtype: string
- name: infotype
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
- name: filename
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: full
num_bytes: 4297968
num_examples: 14219
download_size: 0
dataset_size: 4297968
---
# Dataset Card for "sdg-single"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KakologArchives/KakologArchives | 2023-10-11T01:19:33.000Z | [
"task_categories:text-classification",
"language:ja",
"license:mit",
"region:us"
] | KakologArchives | null | null | null | 2 | 4 | ---
pretty_name: ニコニコ実況 過去ログアーカイブ
license: mit
language:
- ja
task_categories:
- text-classification
---
# Niconico Jikkyo Past Log Archive (ニコニコ実況 過去ログアーカイブ)
The Niconico Jikkyo Past Log Archive is a dataset collecting every past-log comment from [Niconico Jikkyo](https://jk.nicovideo.jp), from the launch of the service to the present day.
In December 2020, Niconico Jikkyo was [relaunched as an official channel inside Niconico Live](https://blog.nicovideo.jp/niconews/143148.html).
With this, the old system in operation since November 2009 was discontinued (effectively ending the service). Support on consumer devices such as torne and BRAVIA ended across the board, and roughly 11 years of past logs, packed with the unfiltered voices of their time, were about to be lost along with it.
Members of 5ch's DTV board therefore launched a plan to archive the 11 years of past logs for every channel before the old Niconico Jikkyo shut down. After many twists and turns, Nekopanda managed to capture about 11 years of past logs for all channels, including radio and BS broadcasts, so the loss of those 11 years of logs into the digital void was averted.
However, because the old API was retired, past logs can no longer be fetched via an API, and since the archive totals roughly 150 GB, finding the range of logs you want to see is no longer as easy as it once was.
Meanwhile, on the new Niconico Jikkyo, which now operates as an official channel inside Niconico Live, timeshifts (the equivalent of past logs in the old service) can only be viewed for up to three weeks, after which the logs become unavailable.
Free members must also reserve a timeshift in advance, so the old convenience has been lost.
We believe that the comments about Japanese TV broadcasts posted to Niconico Jikkyo are historically valuable material that vividly captures the public mood and the spirit of their time.
To preserve all of Niconico Jikkyo's past logs for posterity, this dataset combines every old Niconico Jikkyo past log up to 2020-12-15, as distributed by Nekopanda, with the current day's logs from the new Niconico Jikkyo (including community-based programs), collected every five minutes and merged in continuously.
There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily.
Please feel free to use it as well.
## Dataset Structure
### Builder Config
| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id | string | None | ID of the Niconico Jikkyo channel to fetch past logs from (all channels if omitted) |
| year | int | None | Year of the past logs to fetch (all years if omitted) |
| number_of_files | int | None | Number of past-log files to fetch (all files if omitted) |
### Data Splits
| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample | 1GB | As a sample, fetches every past-log comment posted to TOKYO MX (ID: jk9) during 2022. About 1 GB in size. |
| all | 180GB | Fetches every past-log comment for all channels and all periods. Beware: this is close to 180 GB. |
### Data Fields
| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread | string | Thread ID the comment belongs to |
| no | int64 | Comment number |
| vpos | int64 | Playback position of the comment, counted from the start of the thread (in 1/100 s) |
| date | int64 | UNIX timestamp of the comment's post time |
| date_usec | int64 | Fractional (sub-second) part of the comment's post time |
| user_id | string | User ID (anonymized when the 184 command is specified, and reshuffled about once a week) |
| mail | string | Comment commands (e.g. 184, red naka big; may be omitted) |
| premium | boolean | True if the commenting user is a premium member |
| anonymity | boolean | True if the comment is anonymous |
| content | string | Comment body (note that multi-line comments, such as ASCII art, occasionally occur) |
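As a worked example of the timing fields, the sketch below converts `vpos` (1/100-second units) to seconds and reconstructs the post time from `date` and `date_usec`. The record values are made up, and we assume `date_usec` is in microseconds, as the field name suggests:

```python
from datetime import datetime, timezone

# Hypothetical comment record using the fields described above
comment = {
    "vpos": 1530,        # 1/100-second units from the start of the thread
    "date": 1672531200,  # UNIX timestamp of the post time
    "date_usec": 500000, # sub-second part (assumed microseconds)
}

# vpos is in 1/100 s, so divide by 100 to get seconds
playback_seconds = comment["vpos"] / 100

# Combine date and date_usec into a full timestamp
posted_at = datetime.fromtimestamp(
    comment["date"] + comment["date_usec"] / 1_000_000, tz=timezone.utc
)

print(playback_seconds)        # 15.3
print(posted_at.isoformat())
```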
## Example
```python
from datasets import load_dataset
dataset = load_dataset('KakologArchives/KakologArchives', 'all', channel_id='jk211', year=2023, number_of_files=10)
for data in dataset['train']:
print(data)
```
## Licensing Information
[MIT License](https://opensource.org/license/mit/)
|
Nan-Do/code-search-net-go | 2023-05-15T00:56:15.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"go",
"CodeSearchNet",
"summary",
"region:us"
] | Nan-Do | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 833011518
num_examples: 345890
download_size: 239636894
dataset_size: 833011518
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
tags:
- code
- go
- CodeSearchNet
- summary
pretty_name: Go CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-go"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-go
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Go portion of CodeSearchNet, annotated with a summary column.
The CodeSearchNet dataset includes open-source functions with accompanying comments, collected from GitHub.
The summary is a short description of what the function does.
### Languages
The dataset's comments are in English and the functions are coded in Go
### Data Splits
The train, test, and validation splits are indicated by the `partition` column in the dataset.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful to train LLMs
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This dataset includes a summary column containing a short description of each function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries (some may still be present in the dataset).
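As a sketch of how the fields fit together for instruction-style training data, the snippet below turns a record into an instruction/response pair. The field names come from the schema above, but the record values and the prompt framing are made up for illustration:

```python
# Hypothetical record using the documented fields (values are illustrative only)
record = {
    "func_name": "Sum",
    "code": "func Sum(a, b int) int {\n\treturn a + b\n}",
    "docstring": "Sum returns the sum of a and b.",
    "summary": "Add two integers and return the result.",
    "partition": "train",
}

# One possible instruction/response framing built from summary + code
instruction = f"Write a Go function that does the following: {record['summary']}"
response = record["code"]

print(instruction)
print(response)
```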
### Licensing Information
Apache 2.0 |
scaredmeow/shopee-reviews-tl-stars | 2023-05-15T07:40:20.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:tl",
"license:mpl-2.0",
"reviews",
"shopee",
"doi:10.57967/hf/0656",
"region:us"
] | scaredmeow | null | null | null | 0 | 4 | ---
license: mpl-2.0
task_categories:
- text-classification
language:
- tl
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': 1 star
'1': 2 star
'2': 3 stars
'3': 4 stars
'4': 5 stars
- name: text
dtype: string
tags:
- reviews
- shopee
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [Enhancement to Low Resource Text Classification via Sequential Transfer Learning](#)
- **Leaderboard:**
- **Point of Contact:** [Neil Riego](mailto:neilchristianriego3@gmail.com)
### Dataset Summary
This dataset contains Shopee product reviews written in Tagalog, each labeled with the star rating (1 to 5) given by the reviewer.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Tagalog (TL)
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the test set looks as follows:
```
{
'label': 2,
'text': 'Madaling masira yung sa may sinisintasan nya. Wala rin syang box. Sana mas ginawa pa na matibay para sana sulit yung pagkakabili'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes ("").
- 'label': Corresponds to the score associated with the review (between 1 and 5).
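The quoting convention above is standard CSV escaping, which Python's `csv` module handles out of the box — a minimal sketch with a made-up row:

```python
import csv
import io

# A made-up CSV row where an internal double quote is escaped by doubling it
raw = '"2","Sabi niya ""ok na"" daw ang item"\n'

label, text = next(csv.reader(io.StringIO(raw)))
print(label)  # 2
print(text)   # Sabi niya "ok na" daw ang item
```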
### Data Splits
The Shopee reviews TL stars dataset is constructed by randomly taking 2100 training samples and 450 samples each for testing and validation for each review star from 1 to 5.
In total there are 10500 training samples and 2250 samples each in the validation and test sets.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
scaredmeow/shopee-reviews-tl-binary | 2023-05-19T19:44:57.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:tl",
"license:odc-by",
"reviews",
"shopee",
"doi:10.57967/hf/0657",
"region:us"
] | scaredmeow | null | null | null | 0 | 4 | ---
license: odc-by
task_categories:
- text-classification
language:
- tl
tags:
- reviews
- shopee
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [Enhancement to Low Resource Text Classification via Sequential Transfer Learning](#)
- **Leaderboard:**
- **Point of Contact:** [Neil Riego](mailto:neilchristianriego3@gmail.com)
### Dataset Summary
This dataset contains Shopee product reviews written in Tagalog, each labeled as positive or negative.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the test set looks as follows:
```
{
'label': pos,
'text': 'Huyyy ang gandaaaaaaaaaaa. Grabe sobrang ganda talaga wala ako masabi. Complete orders pa pinadala sa akin. Buti hindi nabasag kahit walang bubble wrap. Okay na lang din para save mother earth and at least hindi nabasag hehe. Oorder ulit ako ang ganda eh'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes ("").
- 'label': Corresponds to the sentiment associated with the review (positive or negative).
### Data Splits
The Shopee reviews TL binary dataset is constructed by randomly taking 14000 training samples and 3000 samples each for testing and validation for each of the two classes, negative and positive.
In total there are 28000 training samples and 6000 samples each in the validation and test sets.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
carlosejimenez/seq2seq-glue | 2023-05-15T03:21:03.000Z | [
"region:us"
] | carlosejimenez | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
- name: orig_idx
dtype: int64
splits:
- name: train
num_bytes: 190089393
num_examples: 949098
- name: validation_cola
num_bytes: 87041
num_examples: 1043
- name: test_cola
num_bytes: 86025
num_examples: 1063
- name: validation_mnli
num_bytes: 2157948
num_examples: 9815
- name: validation_mnli_mm
num_bytes: 2274020
num_examples: 9832
- name: test_mnli
num_bytes: 2162126
num_examples: 9796
- name: test_mnli_mm
num_bytes: 2265807
num_examples: 9847
- name: validation_mrpc
num_bytes: 120267
num_examples: 408
- name: test_mrpc
num_bytes: 499335
num_examples: 1725
- name: validation_qnli
num_bytes: 1554164
num_examples: 5463
- name: test_qnli
num_bytes: 1542446
num_examples: 5463
- name: validation_qqp
num_bytes: 7049694
num_examples: 40430
- name: test_qqp
num_bytes: 67681991
num_examples: 390965
- name: validation_rte
num_bytes: 100393
num_examples: 277
- name: test_rte
num_bytes: 1070053
num_examples: 3000
- name: validation_sst2
num_bytes: 126308
num_examples: 872
- name: test_sst2
num_bytes: 260344
num_examples: 1821
- name: validation_stsb
num_bytes: 262564
num_examples: 1500
- name: test_stsb
num_bytes: 220997
num_examples: 1379
download_size: 0
dataset_size: 279610916
---
# Dataset Card for "seq2seq-glue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
medmac01/qa_morocco_history_v1 | 2023-05-15T16:21:07.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:fr",
"language:en",
"extractive_qa",
"region:us"
] | medmac01 | null | null | null | 2 | 4 | ---
task_categories:
- question-answering
language:
- fr
- en
tags:
- extractive_qa
size_categories:
- 1K<n<10K
---
|
SamaAI/sama-drives-california | 2023-06-14T14:58:49.000Z | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"region:us"
] | SamaAI | null | null | null | 4 | 4 | ---
dataset_info:
features:
- name: fname
dtype: string
- name: path
dtype: string
- name: label
struct:
- name: attributes
struct:
- name: timeofday
dtype: string
- name: weather
dtype: string
- name: labels
list:
- name: attributes
struct:
- name: drivingConditions
dtype: string
- name: laneChange
dtype: string
- name: occluded
dtype: bool
- name: box2d
struct:
- name: x1
dtype: int64
- name: x2
dtype: int64
- name: y1
dtype: int64
- name: y2
dtype: int64
- name: category
dtype: string
- name: id
dtype: int64
- name: manualAttributes
dtype: bool
- name: manualShape
dtype: bool
- name: poly2d
list:
- name: closed
dtype: bool
- name: filled
dtype: bool
- name: vertices
sequence:
sequence: int64
- name: name
dtype: string
- name: img
dtype: image
splits:
- name: train
num_bytes: 1088252764.96
num_examples: 25136
download_size: 1025635407
dataset_size: 1088252764.96
license: cc-by-4.0
size_categories:
- 10K<n<100K
---
# Dataset Card for sama-drives-california

## Dataset Description
- **Homepage:** www.sama.com
- **Point of Contact:** datasets@samasource.org
### Dataset Summary
This is an object detection dataset (bounding boxes and polygons) of **25 136 frames** (848x480 pixels) taken by a dashboard video camera of a car driving in California.
The frames were captured at 1 FPS, and hence the entire footage covers over 7 hours of driving.
All but 110 frames (25 026 frames in total) contain at least one annotated object of interest.
## Dataset Structure
### Data Instances
The dataset is saved according to the `bdd100k` format described [here](https://doc.bdd100k.com/format.html#segmentation-formats) (no affiliation with Sama).
Frames are named according to the original video they are from, along with the sequence index in that video (1-indexed): **videoNumber-frameIndex.jpg** \
(e.g., 099-002.jpg for the second frame of the 99th video)
`label:id`s are used to denote unique objects, such as a specific vehicle, throughout an entire video, but not across videos.
The first digits of a `label:id` denote what video it is from (e.g., the `id` 53002 comes from video 53).
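The naming conventions above can be unpacked mechanically. The helper names below are ours, not part of the dataset, and the label-id helper assumes the last three digits are the per-video object counter, consistent with the examples in this card (id 1001 from video 1, id 53002 from video 53):

```python
def parse_frame_name(fname: str) -> tuple[int, int]:
    """Split a frame name like '099-002.jpg' into (video_number, frame_index)."""
    stem = fname.rsplit(".", 1)[0]
    video, frame = stem.split("-")
    return int(video), int(frame)

def video_of_label_id(label_id: int) -> int:
    """Recover the source video from a label id (e.g. 53002 -> video 53),
    assuming the trailing three digits are the per-video counter."""
    return label_id // 1000

print(parse_frame_name("099-002.jpg"))  # (99, 2)
print(video_of_label_id(53002))         # 53
```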
Frames were taken from videos that were recorded in a continuous sequence without any time gap in between videos. However, some videos were not included \
in the final dataset either because they contained sensitive information or because they were part of a long sequence when the car was parked and facing a scene of no interest.
The labelling format and different classes supported are described in the section Data Fields below.
Sample annotation:
```json
{
"name": "001-019.jpg",
"attributes": {"weather": "Sunny", "timeofday": "Day"},
"labels":
[
{"category": "Drivable Space", "attributes": {"occluded": true}, "manualShape": true, "manualAttributes": true, "id": 1001, "poly2d": [{"vertices": [[369, 296], [370, 276], [389, 277], [432, 278], [494, 279], [504, 266], [563, 262], [590, 270], [656, 271], [705, 276], [776, 270], [847, 274], [847, 337], [847, 419], [766, 408], [681, 402], [626, 400], [550, 393], [507, 391], [426, 390], [321, 387], [242, 394], [206, 402], [170, 402], [135, 399], [72, 405], [29, 413], [0, 418], [0, 259], [66, 259], [91, 267], [154, 265], [126, 280], [145, 288], [188, 284], [155, 265], [187, 265], [225, 263], [309, 260], [301, 271], [345, 272], [370, 276], [369, 296], [306, 300], [225, 300], [226, 312], [309, 334], [416, 353], [552, 373], [635, 375], [669, 365], [666, 343], [654, 338], [542, 313]], "closed": true, "filled": true}], "box2d": {"x1": 0, "y1": 259, "x2": 847, "y2": 419}},
{"category": "Vehicles | Truck", "attributes": {"occluded": true}, "manualShape": true, "manualAttributes": true, "id": 1041, "poly2d": [{"vertices": [[708, 247], [692, 247], [688, 251], [687, 258], [687, 265], [709, 265], [714, 265], [713, 255]], "closed": true, "filled": true}], "box2d": {"x1": 687, "y1": 247, "x2": 714, "y2": 265}},
{"category": "Vehicles | Truck", "attributes": {"occluded": true}, "manualShape": true, "manualAttributes": true, "id": 1043, "poly2d": [{"vertices": [[468, 238], [486, 251], [494, 253], [500, 257], [507, 258], [515, 262], [527, 267], [530, 278], [531, 293], [503, 300], [482, 299], [425, 291], [426, 296], [415, 298], [409, 291], [391, 288], [390, 299], [375, 300], [369, 289], [353, 284], [354, 254], [409, 256], [424, 238]], "closed": true, "filled": true}], "box2d": {"x1": 353, "y1": 238, "x2": 531, "y2": 300}},
{"category": "Vehicles | Car", "attributes": {"occluded": true}, "manualShape": true, "manualAttributes": true, "id": 1044, "poly2d": [{"vertices": [[560, 256], [539, 253], [541, 257], [553, 264], [561, 271], [563, 288], [568, 288], [584, 290], [596, 288], [599, 277], [595, 271], [589, 267], [577, 264], [570, 260]], "closed": true, "filled": true}], "box2d": {"x1": 539, "y1": 253, "x2": 599, "y2": 290}},
{"category": "Vehicles | Car", "attributes": {"occluded": true}, "manualShape": true, "manualAttributes": true, "id": 1045, "poly2d": [{"vertices": [[507, 246], [499, 247], [495, 248], [506, 255], [523, 262], [526, 270], [532, 281], [530, 295], [547, 296], [565, 294], [562, 271], [551, 261], [537, 254], [519, 251]], "closed": true, "filled": true}], "box2d": {"x1": 495, "y1": 246, "x2": 565, "y2": 296}},
{"category": "Vehicles | Car", "attributes": {"occluded": false, "drivingConditions": "Light Traffic"}, "manualShape": true, "manualAttributes": true, "id": 1046, "poly2d": [{"vertices": [[30, 249], [14, 249], [9, 256], [8, 262], [10, 271], [13, 271], [13, 269], [24, 269], [24, 271], [30, 271], [32, 268], [36, 268], [38, 271], [41, 269], [41, 263], [40, 256], [37, 252], [34, 250]], "closed": true, "filled": true}], "box2d": {"x1": 8, "y1": 249, "x2": 41, "y2": 271}}
]
}
```
### Data Fields
Each frame contains a label for `timeofday` and `weather`. `Dusk`, `Dawn` and `Twilight` all fall in the same `timeofday` category.
| timeofday | weather |
|:--------------------|:--------|
| Day | Sunny |
| Night | Cloudy |
| Dusk/Dawn/Twilight | Rainy |
| | Snowy |
| | Other |
Bounding boxes are provided for all objects as `box2d`.
`Vehicles`, `People` and `Areas` are also identified with closed `Polygons` of the type `poly2d`.
`Lanes` are available as `Lines`, that are denoted as open `Polygons` of the type `poly2d`.
`Traffic Lights` and `Traffic Signs` are only available as `Bounding Boxes`.
| Vehicles (Polygons) | People (Polygons) | Areas (Polygons) | Lanes (Lines) | Traffic (Bounding Boxes) |
|:----------------------|:----------------------|:-------------------|:------------------|:--------------------------|
| Car | Pedestrians | Drivable Space | Current Lane | Traffic Lights |
| Truck | | | Alternate Lane | Traffic Signs |
| Van | | | Opposite Lane | |
| SUV | | | | |
| Bus | | | | |
| Other LV | | | | |
| Bicycles | | | | |
| Motorbikes | | | | |
The objects above can each be `occluded` (true) or not (false).
`Vehicles` also have a label called `drivingConditions` that denotes the amount of vehicle traffic they are facing.
Note that this label is not always present.
| drivingConditions (for Vehicles) |
|:------------------------------------|
| Light Traffic |
| Moderate Traffic |
| Heavy Traffic |
`Lanes` also contain a laneChange label. Note that this label is not always present.
| laneChange (for Lanes) |
|:---------------------------|
| Current |
| Alternate |
| Opposite |
### Visualize Dataset
To visualize the dataset on the [FiftyOne](https://docs.voxel51.com/) app, download and unzip the following [zip file](https://sama-documentation-assets.s3.amazonaws.com/sama-drives-california/zipped/sama-drives-california.zip) (2.3GB).
```python
import fiftyone as fo
# <dataset_dir>/
# labels.json
# data/
# 001-001.jpg
# 001-002.jpg
# ...
name = "sama-drives-california"
dataset_dir = "/path/to/dataset"
# Create the dataset
dataset = fo.Dataset.from_dir(
dataset_dir=dataset_dir,
dataset_type=fo.types.BDDDataset,
name=name
)
```
### Dataset in Video Format
This dataset is also available as a video dataset with [FiftyOne](https://docs.voxel51.com/) style label format. You can download a zipped file of the dataset (videos and fiftyone labels) [here](https://sama-documentation-assets.s3.amazonaws.com/sama-drives-california/zipped/sama-drives-california-videos.zip) (1.1GB).
```python
import fiftyone as fo
# <video_dataset_dir>/
# frames.json
# metadata.json
# samples.json
# data/
# 001.mp4
# 002.mp4
# ...
name = "sama-drives-california-videos"
dataset_dir = "/path/to/videos-dataset"
# Create the dataset
dataset = fo.Dataset.from_dir(
dataset_dir=dataset_dir,
dataset_type=fo.types.FiftyOneDataset,
name=name
)
```
### Annotations
The dataset was annotated by a team of Sama Associates.
They were instructed to annotate all objects of the classes described in the section *Data Fields* above with the following details:
* Ignore objects under 10 pixels in width or height.
* Annotate with a pixel tolerance of 2 pixels.
* For motorized vehicles, include the mirrors but do not include the antennas.
* For bicycles, include the cyclist.
* For motorbikes, include the rider.
* For traffic lights, place the bounding box around the light fixture but not the pole.
* For traffic signs, do not include the pole or structure.
### Personal and Sensitive Information
All personal and sensitive information has been removed. Vehicle license plates and faces are blurred.
### Other Known Limitations
Objects of interest that were smaller than 10 pixels in width or height were not annotated.
### Licensing Information
(CC BY 4.0) [https://creativecommons.org/licenses/by/4.0/] |
ai4bharat/Bhasha-Abhijnaanam | 2023-06-22T08:01:44.000Z | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:original",
"language:asm",
"language:ben",
"lan... | ai4bharat | null | null | null | 1 | 4 | ---
license: cc0-1.0
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- found
- other
language:
- asm
- ben
- brx
- guj
- hin
- kan
- kas
- kok
- mai
- mal
- mar
- mni
- nep
- ori
- pan
- san
- sat
- sid
- snd
- tam
- tel
- urd
multilinguality:
- multilingual
pretty_name: Bhasha-Abhijnaanam
size_categories: []
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for Bhasha-Abhijnaanam
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/AI4Bharat/IndicLID
- **Paper:** [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Bhasha-Abhijnaanam is a language identification test set for native-script as well as Romanized text which spans 22 Indic languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
| <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> |
| -------------- | -------------- | -------------- | --------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin) | Maithili (mai) | Nepali (nep) | Sanskrit (san) | Tamil (tam) |
| Bengali (ben) | Kannada (kan) | Malayalam (mal)| Oriya (ori) | Santali (sat) | Telugu (tel) |
| Bodo(brx) | Kashmiri (kas) | Manipuri (mni) | Punjabi (pan) | Sindhi (snd) | Urdu (urd) |
| Gujarati (guj) | Konkani (kok) | Marathi (mar) | | | |
## Dataset Structure
### Data Instances
```
A random sample from Hindi (hin) Test dataset.
{
"unique_identifier": "hin1",
"native sentence": "",
"romanized sentence": "",
"language": "Hindi",
"script": "Devanagari",
"source": "Dakshina",
}
```
### Data Fields
- `unique_identifier` (string): 3-letter language code followed by a unique number in Test set.
- `native sentence` (string): A sentence in Indic language.
- `romanized sentence` (string): Transliteration of native sentence in English (Romanized sentence).
- `language` (string): Language of native sentence.
- `script` (string): Script in which native sentence is written.
- `source` (string): Source of the data.
For created data sources, depending on the destination/sampling method of a pair in a language, it will be one of:
- Dakshina Dataset
- Flores-200
- Manually Romanized
- Manually generated
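The `unique_identifier` format described above (a 3-letter language code followed by a running number, e.g. `hin1`) can be split with a small helper — a sketch, not part of the official tooling:

```python
import re

def split_identifier(uid: str) -> tuple[str, int]:
    """Split an identifier like 'hin1' into its language code and number."""
    m = re.fullmatch(r"([a-z]{3})(\d+)", uid)
    if m is None:
        raise ValueError(f"unexpected identifier: {uid!r}")
    return m.group(1), int(m.group(2))

print(split_identifier("hin1"))  # ('hin', 1)
```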
### Data Splits
| Subset | asm | ben | brx | guj | hin | kan | kas (Perso-Arabic) | kas (Devanagari) | kok | mai | mal | mni (Bengali) | mni (Meetei Mayek) | mar | nep | ori | pan | san | sid | tam | tel | urd |
|:------:|:---:|:---:|:---:|:---:|:---:|:---:|:------------------:|:----------------:|:---:|:---:|:---:|:-------------:|:------------------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Native | 1012 | 5606 | 1500 | 5797 | 5617 | 5859 | 2511 | 1012 | 1500 | 2512 | 5628 | 1012 | 1500 | 5611 | 2512 | 1012 | 5776 | 2510 | 2512 | 5893 | 5779 | 5751 | 6883 |
| Romanized | 512 | 4595 | 433 | 4785 | 4606 | 4848 | 450 | 0 | 444 | 439 | 4617 | 0 | 442 | 4603 | 423 | 512 | 4765 | 448 | 0 | 4881 | 4767 | 4741 | 4371 |
## Dataset Creation
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the source language producers?
[More Information Needed]
### Annotations
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the annotators?
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
This data is released under the following licensing scheme:
- Manually collected data: Released under CC0 license.
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of manually collected data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Aksharantar</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
```
@misc{madhani2023bhashaabhijnaanam,
title={Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages},
author={Yash Madhani and Mitesh M. Khapra and Anoop Kunchukuttan},
year={2023},
eprint={2305.15814},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
---
|
CarlosKidman/test-cases | 2023-05-17T20:20:41.000Z | [
"size_categories:n<1K",
"language:en",
"license:mit",
"testing",
"region:us"
] | CarlosKidman | null | null | null | 0 | 4 | ---
license: mit
language:
- en
tags:
- testing
size_categories:
- n<1K
---
# Functional Test Cases
This is a _very_ small list of functional test cases that a team of software testers (QA) created for an example mobile app called Boop.
## Dataset
* Name: `Boop Test Cases.csv`
* Number of Rows: `136`
* Columns: `11`
* `Test ID` (int)
* `Summary` (string)
* `Idea` (string)
* `Preconditions` (string)
* `Steps to reproduce` (string)
* `Expected Result` (string)
* `Actual Result` (string)
* `Pass/Fail` (string)
* `Bug #` (string)
* `Author` (string)
* `Area` (string)
> 💡 There are missing values. For example, not every test case had a related Bug.
## Use Cases
Two common problems in Software Testing are:
* Duplicate test cases (and bug reports)
* Assigning issues to the correct team quickly (from internal sources, Customer or Tech Support, etc)
This dataset is probably too small to create an "Auto-Assigner" tool -- especially because almost half the tests are focused on the `Account` Area.
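The duplicate-test-case problem, however, lends itself well to an embedding similarity check. A minimal sketch (the toy vectors below stand in for real sentence embeddings of the `Summary` column, and the 0.9 threshold is an arbitrary assumption):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_near_duplicates(new_vec, existing, threshold=0.9):
    """Return (test_id, score) pairs above the threshold, best match first."""
    hits = [(tid, cosine_similarity(new_vec, vec)) for tid, vec in existing.items()]
    return sorted((h for h in hits if h[1] >= threshold), key=lambda h: -h[1])

# Toy vectors standing in for embeddings of two existing test-case summaries.
existing_cases = {101: [0.9, 0.1, 0.0], 102: [0.0, 1.0, 0.0]}
new_case = [0.95, 0.05, 0.0]
print(find_near_duplicates(new_case, existing_cases))
```

A real pipeline would embed every row's `Summary` once, then flag any incoming case whose best match clears the threshold.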
However, with embeddings, we could see if a new Test Case already exists by checking similarity 🤔 |
cakiki/stack-smol-xxl | 2023-06-06T11:37:36.000Z | [
"language:code",
"license:other",
"region:us"
] | cakiki | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: int64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: int64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: int64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 78577965159
num_examples: 11658586
download_size: 28807934580
dataset_size: 78577965159
license: other
language:
- code
---
# Dataset Card for "stack-smol-xxl"
This is a subset of the [deduplicated Stack dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup)
It was generated like so:
```python
from datasets import load_dataset, Dataset
languages = ["css", "prolog", "c", "fortran", "solidity", "kotlin", "literate-agda", "julia", "java-server-pages",
"isabelle", "idris", "lean", "powershell", "go", "erlang", "f-sharp", "ada", "pascal", "perl", "r", "protocol-buffer",
"cmake", "sas", "ruby", "rust", "rmarkdown", "c-sharp", "smalltalk", "haskell", "maple", "mathematica", "ocaml",
"makefile", "lua", "literate-coffeescript", "literate-haskell", "restructuredtext", "racket", "standard-ml",
"systemverilog", "tex", "awk", "assembly", "alloy", "agda", "emacs-lisp", "dart", "cuda", "bluespec", "augeas", "batchfile",
"tcsh", "stan", "scala", "tcl", "stata", "applescript", "shell", "clojure", "scheme", "antlr", "sparql", "sql",
"glsl", "elm", "dockerfile", "cpp", "coffeescript", "common-lisp", "elixir", "groovy", "html", "java", "javascript",
"markdown", "php", "python", "typescript", "verilog", "visual-basic", "vhdl", "thrift", "matlab", "yacc", "zig", "xslt", "json", "yaml"]
def dset_gen():
for language in languages:
dset = load_dataset("bigcode/the-stack-dedup", data_dir=f"data/{language}", streaming=True, split="train")
sample = dset.take(250_000)
for row in sample:
yield row
dset = Dataset.from_generator(dset_gen)
```
## Dataset Structure
```
num_examples: 11658586
download_size: 28807934580
dataset_size: 78577965159
```
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide some metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first (in alphabetical order) of these repositories is shown for simplicity.
### Data Fields
- `content` (string): the content of the file.
- `size` (integer): size of the uncompressed file.
- `lang` (string): the programming language.
- `ext` (string): file extension
- `avg_line_length` (float): the average line-length of the file.
- `max_line_length` (integer): the maximum line-length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetical or numerical characters.
- `hexsha` (string): unique git hash of file
- `max_{stars|forks|issues}_repo_path` (string): path to file in repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_name` (string): name of repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of repository head
- `max_{stars|forks|issues}_repo_licenses` (string): licenses in repository
- `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` in repository
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event
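The derived statistics (`avg_line_length`, `max_line_length`, `alphanum_fraction`) can be recomputed from `content` alone. A hedged sketch follows; the exact edge-case rules of the original Stack pipeline (empty files, trailing newlines) are an assumption here, not verified against its source:

```python
def content_stats(content: str) -> dict:
    """Recompute per-file statistics analogous to this card's derived fields.

    Illustrative only: how the original pipeline treats empty files or
    trailing newlines may differ.
    """
    lines = content.splitlines() or [""]
    lengths = [len(line) for line in lines]
    alnum = sum(ch.isalnum() for ch in content)
    return {
        "size": len(content),
        "avg_line_length": sum(lengths) / len(lengths),
        "max_line_length": max(lengths),
        "alphanum_fraction": alnum / len(content) if content else 0.0,
    }

print(content_stats("def f(x):\n    return x + 1\n"))
```

This can be used to spot-check downloaded rows against the stored fields, bearing in mind the conventions may not match exactly.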
|
Nan-Do/instructional_code-search-net-javacript | 2023-05-20T05:26:15.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"JavaScript",
"Code Generation",
"Instruction Response",
"region:us"
] | Nan-Do | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 126970947
num_examples: 121323
download_size: 49942966
dataset_size: 126970947
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- JavaScript
- Code Generation
- Instruction Response
pretty_name: Instructional JavaScript Dataset
---
# Dataset Card for "instructional_code-search-net-javacript"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-javascript
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for JavaScript.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-javascript
### Annotations
The dataset includes an instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
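As a hedged illustration of the template-based generation described above (the template strings below are invented for this sketch and are not the ones actually used; see the linked notebook for the real process):

```python
import random

# Hypothetical templates for the two task directions.
CODE_TO_DESC = [
    "Explain what the following JavaScript code does:\n{code}",
    "Describe the purpose of this JavaScript snippet:\n{code}",
]
DESC_TO_CODE = [
    "Write JavaScript code that {summary}",
    "Implement a JavaScript function that {summary}",
]

def make_pair(code, summary, rng=random):
    """Build one (INSTRUCTION, RESPONSE) row from a code/summary pair."""
    if rng.random() < 0.5:
        # Task 1: given code, ask for a description.
        return rng.choice(CODE_TO_DESC).format(code=code), summary
    # Task 2: given a description, ask for code.
    return rng.choice(DESC_TO_CODE).format(summary=summary), code

print(make_pair("const add = (a, b) => a + b;", "adds two numbers", random.Random(0)))
```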
### Licensing Information
Apache 2.0 |
Nan-Do/instructional_code-search-net-php | 2023-05-20T05:20:07.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"PHP",
"Code Generation",
"Instruction Response",
"region:us"
] | Nan-Do | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 448756286
num_examples: 536632
download_size: 158708948
dataset_size: 448756286
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- PHP
- Code Generation
- Instruction Response
pretty_name: Instructional PHP Dataset
---
# Dataset Card for "instructional_code-search-net-php"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-php
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for PHP.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-php
### Annotations
The dataset includes an instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
### Licensing Information
Apache 2.0
|
asoria/mnist | 2023-05-19T15:57:56.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit",
"region:us"
] | asoria | The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000
images per class. There are 60,000 training images and 10,000 test images. | @article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
} | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-nist
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: mnist
pretty_name: MNIST
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: '5'
6: '6'
7: '7'
8: '8'
9: '9'
config_name: mnist
splits:
- name: test
num_bytes: 2916440
num_examples: 10000
- name: train
num_bytes: 17470848
num_examples: 60000
download_size: 11594722
dataset_size: 20387288
---
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://yann.lecun.com/exdb/mnist/
- **Repository:**
- **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the image were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
'label': 5
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `label`: an integer between 0 and 9 representing the digit.
### Data Splits
The data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
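A minimal sketch of the center-of-mass centering step (illustrative assumptions: integer pixel shifts only, grey levels as a 2-D list, and the anti-aliased resize to 20x20 is omitted):

```python
def center_by_mass(glyph, size=28):
    """Embed a grey-level glyph (2-D list) in a size x size field, translated
    so its pixel center of mass lands at the field's center.

    Illustrative only: uses integer shifts and skips the anti-aliased
    resize that precedes this step in the MNIST pipeline.
    """
    h, w = len(glyph), len(glyph[0])
    total = sum(map(sum, glyph))
    # Weighted row/column center of mass of the grey levels.
    cy = sum(y * glyph[y][x] for y in range(h) for x in range(w)) / total
    cx = sum(x * glyph[y][x] for y in range(h) for x in range(w)) / total
    # Shift so the center of mass lands at the field center.
    top = round((size - 1) / 2 - cy)
    left = round((size - 1) / 2 - cx)
    field = [[0] * size for _ in range(size)]
    for y in range(h):
        for x in range(w):
            field[top + y][left + x] = glyph[y][x]
    return field
```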
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT Licence
### Citation Information
```
@article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
}
```
### Contributions
Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset. |
voidful/MuSiQue | 2023-05-20T16:43:22.000Z | [
"region:us"
] | voidful | null | null | null | 0 | 4 | Entry not found |
pythainlp/han-corf-dataset-v1.0 | 2023-05-24T08:52:48.000Z | [
"size_categories:1K<n<10K",
"language:th",
"license:cc-by-3.0",
"coreference-resolution",
"coreference",
"anaphora",
"region:us"
] | pythainlp | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: clusters
sequence:
sequence:
sequence: int64
- name: clusters_strings
sequence:
sequence: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1185411
num_examples: 1039
- name: test
num_bytes: 200945
num_examples: 149
- name: validation
num_bytes: 167328
num_examples: 150
download_size: 618416
dataset_size: 1553684
license: cc-by-3.0
tags:
- coreference-resolution
- coreference
- anaphora
language:
- th
size_categories:
- 1K<n<10K
---
# 🪿 Han-Coref: Thai Coreference resolution by PyThaiNLP (Dataset)
This project aims to create a Thai coreference resolution system.
This project is developed by 🪿 Wannaphong Phatthiyaphaibun.
**Current 🪿 Han-Coref version**: 1.0
- GitHub: [pythainlp/han-coref](https://github.com/pythainlp/han-coref)
- Model: [pythainlp/han-coref-v1.0](https://huggingface.co/pythainlp/han-coref-v1.0)
- Dataset: [pythainlp/han-corf-dataset-v1.0](https://huggingface.co/datasets/pythainlp/han-corf-dataset-v1.0)
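A hedged sketch of decoding the cluster annotations. It assumes each mention in `clusters` is a `[start, end)` pair of character offsets into `text`; verify that convention against the `clusters_strings` column on a real sample before relying on it (token-based or inclusive offsets are equally plausible):

```python
def decode_clusters(text, clusters):
    """Turn span-based coreference clusters into surface strings.

    Assumption (unverified): each mention is a [start, end) pair of
    character offsets into `text`. Cross-check against `clusters_strings`.
    """
    return [[text[start:end] for start, end in cluster] for cluster in clusters]

# Invented English sample for illustration; the dataset itself is Thai.
sample_text = "Ann met Bob. She greeted him."
sample_clusters = [[[0, 3], [13, 16]], [[8, 11], [25, 28]]]
print(decode_clusters(sample_text, sample_clusters))
```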
## Cite as
> Wannaphong Phatthiyaphaibun, & Peerat Limkonchotiwat. (2023). Han-Coref: Thai Coreference resolution by PyThaiNLP. https://doi.org/10.5281/zenodo.7965488
or BibTeX entry:
``` bib
@misc{wannaphong_phatthiyaphaibun_2023_7965488,
author = {Wannaphong Phatthiyaphaibun and
Peerat Limkonchotiwat},
title = {{Han-Coref: Thai Coreference resolution by
PyThaiNLP}},
month = may,
year = 2023,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.7965488},
url = {https://doi.org/10.5281/zenodo.7965488}
}
```
## License
- All source code use [Apache License Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
- The Dataset use [Creative Commons Attribution 3.0 Unported License](https://creativecommons.org/licenses/by/3.0/).
This project is a part of [🪿 PyThaiNLP project](https://github.com/PyThaiNLP/).
We build Thai NLP.
PyThaiNLP |
Dzeniks/BBC-IDC-article | 2023-05-21T21:11:56.000Z | [
"region:us"
] | Dzeniks | null | null | null | 0 | 4 | Entry not found |
zirui3/webMedQA-instructions | 2023-05-22T10:39:21.000Z | [
"license:cc-by-4.0",
"region:us"
] | zirui3 | null | null | null | 1 | 4 | ---
license: cc-by-4.0
---
# summary
A Chinese medical question answering instructions dataset based on `webMedQA`
# Reference
[1] Applying deep matching networks to Chinese medical question answering: A study and a dataset |
wyxu/dataset_copied | 2023-05-25T07:45:47.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | wyxu | The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images
per class. There are 50000 training images and 10000 test images. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | null | 0 | 4 | ---
task_categories:
- image-classification
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A copied data set from CIFAR10 as a demonstration
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
yuval6967/OIG-small-chip2_deduplicated | 2023-05-24T11:37:16.000Z | [
"region:us"
] | yuval6967 | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: user
dtype: string
- name: chip2
dtype: string
splits:
- name: train
num_bytes: 73795170.04573706
num_examples: 188892
download_size: 47456241
dataset_size: 73795170.04573706
---
# Dataset Card for "OIG-small-chip2_deduplicated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ccmusic-database/acapella_eval | 2023-10-03T17:17:15.000Z | [
"task_categories:audio-classification",
"task_categories:table-question-answering",
"task_categories:summarization",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | This database contains 6 Mandarin song segments sung by 22 singers, totaling 132 audio clips.
Each segment consists of a verse and a chorus. Four judges evaluate the singing from nine aspects
which are pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and
overall performance on a 10-point scale. The scores are recorded in a sheet. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | null | 1 | 4 | ---
license: mit
task_categories:
- audio-classification
- table-question-answering
- summarization
language:
- zh
- en
tags:
- music
- art
pretty_name: Acapella Evaluation Dataset
size_categories:
- n<1K
---
# Dataset Card for Acapella Evaluation Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/acapella_evaluation>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains 6 Mandarin song segments sung by 22 singers, totaling 132 audio clips. Each segment consists of a verse and a chorus. Four judges evaluate the singing from nine aspects which are pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance on a 10-point scale. The scores are recorded in a sheet.
### Supported Tasks and Leaderboards
Acapella evaluation/scoring
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.wav & .csv
### Data Fields
song, singer id, pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance
### Data Splits
song1-6
## Dataset Creation
### Curation Rationale
Lack of a training dataset for an acapella scoring system
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students and judges from CCMUSIC
### Annotations
#### Annotation process
6 Mandarin song segments were sung by 22 singers, totaling 132 audio clips. Each segment consists of a verse and a chorus. Four judges evaluate the singing from nine aspects which are pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control and overall performance on a 10-point scale. The scores are recorded in a sheet.
#### Who are the annotators?
Judges from CCMUSIC
### Personal and Sensitive Information
Singers' and judges' names are hidden
## Considerations for Using the Data
### Social Impact of Dataset
Providing a training dataset for an acapella scoring system may improve the development of related apps
### Discussion of Biases
Only for Mandarin songs
### Other Known Limitations
No starting point has been marked for the vocal
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
[Li, R.; Zhang, M. Singing-Voice Timbre Evaluations Based on Transfer Learning. Appl. Sci. 2022, 12, 9931. https://doi.org/10.3390/app12199931](https://www.mdpi.com/2076-3417/12/19/9931)
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a training dataset for acapella scoring system |
james-burton/news_channel_ordinal | 2023-05-25T09:29:59.000Z | [
"region:us"
] | james-burton | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: ' n_tokens_content'
dtype: float64
- name: ' n_unique_tokens'
dtype: float64
- name: ' n_non_stop_words'
dtype: float64
- name: ' n_non_stop_unique_tokens'
dtype: float64
- name: ' num_hrefs'
dtype: float64
- name: ' num_self_hrefs'
dtype: float64
- name: ' num_imgs'
dtype: float64
- name: ' num_videos'
dtype: float64
- name: ' average_token_length'
dtype: float64
- name: ' num_keywords'
dtype: float64
- name: ' global_subjectivity'
dtype: float64
- name: ' global_sentiment_polarity'
dtype: float64
- name: ' global_rate_positive_words'
dtype: float64
- name: ' global_rate_negative_words'
dtype: float64
- name: ' rate_positive_words'
dtype: float64
- name: ' rate_negative_words'
dtype: float64
- name: article_title
dtype: string
- name: channel
dtype: int64
splits:
- name: train
num_bytes: 3354492
num_examples: 17241
- name: validation
num_bytes: 591868
num_examples: 3043
- name: test
num_bytes: 987135
num_examples: 5071
download_size: 3376135
dataset_size: 4933495
---
# Dataset Card for "news_channel_ordinal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Thaweewat/codegen-th | 2023-05-25T15:06:44.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | Thaweewat | null | null | null | 0 | 4 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- th
tags:
- instruction-finetuning
size_categories:
- 1K<n<10K
---
# Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on the 4.5K codegen instruction dataset [GPTeacher](https://github.com/teknium1/GPTeacher)
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
Mutonix/RefGPT-Code-cr | 2023-06-01T09:10:58.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"arxiv:2305.14994",
"region:us"
] | Mutonix | null | null | null | 6 | 4 | ---
license: apache-2.0
dataset_info:
features:
- name: dialogue
dtype: string
- name: reference
dtype: string
- name: language
dtype: string
- name: type
dtype: string
splits:
- name: en
num_bytes: 165025559.5254741
num_examples: 14119
- name: zh
num_bytes: 157858797.9941188
num_examples: 15288
download_size: 136112295
dataset_size: 322884357.5195929
task_categories:
- conversational
language:
- zh
- en
arxiv: https://arxiv.org/abs/2305.14994
size_categories:
- 10K<n<100K
---
# Dataset Card for RefGPT-Code-cr
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/ziliwangnlp/RefGPT](https://github.com/ziliwangnlp/RefGPT)
- **Paper:** [https://arxiv.org/abs/2305.14994](https://arxiv.org/abs/2305.14994)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<p align="center">
<a href="https://arxiv.org/abs/2305.14994"><b>[Paper] RefGPT</b></a> |
<a href="https://github.com/ziliwangnlp/RefGPT"><b>[Github] RefGPT</b></a>
</p>
RefGPT-Code is a dataset containing 76k multi-turn dialogues about programming, with 37k in English and 39k in Chinese, covering most aspects of code usage scenarios and multiple programming languages. Both the English and Chinese versions use the public GitHub dataset on Google BigQuery, with no overlap between the two languages. RefGPT-Code derives various ways of leveraging program code as the reference to enable different scenarios. We consider three perspectives in RefGPT-Code: code discussion, code creation and bug fixing.
**RefGPT-Code-cr** is the "code creation" subset.
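Each record carries the four fields listed in the `dataset_info` above (`dialogue`, `reference`, `language`, `type`). A minimal, self-contained sketch of filtering records by the `language` field — the records below are invented placeholders, not actual dataset content:

```python
# Placeholder records following the dataset_info schema above
# (dialogue, reference, language, type); the contents are invented.
records = [
    {"dialogue": "Human: write an add function. Assistant: ...",
     "reference": "def add(a, b):\n    return a + b",
     "language": "en", "type": "code creation"},
    {"dialogue": "Human: ... Assistant: ...",
     "reference": "class Foo:\n    pass",
     "language": "zh", "type": "code creation"},
]

def by_language(rows, lang):
    """Keep only the dialogues written in the given language."""
    return [r for r in rows if r["language"] == lang]

en_rows = by_language(records, "en")
print(len(en_rows))  # 1
```

The same predicate applies unchanged after `load_dataset("Mutonix/RefGPT-Code-cr")`, since the hosted `en` and `zh` splits share this schema.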
### Supported Tasks and Leaderboards
Chatbot instruction finetuning
### Languages
Chinese, English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Please pay attention that RefGPT Datasets, including RefGPT-Fact and RefGPT-Code, have not undergone manual verification, and as such, their security cannot be strictly guaranteed. Users should be aware that they are responsible for the results generated using this data.
### Discussion of Biases
As the datasets RefGPT-Fact and RefGPT-Code are collected using references like Wikipedia and GitHub repositories, it cannot be avoided that a reference itself may contain factual errors, typos, or — in the case of GitHub repositories — bugs and malicious code. The datasets may also reflect the biases of the selected references and the GPT-3.5/GPT-4 models.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{yang2023refgpt,
title={RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs},
author={Dongjie Yang and Ruifeng Yuan and YuanTao Fan and YiFei Yang and Zili Wang and Shusen Wang and Hai Zhao},
year={2023},
eprint={2305.14994},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[More Information Needed] |
anzorq/hf-spaces-descriptions-embeddings | 2023-05-26T13:33:58.000Z | [
"license:mit",
"region:us"
] | anzorq | null | null | null | 6 | 4 | ---
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: description
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 94758018
num_examples: 29718
download_size: 78891306
dataset_size: 94758018
---
# Hugging Face Spaces Descriptions and Embeddings Dataset
I parsed all the available public 🤗 spaces as of May 22, 2023, generated concise descriptions of their functionality, and created embeddings for them.
The descriptions were generated using various LLMs from each space's app file (README.md -> app_file). The embeddings were created using the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) SentenceTransformer model.
The dataset comprises approximately 30,000 spaces that meet specific criteria: having more than 40 lines of code and over 1000 characters in the app file.
The descriptions provide an overview of the spaces and their features.
## Dataset Details
- **Name**: HF Spaces Descriptions and Embeddings
- **Creator**: [anzorq](https://huggingface.co/anzorq)
- **License**: MIT
## Dataset Usage
You can use this dataset for various natural language processing (NLP) tasks such as semantic search, clustering, etc.
## Loading the Dataset
You can load the dataset using the datasets library:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("anzorq/hf-spaces-descriptions-embeddings")
# Access the train split (the only split in this dataset)
train_split = dataset['train']
```
## Semantic Search Example
Performing a semantic search using the dataset's embeddings:
```python
import torch
from sentence_transformers import SentenceTransformer
from datasets import load_dataset
import numpy as np
# Load the dataset
dataset = load_dataset("anzorq/hf-spaces-descriptions-embeddings")
# Load the SentenceTransformer model
model = SentenceTransformer('all-MiniLM-L6-v2')
# Example query
query = "Removing background from images"
# Encode the query
query_embedding = model.encode([query], convert_to_tensor=True)
# Get the space descriptions and embeddings
descriptions = dataset['train']['description']
embeddings = np.array(dataset['train']['embedding'])
# Calculate cosine similarity
cosine_scores = torch.nn.functional.cosine_similarity(query_embedding, torch.tensor(embeddings, dtype=torch.float32))
# Get the top-k results
top_k = torch.topk(cosine_scores, k=5)
# Print the top-k results
print("Query:", query)
for idx in top_k.indices.tolist():
    print("Space ID:", dataset['train']['id'][idx])
    print("Description:", descriptions[idx])
    print("Score:", cosine_scores[idx].item())
```
## License
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
|
rvashurin/wikidata_simplequestions | 2023-05-29T14:31:23.000Z | [
"region:us"
] | rvashurin | HuggingFace wrapper for https://github.com/askplatypus/wikidata-simplequestions dataset
Simplequestions dataset based on Wikidata. | null | null | 1 | 4 | # Wikidata Simplequestions
Hugging Face Datasets wrapper for the Wikidata-SimpleQuestions dataset
### Usage
```bash
git clone git@github.com:skoltech-nlp/wikidata-simplequestions-hf.git wikidata_simplequestions
```
```python3
from datasets import load_dataset

load_dataset('../wikidata_simplequestions', 'answerable_en', cache_dir='/YOUR_PATH_TO_CACHE/', ignore_verifications=True)
```
|
potsawee/xsum_eng2thai | 2023-09-22T08:47:07.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:th",
"language:en",
"license:cc-by-4.0",
"region:us"
] | potsawee | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 518590635
num_examples: 204045
- name: validation
num_bytes: 28478150
num_examples: 11332
- name: test
num_bytes: 28953771
num_examples: 11334
download_size: 349745164
dataset_size: 576022556
license: cc-by-4.0
task_categories:
- summarization
language:
- th
- en
source_data:
- xsum
size_categories:
- 100K<n<1M
---
# Dataset Card for "xsum_eng2thai 🇬🇧🇹🇭"
- [Update 21/09/2023] [xsum_th](https://huggingface.co/datasets/potsawee/xsum_thai) 🇹🇭 is available. It's better than this dataset, and can be used for both Thai2Thai and Cross-Summarization.
- The input documents are also translated using NLLB-200-3.3B
- The target summaries (translated to Thai) are slightly different from this dataset in that `no_repeat_ngram_size=6, repetition_penalty=1.2` were used in translation to mitigate the repetition problem observed in this dataset.
- This dataset is based on [XSum](https://huggingface.co/datasets/xsum).
- The summaries were translated from English (as in the original XSum) to Thai using Meta's [NLLB-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B).
- The dataset is intended for Cross-Lingual Summarization (English Document -> Thai Summary).
### Data Fields
- `id`: BBC ID of the article.
- `document`: a string containing the body of the news article
- `summary`: a string containing a *translated* summary of the article.
## Data Structure
```
{
"id": "29750031",
"document": "news article in English",
"summary": "summary in Thai"
}
```
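For training a cross-lingual summarizer, records of this shape map naturally to (source, target) pairs. A sketch with a placeholder record — the field names follow the Data Structure above, but this particular mapping is just one possible choice, not a prescribed format:

```python
# Placeholder record shaped like the Data Structure example above.
records = [
    {"id": "29750031",
     "document": "An English news article about a local football club ...",
     "summary": "สรุปข่าวภาษาไทย ..."},
]

def to_seq2seq(example):
    """Map one record to a (source, target) pair for an encoder-decoder model."""
    return {"source": example["document"], "target": example["summary"]}

pairs = [to_seq2seq(r) for r in records]
print(pairs[0]["source"][:10])  # An English
```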
### Data Splits
train/validation/test = 204045/11332/11334 |
TrainingDataPro/low_quality_webcam_video_attacks | 2023-09-14T16:48:24.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] | TrainingDataPro | The dataset includes live-recorded Anti-Spoofing videos from around the world,
captured via low-quality webcams with resolutions like QVGA, QQVGA and QCIF. | @InProceedings{huggingface:dataset,
title = {low_quality_webcam_video_attacks},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- finance
- legal
- code
---
# Low Quality Live Attacks
The dataset includes live-recorded Anti-Spoofing videos from around the world, captured via **low-quality** webcams with resolutions like QVGA, QQVGA and QCIF.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=low_quality_webcam_video_attacks) to discuss your requirements, learn about the price and buy the dataset.

# Webcam Resolution
A collection of different video resolutions is provided, such as:
- QVGA (320p x 240p),
- QQVGA (120p x 160p),
- QCIF (176p x 144p) and others.
# Metadata
Each attack instance is accompanied by the following details:
- Unique attack identifier
- Identifier of the user recording the attack
- User's age
- User's gender
- User's country of origin
- Attack resolution
Additionally, the model of the webcam is also specified.
Metadata is represented in the `file_info.csv`.
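Reading such a metadata file needs nothing beyond the standard library. The column names in this sketch are assumptions based on the bullet list above — the actual headers of `file_info.csv` may differ:

```python
import csv
import io

# Stand-in for file_info.csv; the real headers may differ.
sample = io.StringIO(
    "attack_id,worker_id,age,gender,country,resolution\n"
    "a1,w1,31,female,IN,QVGA\n"
    "a2,w2,24,male,BR,QCIF\n"
)

rows = list(csv.DictReader(sample))
qvga = [r for r in rows if r["resolution"] == "QVGA"]
print(len(rows), len(qvga))  # 2 1
```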
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=low_quality_webcam_video_attacks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: https://www.kaggle.com/trainingdatapro/datasets
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TrainingDataPro/high_quality_webcam_video_attacks | 2023-09-14T16:47:53.000Z | [
"task_categories:video-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] | TrainingDataPro | The dataset includes live-recorded Anti-Spoofing videos from around the world,
captured via **high-quality** webcams with Full HD resolution and above. | @InProceedings{huggingface:dataset,
title = {high_quality_webcam_video_attacks},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 4 | ---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- finance
- legal
- code
dataset_info:
features:
- name: video_file
dtype: string
- name: assignment_id
dtype: string
- name: worker_id
dtype: string
- name: gender
dtype: string
- name: age
dtype: uint8
- name: country
dtype: string
- name: resolution
dtype: string
splits:
- name: train
num_bytes: 1547
num_examples: 10
download_size: 623356178
dataset_size: 1547
---
# High Definition Live Attacks
The dataset includes live-recorded Anti-Spoofing videos from around the world, captured via **high-quality** webcams with Full HD resolution and above.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=high_quality_webcam_video_attacks) to discuss your requirements, learn about the price and buy the dataset.
.png?generation=1684702390091084&alt=media)
# Webcam Resolution
A collection of video resolutions from Full HD (1080p) up to 4K (2160p) is provided, including intermediate resolutions such as QHD (1440p).

# Metadata
Each attack instance is accompanied by the following details:
- Unique attack identifier
- Identifier of the user recording the attack
- User's age
- User's gender
- User's country of origin
- Attack resolution
Additionally, the model of the webcam is also specified.
Metadata is represented in the `file_info.csv`.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=high_quality_webcam_video_attacks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
TigerResearch/tigerbot-zhihu-zh-10k | 2023-05-31T02:59:43.000Z | [
"language:zh",
"license:apache-2.0",
"region:us"
] | TigerResearch | null | null | null | 12 | 4 | ---
license: apache-2.0
language:
- zh
---
SFT question-answer pairs generated by [Tigerbot](https://github.com/TigerResearch/TigerBot) from openly collected Zhihu data.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-zhihu-zh-10k')
``` |
talmp/en-vi-translation | 2023-05-31T22:45:58.000Z | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:vi",
"license:wtfpl",
"region:us"
] | talmp | null | null | null | 1 | 4 | ---
license: wtfpl
task_categories:
- translation
language:
- en
- vi
size_categories:
- 1M<n<10M
---
# To join all training set files together
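A rough sketch of what a join step like the one described below might do — this is a guess at the behaviour of `join_dataset.py`, not the repository's actual code:

```python
import json
import pathlib
import tempfile

def join_json_files(paths):
    """Concatenate several JSON files, each holding a list of records."""
    merged = []
    for p in paths:
        merged.extend(json.loads(pathlib.Path(p).read_text(encoding="utf-8")))
    return merged

# Demo with two throwaway part files.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "part1.json").write_text(json.dumps([{"en": "hello", "vi": "xin chào"}], ensure_ascii=False), encoding="utf-8")
(tmp / "part2.json").write_text(json.dumps([{"en": "thanks", "vi": "cảm ơn"}], ensure_ascii=False), encoding="utf-8")

merged = join_json_files([tmp / "part1.json", tmp / "part2.json"])
(tmp / "join_dataset.json").write_text(json.dumps(merged, ensure_ascii=False), encoding="utf-8")
print(len(merged))  # 2
```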
Run the `join_dataset.py` script (`python join_dataset.py`); the final result will be the `join_dataset.json` file. |
kraina/airbnb | 2023-06-03T10:37:15.000Z | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"geospatial",
"hotels",
"housing",
"region:us"
] | kraina | This dataset contains accommodation offers from the AirBnb platform from 10 European cities.
It has been copied from https://zenodo.org/record/4446043#.ZEV8d-zMI-R to make it available as a Huggingface Dataset.
It was originally published as supplementary material for the article: Determinants of Airbnb prices in European cities: A spatial econometrics approach
(DOI: https://doi.org/10.1016/j.tourman.2021.104319) | @dataset{gyodi_kristof_2021_4446043,
author = {Gyódi, Kristóf and
Nawaro, Łukasz},
title = {{Determinants of Airbnb prices in European cities:
A spatial econometrics approach (Supplementary
Material)}},
month = jan,
year = 2021,
note = {{This research was supported by National Science
Centre, Poland: Project number 2017/27/N/HS4/00951}},
publisher = {Zenodo},
doi = {10.5281/zenodo.4446043},
url = {https://doi.org/10.5281/zenodo.4446043}
} | null | 0 | 4 | ---
license: cc-by-4.0
tags:
- geospatial
- hotels
- housing
size_categories:
- 10K<n<100K
dataset_info:
- config_name: weekdays
features:
- name: _id
dtype: string
- name: city
dtype: string
- name: realSum
dtype: float64
- name: room_type
dtype: string
- name: room_shared
dtype: bool
- name: room_private
dtype: bool
- name: person_capacity
dtype: float64
- name: host_is_superhost
dtype: bool
- name: multi
dtype: int64
- name: biz
dtype: int64
- name: cleanliness_rating
dtype: float64
- name: guest_satisfaction_overall
dtype: float64
- name: bedrooms
dtype: int64
- name: dist
dtype: float64
- name: metro_dist
dtype: float64
- name: attr_index
dtype: float64
- name: attr_index_norm
dtype: float64
- name: rest_index
dtype: float64
- name: rest_index_norm
dtype: float64
- name: lng
dtype: float64
- name: lat
dtype: float64
splits:
- name: train
num_bytes: 3998764
num_examples: 25500
download_size: 5303928
dataset_size: 3998764
- config_name: weekends
features:
- name: _id
dtype: string
- name: city
dtype: string
- name: realSum
dtype: float64
- name: room_type
dtype: string
- name: room_shared
dtype: bool
- name: room_private
dtype: bool
- name: person_capacity
dtype: float64
- name: host_is_superhost
dtype: bool
- name: multi
dtype: int64
- name: biz
dtype: int64
- name: cleanliness_rating
dtype: float64
- name: guest_satisfaction_overall
dtype: float64
- name: bedrooms
dtype: int64
- name: dist
dtype: float64
- name: metro_dist
dtype: float64
- name: attr_index
dtype: float64
- name: attr_index_norm
dtype: float64
- name: rest_index
dtype: float64
- name: rest_index_norm
dtype: float64
- name: lng
dtype: float64
- name: lat
dtype: float64
splits:
- name: train
num_bytes: 4108612
num_examples: 26207
download_size: 5451150
dataset_size: 4108612
- config_name: all
features:
- name: _id
dtype: string
- name: city
dtype: string
- name: realSum
dtype: float64
- name: room_type
dtype: string
- name: room_shared
dtype: bool
- name: room_private
dtype: bool
- name: person_capacity
dtype: float64
- name: host_is_superhost
dtype: bool
- name: multi
dtype: int64
- name: biz
dtype: int64
- name: cleanliness_rating
dtype: float64
- name: guest_satisfaction_overall
dtype: float64
- name: bedrooms
dtype: int64
- name: dist
dtype: float64
- name: metro_dist
dtype: float64
- name: attr_index
dtype: float64
- name: attr_index_norm
dtype: float64
- name: rest_index
dtype: float64
- name: rest_index_norm
dtype: float64
- name: lng
dtype: float64
- name: lat
dtype: float64
- name: day_type
dtype: string
splits:
- name: train
num_bytes: 8738970
num_examples: 51707
download_size: 10755078
dataset_size: 8738970
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** [https://zenodo.org/record/4446043#.ZEV8d-zMI-R](https://zenodo.org/record/4446043#.ZEV8d-zMI-R)
- **Paper:** [https://www.sciencedirect.com/science/article/pii/S0261517721000388](https://www.sciencedirect.com/science/article/pii/S0261517721000388)
### Dataset Summary
This dataset contains accommodation offers from the [AirBnb](https://airbnb.com/) platform from 10 European cities.
It has been copied from [https://zenodo.org/record/4446043#.ZEV8d-zMI-R](https://zenodo.org/record/4446043#.ZEV8d-zMI-R) to make it available as a Huggingface Dataset.
It was originally published as supplementary material for the article:
**Determinants of Airbnb prices in European cities: A spatial econometrics approach**
(DOI: https://doi.org/10.1016/j.tourman.2021.104319)
## Dataset Structure
### Data Fields
The data fields contain all fields from the source dataset,
along with an additional `city` field denoting the city of the offer.
The `all` config contains a further field `day_type` denoting whether the offer is for
`weekdays` or `weekends`.
- city: the city of the offer,
- realSum: the full price of accommodation for two people and two nights in EUR,
- room_type: the type of the accommodation,
- room_shared: dummy variable for shared rooms,
- room_private: dummy variable for private rooms,
- person_capacity: the maximum number of guests,
- host_is_superhost: dummy variable for superhost status,
- multi: dummy variable if the listing belongs to hosts with 2-4 offers,
- biz: dummy variable if the listing belongs to hosts with more than 4 offers,
- cleanliness_rating: cleanliness rating,
- guest_satisfaction_overall: overall rating of the listing,
- bedrooms: number of bedrooms (0 for studios),
- dist: distance from city centre in km,
- metro_dist: distance from nearest metro station in km,
- attr_index: attraction index of the listing location,
- attr_index_norm: normalised attraction index (0-100),
- rest_index: restaurant index of the listing location,
- rest_index_norm: normalised restaurant index (0-100),
- lng: longitude of the listing location,
- lat: latitude of the listing location,
`all` config contains additionally:
- day_type: either `weekdays` or `weekends`
### Data Splits
| name | train |
|------------|--------:|
| weekdays | 25500 |
| weekends | 26207 |
| all | 51707 |
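With the documented field names, simple aggregations over the offers are straightforward. A sketch using invented placeholder records (not actual dataset rows), grouping mean `realSum` by `room_type`:

```python
from collections import defaultdict

# Placeholder records using the documented field names; values are invented.
records = [
    {"city": "Amsterdam", "room_type": "Private room", "realSum": 194.0},
    {"city": "Amsterdam", "room_type": "Entire home/apt", "realSum": 552.0},
    {"city": "Athens", "room_type": "Private room", "realSum": 98.0},
]

totals, counts = defaultdict(float), defaultdict(int)
for r in records:
    totals[r["room_type"]] += r["realSum"]
    counts[r["room_type"]] += 1

mean_price = {k: totals[k] / counts[k] for k in totals}
print(mean_price["Private room"])  # 146.0
```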
## Additional Information
### Licensing Information
The data is released under the licensing scheme from the original authors - CC-BY-4.0 ([source](https://zenodo.org/record/4446043#.ZEV8d-zMI-R)).
### Citation Information
```
@dataset{gyodi_kristof_2021_4446043,
author = {Gyódi, Kristóf and
Nawaro, Łukasz},
title = {{Determinants of Airbnb prices in European cities:
A spatial econometrics approach (Supplementary
Material)}},
month = jan,
year = 2021,
note = {{This research was supported by National Science
Centre, Poland: Project number 2017/27/N/HS4/00951}},
publisher = {Zenodo},
doi = {10.5281/zenodo.4446043},
url = {https://doi.org/10.5281/zenodo.4446043}
}
```
|
ChristophSchuhmann/LAION-Aesthetics-HQ-captions-6plus | 2023-05-31T13:36:35.000Z | [
"license:apache-2.0",
"region:us"
] | ChristophSchuhmann | null | null | null | 1 | 4 | ---
license: apache-2.0
---
This is a subset of LAION-Aesthetics 6+ containing 1.4M samples that all have high-quality captions.
This subset could be useful for tuning text-to-image or image-captioning models.
The texts were filtered to have more than 50 characters and a KenLM score of <=600, with this model: https://huggingface.co/siddhesh1793/kenlm/tree/main/the_pile_books3 (trained on books3)
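The filter described above amounts to a simple predicate over each caption. A minimal sketch with the scoring function injected — `stub_score` is an invented stand-in, not the linked KenLM model:

```python
def keep_caption(text, score_fn, min_chars=50, max_score=600):
    """Keep captions longer than min_chars with a score <= max_score."""
    return len(text) > min_chars and score_fn(text) <= max_score

def stub_score(text):
    """Invented stand-in for the linked KenLM model."""
    return 550 if "detailed" in text else 900

print(keep_caption("a detailed photograph of a mountain lake at sunrise, long exposure", stub_score))  # True
print(keep_caption("a cat", stub_score))  # False
```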
|
tasksource/PRM800K | 2023-05-31T21:22:16.000Z | [
"license:mit",
"region:us"
] | tasksource | null | null | null | 2 | 4 | ---
license: mit
---
https://github.com/openai/prm800k/tree/main
|
shivangibithel/SOTAB | 2023-06-14T11:44:31.000Z | [
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"table-annotation",
"region:us"
] | shivangibithel | # Understanding the semantics of table elements is a prerequisite for many data integration and data discovery tasks. Table annotation is the task of labeling table elements with terms from a given vocabulary. This paper presents the WDC Schema.org Table Annotation Benchmark (SOTAB) for comparing the performance of table annotation systems. SOTAB covers the column type annotation (CTA) and columns property annotation (CPA) tasks. SOTAB provides ∼50,000 annotated tables for each of the tasks containing Schema.org data from different websites. The tables cover 17 different types of entities such as movie, event, local business, recipe, job posting, or product. The tables stem from the WDC Schema.org Table Corpus which was created by extracting Schema.org annotations from the Common Crawl. Consequently, the labels used for annotating columns in SOTAB are part of the Schema.org vocabulary. The benchmark covers 91 types for CTA and 176 properties for CPA distributed across textual, numerical and date/time columns. The tables are split into fixed training, validation and test sets. The test sets are further divided into subsets focusing on specific challenges, such as columns with missing values or different value formats, in order to allow a more fine-grained comparison of annotation systems. The evaluation of SOTAB using Doduo and TURL shows that the benchmark is difficult to solve for current state-of-the-art systems.
# | # @inproceedings{madoc63868, pages = {14--19}, booktitle = {SemTab 2022 : Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching, co-located with the 21st International semantic Web Conference (ISWC 2022), virtual conference, October 23-27, 2022}, address = {Aachen, Germany}, editor = {Vasilis Efthymiou and Ernesto Jim{\'e}nez-Ruiz and Jiaoyan Chen and Vincenzo Cutrona and Oktie Hassanzadeh and Juan Sequeda and Kavitha Srinivas and Nora Abdelmageed and Madelon Hulsebos}, journal = {CEUR Workshop Proceedings}, year = {2022}, title = {SOTAB: The WDC Schema.org table annotation benchmark}, publisher = {RWTH Aachen}, language = {Englisch}, author = {Keti Korini and Ralph Peeters and Christian Bizer}, volume = {3320}, abstract = {Understanding the semantics of table elements is a prerequisite for many data integration and data discovery tasks. Table annotation is the task of labeling table elements with terms from a given vocabulary. This paper presents the WDC Schema.org Table Annotation Benchmark (SOTAB) for comparing the performance of table annotation systems. SOTAB covers the column type annotation (CTA) and columns property annotation (CPA) tasks. SOTAB provides {$\sim$}50,000 annotated tables for each of the tasks containing Schema.org data from different websites. The tables cover 17 different types of entities such as movie, event, local business, recipe, job posting, or product. The tables stem from the WDC Schema.org Table Corpus which was created by extracting Schema.org annotations from the Common Crawl. Consequently, the labels used for annotating columns in SOTAB are part of the Schema.org vocabulary. The benchmark covers 91 types for CTA and 176 properties for CPA distributed across textual, numerical and date/time columns. The tables are split into fixed training, validation and test sets. 
The test sets are further divided into subsets focusing on specific challenges, such as columns with missing values or different value formats, in order to allow a more fine-grained comparison of annotation systems. The evaluation of SOTAB using Doduo and TURL shows that the benchmark is difficult to solve for current state-of-the-art systems.}, url = {https://madoc.bib.uni-mannheim.de/63868/} }
# | null | 0 | 4 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SOTAB_CTA
source_datasets:
- original
task_ids: []
tags:
- table-annotation
dataset_info:
- config_name:
features:
# - name: id
# dtype: int32
- name: column_index
dtype: int32
- name: label
dtype: string
- name: table
struct:
- name: header
sequence: string
- name: rows
sequence:
sequence: string
- name: name
dtype: string
splits:
- name: train
num_bytes:
num_examples: 130471
- name: test
num_bytes:
num_examples: 15040
- name: validation
num_bytes:
num_examples: 16840
download_size:
dataset_size: 162351
---
# Dataset Card for SOTAB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [SOTAB homepage](https://webdatacommons.org/structureddata/sotab/)
- **Repository:** [SOTAB repository](https://github.com/wbsg-uni-mannheim/wdc-sotab)
- **Paper:** [SOTAB: The WDC Schema.org Table Annotation Benchmark](https://ceur-ws.org/Vol-3320/paper1.pdf)
- **Leaderboard:** [SOTAB leaderboard on PaperWithCode](https://paperswithcode.com/paper/sotab-the-wdc-schema-org-table-annotation)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SOTAB dataset is a large-scale dataset for the task of column type annotation on semi-structured tables.
### Supported Tasks and Leaderboards
table-annotation, column-type-annotation
### Languages
en
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
An example of 'validation' looks as follows:
```
{
"id": 0,
"column_index": 3,
"label": "currency",
"table": {
"name": "Book_7sat.co.uk_September2020_CTA.json.gz",
"header": ["col1", "col2", ...]
"rows": [
["2001", "2", "USL A-League", ...],
["2002", "2", "USL A-League", ...],
...
]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `int32` feature.
- `column_index`: an `int32` feature.
- `label`: a `string` feature.
- `table`: a dictionary feature containing:
  - `header`: a `list` of `string` features.
  - `rows`: a `list` of `list` of `string` features.
- `name`: a `string` feature.
### Data Splits
| name |train|validation|test |
|-------|-----:|---------:|----:|
|default|130471| 16840|15040|
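Given the record layout from the Data Instances section, extracting the values of the annotated column is a short helper. A sketch with a placeholder record shaped like that example:

```python
def annotated_column(example):
    """Return (label, values) for the column the annotation points at."""
    idx = example["column_index"]
    values = [row[idx] for row in example["table"]["rows"]]
    return example["label"], values

# Placeholder record shaped like the Data Instances example above.
example = {
    "column_index": 1,
    "label": "currency",
    "table": {
        "name": "Book_example.json.gz",
        "header": ["col1", "col2", "col3"],
        "rows": [["2001", "USD", "x"], ["2002", "USD", "y"]],
    },
}

label, values = annotated_column(example)
print(label, values)  # currency ['USD', 'USD']
```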
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Keti Korini, Ralph Peeters, and Christian Bizer
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
```
@inproceedings{madoc63868,
  title     = {SOTAB: The WDC Schema.org Table Annotation Benchmark},
  author    = {Keti Korini and Ralph Peeters and Christian Bizer},
  booktitle = {SemTab 2022: Proceedings of the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching, co-located with ISWC 2022},
  journal   = {CEUR Workshop Proceedings},
  volume    = {3320},
  pages     = {14--19},
  year      = {2022},
  publisher = {RWTH Aachen},
  url       = {https://madoc.bib.uni-mannheim.de/63868/}
}
```
### Contributions
Thanks to [@ShivangiBithel](https://github.com/shivangibithel) for adding this dataset. |
PanoEvJ/real-toxicity-prompts-severe0.7 | 2023-06-01T10:37:21.000Z | [
"region:us"
] | PanoEvJ | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: begin
dtype: int64
- name: end
dtype: int64
- name: challenging
dtype: bool
- name: prompt
struct:
- name: text
dtype: string
- name: threat
dtype: float64
- name: insult
dtype: float64
- name: severe_toxicity
dtype: float64
- name: toxicity
dtype: float64
- name: profanity
dtype: float64
- name: sexually_explicit
dtype: float64
- name: flirtation
dtype: float64
- name: identity_attack
dtype: float64
- name: continuation
struct:
- name: text
dtype: string
- name: severe_toxicity
dtype: float64
- name: toxicity
dtype: float64
- name: profanity
dtype: float64
- name: sexually_explicit
dtype: float64
- name: identity_attack
dtype: float64
- name: flirtation
dtype: float64
- name: threat
dtype: float64
- name: insult
dtype: float64
- name: input_ids
sequence: int32
- name: query
dtype: string
splits:
- name: train
num_bytes: 2181853
num_examples: 3781
download_size: 1763414
dataset_size: 2181853
---
# Dataset Card for "real-toxicity-prompts-severe0.7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tollefj/rettsavgjoerelser_100samples_embeddings | 2023-08-11T10:45:31.000Z | [
"language:no",
"region:us"
] | tollefj | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: url
dtype: string
- name: keywords
sequence: string
- name: text
dtype: string
- name: sentences
sequence: string
- name: summary
sequence: string
- name: embedding
sequence:
sequence: float32
splits:
- name: train
num_bytes: 73887305
num_examples: 100
download_size: 71145367
dataset_size: 73887305
language:
- 'no'
---
# Dataset Card for "rettsavgjoerelser_100samples_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
togethercomputer/RedPajama-Data-Instruct | 2023-06-06T03:38:08.000Z | [
"license:apache-2.0",
"region:us"
] | togethercomputer | null | null | null | 31 | 4 | ---
license: apache-2.0
---
# Dataset Summary
RedPajama-Instruct-Data is curated from a diverse collection of NLP tasks from both [P3 (BigScience)](https://huggingface.co/datasets/bigscience/P3) and [Natural Instruction (AI2)](https://github.com/allenai/natural-instructions),
and conducts aggressive decontamination against [HELM](https://crfm.stanford.edu/helm/latest/?group=core_scenarios),
in two steps: (1) We first conduct semantic search using each validation example in HELM as the query and get top-100 similar instances from the Instruct data set and check tasks that have any returned instances overlapping (using 10-Gram) with the validation example.
We remove the entire task if the returned instance and the validation example correspond to the same task
(In this step, we keep the task in the case that the returned instance happens to use the same Wikipedia article as the validation example, but asks different questions);
(2) We then remove all instances that have any 10-Gram overlap with any HELM validation example.
In total, we filtered out 137 tasks and 5.2M instances (out of 1069 tasks and 93.3M instances).
# QuickStart
The materialized version of P3 includes three main fields: the `inputs` field contains task instructions and data inputs, the `targets` field holds the labels, and the third field, `meta`, provides meta information.
```python
from datasets import load_dataset

data = load_dataset('togethercomputer/RedPajama-Instruct-Data', data_files='data/P3_decontaminated.jsonl.zst', split='train')
```
For NI, the `definition` field holds the task instructions, while `inputs` contains the input data. The `targets` field holds the labels, and `meta` provides relevant meta information.
```python
from datasets import load_dataset

data = load_dataset('togethercomputer/RedPajama-Instruct-Data', data_files='data/NI_decontaminated.jsonl.zst', split='train')
```
# Source Data
RedPajama-Instruct-Data is sourced from two prominent datasets:
- [Public Pool of Prompts](https://huggingface.co/datasets/bigscience/P3): A large dataset featuring various creative tasks obtained from crowdsourcing efforts.
- [Natural-Instructions](https://github.com/allenai/natural-instructions): An instruction-tuning dataset comprising a diverse set of tasks in natural languages.
# Languages
Primarily English.
# Licensing Information
This dataset is released under the Apache 2.0 license.
|
TravelLeraLone/WebSRC_v1.0 | 2023-06-05T10:12:19.000Z | [
"license:cc-by-4.0",
"arxiv:2101.09465",
"region:us"
] | TravelLeraLone | null | null | null | 1 | 4 | ---
license: cc-by-4.0
---
# WebSRC v1.0
WebSRC v1.0 is a dataset for reading comprehension on structural web pages.
The task is to answer questions about web pages, which requires a system to
have a comprehensive understanding of the spatial structure and logical
structure. WebSRC consists of 6.4K web pages and 400K question-answer pairs
about web pages. For each web page, we manually chose one segment from it
and saved the corresponding HTML code, screenshot, and metadata like
positions and sizes. Questions in WebSRC were created for each segment.
Answers are either text spans from web pages or yes/no. Taking the HTML
code, screenshot, metadata as well as question as input, a model is to
predict the answer from the web page. Our dataset is the first one that
provides HTML documents as well as images, and is larger in the number of
domains and queries.
For more details, please refer to our paper [WebSRC: A Dataset for Web-Based Structural Reading Comprehension](https://arxiv.org/abs/2101.09465).
The Leaderboard of WebSRC v1.0 can be found [here](https://x-lance.github.io/WebSRC/).
## Data Format Description
The dataset for each website will be stored in `dataset.csv` in the directory
`{domain-name}/{website-number}`. The corresponding raw data (including HTML
files, screenshots, bounding box coordinates, and page names and urls) is
stored in the `processed_data` folder in the same directory.
In `dataset.csv`, each row corresponds to one question-answer data point
except the header. The meanings of each column are as follows:
* `question`: a string, the question of this question-answer data point.
* `id`: a unique id for this question-answer data point. Each `id` has length 14: the first two characters are the domain indicator, and the following two characters are the website number. The corresponding page id can be extracted by `id[2:9]`; for example, the id "sp160000100001" means this line was created from the *sport* domain, website *16*, and the corresponding page is `1600001.html`.
* `element_id`: an integer, the tag id (corresponding to the tag's `tid` attribute in the HTML files) of the deepest tag in the DOM tree that contains the entire answer. For yes/no questions, there is no tag associated with the answer, so `element_id` is -1.
* `answer_start`: an integer, the character offset of the answer from the start of the content of the tag specified by `element_id`. Note that before counting this offset, we first eliminate all inner tags in the specified tag and replace all consecutive whitespace with a single space. For yes/no questions, `answer_start` is 1 for the answer "yes" and 0 for the answer "no".
* `answer`: a string, the answer of this question-answer data point.
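Given the id scheme above, a question id can be decoded with a short helper. This is only a sketch: the function name is ours, not part of any official WebSRC toolkit, and the reading of the final characters as a per-page question counter is inferred from the example id.

```python
def parse_websrc_id(qid: str) -> dict:
    """Split a 14-character WebSRC question id into its components."""
    assert len(qid) == 14
    return {
        "domain": qid[:2],     # two-letter domain indicator, e.g. "sp" for sport
        "website": qid[2:4],   # two-digit website number
        "page": qid[2:9],      # page id, e.g. "1600001" -> 1600001.html
        "question": qid[9:],   # remaining digits (per-page question counter, our reading)
    }

parts = parse_websrc_id("sp160000100001")
print(parts["domain"], parts["website"], parts["page"])  # sp 16 1600001
```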
## Data Statistics
We roughly divided the questions in WebSRC v1.0 into three categories: KV,
Compare, and Table. The detailed definitions can be found in our
[paper](https://arxiv.org/abs/2101.09465). The numbers of websites, webpages,
and QAs corresponding to the three categories are as follows:
Type | # Websites | # Webpages | # QAs
---- | ---------- | ---------- | -----
KV | 34 | 3,207 | 168,606
Comparison | 15 | 1,339 | 68,578
Table | 21 | 1,901 | 163,314
The statistics of the dataset splits are as follows:
Split | # Websites | # Webpages | # QAs
----- | ---------- | ---------- | -----
Train | 50 | 4,549 | 307,315
Dev | 10 | 913 | 52,826
Test | 10 | 985 | 40,357
## Obtain Test Result
For test set evaluation, please send your prediction files to
zhao_mengxin@sjtu.edu.cn and chenlusz@sjtu.edu.cn with the title "WebSRC Test:
\<your model name\>+\<your institution\>". The submission should contain two
files:
```jsonc
// prediction.json
// A json format file, keys are ids and values are the predicted answers (string).
{
"sp160000100001": "predicted answer",
"sp160000100002": "...",
//...
}
// tag_prediction.json
// A json format file, keys are ids and values are the predicted tag tid (int)
{
"sp160000100001": -1,
"sp160000100002": -1,
//...
}
```
We encourage you to submit results from **at least three runs with different random
seeds** to reduce the uncertainty of the experiments. Please place the prediction files
for each run in a separate directory and submit a single zipped file. The average test
result will be sent by email.
## Reference
If you use any source codes or datasets included in this repository in your work,
please cite the corresponding papers. The bibtex are listed below:
```text
@inproceedings{chen-etal-2021-websrc,
title = "{W}eb{SRC}: A Dataset for Web-Based Structural Reading Comprehension",
author = "Chen, Xingyu and
Zhao, Zihan and
Chen, Lu and
Ji, JiaBao and
Zhang, Danyang and
Luo, Ao and
Xiong, Yuxuan and
Yu, Kai",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.343",
pages = "4173--4185",
abstract = "Web search is an essential way for humans to obtain information, but it{'}s still a great challenge for machines to understand the contents of web pages. In this paper, we introduce the task of web-based structural reading comprehension. Given a web page and a question about it, the task is to find an answer from the web page. This task requires a system not only to understand the semantics of texts but also the structure of the web page. Moreover, we proposed WebSRC, a novel Web-based Structural Reading Comprehension dataset. WebSRC consists of 400K question-answer pairs, which are collected from 6.4K web pages with corresponding HTML source code, screenshots, and metadata. Each question in WebSRC requires a certain structural understanding of a web page to answer, and the answer is either a text span on the web page or yes/no. We evaluate various strong baselines on our dataset to show the difficulty of our task. We also investigate the usefulness of structural information and visual features. Our dataset and baselines have been publicly available.",
}
```
|
tasksource/winodict | 2023-07-13T11:07:34.000Z | [
"language:en",
"license:cc-by-4.0",
"arxiv:2209.12153",
"region:us"
] | tasksource | null | null | null | 0 | 4 | ---
language: en
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: lemma
dtype: string
- name: fake_lemma
dtype: string
- name: pos
dtype: string
- name: tag
dtype: string
- name: pronoun
dtype: string
- name: definition
dtype: string
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 415190
num_examples: 1488
- name: val
num_bytes: 135624
num_examples: 496
- name: test
num_bytes: 135191
num_examples: 496
download_size: 249676
dataset_size: 686005
---
https://github.com/google-research/language/tree/master/language/wino_dict
```text
@inproceedings{51779,
title = {WinoDict: Probing language models for in-context language acquisition},
author = {Fangyu Liu and Jeremy Cole and Julian Martin Eisenschlos and William Weston Cohen},
year = {2022},
URL = {https://arxiv.org/abs/2209.12153},
booktitle = {EACL}
}
``` |
SahandNZ/cryptonews-articles-with-price-momentum-labels | 2023-06-07T17:49:38.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"finance",
"region:us"
] | SahandNZ | null | null | null | 4 | 4 | ---
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- finance
pretty_name: Cryptonews.com articles with price momentum labels
size_categories:
- 10K<n<100K
---
# Dataset Card for Cryptonews articles with price momentum labels
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/SahandNZ/IUST-NLP-project-spring-2023
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset was gathered from Cryptonews.com and Binance.com, two prominent sources in the cryptocurrency industry, with the aim of evaluating the impact of news on crypto price movements.
News events such as regulatory changes, technological advancements, and major partnerships can have a significant impact on the price of cryptocurrencies. By pairing articles with subsequent price momentum, this dataset aims to provide insights into the relationship between news events and crypto market trends.
### Supported Tasks and Leaderboards
- **Text Classification**
- **Sentiment Analysis**
### Languages
The language data in this dataset is in English (BCP-47 en)
## Dataset Structure
### Data Instances
Todo
### Data Fields
Todo
### Data Splits
Todo
### Source Data
- **Textual:** https://Cryptonews.com
- **Numerical:** https://Binance.com |
amitness/maltese-news-nli-random | 2023-08-15T14:52:22.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
splits:
- name: train
num_bytes: 30826887
num_examples: 17792
- name: validation
num_bytes: 6840831
num_examples: 3813
- name: test
num_bytes: 6605698
num_examples: 3813
download_size: 27154710
dataset_size: 44273416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "maltese-news-nli-random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
akufeldt/fr-gec-dataset | 2023-06-09T05:51:34.000Z | [
"region:us"
] | akufeldt | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: lang
dtype: string
- name: sentence
dtype: string
- name: modified
dtype: string
- name: transformation
dtype: string
- name: sec_transformation
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 14735896.265220648
num_examples: 59850
- name: dev
num_bytes: 818660.9036233693
num_examples: 3325
- name: test
num_bytes: 818660.9036233693
num_examples: 3325
download_size: 9578782
dataset_size: 16373218.072467385
---
# Dataset Card for "fr-gec-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |