id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
PaulineSanchez/Dataset_food_translation_fr_en | 2023-05-16T14:46:06.000Z | [
"task_categories:translation",
"size_categories:1K<n<10K",
"language:fr",
"language:en",
"food",
"restaurant",
"menus",
"nutrition",
"region:us"
] | PaulineSanchez | null | null | 0 | 10 | 2023-05-15T15:39:03 | ---
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 255634.8588621444
num_examples: 2924
- name: validation
num_bytes: 63996.14113785558
num_examples: 732
download_size: 208288
dataset_size: 319631.0
task_categories:
- translation
language:
- fr
- en
tags:
- food
- restaurant
- menus
- nutrition
size_categories:
- 1K<n<10K
---
# Dataset Card for "Dataset_food_translation_fr_en"
- info: This dataset is the combination of two datasets I previously made.
- The first is https://huggingface.co/datasets/PaulineSanchez/Trad_food, which is made from the ANSES-CIQUAL 2020 Table in English in XML format, found on https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/.
I made some minor changes to it so that it meets my needs (removed/added words to get exact translations, removed repetitions, etc.).
- The second is https://huggingface.co/datasets/PaulineSanchez/Multi_restaurants_menus_translation, which is made of translations of menus from different restaurants. I used the menus of these restaurants: https://salutbaramericain.com/edina/menus/, https://menuonline.fr/legeorgev, https://www.covedina.com/menu/, https://menuonline.fr/fouquets/cartes, https://www.theavocadoshow.com/fr/food, https://papacionuparis.fr/carte/.
I also made some minor changes to these menus to obtain a dataset that meets my needs. I have absolutely no connection with these restaurants, and their menus are certainly subject to change. | 1,672 | [
[
-0.01450347900390625,
-0.0159912109375,
0.01983642578125,
0.022979736328125,
-0.013275146484375,
-0.01325225830078125,
-0.021728515625,
-0.03106689453125,
0.044342041015625,
0.06365966796875,
-0.050384521484375,
-0.05535888671875,
-0.043701171875,
0.03887939... |
narizhny/addresses-2 | 2023-06-19T11:38:04.000Z | [
"region:us"
] | narizhny | null | null | 0 | 10 | 2023-05-16T14:00:37 | ---
dataset_info:
features:
- name: Name
dtype: string
- name: Surname
dtype: string
- name: Address
dtype: string
- name: City
dtype: string
- name: State
dtype: string
- name: Postcode
dtype: int64
splits:
- name: train
num_bytes: 413
num_examples: 6
download_size: 3258
dataset_size: 413
---
# Dataset Card for "addresses-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 514 | [
[
-0.038665771484375,
-0.01413726806640625,
0.0198211669921875,
0.01229095458984375,
-0.01340484619140625,
-0.0159759521484375,
0.032867431640625,
-0.02484130859375,
0.05328369140625,
0.037322998046875,
-0.051727294921875,
-0.044219970703125,
-0.039825439453125,
... |
FreedomIntelligence/huatuo26M-testdatasets | 2023-05-17T03:39:41.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | FreedomIntelligence | null | null | 12 | 10 | 2023-05-17T02:31:23 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 1K<n<10K
---
# Dataset Card for huatuo26M-testdatasets
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/Huatuo-26M**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We are pleased to announce the release of our evaluation dataset, a subset of the Huatuo-26M. This dataset contains 6,000 entries that we used for Natural Language Generation (NLG) experimentation in our associated research paper.
We encourage researchers and developers to use this evaluation dataset to gauge the performance of their own models. This is not only a chance to assess the accuracy and relevancy of generated responses but also an opportunity to investigate their model's proficiency in understanding and generating complex medical language.
Note: All the data points have been anonymized to protect patient privacy, and they adhere strictly to data protection and privacy regulations.
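As a minimal usage sketch (only the repository ID below comes from this card; the exact split layout may differ), the evaluation set can be loaded with the 🤗 `datasets` library:
```python
from datasets import load_dataset

# Load the 6,000-entry NLG evaluation subset of Huatuo-26M
ds = load_dataset("FreedomIntelligence/huatuo26M-testdatasets")
print(ds)
```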
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,496 | [
[
-0.019683837890625,
-0.045166015625,
-0.0008740425109863281,
0.025604248046875,
-0.033843994140625,
-0.0099639892578125,
-0.0343017578125,
-0.02215576171875,
-0.01216888427734375,
0.0274505615234375,
-0.041107177734375,
-0.052520751953125,
-0.01776123046875,
... |
dspoka/cfpb | 2023-05-18T20:15:36.000Z | [
"region:us"
] | dspoka | null | null | 0 | 10 | 2023-05-18T20:08:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mlfoundations/datacomp_xlarge | 2023-08-21T21:42:38.000Z | [
"license:cc-by-4.0",
"region:us"
] | mlfoundations | null | null | 1 | 10 | 2023-05-22T21:49:34 | ---
license: cc-by-4.0
---
## DataComp XLarge Pool
This repository contains metadata files for the xlarge pool of DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which cover their dataset library. Specifically, any content you download, access, or use from our index is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to liabilities related to image downloading and storage. | 1,010 | [
[
-0.042694091796875,
-0.026458740234375,
0.020477294921875,
0.01499176025390625,
-0.0301055908203125,
0.0025043487548828125,
0.008636474609375,
-0.0394287109375,
0.021392822265625,
0.047332763671875,
-0.06805419921875,
-0.04888916015625,
-0.0460205078125,
0.0... |
ccmusic-database/bel_folk | 2023-10-03T16:56:58.000Z | [
"task_categories:audio-classification",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | This database contains hundreds of acapella singing clips that are sung in two styles,
Bel Conto and Chinese national singing style by professional vocalists.
All of them are sung by professional vocalists and were recorded in professional commercial recording studios. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | 1 | 10 | 2023-05-26T08:53:43 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Bel Conto and Chinese Folk Song Singing Tech Database
size_categories:
- n<1K
---
# Dataset Card for Bel Conto and Chinese Folk Song Singing Tech Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/bel_folk>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains hundreds of a cappella singing clips sung in two styles, Bel Conto and the Chinese national singing style, by professional vocalists. All clips were recorded in professional commercial recording studios.
### Supported Tasks and Leaderboards
Audio classification, singing method classification, voice classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .jpg)
### Data Fields
m_bel, f_bel, m_folk, f_folk
### Data Splits
train, validation, test
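A minimal loading sketch (assuming the repository exposes the train/validation/test splits listed above):
```python
from datasets import load_dataset

# Splits: train, validation, test
ds = load_dataset("ccmusic-database/bel_folk")
print(ds["train"][0])
```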
## Dataset Creation
### Curation Rationale
Lack of a dataset for Bel Conto and Chinese folk song singing tech
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
All of them are sung by professional vocalists and were recorded in professional commercial recording studios.
#### Who are the annotators?
professional vocalists
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Only for Chinese songs
### Other Known Limitations
Some singers may not have enough professional training in classical or ethnic vocal techniques.
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
Coming soon...
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for distinguishing Bel Conto and Chinese folk song singing tech | 3,687 | [
[
-0.029296875,
-0.0304107666015625,
-0.01013946533203125,
0.04754638671875,
-0.0221710205078125,
0.00435638427734375,
-0.035675048828125,
-0.043487548828125,
0.0280303955078125,
0.04541015625,
-0.0672607421875,
-0.06817626953125,
-0.0097808837890625,
0.011810... |
aisyahhrazak/ms-news-harakahdaily | 2023-06-24T00:24:27.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | 0 | 10 | 2023-05-27T01:47:23 | ---
language:
- ms
---
### Dataset Summary
- 45,505 scraped news articles from Harakah Daily, from 2017 to 21 May 2023
- Nearly all in Malay, with a small portion in English
### Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...]}
``` | 252 | [
[
-0.02252197265625,
-0.038909912109375,
-0.0012388229370117188,
0.028350830078125,
-0.0340576171875,
-0.0160675048828125,
0.009857177734375,
0.0030689239501953125,
0.01995849609375,
0.0400390625,
-0.04144287109375,
-0.052215576171875,
-0.05126953125,
0.036163... |
kraina/airbnb | 2023-06-03T10:37:15.000Z | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"geospatial",
"hotels",
"housing",
"region:us"
] | kraina | This dataset contains accommodation offers from the AirBnb platform from 10 European cities.
It has been copied from https://zenodo.org/record/4446043#.ZEV8d-zMI-R to make it available as a Huggingface Dataset.
It was originally published as supplementary material for the article: Determinants of Airbnb prices in European cities: A spatial econometrics approach
(DOI: https://doi.org/10.1016/j.tourman.2021.104319) | @dataset{gyodi_kristof_2021_4446043,
author = {Gyódi, Kristóf and
Nawaro, Łukasz},
title = {{Determinants of Airbnb prices in European cities:
A spatial econometrics approach (Supplementary
Material)}},
month = jan,
year = 2021,
note = {{This research was supported by National Science
Centre, Poland: Project number 2017/27/N/HS4/00951}},
publisher = {Zenodo},
doi = {10.5281/zenodo.4446043},
url = {https://doi.org/10.5281/zenodo.4446043}
} | 0 | 10 | 2023-05-30T21:15:45 | ---
license: cc-by-4.0
tags:
- geospatial
- hotels
- housing
size_categories:
- 10K<n<100K
dataset_info:
- config_name: weekdays
features:
- name: _id
dtype: string
- name: city
dtype: string
- name: realSum
dtype: float64
- name: room_type
dtype: string
- name: room_shared
dtype: bool
- name: room_private
dtype: bool
- name: person_capacity
dtype: float64
- name: host_is_superhost
dtype: bool
- name: multi
dtype: int64
- name: biz
dtype: int64
- name: cleanliness_rating
dtype: float64
- name: guest_satisfaction_overall
dtype: float64
- name: bedrooms
dtype: int64
- name: dist
dtype: float64
- name: metro_dist
dtype: float64
- name: attr_index
dtype: float64
- name: attr_index_norm
dtype: float64
- name: rest_index
dtype: float64
- name: rest_index_norm
dtype: float64
- name: lng
dtype: float64
- name: lat
dtype: float64
splits:
- name: train
num_bytes: 3998764
num_examples: 25500
download_size: 5303928
dataset_size: 3998764
- config_name: weekends
features:
- name: _id
dtype: string
- name: city
dtype: string
- name: realSum
dtype: float64
- name: room_type
dtype: string
- name: room_shared
dtype: bool
- name: room_private
dtype: bool
- name: person_capacity
dtype: float64
- name: host_is_superhost
dtype: bool
- name: multi
dtype: int64
- name: biz
dtype: int64
- name: cleanliness_rating
dtype: float64
- name: guest_satisfaction_overall
dtype: float64
- name: bedrooms
dtype: int64
- name: dist
dtype: float64
- name: metro_dist
dtype: float64
- name: attr_index
dtype: float64
- name: attr_index_norm
dtype: float64
- name: rest_index
dtype: float64
- name: rest_index_norm
dtype: float64
- name: lng
dtype: float64
- name: lat
dtype: float64
splits:
- name: train
num_bytes: 4108612
num_examples: 26207
download_size: 5451150
dataset_size: 4108612
- config_name: all
features:
- name: _id
dtype: string
- name: city
dtype: string
- name: realSum
dtype: float64
- name: room_type
dtype: string
- name: room_shared
dtype: bool
- name: room_private
dtype: bool
- name: person_capacity
dtype: float64
- name: host_is_superhost
dtype: bool
- name: multi
dtype: int64
- name: biz
dtype: int64
- name: cleanliness_rating
dtype: float64
- name: guest_satisfaction_overall
dtype: float64
- name: bedrooms
dtype: int64
- name: dist
dtype: float64
- name: metro_dist
dtype: float64
- name: attr_index
dtype: float64
- name: attr_index_norm
dtype: float64
- name: rest_index
dtype: float64
- name: rest_index_norm
dtype: float64
- name: lng
dtype: float64
- name: lat
dtype: float64
- name: day_type
dtype: string
splits:
- name: train
num_bytes: 8738970
num_examples: 51707
download_size: 10755078
dataset_size: 8738970
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** [https://zenodo.org/record/4446043#.ZEV8d-zMI-R](https://zenodo.org/record/4446043#.ZEV8d-zMI-R)
- **Paper:** [https://www.sciencedirect.com/science/article/pii/S0261517721000388](https://www.sciencedirect.com/science/article/pii/S0261517721000388)
### Dataset Summary
This dataset contains accommodation offers from the [AirBnb](https://airbnb.com/) platform from 10 European cities.
It has been copied from [https://zenodo.org/record/4446043#.ZEV8d-zMI-R](https://zenodo.org/record/4446043#.ZEV8d-zMI-R) to make it available as a Huggingface Dataset.
It was originally published as supplementary material for the article:
**Determinants of Airbnb prices in European cities: A spatial econometrics approach**
(DOI: https://doi.org/10.1016/j.tourman.2021.104319)
## Dataset Structure
### Data Fields
The data fields contain all fields from the source dataset,
along with an additional `city` field denoting the city of the offer.
The `all` config contains an additional `day_type` field denoting whether the offer is for
`weekdays` or `weekends`.
- city: the city of the offer,
- realSum: the full price of accommodation for two people and two nights in EUR,
- room_type: the type of the accommodation,
- room_shared: dummy variable for shared rooms,
- room_private: dummy variable for private rooms,
- person_capacity: the maximum number of guests,
- host_is_superhost: dummy variable for superhost status,
- multi: dummy variable if the listing belongs to hosts with 2-4 offers,
- biz: dummy variable if the listing belongs to hosts with more than 4 offers,
- cleanliness_rating: cleanliness rating,
- guest_satisfaction_overall: overall rating of the listing,
- bedrooms: number of bedrooms (0 for studios),
- dist: distance from city centre in km,
- metro_dist: distance from nearest metro station in km,
- attr_index: attraction index of the listing location,
- attr_index_norm: normalised attraction index (0-100),
- rest_index: restaurant index of the listing location,
- rest_index_norm: normalised restaurant index (0-100),
- lng: longitude of the listing location,
- lat: latitude of the listing location,
`all` config contains additionally:
- day_type: either `weekdays` or `weekends`
### Data Splits
| name | train |
|------------|--------:|
| weekdays | 25500 |
| weekends | 26207 |
| all | 51707 |
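A minimal usage sketch (the config names below are the `weekdays`, `weekends`, and `all` configs declared in the dataset metadata):
```python
from datasets import load_dataset

# Each config ships a single `train` split
weekdays = load_dataset("kraina/airbnb", "weekdays", split="train")
offers = load_dataset("kraina/airbnb", "all", split="train")

# `day_type` exists only in the `all` config
row = offers[0]
print(row["city"], row["realSum"], row["day_type"])
```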
## Additional Information
### Licensing Information
The data is released under the licensing scheme from the original authors - CC-BY-4.0 ([source](https://zenodo.org/record/4446043#.ZEV8d-zMI-R)).
### Citation Information
```
@dataset{gyodi_kristof_2021_4446043,
author = {Gyódi, Kristóf and
Nawaro, Łukasz},
title = {{Determinants of Airbnb prices in European cities:
A spatial econometrics approach (Supplementary
Material)}},
month = jan,
year = 2021,
note = {{This research was supported by National Science
Centre, Poland: Project number 2017/27/N/HS4/00951}},
publisher = {Zenodo},
doi = {10.5281/zenodo.4446043},
url = {https://doi.org/10.5281/zenodo.4446043}
}
```
| 6,302 | [
[
-0.03375244140625,
-0.043060302734375,
0.027496337890625,
0.0229034423828125,
-0.004772186279296875,
-0.0472412109375,
-0.0157012939453125,
-0.025665283203125,
0.043853759765625,
0.0103302001953125,
-0.043121337890625,
-0.058990478515625,
-0.004856109619140625,
... |
whu9/arxiv_summarization_postprocess | 2023-06-03T04:49:04.000Z | [
"region:us"
] | whu9 | null | null | 0 | 10 | 2023-06-03T04:47:28 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 6992115668
num_examples: 197465
- name: validation
num_bytes: 216277493
num_examples: 6435
- name: test
num_bytes: 216661725
num_examples: 6439
download_size: 3553348742
dataset_size: 7425054886
---
# Dataset Card for "arxiv_summarization_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.0455322265625,
-0.01030731201171875,
0.0029735565185546875,
0.0145263671875,
-0.0283966064453125,
-0.007755279541015625,
0.02264404296875,
0.0090179443359375,
0.06317138671875,
0.04974365234375,
-0.03485107421875,
-0.052032470703125,
-0.061614990234375,
-... |
aisyahhrazak/ms-news-utusanborneo | 2023-06-29T04:00:06.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | 0 | 10 | 2023-06-07T06:25:00 | ---
language:
- ms
---
### Dataset Summary
- Scraped news articles from Utusan Borneo on 27.5.2023
- All articles in Malay
### Dataset Format
```
{"url": "...", "content": [...,...]}
``` | 178 | [
[
-0.01019287109375,
-0.043914794921875,
-0.003704071044921875,
0.0256500244140625,
-0.056732177734375,
-0.023162841796875,
0.0008215904235839844,
0.0101776123046875,
0.04156494140625,
0.057861328125,
-0.037445068359375,
-0.055389404296875,
-0.0300140380859375,
... |
helena7/job_titles | 2023-06-14T09:57:15.000Z | [
"region:us"
] | helena7 | null | null | 0 | 10 | 2023-06-12T11:22:41 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
DISCOX/DISCO-10M | 2023-06-26T19:54:22.000Z | [
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-4.0",
"music",
"arxiv:2306.13512",
"doi:10.57967/hf/1190",
"region:us"
] | DISCOX | null | null | 13 | 10 | 2023-06-13T07:45:14 | ---
license: cc-by-4.0
language:
- en
tags:
- music
size_categories:
- 10M<n<100M
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 73263841657.0
num_examples: 15296232
download_size: 88490703682
dataset_size: 73263841657.0
---
### Getting Started
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-10M")
```
## Dataset Structure
The dataset contains the following features:
```json
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
## What is DISCO-10M?
DISCO-10M is a music dataset created to democratize research on large-scale machine learning models for music.
The dataset contains no music due to copyright laws.
The audio embedding features were computed using [Laion-CLAP](https://github.com/LAION-AI/CLAP), and can be used instead of the raw audio for many down-stream tasks.
In case the raw audio is needed, it can be downloaded from the provided Spotify preview URL or via the YouTube link.
DISCO-10M was created by collecting a list of 400,000 artist IDs and 2.6M track IDs from Spotify, and collecting YouTube video links that match the track duration,
artist name, and track names. These matches were computed using the following three similarity metrics:
- Duration similarity: ` 1 - abs(track_duration_spotify - video_duration_youtube) / max(track_duration_spotify, video_duration_youtube) `
- Text similarity is calculated using the cosine similarity between the embedding of the search query and the embedding of the video title, as well as the search query embedding and the video description embedding. Embeddings are computed using [Sentence Bert](https://huggingface.co/sentence-transformers).
- Audio similarity is calculated using the cosine similarity between the Spotify preview snippet audio embedding and the YouTube audio embedding.
For DISCO-10M we only keep samples that return true for: ` duration_similarity > 0.25 and (description_similarity > 0.65 or title_similarity > 0.65) and audio_similarity > 0.4 `
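Restated as a short Python sketch (the field names come from the dataset features above; the millisecond-to-second conversion in `duration_similarity` is an assumption, since Spotify durations are stored in `track_duration_spotify_ms` and YouTube durations in `video_duration_youtube_sec`):
```python
def duration_similarity(track_ms: float, video_sec: float) -> float:
    """1 - |a - b| / max(a, b), with both durations in seconds."""
    a, b = track_ms / 1000.0, video_sec
    return 1.0 - abs(a - b) / max(a, b)

def keep_sample(row: dict) -> bool:
    """DISCO-10M filter rule, using the thresholds quoted above."""
    return (
        row["similarity_duration"] > 0.25
        and (row["similarity_query_description"] > 0.65
             or row["similarity_query_video_title"] > 0.65)
        and row["similarity_audio"] > 0.4
    )
```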
We offer three subsets based on DISCO-10M:
- [DISCO-10K-random](https://huggingface.co/datasets/DISCOX/DISCO-10K-random): a small subset of random samples from the entire dataset.
- [DISCO-200K-random](https://huggingface.co/datasets/DISCOX/DISCO-200K-random): a subset of random samples, useful for a light-weight and representative analysis of the entire dataset.
- [DISCO-200K-high-quality](https://huggingface.co/datasets/DISCOX/DISCO-200K-high-quality): a subset of samples which were filtered more strictly to ensure a higher quality match between Spotify tracks and YouTube videos.
To cite our work, please refer to our paper [here](https://arxiv.org/abs/2306.13512).
<!--
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> | 4,513 | [
[
-0.049896240234375,
-0.04595947265625,
0.0191650390625,
0.0305938720703125,
-0.0099334716796875,
-0.006256103515625,
-0.0279998779296875,
-0.00154876708984375,
0.05126953125,
0.0294036865234375,
-0.0718994140625,
-0.059600830078125,
-0.027587890625,
-0.00281... |
renumics/cifar10-outlier | 2023-06-30T20:09:38.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown",
"region:us"
] | renumics | null | null | 0 | 10 | 2023-06-14T20:53:24 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-10
pretty_name: Cifar10-Outliers
dataset_info:
features:
- name: img
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
- name: embedding_foundation
sequence: float32
- name: embedding_ft
sequence: float32
- name: outlier_score_ft
dtype: float64
- name: outlier_score_foundation
dtype: float64
- name: nn_image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
config_name: plain_text
splits:
- name: train
num_bytes: 535120320.0
num_examples: 50000
download_size: 595144805
dataset_size: 535120320.0
---
# Dataset Card for "cifar10-outlier"
📚 This dataset is an enriched version of the [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
The workflow is described in the medium article: [Changes of Embeddings during Fine-Tuning of Transformers](https://medium.com/@markus.stoll/changes-of-embeddings-during-fine-tuning-c22aa1615921).
## Explore the Dataset
The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset. You can find a Hugging Face Space running Spotlight with this dataset here:
- Full Version (high hardware requirements) <https://huggingface.co/spaces/renumics/cifar10-outlier>
- Fast Version <https://huggingface.co/spaces/renumics/cifar10-outlier-low>

Or you can explore it locally:
```python
!pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/cifar10-outlier", split="train")
df = ds.rename_columns({"img": "image", "label": "labels"}).to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["label"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
```
| 2,624 | [
[
-0.047882080078125,
-0.0394287109375,
0.00878143310546875,
0.023834228515625,
-0.0103912353515625,
0.00560760498046875,
-0.0221099853515625,
-0.017425537109375,
0.04827880859375,
0.03515625,
-0.046844482421875,
-0.042999267578125,
-0.04510498046875,
-0.00638... |
HausaNLP/Naija-Lex | 2023-06-18T16:13:08.000Z | [
"multilinguality:monolingual",
"multilinguality:multilingual",
"language:hau",
"language:ibo",
"language:yor",
"license:cc-by-nc-sa-4.0",
"sentiment analysis, Twitter, tweets",
"stopwords",
"region:us"
] | HausaNLP | Naija-Stopwords is a part of the Naija-Senti project. It is a list of collected stopwords from the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá. | @inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\"\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
} | 0 | 10 | 2023-06-16T09:12:05 | ---
license: cc-by-nc-sa-4.0
tags:
- sentiment analysis, Twitter, tweets
- stopwords
multilinguality:
- monolingual
- multilingual
language:
- hau
- ibo
- yor
pretty_name: NaijaStopwords
---
# Naija-Lexicons
Naija-Lexicons is a part of the [Naija-Senti](https://huggingface.co/datasets/HausaNLP/NaijaSenti-Twitter) project. It is a list of collected stopwords from the four most widely spoken languages in Nigeria — Hausa, Igbo, Nigerian-Pidgin, and Yorùbá.
--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/hausanlp/NaijaSenti/tree/main/data/stopwords
- **Repository:** [GitHub](https://github.com/hausanlp/NaijaSenti/tree/main/data/stopwords)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://aclanthology.org/2022.lrec-1.63/)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Hassan Muhammad](shamsuddeen2004@gmail.com)
### Languages
Three of the most widely spoken indigenous Nigerian languages:
* Hausa (hau)
* Igbo (ibo)
* Yoruba (yor)
## Dataset Structure
### Data Instances
List of lexicon instances in each of the 3 languages with their sentiment labels.
```
{
"word": "string",
"label": "string"
}
```
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Hausa). This downloads manually created and translated lexicons.
ds = load_dataset("HausaNLP/Naija-Lexicons", "hau")
# you can load specific languages (e.g., Hausa). You may also specify the split you want to download
ds = load_dataset("HausaNLP/Naija-Lexicons", "hau", split = "manual")
```
## Additional Information
### Dataset Curators
* Shamsuddeen Hassan Muhammad
* Idris Abdulmumin
* Ibrahim Said Ahmad
* Bello Shehu Bello
### Licensing Information
This Naija-Lexicons dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)
### Citation Information
```
@inproceedings{muhammad-etal-2022-naijasenti,
title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
author = "Muhammad, Shamsuddeen Hassan and
Adelani, David Ifeoluwa and
Ruder, Sebastian and
Ahmad, Ibrahim Sa{'}id and
Abdulmumin, Idris and
Bello, Bello Shehu and
Choudhury, Monojit and
Emezue, Chris Chinenye and
Abdullahi, Saheed Salahudeen and
Aremu, Anuoluwapo and
Jorge, Al{\'\i}pio and
Brazdil, Pavel",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.63",
pages = "590--602",
}
```
### Contributions
> This work was carried out with support from Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada’s International Development Research Centre. The views expressed herein do not necessarily represent those of Lacuna Fund, its Steering Committee, its funders, or Meridian Institute. | 3,175 | [
[
-0.04705810546875,
-0.030487060546875,
0.006237030029296875,
0.048065185546875,
-0.031280517578125,
0.0091094970703125,
-0.0325927734375,
-0.037139892578125,
0.0667724609375,
0.031951904296875,
-0.031219482421875,
-0.048858642578125,
-0.054168701171875,
0.04... |
PhaniManda/autotrain-data-identifying-person-location-date | 2023-06-22T09:17:06.000Z | [
"task_categories:token-classification",
"region:us"
] | PhaniManda | null | null | 3 | 10 | 2023-06-22T09:16:09 | ---
task_categories:
- token-classification
---
# AutoTrain Dataset for project: identifying-person-location-date
## Dataset Description
This dataset has been automatically processed by AutoTrain for project identifying-person-location-date.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"I",
"will",
"be",
"traveling",
"to",
"Tokyo",
"next",
"month."
],
"tags": [
13,
13,
13,
13,
13,
1,
13,
0,
5
]
},
{
"tokens": [
"The",
"company",
"Apple",
"Inc.",
"is",
"based",
"in",
"California."
],
"tags": [
13,
13,
3,
9,
13,
13,
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['B-DATE', 'B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-DATE', 'I-DATE,', 'I-LOC', 'I-MISC', 'I-ORG', 'I-ORG,', 'I-PER', 'I-PER,', 'O'], id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 21 |
| valid | 9 |
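A minimal sketch for decoding the integer tags back to label names (the split name follows the table above; `int2str` is the standard `ClassLabel` helper):
```python
from datasets import load_dataset

ds = load_dataset("PhaniManda/autotrain-data-identifying-person-location-date", split="train")
tag_feature = ds.features["tags"].feature  # ClassLabel with names such as 'B-LOC', 'O', ...

sample = ds[0]
print(list(zip(sample["tokens"], [tag_feature.int2str(t) for t in sample["tags"]])))
```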
| 1,526 | [
[
-0.03228759765625,
0.007335662841796875,
0.01425933837890625,
0.01971435546875,
-0.01323699951171875,
0.0164337158203125,
0.00225067138671875,
-0.0270538330078125,
0.015167236328125,
0.0214080810546875,
-0.05438232421875,
-0.0609130859375,
-0.031219482421875,
... |
breadlicker45/discord-chat | 2023-06-27T01:27:41.000Z | [
"region:us"
] | breadlicker45 | null | null | 1 | 10 | 2023-06-27T01:27:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lyogavin/Anima33B_rlhf_belle_eval_1k | 2023-06-28T00:24:01.000Z | [
"region:us"
] | lyogavin | null | null | 2 | 10 | 2023-06-28T00:23:53 | ---
dataset_info:
features:
- name: question
dtype: string
- name: std_answer
dtype: string
- name: class
dtype: string
- name: anima_answer
dtype: string
- name: anima_answer_extraced
dtype: string
- name: inputPrompt
dtype: string
- name: gpt_output
dtype: string
- name: gpt_output_score
dtype: float64
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_token_len
dtype: int64
- name: rejected_token_len
dtype: int64
splits:
- name: train
num_bytes: 2972300.1
num_examples: 700
- name: test
num_bytes: 1273842.9
num_examples: 300
download_size: 2384211
dataset_size: 4246143.0
---
# Dataset Card for "Anima33B_rlhf_belle_eval_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 889 | [
[
-0.04241943359375,
-0.0152740478515625,
0.003265380859375,
0.035064697265625,
-0.00762939453125,
-0.0019435882568359375,
0.03955078125,
-0.0217132568359375,
0.06591796875,
0.045989990234375,
-0.0731201171875,
-0.049102783203125,
-0.036590576171875,
-0.000660... |
Fsoft-AIC/the-vault-inline | 2023-08-22T10:01:46.000Z | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | Fsoft-AIC | The Vault is a multilingual code-text dataset with over 34 million pairs covering 10 popular programming languages.
It is the largest corpus containing parallel code-text data. By building upon The Stack, a massive raw code sample collection,
the Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. It provides a
high-quality dataset that includes code-text pairs at multiple levels, such as class and inline-level, in addition to the function level.
The Vault can serve many purposes at multiple levels. | @article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
} | 2 | 10 | 2023-06-30T11:07:10 | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: return_type
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault which contains code snippets from 10 popular programming languages such as Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
## Dataset Structure
### Data Instances
```
{
"hexsha": "ee1cf38808d3db0ea364b049509a01a65e6e5589",
"repo": "Waguy02/Boomer-Scripted",
"path": "python/subprojects/testbed/mlrl/testbed/persistence.py",
"license": [
"MIT"
],
"language": "Python",
"identifier": "__init__",
"code": "def __init__(self, model_dir: str):\n \"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"\n self.model_dir = model_dir",
"code_tokens": [
"def",
"__init__",
"(",
"self",
",",
"model_dir",
":",
"str",
")",
":",
"\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"",
"self",
".",
"model_dir",
"=",
"model_dir"
],
"original_comment": "\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"",
"comment": ":param model_dir: The path of the directory where models should be saved",
"comment_tokens": [
":",
"param",
"model_dir",
":",
"The",
"path",
"of",
"the",
"directory",
"where",
"models",
"should",
"be",
"saved"
],
"start_point": [
1,
8
],
"end_point": [
3,
11
],
"prev_context": {
"code": null,
"start_point": null,
"end_point": null
},
"next_context": {
"code": "self.model_dir = model_dir",
"start_point": [
4,
8
],
"end_point": [
4,
34
]
}
}
```
### Data Fields
Data fields for inline level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **original_comment** (string): original text of the comment
- **comment** (string): clean version of the comment
- **comment_tokens** (list): tokenized version of `comment`
- **start_point** (list): start (line, column) position of `original_comment` in `code`
- **end_point** (list): end (line, column) position of `original_comment` in `code`
- **prev_context** (dict): block of code before `original_comment`
- **next_context** (dict): block of code after `original_comment`
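As a rough sketch of how these fields fit together (it ignores the exact `start_point`/`end_point` coordinates and simply concatenates the context blocks around the comment):
```python
from datasets import load_dataset

# Stream one sample rather than downloading the full 69M-row set
ds = load_dataset("Fsoft-AIC/the-vault-inline", streaming=True)
sample = next(iter(ds["train"]))

# Naive reassembly of the comment in context
parts = [sample["prev_context"]["code"], sample["original_comment"], sample["next_context"]["code"]]
print("\n".join(p for p in parts if p))
```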
### Data Splits
In this repo, the inline-level data is not split and is contained in a single train set.
## Dataset Statistics
| Languages | Number of inline comments |
|:-----------|---------------------------:|
|Python | 14,013,238 |
|Java | 17,062,277 |
|JavaScript | 1,438,110 |
|PHP | 5,873,744 |
|C | 6,778,239 |
|C# | 6,274,389 |
|C++ | 10,343,650 |
|Go | 4,390,342 |
|Ruby | 767,563 |
|Rust | 2,063,784 |
|TOTAL | **69,005,336** |
## Usage
You can load The Vault dataset using the `datasets` library: ```pip install datasets```
```python
from datasets import load_dataset
# Load full inline level dataset (69M samples)
dataset = load_dataset("Fsoft-AIC/the-vault-inline")
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-inline", languages=['Python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-inline", streaming= True)
for sample in iter(data['train']):
print(sample)
```
## Additional information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). | 7,263 | [
[
-0.021331787109375,
-0.0256805419921875,
0.007083892822265625,
0.0240325927734375,
-0.0081329345703125,
0.0189666748046875,
-0.003040313720703125,
-0.0101318359375,
0.00039005279541015625,
0.028076171875,
-0.048675537109375,
-0.0677490234375,
-0.02783203125,
... |
UmaDiffusion/ULTIMA | 2023-07-29T03:16:24.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | UmaDiffusion | ULTIMA Dataset - Uma Musume Labeled Text-Image Multimodal Alignment Dataset | @misc{ULTIMA,
author = {Oh Giyeong (BootsofLagrangian), Kang Dohoon (Haken)},
title = {ULTIMA - Uma Musume Labeled Text-Image Multimodal Alignment Dataset},
howpublished = {\\url{https://huggingface.co/datasets/UmaDiffusion/ULTIMA}},
month = {July},
year = {2023}
} | 4 | 10 | 2023-07-02T07:39:10 | ---
license: other
language:
- en
multilinguality:
- monolingual
pretty_name: Uma Musume Labeled Text-Image Multimodal Alignment Dataset
size_categories:
- 10K<n<100K
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
extra_gated_prompt: "You agree to use this dataset for non-commercial ONLY and NOT VIOLATE the guidelines for secondary creation of Uma Musume Pretty Derby."
extra_gated_fields:
I agree to use this dataset for non-commercial ONLY and to NOT VIOLATE the guidelines for secondary creation of Uma Musume Pretty Derby from Cygames, Inc: checkbox
---
---
# About **ULTIMA**
ULTIMA Dataset is **U**ma Musume **L**abeled **T**ext-**I**mage **M**ultimodal **A**lignment Dataset.
ULTIMA is *a supervised dataset for fine-tuning* of characters in Uma Musume: Pretty Derby.
It contains **~14K** text-image pairs.
We ***manually*** processed the entire dataset. This is an essential fact, even though the process was machine-assisted.
What we did is on [Data Preprocessing.md](https://huggingface.co/datasets/UmaDiffusion/ULTIMA/blob/main/Data%20Preprocessing.md).
Statistics about the dataset and abbreviations of Uma Musume are in [statistics.md](https://huggingface.co/datasets/UmaDiffusion/ULTIMA/blob/main/statistics.md).
Pruned tag-clothes pairs are in [prompts.md](https://huggingface.co/datasets/UmaDiffusion/ULTIMA/blob/main/prompts.md)
## Dataset Structure
We use a modularized file structure to distribute ULTIMA. The 14,460 images in ULTIMA are split into 73 folders, where each folder contains 200 images and a JSON file that maps these 200 images to their text and information.
```bash
# ULTIMA
./
├──data
│ ├──part-00000
│ │ ├──01_agt_00000.png
│ │ ├──01_agt_00001.png
│ │ ├──01_agt_00002.png
│ │ ├──[...]
│ │ └──part-00000.json
│ ├──part-00002
│ ├──part-00003
│ ├──[...]
│ └──part-00072
└──metadata.parquet
```
These sub-folders have names `part-0xxxx`, and each image has a name following the format `[quality]_[abbreviation]_[image number].png`. The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file. The JSON file contains key-value pairs mapping image filenames to their prompts and aesthetic scores.
## Data Instances
For example, below is the image of `01_agt_00007.png` and its key-value pair in `part-00000.json`.
<img width="300" src="https://i.imgur.com/LNNVGA2.png">
```json
{
"01_agt_00007.png": {
"text": "agnes tachyon \(umamusume\), labcoat, closed eyes, white background, single earring, tracen school uniform, smile, open mouth, sleeves past fingers, blush, upper body, sleeves past wrists, purple shirt, facing viewer, sailor collar, bowtie, long sleeves, :d, purple bow, white coat, breasts",
"width": 1190,
"height": 1684,
"pixels": 2003960,
"LAION_aesthetic": 6.2257309,
"cafe_aesthetic": 0.97501057
},
}
```
## Data Fields
- key: Unique image name
- `text`: Manipulated tags
- `width`: Width of image
- `height`: Height of image
- `pixels`: Pixels(Width*Height) of image
- `LAION_aesthetic`: Aesthetic score by [CLIP+MLP Aesthetic Score Predictor](https://github.com/christophschuhmann/improved-aesthetic-predictor)
- `cafe_aesthetic`: Aesthetic score by [cafe aesthetic](https://huggingface.co/cafeai/cafe_aesthetic)
## Data Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include metadata table `metadata.parquet` for ULTIMA.
The shape of `metadata.parquet` is (14460, 8). We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., texts) without reading the entire table.
Below are the first three rows from `metadata.parquet`.
| image_name | text | part_id | width | height | pixels | LAION_aesthetic | cafe_aesthetic |
|:--:|:---------------------------------------|:-:|:-:|:-:|:-:|:-:|:-:|
| 01_agt_00000.png | agnes tachyon \\\(umamusume\\\), vehicle focus, motor vehicle, ground vehicle, labcoat, sleeves past wrists, sports car, sleeves past fingers, yellow sweater, black necktie, open mouth, black pantyhose, smile, looking at viewer, single earring, short necktie, holding | 0 | 3508 | 2480 | 8699840 | 5.99897194 | 0.9899081 |
| 01_agt_00001.png | agnes tachyon \\\(umamusume\\\), labcoat, sleeves past wrists, sleeves past fingers, long sleeves, black pantyhose, skirt, smile, white background, white coat, cowboy shot, from side, profile, hand up, closed mouth, yellow sweater, collared shirt, black shirt, black necktie, pen coat, looking to the side | 0 | 1105 | 1349 | 1490645 | 6.3266325 | 0.99231464 |
| 01_agt_00002.png | agnes tachyon \\\(umamusume\\\), labcoat, test tube, sitting, crossed legs, yellow sweater, sleeves past wrists, black pantyhose, sleeves past fingers, black necktie, boots removed, high heels, full body, long sleeves, shoes, high heel boots, single shoe, sweater vest, white coat, smile, closed mouth, collared shirt, single boot, white footwear, white background, single earring, black shirt, short necktie, open coat, vial | 0 | 2000 | 2955 | 5910000 | 6.21014023 | 0.94741267 |
## Metadata Schema
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`| Image filename |
|`text`|`string`| The manipulated text of image for alignment |
|`part_id`|`uint16`| Folder ID of this image |
|`width`|`uint16`| Image width |
|`height`|`uint16`| Image height |
|`pixels`|`uint32`| Image pixels |
|`LAION_aesthetic`|`float32`| LATION aesthetic score of image |
|`cafe_aesthetic`|`float32`| cafe aesthetic score of image |
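A sketch of the column-based access mentioned above (the dataset is gated, so a logged-in Hugging Face token may be required; column names follow the schema table):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch only the metadata table, not the image archives
path = hf_hub_download(repo_id="UmaDiffusion/ULTIMA", filename="metadata.parquet", repo_type="dataset")

# Column-based read: load just the columns we need
meta = pd.read_parquet(path, columns=["image_name", "text", "LAION_aesthetic"])
print(meta.head(3))
```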
# Considerations for Using the Data
## Limitations and Bias
The whole process was based on the subjectivity of the author.
0. Domain of the dataset, which only contains characters in Uma Musume: Pretty Derby.
1. Collection of images
2. Calibration on images
3. Manipulation of tags
4. Alignment on tags
5. Separation of images by quality
Therefore, the dataset is entirely based on the author's supervision, not on any objective metric.
## Guidelines for secondary creation of Uma Musume: Pretty Derby
Here are the guidelines for secondary creation of Uma Musume: Pretty Derby from Cygames, Inc.
>We would like to provide you with the guidelines for secondary creations of Uma Musume Pretty Derby.
>This work features numerous characters based on real-life racehorses, and it has been made possible through the cooperation of many individuals, including the horse owners who have lent their horse names.
>We kindly ask everyone, including fans of the racehorses that serve as motifs, horse owners, and related parties, to refrain from expressions that may cause discomfort or significantly damage the image of the racehorses or characters.
>Specifically, please refrain from publishing creations that fall under the following provisions within Uma Musume Pretty Derby
>1. Creations that aim to harm this work, the thoughts of third parties, or their reputation
>2. Violent, grotesque, or sexually explicit content
>3. Creations that excessively support or denigrate specific politics, religions, or beliefs
>4. Expressions with antisocial content
>5. Creations that infringe upon the rights of third parties
>
>These guidelines have been established after consultation with the management company responsible for the horse names.
>In cases that fall under the aforementioned provisions, we may have to consider taking legal measures if necessary.
>These guidelines do not deny the fan activities of those who support Uma Musume.
>We have established these guidelines to ensure that everyone can engage in fan activities with peace of mind.
>We appreciate your understanding and cooperation.
>Please note that we will not provide individual responses to inquiries regarding these guidelines.
>The Uma Musume project will continue to support racehorses and their achievements alongside everyone, in order to uphold the dignity of these renowned horses.
Translated by ChatGPT. The original document (in Japanese) is [here](https://umamusume.jp/derivativework_guidelines/).
## Licensing Information
The dataset is made available for academic research purposes only and for non-commercial purposes. All the images are collected from the Internet, and the copyright of the images belongs to the original owners. If any of the images belongs to you and you would like it removed, please inform us, and we will try to remove it from the dataset.
## Citation
```bibtex
@misc{ULTIMA,
author = {Oh Giyeong (BootsofLagrangian), Kang Dohoon (Haken)},
title = {ULTIMA - Uma Musume Labeled Text-Image Alignment Dataset},
howpublished = {\url{https://huggingface.co/datasets/UmaDiffusion/ULTIMA}},
month = {July},
year = {2023}
}
```
| 8,835 | [
[
-0.04840087890625,
-0.05126953125,
0.031890869140625,
0.00579071044921875,
-0.0419921875,
-0.00981903076171875,
0.0005788803100585938,
-0.040130615234375,
0.04718017578125,
0.056976318359375,
-0.0419921875,
-0.07171630859375,
-0.02935791015625,
0.01919555664... |
rdpahalavan/CIC-IDS2017 | 2023-07-22T21:42:04.000Z | [
"task_categories:text-classification",
"task_categories:tabular-classification",
"size_categories:100M<n<1B",
"license:apache-2.0",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"CIC-IDS2017",
"region:us"
] | rdpahalavan | null | null | 0 | 10 | 2023-07-08T07:25:54 | ---
license: apache-2.0
task_categories:
- text-classification
- tabular-classification
size_categories:
- 100M<n<1B
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
- CIC-IDS2017
---
We have developed a Python package as a wrapper around the Hugging Face Hub and the Hugging Face Datasets library to access this dataset easily.
# NIDS Datasets
The `nids-datasets` package provides functionality to download and utilize specially curated and extracted datasets from the original UNSW-NB15 and CIC-IDS2017 datasets. These datasets, which initially were only flow datasets, have been enhanced to include packet-level information from the raw PCAP files. The dataset contains both packet-level and flow-level data for over 230 million packets, with 179 million packets from UNSW-NB15 and 54 million packets from CIC-IDS2017.
## Installation
Install the `nids-datasets` package using pip:
```shell
pip install nids-datasets
```
Import the package in your Python script:
```python
from nids_datasets import Dataset, DatasetInfo
```
## Dataset Information
The `nids-datasets` package currently supports two datasets: [UNSW-NB15](https://research.unsw.edu.au/projects/unsw-nb15-dataset) and [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html). Each of these datasets contains a mix of normal traffic and different types of attack traffic, which are identified by their respective labels. The UNSW-NB15 dataset has 10 unique class labels, and the CIC-IDS2017 dataset has 24 unique class labels.
- UNSW-NB15 Labels: 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis'
- CIC-IDS2017 Labels: 'BENIGN', 'FTP-Patator', 'SSH-Patator', 'DoS slowloris', 'DoS Slowhttptest', 'DoS Hulk', 'Heartbleed', 'Web Attack – Brute Force', 'Web Attack – XSS', 'Web Attack – SQL Injection', 'Infiltration', 'Bot', 'PortScan', 'DDoS', 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis', 'DoS GoldenEye'
## Subsets of the Dataset
Each dataset consists of four subsets:
1. Network-Flows - Contains flow-level data.
2. Packet-Fields - Contains packet header information.
3. Packet-Bytes - Contains packet byte information in the range (0-255).
4. Payload-Bytes - Contains payload byte information in the range (0-255).
Each subset contains 18 files (except Network-Flows, which has one file), where the data is stored in parquet format. In total, this package provides access to 110 files. You can choose to download all subsets or select specific subsets or specific files depending on your analysis requirements.
## Getting Information on the Datasets
The `DatasetInfo` function provides a summary of the dataset in a pandas dataframe format. It displays the number of packets for each class label across all 18 files in the dataset. This overview can guide you in selecting specific files for download and analysis.
```python
df = DatasetInfo(dataset='UNSW-NB15') # or dataset='CIC-IDS2017'
df
```
## Downloading the Datasets
The `Dataset` class allows you to specify the dataset, subset, and files that you are interested in. The specified data will then be downloaded.
```python
dataset = 'UNSW-NB15' # or 'CIC-IDS2017'
subset = ['Network-Flows', 'Packet-Fields', 'Payload-Bytes'] # or 'all' for all subsets
files = [3, 5, 10] # or 'all' for all files
data = Dataset(dataset=dataset, subset=subset, files=files)
data.download()
```
The directory structure after downloading files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
└───Payload-Bytes
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
You can then load the parquet files using pandas:
```python
import pandas as pd
df = pd.read_parquet('UNSW-NB15/Packet-Fields/Packet_Fields_File_10.parquet')
```
## Merging Subsets
The `merge()` method allows you to merge all data of each packet across all subsets, providing both flow-level and packet-level information in a single file.
```python
data.merge()
```
The `merge()` method, by default, uses the details specified when instantiating the `Dataset` class. You can also pass `subset` (a list of subsets) and `files` (a list of files) to control what is merged.
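For instance, continuing the example above, a selective merge might look like this (the parameter values are illustrative, based on the description of `merge()` given here):
```python
# Merge only the flow data with packet fields and payload bytes for two files
data.merge(subset=['Packet-Fields', 'Payload-Bytes'], files=[3, 5])
```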
The directory structure after merging files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
└───Network-Flows+Packet-Fields+Payload-Bytes
├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
└───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
```
## Extracting Bytes
The Packet-Bytes and Payload-Bytes subsets contain the first 1500-1600 bytes. To retrieve all bytes (up to 65535 bytes) from the Packet-Bytes and Payload-Bytes subsets, use the `bytes()` method. This method requires files from the Packet-Fields subset to operate. You can specify how many bytes to extract by passing the `max_bytes` parameter.
```python
data.bytes(payload=True, max_bytes=2500)
```
Use `packet=True` to extract packet bytes. You can also pass `files` (a list of files) to retrieve bytes from specific files.
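Continuing the example, extracting packet bytes from specific files might look like this (the parameter combination is inferred from the description above and is illustrative):
```python
# Extract up to 1500 packet bytes for files 3 and 5
data.bytes(packet=True, files=[3, 5], max_bytes=1500)
```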
The directory structure after extracting bytes:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
├───Network-Flows+Packet-Fields+Payload-Bytes
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
│ └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
│
└───Payload-Bytes-2500
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
## Reading the Datasets
The `read()` method allows you to read files using Hugging Face's `load_dataset` method, one subset at a time. The `dataset` and `files` parameters are optional if the same details were used to instantiate the `Dataset` class.
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2])
```
The `read()` method returns a dataset that you can convert to a pandas dataframe or save to a CSV, parquet, or any other desired file format:
```python
df = dataset.to_pandas()
dataset.to_csv('file_path_to_save.csv')
dataset.to_parquet('file_path_to_save.parquet')
```
For scenarios where you want to process one packet at a time, you can use the `stream=True` parameter:
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2], stream=True)
print(next(iter(dataset)))
```
## Notes
The size of these datasets is large, and depending on the subset(s) selected and the number of bytes extracted, the operations can be resource-intensive. Therefore, it's recommended to ensure you have sufficient disk space and RAM when using this package. | 7,424 | [
[
-0.0374755859375,
-0.052276611328125,
-0.00685882568359375,
0.04571533203125,
-0.0070037841796875,
-0.0079498291015625,
0.00992584228515625,
-0.02398681640625,
0.048675537109375,
0.050567626953125,
-0.024139404296875,
-0.0250701904296875,
-0.034820556640625,
... |
DynamicSuperb/EnvironmentalSoundClassification_ESC50-ExteriorAndUrbanNoises | 2023-07-12T05:58:21.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 10 | 2023-07-11T11:56:42 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 176506600.0
num_examples: 400
download_size: 168310913
dataset_size: 176506600.0
---
# Dataset Card for "environmental_sound_classification_exterior_and_urban_noises_ESC50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 523 | [
[
-0.056884765625,
-0.006313323974609375,
0.02459716796875,
0.01751708984375,
0.0096588134765625,
0.00007736682891845703,
-0.013458251953125,
-0.033538818359375,
0.029998779296875,
0.016998291015625,
-0.055572509765625,
-0.07940673828125,
-0.0191650390625,
-0.... |
BigSuperbPrivate/SpeakerVerification_LibrispeechTrainClean100 | 2023-07-17T19:29:07.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 10 | 2023-07-14T18:31:55 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: file2
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 6617191795.67
num_examples: 28539
- name: validation
num_bytes: 359547975.058
num_examples: 2703
download_size: 6771822691
dataset_size: 6976739770.728
---
# Dataset Card for "SpeakerVerification_LibrispeechTrainClean100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 617 | [
[
-0.0574951171875,
-0.0176239013671875,
0.01471710205078125,
0.0169677734375,
0.0023860931396484375,
-0.0091552734375,
-0.0130767822265625,
-0.0018911361694335938,
0.06494140625,
0.0284423828125,
-0.0599365234375,
-0.0472412109375,
-0.0310516357421875,
-0.037... |
ivrit-ai/audio-base | 2023-09-26T05:49:29.000Z | [
"task_categories:audio-classification",
"task_categories:voice-activity-detection",
"size_categories:1K<n<10K",
"language:he",
"license:other",
"arxiv:2307.08720",
"region:us"
] | ivrit-ai | null | null | 4 | 10 | 2023-07-15T08:01:33 | ---
license: other
task_categories:
- audio-classification
- voice-activity-detection
language:
- he
size_categories:
- 1K<n<10K
extra_gated_prompt:
"You agree to the following license terms:
This material and data is licensed under the terms of the Creative Commons Attribution 4.0
International License (CC BY 4.0), The full text of the CC-BY 4.0 license is available at
https://creativecommons.org/licenses/by/4.0/.
Notwithstanding the foregoing, this material and data may only be used, modified and distributed for
the express purpose of training AI models, and subject to the foregoing restriction. In addition, this
material and data may not be used in order to create audiovisual material that simulates the voice or
likeness of the specific individuals appearing or speaking in such materials and data (a “deep-fake”).
To the extent this paragraph is inconsistent with the CC-BY-4.0 license, the terms of this paragraph
shall govern.
By downloading or using any of this material or data, you agree that the Project makes no
representations or warranties in respect of the data, and shall have no liability in respect thereof. These
disclaimers and limitations are in addition to any disclaimers and limitations set forth in the CC-BY-4.0
license itself. You understand that the project is only able to make available the materials and data
pursuant to these disclaimers and limitations, and without such disclaimers and limitations the project
would not be able to make available the materials and data for your use."
extra_gated_fields:
I have read the license, and agree to its terms: checkbox
---
ivrit.ai is a database of Hebrew audio and text content.
**audio-base** contains the raw, unprocessed sources.
**audio-vad** contains audio snippets generated by applying Silero VAD (https://github.com/snakers4/silero-vad) to the base dataset.
v1 data is generated using silero-vad's default parameters.
v2 data is generated using `min_speech_duration_ms=2000` (milliseconds) and `max_speech_duration_s=30` (seconds).
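As an illustration, a rough sketch of how such snippets could be produced with Silero VAD follows; the torch.hub loading interface is taken from the silero-vad README, and the input file name is a placeholder:
```python
import torch

# Sketch only: load Silero VAD via torch.hub and segment one recording
# using the v2 parameters mentioned above.
model, utils = torch.hub.load('snakers4/silero-vad', 'silero_vad')
get_speech_timestamps, _, read_audio, _, _ = utils

wav = read_audio('recording.wav', sampling_rate=16000)  # hypothetical file
timestamps = get_speech_timestamps(
    wav, model,
    sampling_rate=16000,
    min_speech_duration_ms=2000,
    max_speech_duration_s=30,
)
print(timestamps)  # list of {'start': ..., 'end': ...} sample offsets
```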
**audio-transcripts** contains transcriptions for each snippet in the audio-vad dataset.
You can find the full list of sources in this dataset under https://www.ivrit.ai/en/credits.
Paper: https://arxiv.org/abs/2307.08720
If you use our datasets, please cite the following:
```
@misc{marmor2023ivritai,
title={ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development},
author={Yanir Marmor and Kinneret Misgav and Yair Lifshitz},
year={2023},
eprint={2307.08720},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
| 2,665 | [
[
-0.0226287841796875,
-0.0609130859375,
0.0010728836059570312,
0.01324462890625,
-0.02716064453125,
-0.01470947265625,
-0.027313232421875,
-0.037139892578125,
0.017974853515625,
0.043914794921875,
-0.036468505859375,
-0.040496826171875,
-0.0350341796875,
0.00... |
Dmini/FFHQ-64x64 | 2023-07-21T02:36:30.000Z | [
"region:us"
] | Dmini | null | null | 0 | 10 | 2023-07-21T02:26:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Bellaaazzzzz/X_ray | 2023-07-27T23:41:09.000Z | [
"region:us"
] | Bellaaazzzzz | null | null | 0 | 10 | 2023-07-21T21:11:52 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: additional_feature
dtype: int64
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 625886838.168
num_examples: 4218
- name: test
num_bytes: 86326556.0
num_examples: 762
download_size: 697216425
dataset_size: 712213394.168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "X_ray"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 673 | [
[
-0.0294952392578125,
0.005573272705078125,
0.02508544921875,
-0.004962921142578125,
-0.0198516845703125,
0.00833892822265625,
0.03857421875,
-0.0165863037109375,
0.06982421875,
0.031402587890625,
-0.0487060546875,
-0.053802490234375,
-0.04290771484375,
-0.02... |
openchat/openchat_sharegpt_v3 | 2023-09-04T14:32:11.000Z | [
"license:mit",
"region:us"
] | openchat | null | null | 14 | 10 | 2023-07-22T15:51:31 | ---
license: mit
---
ShareGPT dataset for training the OpenChat V3 series. See the [OpenChat repository](https://github.com/imoneoi/openchat) for instructions.
Contents:
* `sharegpt_clean.json`: ShareGPT dataset in original format, converted to Markdown, and with `model` labels.
* `sharegpt_gpt4.json`: All instances in `sharegpt_clean.json` with `model == "Model: GPT-4"`.
* `*.parquet`: Pre-tokenized dataset for training specified version of OpenChat.
Note: The dataset is NOT currently compatible with the Hugging Face datasets loader.
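Since the loader is not supported, one workaround is to download a file directly; here is a minimal sketch (the file name comes from the listing above):
```python
import json
from huggingface_hub import hf_hub_download

# Download the GPT-4 subset straight from the Hub and parse it as JSON
path = hf_hub_download(
    repo_id="openchat/openchat_sharegpt_v3",
    filename="sharegpt_gpt4.json",
    repo_type="dataset",
)
with open(path) as f:
    conversations = json.load(f)
print(len(conversations))
```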
Licensed under MIT.
| 543 | [
[
-0.02691650390625,
-0.04071044921875,
0.00812530517578125,
0.027099609375,
-0.018402099609375,
0.0007157325744628906,
0.0033206939697265625,
-0.01529693603515625,
0.01451873779296875,
0.049346923828125,
-0.059417724609375,
-0.037689208984375,
-0.0369873046875,
... |
WelfCrozzo/kupalinka-sum | 2023-09-03T13:05:28.000Z | [
"region:us"
] | WelfCrozzo | null | null | 0 | 10 | 2023-07-23T15:48:48 | ---
dataset_info:
features:
- name: x
dtype: string
- name: y
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 839341049
num_examples: 332876
- name: validation
num_bytes: 93823377
num_examples: 37057
download_size: 516244839
dataset_size: 933164426
---
# Dataset Card for "kupalinka-sum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 492 | [
[
-0.037322998046875,
-0.013427734375,
0.00897216796875,
0.0242462158203125,
-0.02789306640625,
0.003696441650390625,
0.0232696533203125,
-0.0022335052490234375,
0.06427001953125,
0.029541015625,
-0.049652099609375,
-0.039703369140625,
-0.050811767578125,
-0.0... |
FreedomIntelligence/MMLU_Korean | 2023-08-06T08:06:43.000Z | [
"language:ko",
"license:mit",
"region:us"
] | FreedomIntelligence | null | null | 2 | 10 | 2023-07-24T05:46:16 | ---
license: mit
language:
- ko
---
Korean version of the MMLU dataset, translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 222 | [
[
-0.01751708984375,
-0.025238037109375,
0.022369384765625,
0.021331787109375,
-0.0201263427734375,
0.0088043212890625,
-0.0037689208984375,
-0.0228118896484375,
0.024871826171875,
0.024993896484375,
-0.07330322265625,
-0.0081787109375,
-0.01271820068359375,
0... |
polplop/cnndm_llama2_7b_chat_summary | 2023-07-27T05:48:43.000Z | [
"region:us"
] | polplop | null | null | 1 | 10 | 2023-07-25T09:39:11 | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: clean_summary
dtype: string
- name: summary_summacConv_scores
dtype: float64
- name: highlight_summacConv_scores
dtype: float64
splits:
- name: test
num_bytes: 813399
num_examples: 200
download_size: 538654
dataset_size: 813399
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "cnndm_llama2_7b_chat_summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 676 | [
[
-0.04193115234375,
-0.02685546875,
0.00920867919921875,
0.0299224853515625,
-0.038421630859375,
0.007266998291015625,
0.01352691650390625,
-0.0144805908203125,
0.055908203125,
0.0335693359375,
-0.0452880859375,
-0.06195068359375,
-0.0523681640625,
-0.0147247... |
DynamicSuperb/SpeechCommandRecognition_GoogleSpeechCommandsV1 | 2023-07-26T08:46:33.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 10 | 2023-07-26T08:16:57 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: is_unknown
dtype: bool
- name: speaker_id
dtype: string
- name: utterance_id
dtype: int8
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 1956550554.0
num_examples: 60973
download_size: 1788385204
dataset_size: 1956550554.0
---
# Dataset Card for "SpeechCommandRecognition_GoogleSpeechCommandsV1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 660 | [
[
-0.036773681640625,
-0.0194854736328125,
0.00362396240234375,
0.0199432373046875,
-0.005008697509765625,
-0.0035400390625,
-0.0048065185546875,
-0.0022830963134765625,
0.06500244140625,
0.0307159423828125,
-0.0787353515625,
-0.061798095703125,
-0.05377197265625,... |
MajdTannous/Dataset1 | 2023-10-26T07:51:19.000Z | [
"region:us"
] | MajdTannous | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 0 | 10 | 2023-08-04T13:17:48 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
foduucom/table-detection-yolo | 2023-08-05T14:42:23.000Z | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"language:en",
"foduuai",
"table",
"Documents",
"bordered table",
"borderless table",
"unstructured document",
"region:us"
] | foduucom | null | null | 5 | 10 | 2023-08-05T11:43:51 | ---
task_categories:
- object-detection
tags:
- foduuai
- table
- Documents
- bordered table
- borderless table
- unstructured document
language:
- en
pretty_name: TableBorderNet
size_categories:
- 1K<n<10K
---
<div align="center">
<img width="640" alt="foduucom/table-detection-yolo" src="https://huggingface.co/datasets/foduucom/table-detection-yolo/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("foduucom/table-detection-yolo", name="full")
example = ds['train'][0]
```
## Dataset Summary
The **Table Detection Dataset** is a curated collection of images, each depicting tables that are classified as either 'bordered' or 'borderless'. The dataset is provided in YOLO format, featuring annotations for accurate object detection and classification. It serves as a valuable resource for researchers, developers, and practitioners working on table detection tasks, with a specific focus on distinguishing between tables with distinct visual characteristics.
**Key Features:**
- **Image Variety:** The dataset encompasses a diverse range of images, capturing tables from various real-world scenarios and environments.
- **Annotation Precision:** Each image is meticulously annotated with bounding box coordinates and class labels, indicating whether the table is 'bordered' or 'borderless'.
- **YOLO Format:** Annotations follow the YOLO format, making it suitable for training and evaluating object detection models.
- **Research and Development:** The dataset is designed to facilitate advancements in table detection algorithms and technologies, enabling the development of models capable of accurately identifying and classifying different types of tables.
Whether you are working on document analysis, data extraction, or image-based content recognition, the Table Detection Dataset provides an essential foundation for enhancing the capabilities of object detection models in identifying tables with varying visual attributes. By offering a comprehensive collection of border and borderless tables, this dataset empowers the AI community to tackle challenges in table detection across a wide range of applications.
For more details and access to the dataset, please contact info@foduu.com.
[
-0.0557861328125,
-0.05633544921875,
0.017608642578125,
-0.027618408203125,
-0.030242919921875,
-0.0213623046875,
0.0101165771484375,
-0.040557861328125,
0.0272369384765625,
0.04632568359375,
-0.014129638671875,
-0.06103515625,
-0.0265960693359375,
0.0099334... |
worldboss/bitcoin-data-sentiment | 2023-08-11T23:05:06.000Z | [
"region:us"
] | worldboss | null | null | 0 | 10 | 2023-08-11T23:04:06 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
Trelis/protein_stability_single_mutation | 2023-08-21T20:47:40.000Z | [
"task_categories:question-answering",
"task_categories:tabular-classification",
"task_categories:text-generation",
"size_categories:100K<1M",
"language:en",
"biology",
"proteins",
"amino-acids",
"region:us"
] | Trelis | null | null | 0 | 10 | 2023-08-17T16:43:47 | ---
task_categories:
- question-answering
- tabular-classification
- text-generation
language:
- en
tags:
- biology
- proteins
- amino-acids
size_categories:
- 100K<n<1M
---
# Protein Data Stability - Single Mutation
This repository contains data on the change in protein stability with a single mutation.
## Attribution of Data Sources
- **Primary Source**: Tsuboyama, K., Dauparas, J., Chen, J. et al. Mega-scale experimental analysis of protein folding stability in biology and design. Nature 620, 434–444 (2023). [Link to the paper](https://www.nature.com/articles/s41586-023-06328-6)
- **Dataset Link**: [Zenodo Record](https://zenodo.org/record/7992926)
Within this broader work, the relevant dataset (#3) is shown in `dataset_table.jpeg` in this repository's files.
## Sample Protein Stability Data [subset of 4 columns]
| Base Protein Sequence | Mutation | ΔΔG_ML | Classification |
|-------------------------------------------------------------|----------|--------------------|-----------------|
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63W | -0.2010871345320799 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63Y | 0.0194756159891467 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63F | 0.7231614929744659 | stabilising |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63P | -0.3668887752897785 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63C | -0.5317304030261774 | destabilising |
## Dataset Structure
This dataset focuses on the differential deltaG of *unfolding* (mutation minus base) of various protein mutations and is derived from stability measurements (free energy of unfolding) measured by two proteases, trypsin and chymotrypsin.
### Columns (Trypsin):
- **name**: The name of the protein variant.
- **dna_seq**: The DNA sequence encoding the protein variant.
- **log10_K50_t**: The log10 of the K50 value measured with trypsin (a measure of stability).
- **log10_K50_t_95CI_high**: The upper bound of the 95% confidence interval for log10_K50_t.
- **log10_K50_t_95CI_low**: The lower bound of the 95% confidence interval for log10_K50_t.
- **log10_K50_t_95CI**: The width of the 95% confidence interval for log10_K50_t.
- **fitting_error_t**: A measure of error between the model and data for trypsin.
- **log10_K50unfolded_t**: The predicted log10 K50 value for the unfolded state with trypsin.
- **deltaG_t**: The ΔG stability calculated from the trypsin data.
- **deltaG_t_95CI_high**: The upper bound of the ΔG confidence interval from trypsin.
- **deltaG_t_95CI_low**: The lower bound of the ΔG confidence interval from trypsin.
- **deltaG_t_95CI**: The width of the ΔG confidence interval from trypsin.
### Columns (Chymotrypsin):
- **log10_K50_c**: Analogous to `log10_K50_t`, but for chymotrypsin.
- **log10_K50_c_95CI_high**: Upper bound of the 95% CI for `log10_K50_c`.
- **log10_K50_c_95CI_low**: Lower bound of the 95% CI for `log10_K50_c`.
- **log10_K50_c_95CI**: Width of the 95% CI for `log10_K50_c`.
- **fitting_error_c**: A measure of error between the model and data for chymotrypsin.
- **log10_K50unfolded_c**: Predicted log10 K50 value for the unfolded state with chymotrypsin.
- **deltaG_c**: ΔG stability calculated from the chymotrypsin data.
- **deltaG_c_95CI_high**: Upper bound of the ΔG CI from chymotrypsin.
- **deltaG_c_95CI_low**: Lower bound of the ΔG CI from chymotrypsin.
- **deltaG_c_95CI**: Width of the ΔG CI from chymotrypsin.
### Combined Data:
- **deltaG**: The combined ΔG estimate from both trypsin and chymotrypsin.
- **deltaG_95CI_high**: Upper bound of the combined ΔG confidence interval.
- **deltaG_95CI_low**: Lower bound of the combined ΔG confidence interval.
- **deltaG_95CI**: Width of the combined ΔG confidence interval.
### Protein Sequencing Data:
- **aa_seq_full**: The full amino acid sequence.
- **aa_seq**: A (sometimes shortened) amino acid sequence representing the protein.
- **mut_type**: The type of mutation introduced to the protein.
- **WT_name**: Name of the wild type variant.
- **WT_cluster**: Cluster classification for the wild type variant.
- **mutation**: Represented as a combination of amino acid and its position (e.g., F10N indicates changing the 10th amino acid (F) in the sequence to N).
- **base_aa_seq**: The base sequence of the protein before the mutation.
### Derived Data:
- **log10_K50_trypsin_ML**: Log10 value of K50 derived from a machine learning model using trypsin data.
- **log10_K50_chymotrypsin_ML**: Log10 value of K50 derived from a machine learning model using chymotrypsin data.
- **dG_ML**: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.
- **ddG_ML**: Differential ΔG (mutation minus base) derived from a machine learning model.
### Classification:
- **Stabilizing_mut**: Indicates whether the mutation is stabilizing or not.
- **pair_name**: Name representation combining the wild type and mutation.
- **classification**: Classification based on `ddG_ML` (a pandas sketch of this rule follows the list):
- Rows below -0.5 standard deviations are classified as 'destabilising'.
- Rows above +0.5 standard deviations are classified as 'stabilising'.
- Rows between -0.5 and 0.5 standard deviations are classified as 'neutral'.
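A minimal pandas sketch of the rule above. Interpreting "±0.5 standard deviations" as ±0.5 × the standard deviation of the `ddG_ML` column is an assumption, and `df` is assumed to be this dataset loaded into a dataframe:
```python
import pandas as pd

def classify_ddg(ddg: pd.Series) -> pd.Series:
    # Threshold at half a standard deviation of the column (assumed reading
    # of "±0.5 standard deviations"; the published labels were computed on
    # the full dataset, so a subset will not reproduce them exactly).
    cutoff = 0.5 * ddg.std()
    labels = pd.Series('neutral', index=ddg.index)
    labels[ddg < -cutoff] = 'destabilising'
    labels[ddg > cutoff] = 'stabilising'
    return labels

df['classification'] = classify_ddg(df['ddG_ML'])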
This dataset offers a comprehensive view of protein mutations, their effects, and how they relate to the stability measurements made with trypsin and chymotrypsin.
### Understanding ΔG (delta G)
ΔG is the Gibbs free energy change of a process, dictating whether a process is thermodynamically favorable:
- **Negative ΔG**: Indicates the process is energetically favorable. For protein unfolding, it implies the protein is more stable in its unfolded form.
- **Positive ΔG**: Indicates the process is not energetically favorable. In protein unfolding, it means the protein requires energy to reach its unfolded state, i.e., the protein is stable in its folded form.
The **delta delta G** (ΔΔG) is the ΔG of unfolding of the mutant minus that of the base protein:
- **Positive ΔΔG**: Suggests the mutation enhances protein stability.
- **Negative ΔΔG**: Suggests the mutation decreases protein stability.
### Data Cleanup and Validation:
1. Filtering: The dataset has been curated to only include examples of single mutations.
2. Sequence mutations were extracted from the row names. Base mutations are labelled as 'base'.
3. Consistency Check: Only rows with a consistent 'mutation', aligned with both the base and mutated sequences from the raw data, have been retained. | 6,740 | [
[
-0.0123748779296875,
-0.0172576904296875,
0.0016946792602539062,
-0.017059326171875,
-0.01543426513671875,
-0.0140380859375,
0.015869140625,
-0.01384735107421875,
0.047882080078125,
0.0122222900390625,
-0.04351806640625,
-0.03717041015625,
-0.0531005859375,
... |
mrlasagna07/cubesat_img | 2023-08-28T16:26:32.000Z | [
"license:cc-by-4.0",
"region:us"
] | mrlasagna07 | null | null | 0 | 10 | 2023-08-22T14:07:47 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cloud
'1': land
'2': space
'3': sunburn
'4': water
splits:
- name: train
num_bytes: 68702450.0
num_examples: 10000
download_size: 51466710
dataset_size: 68702450.0
---
# Modified CubeSatNet Dataset
## Dataset Summary
CubeSat imagery data for on-orbit image classification, divided into 5 labels: Land, Space, Sunburn, Water, and Cloud.
## Dataset Structure
### Data Fields
See the feature list in the YAML header above (an `image` and an integer class `label`).
## Dataset Creation
### Source Data
https://data.mendeley.com/datasets/47vtp22vs7/1
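As a usage sketch, the dataset can be loaded with the Hugging Face `datasets` library (split and feature names as in the YAML header above):
```python
from datasets import load_dataset

ds = load_dataset("mrlasagna07/cubesat_img")
example = ds["train"][0]
print(example["image"].size, example["label"])  # label is an integer class id
```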
| 747 | [
[
-0.056243896484375,
-0.02886962890625,
-0.00292205810546875,
0.00833892822265625,
-0.0177459716796875,
-0.0181884765625,
0.0299530029296875,
-0.0208740234375,
-0.0013866424560546875,
0.061737060546875,
-0.015350341796875,
-0.03643798828125,
-0.0164337158203125,
... |
yqzheng/semeval2014_laptops | 2023-08-25T09:53:01.000Z | [
"region:us"
] | yqzheng | null | null | 0 | 10 | 2023-08-25T09:52:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: aspect
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 342136
num_examples: 2328
- name: test
num_bytes: 82143
num_examples: 638
download_size: 157318
dataset_size: 424279
---
# Dataset Card for "semeval2014_laptops"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 672 | [
[
-0.051910400390625,
-0.0208892822265625,
0.018585205078125,
0.020721435546875,
-0.0155181884765625,
-0.0143280029296875,
0.0255584716796875,
-0.0053253173828125,
0.055267333984375,
0.0283355712890625,
-0.05865478515625,
-0.036865234375,
-0.0325927734375,
-0.... |
ProgramComputer/VGGFace2 | 2023-09-17T14:01:20.000Z | [
"license:cc-by-nc-4.0",
"arxiv:1710.08092",
"doi:10.57967/hf/1025",
"region:us"
] | ProgramComputer | null | @article{DBLP:journals/corr/abs-1710-08092,
author = {Qiong Cao and
Li Shen and
Weidi Xie and
Omkar M. Parkhi and
Andrew Zisserman},
title = {VGGFace2: {A} dataset for recognising faces across pose and age},
journal = {CoRR},
volume = {abs/1710.08092},
year = {2017},
url = {http://arxiv.org/abs/1710.08092},
eprinttype = {arXiv},
eprint = {1710.08092},
timestamp = {Wed, 04 Aug 2021 07:50:14 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1710-08092.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 10 | 2023-08-26T21:57:14 | ---
license: cc-by-nc-4.0
paperswithcode_id: vggface2
pretty_name: vggface2
---
```
@article{DBLP:journals/corr/abs-1710-08092,
author = {Qiong Cao and
Li Shen and
Weidi Xie and
Omkar M. Parkhi and
Andrew Zisserman},
title = {VGGFace2: {A} dataset for recognising faces across pose and age},
journal = {CoRR},
volume = {abs/1710.08092},
year = {2017},
url = {http://arxiv.org/abs/1710.08092},
eprinttype = {arXiv},
eprint = {1710.08092},
timestamp = {Wed, 04 Aug 2021 07:50:14 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1710-08092.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
# README
## About Hyper.AI
Hyper.AI (https://hyper.ai) is a technology media outlet focused on reporting on artificial intelligence and its application scenarios. It is committed to advancing the understanding and popularization of machine intelligence in the Chinese-language community and to exploring its impact on society. To improve research efficiency, Hyper.AI provides the fastest and most complete public dataset download mirrors available in mainland China, an AI encyclopedia, and several other products serving industry practitioners and the faculty and students of research institutions.
## About the Dataset
- Dataset name: VGG-Face2
- Publisher: Visual Geometry Group, Department of Engineering Science, University of Oxford
- Website: http://www.robots.ox.ac.uk/~vgg/data/vgg_face/
- Size: nan GB
- Description: VGGFace2 is a large-scale face recognition dataset containing the faces of 9,131 subjects. The images were downloaded from Google Image Search and show large variations in pose, age, illumination, ethnicity, and profession. The dataset was released by the Visual Geometry Group, Department of Engineering Science, University of Oxford; the accompanying paper is "VGGFace2: A dataset for recognising faces across pose and age" (2017), cited above.
[
-0.038177490234375,
-0.036346435546875,
0.01480865478515625,
0.005123138427734375,
-0.016937255859375,
-0.008819580078125,
0.00948333740234375,
-0.05780029296875,
0.00983428955078125,
0.025299072265625,
-0.040313720703125,
-0.0266876220703125,
-0.027481079101562... |
katielink/moleculenet-benchmark | 2023-08-28T17:51:14.000Z | [
"license:apache-2.0",
"biology",
"chemistry",
"region:us"
] | katielink | null | null | 0 | 10 | 2023-08-28T17:36:25 | ---
license: apache-2.0
tags:
- biology
- chemistry
configs:
- config_name: bace
data_files:
- split: train
path: bace/train.csv
- split: test
path: bace/test.csv
- split: val
path: bace/valid.csv
- config_name: bbbp
data_files:
- split: train
path: bbbp/train.csv
- split: test
path: bbbp/test.csv
- split: val
path: bbbp/valid.csv
- config_name: clintox
data_files:
- split: train
path: clintox/train.csv
- split: test
path: clintox/test.csv
- split: val
path: clintox/valid.csv
- config_name: esol
data_files:
- split: train
path: esol/train.csv
- split: test
path: esol/test.csv
- split: val
path: esol/valid.csv
- config_name: freesolv
data_files:
- split: train
path: freesolv/train.csv
- split: test
path: freesolv/test.csv
- split: val
path: freesolv/valid.csv
- config_name: hiv
data_files:
- split: train
path: hiv/train.csv
- split: test
path: hiv/test.csv
- split: val
path: hiv/valid.csv
- config_name: lipo
data_files:
- split: train
path: lipo/train.csv
- split: test
path: lipo/test.csv
- split: val
path: lipo/valid.csv
- config_name: qm9
data_files:
- split: train
path: qm9/train.csv
- split: test
path: qm9/test.csv
- split: val
path: qm9/valid.csv
- config_name: sider
data_files:
- split: train
path: sider/train.csv
- split: test
path: sider/test.csv
- split: val
path: sider/valid.csv
- config_name: tox21
data_files:
- split: train
path: tox21/train.csv
- split: test
path: tox21/test.csv
- split: val
path: tox21/valid.csv
---
# MoleculeNet Benchmark ([website](https://moleculenet.org/))
MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. As we aim to facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as part of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE, and MAE scores.
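As a minimal sketch, each task can be loaded by its config name with the Hugging Face `datasets` library (config names as listed in the YAML header above):
```python
from datasets import load_dataset

# Load the BACE task; train/test/val splits are defined in the card
bace = load_dataset("katielink/moleculenet-benchmark", "bace")
print(bace["train"][0])  # one molecule record from the BACE train split
```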
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017.
| 3,048 | [
[
-0.035186767578125,
-0.033782958984375,
0.0011510848999023438,
-0.0013189315795898438,
-0.0097503662109375,
0.00514984130859375,
-0.017791748046875,
-0.0220184326171875,
-0.0101470947265625,
0.03814697265625,
-0.025726318359375,
-0.04827880859375,
-0.03958129882... |
mickume/harry_potter_tiny | 2023-08-30T12:46:15.000Z | [
"region:us"
] | mickume | null | null | 0 | 10 | 2023-08-30T12:46:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1234764
num_examples: 7481
download_size: 747534
dataset_size: 1234764
---
# Dataset Card for "harrypotter_tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 443 | [
[
-0.03912353515625,
-0.016021728515625,
0.0076141357421875,
0.01384735107421875,
-0.0121612548828125,
-0.016815185546875,
0.004062652587890625,
0.0018863677978515625,
0.06939697265625,
0.017425537109375,
-0.04852294921875,
-0.03021240234375,
-0.034393310546875,
... |
boapps/kmdb_base | 2023-09-27T07:46:48.000Z | [
"size_categories:10K<n<100K",
"language:hu",
"region:us"
] | boapps | null | null | 0 | 10 | 2023-09-03T21:10:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: keywords
sequence: string
- name: url
dtype: string
- name: authors
sequence: string
- name: date
dtype: string
- name: kmonitor_description
dtype: string
- name: kmonitor_title
dtype: string
- name: kmonitor_tags
sequence: string
- name: kmonitor_institutes
sequence: string
- name: kmonitor_people
sequence: string
- name: kmonitor_places
sequence: string
- name: kmonitor_collections
sequence: string
splits:
- name: validation
num_bytes: 18870175.609970674
num_examples: 3397
- name: test
num_bytes: 9452772.627906976
num_examples: 1710
- name: train
num_bytes: 149446447.68875885
num_examples: 26534
download_size: 106591991
dataset_size: 177769395.92663652
language:
- hu
pretty_name: K-Monitor sajtóadatbázis
size_categories:
- 10K<n<100K
---
Source: https://adatbazis.k-monitor.hu/
## Usage
```python
from datasets import load_dataset
dataset = load_dataset('boapps/kmdb_base')
# print the title of one article
print(dataset['train'][0]['title'])
```
## Column descriptions
`text`: body of the article
`title`: title given by the news portal
`description`: lead given by the news portal (a short introduction/summary at the beginning)
`keywords`: tags from the news portal (not always present and not always meaningful)
`url`: URL of the article
`authors`: authors of the article; like the tags, not available in every case
`date`: publication date of the article, with varying precision and occasionally nonsensical
`kmonitor_description`: only for corruption-related articles; the description stored in the K-Monitor database (often identical to the regular description)
`kmonitor_title`: the same, but for the title
`kmonitor_tags`: unlike `keywords`, these are labels assigned by K-Monitor volunteers from a fixed tag set (it still contains locations, which, like people and institutes, could (and should) be split out into a separate places column)
`institutes`: institutions appearing in the article, collected by K-Monitor
`people`: the same, for people
[
-0.0408935546875,
-0.020660400390625,
0.033416748046875,
0.0074615478515625,
-0.03253173828125,
-0.01282501220703125,
-0.0311431884765625,
0.007537841796875,
0.0275726318359375,
0.036376953125,
-0.040985107421875,
-0.060699462890625,
-0.05010986328125,
0.004... |
qqlu1992/Adobe_EntitySeg | 2023-09-07T01:03:14.000Z | [
"region:us"
] | qqlu1992 | null | null | 2 | 10 | 2023-09-04T03:58:26 | ---
viewer: false
---
The images and pretrained models used in the ICCV oral paper 'High-Quality Entity Segmentation'.
The official link is https://github.com/adobe-research/EntitySeg-Dataset.
The code link is https://github.com/qqlu/Entity/tree/main/Entityv2.
Note that we do not own the copyright of the images. It is solely your responsibility to check the original licenses of the images before using them. Any use of the images is at your own discretion and risk.
[
-0.038848876953125,
-0.037322998046875,
0.014892578125,
0.00907135009765625,
-0.036407470703125,
-0.0078277587890625,
0.0216522216796875,
-0.04132080078125,
0.01204681396484375,
0.059661865234375,
-0.035888671875,
-0.04766845703125,
-0.026336669921875,
0.003... |
rizerphe/glaive-function-calling-v2-llama | 2023-09-05T12:51:42.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | rizerphe | null | null | 7 | 10 | 2023-09-04T09:32:26 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 228119663
num_examples: 103091
download_size: 95393598
dataset_size: 228119663
pretty_name: a
---
# Glaive's Function Calling V2 for Llama2
[Glaive's Function Calling V2 dataset](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), formatted according to the Llama2 chat schema, with all the data that I wasn't able to automatically convert removed manually.
Adds a special `<function>` token. Here's an example prompt:
```
<s>[INST] <<SYS>>
<function>Available functions:
<function>{
"name": "generate_password",
"description": "Generate a random password with specified criteria",
"parameters": {
"type": "object",
"properties": {
"length": {
"type": "integer",
"description": "The length of the password"
},
"include_numbers": {
"type": "boolean",
"description": "Include numbers in the password"
},
"include_special_characters": {
"type": "boolean",
"description": "Include special characters in the password"
}
},
"required": [
"length"
]
}
}
<</SYS>>
I need a new password. Can you generate one for me? [/INST] Of course! How long would you like your password to be? And do you want it to include numbers and special characters?</s><s>[INST] I want it to be 12 characters long and yes, it should include both numbers and special characters. [/INST]<function>generate_password
{
"length": 12,
"include_numbers": true,
"include_special_characters": true
}</s><s>[INST] <function>{"password": "4#7gB6&9L1!0"} [/INST] Here is your new password: 4#7gB6&9L1!0. Please make sure to save it in a secure place.</s>
``` | 1,996 | [
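Since the schema relies on an added `<function>` token, a minimal sketch of registering it before fine-tuning might look like this (the base model name is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.add_special_tokens({"additional_special_tokens": ["<function>"]})
# After adding the token, resize the model's embedding matrix, e.g.:
# model.resize_token_embeddings(len(tokenizer))
```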
[
0.01102447509765625,
-0.048431396484375,
0.0218505859375,
0.02392578125,
-0.035491943359375,
0.0098724365234375,
0.0190277099609375,
-0.0213775634765625,
0.0310821533203125,
0.053680419921875,
-0.0546875,
-0.049041748046875,
-0.039581298828125,
0.01390075683... |
diiogo/enem_v2 | 2023-09-04T13:07:38.000Z | [
"region:us"
] | diiogo | null | null | 1 | 10 | 2023-09-04T13:07:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 1853620
num_examples: 2368
download_size: 1230138
dataset_size: 1853620
---
# Dataset Card for "enem_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 511 | [
[
-0.037750244140625,
-0.0160675048828125,
0.01824951171875,
0.01372528076171875,
-0.01300048828125,
-0.0183563232421875,
0.0313720703125,
-0.02813720703125,
0.05859375,
0.047149658203125,
-0.051605224609375,
-0.04510498046875,
-0.04296875,
-0.0103912353515625... |
serbog/esco_occupations_details_multilingual | 2023-09-06T02:34:53.000Z | [
"region:us"
] | serbog | null | null | 0 | 10 | 2023-09-06T02:34:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: el
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: lt
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: code
dtype: string
- name: uk
struct:
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: ga
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: sv
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: cs
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: bg
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: 'no'
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: en
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: lv
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: ar
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: es
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: et
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: fi
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: sk
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: da
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: nl
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: is
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: sl
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: hr
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: pl
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: it
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: de
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: url
dtype: string
- name: mt
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: hu
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: fr
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: pt
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
- name: ro
struct:
- name: alternativeLabel
sequence: string
- name: description
dtype: string
- name: preferredLabel
dtype: string
- name: preferredTerm
dtype: string
splits:
- name: train
num_bytes: 52470213
num_examples: 3629
download_size: 22696020
dataset_size: 52470213
---
# Dataset Card for "esco_occupations_details_multilingual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 6,398 | [
[
-0.0223236083984375,
-0.01317596435546875,
0.0162506103515625,
0.0212249755859375,
-0.004596710205078125,
0.01261138916015625,
0.001415252685546875,
-0.0340576171875,
0.06878662109375,
0.053619384765625,
-0.05145263671875,
-0.0716552734375,
-0.05572509765625,
... |
martka/mri_pairs | 2023-09-12T16:41:53.000Z | [
"region:us"
] | martka | null | null | 0 | 10 | 2023-09-12T01:10:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_image
dtype: image
- name: edited_image
dtype: image
- name: editing_prompt_num
dtype: string
- name: editing_promp_word
dtype: string
- name: editing_promp_bin
dtype: string
- name: editing_prompt_num_cd
dtype: string
- name: editing_promp_word_cd
dtype: string
- name: editing_promp_bin_cd
dtype: string
splits:
- name: train
num_bytes: 192154886.4
num_examples: 2452
- name: test
num_bytes: 48038721.6
num_examples: 613
download_size: 232980264
dataset_size: 240193608.0
---
# Dataset Card for "mri_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 883 | [
[
-0.041656494140625,
-0.0033893585205078125,
0.0216827392578125,
0.01473236083984375,
-0.0224151611328125,
-0.004119873046875,
0.026275634765625,
-0.00879669189453125,
0.0626220703125,
0.017669677734375,
-0.041748046875,
-0.049957275390625,
-0.04962158203125,
... |
vlsp-2023-vllm/hhh_alignment | 2023-10-30T03:32:46.000Z | [
"region:us"
] | vlsp-2023-vllm | null | null | 0 | 10 | 2023-09-15T17:17:32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int32
- name: metadata
struct:
- name: subset
dtype: string
splits:
- name: test
num_bytes: 285938
num_examples: 221
download_size: 66013
dataset_size: 285938
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# HHH-alignment
## Install
To install `lm-eval` from the github repository main branch, run:
```bash
git clone https://github.com/hieunguyen1053/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
## Basic Usage
> **Note**: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. vlsp-2023-vllm/hoa-1b4) on `hhh_alignment_vi`, you can use the following command:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4 \
--tasks hhh_alignment_vi \
--batch_size auto \
--device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4,revision=step100000,dtype="float" \
--tasks hhh_alignment_vi \
--device cuda:0
```
To evaluate models that are loaded via `AutoSeq2SeqLM` in Huggingface, you instead use `hf-seq2seq`. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`*
> **Warning**: Choosing the wrong model may result in erroneous outputs despite not erroring. | 2,127 | [
[
-0.030120849609375,
-0.06036376953125,
0.045745849609375,
0.0254669189453125,
-0.0020999908447265625,
-0.01068878173828125,
0.007266998291015625,
-0.0095367431640625,
0.0165863037109375,
0.0182342529296875,
-0.0518798828125,
-0.026947021484375,
-0.03839111328125... |
macarious/en_corpora_parliament_processed | 2023-10-19T17:59:40.000Z | [
"region:us"
] | macarious | null | null | 0 | 10 | 2023-09-16T00:52:40 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 309185247
num_examples: 2051014
download_size: 0
dataset_size: 309185247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 460 | [
[
-0.03839111328125,
-0.0160980224609375,
0.022064208984375,
0.0247039794921875,
-0.026123046875,
0.002689361572265625,
-0.0189361572265625,
-0.01198577880859375,
0.055450439453125,
0.058074951171875,
-0.05010986328125,
-0.0634765625,
-0.05517578125,
0.0082778... |
legacy107/qa_wikipedia | 2023-09-18T04:37:29.000Z | [
"region:us"
] | legacy107 | null | null | 0 | 10 | 2023-09-17T14:24:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer_start
dtype: int64
- name: answer
dtype: string
- name: article
dtype: string
splits:
- name: train
num_bytes: 7477859892
num_examples: 138712
- name: test
num_bytes: 898641134
num_examples: 17341
- name: validation
num_bytes: 926495549
num_examples: 17291
download_size: 498772569
dataset_size: 9302996575
---
# Dataset Card for "qa_wikipedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 888 | [
[
-0.055145263671875,
-0.01776123046875,
0.0246124267578125,
0.002361297607421875,
-0.0192108154296875,
-0.01186370849609375,
0.0231475830078125,
-0.01032257080078125,
0.06365966796875,
0.017669677734375,
-0.052398681640625,
-0.0538330078125,
-0.019989013671875,
... |
Ibrahim-Alam/cornel_sentiment | 2023-09-19T01:50:16.000Z | [
"region:us"
] | Ibrahim-Alam | null | null | 0 | 10 | 2023-09-19T01:49:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ayoubkirouane/Arabic_common_voice_11_0 | 2023-09-19T15:51:03.000Z | [
"region:us"
] | ayoubkirouane | null | null | 0 | 10 | 2023-09-19T15:49:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 331885627.728
num_examples: 10438
- name: test
num_bytes: 318132067.84
num_examples: 10440
download_size: 577509839
dataset_size: 650017695.568
---
# Dataset Card for "Arabic_common_voice_11_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 645 | [
[
-0.053192138671875,
-0.01371002197265625,
0.001369476318359375,
0.0225982666015625,
-0.021484375,
0.004405975341796875,
0.005443572998046875,
-0.0165252685546875,
0.0592041015625,
0.0270843505859375,
-0.05096435546875,
-0.07855224609375,
-0.0504150390625,
-0... |
Linyuyu/zhouguangbo | 2023-10-12T09:55:21.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 10 | 2023-09-20T09:15:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lapups/evo_llama_v3 | 2023-09-21T07:36:34.000Z | [
"region:us"
] | lapups | null | null | 0 | 10 | 2023-09-21T07:36:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tmfi/jawiki-20230911 | 2023-09-21T16:23:11.000Z | [
"region:us"
] | tmfi | null | null | 0 | 10 | 2023-09-21T16:02:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8129791520
num_examples: 1386531
download_size: 3964405981
dataset_size: 8129791520
---
# Dataset Card for "jawiki-20230911"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.045166015625,
-0.003192901611328125,
0.0080413818359375,
0.01548004150390625,
-0.0307769775390625,
-0.00539398193359375,
0.01377105712890625,
-0.01580810546875,
0.07220458984375,
0.025421142578125,
-0.06573486328125,
-0.04302978515625,
-0.03875732421875,
... |
miikatoi/DocLayNet-tiny | 2023-09-22T06:24:24.000Z | [
"region:us"
] | miikatoi | null | null | 0 | 10 | 2023-09-22T06:22:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: texts
sequence: string
- name: bboxes_block
sequence:
sequence: int64
- name: bboxes_line
sequence:
sequence: int64
- name: categories
sequence:
class_label:
names:
'0': Caption
'1': Footnote
'2': Formula
'3': List-item
'4': Page-footer
'5': Page-header
'6': Picture
'7': Section-header
'8': Table
'9': Text
'10': Title
- name: image
dtype: image
- name: page_hash
dtype: string
- name: original_filename
dtype: string
- name: page_no
dtype: int32
- name: num_pages
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: coco_width
dtype: int32
- name: coco_height
dtype: int32
- name: collection
dtype: string
- name: doc_category
dtype: string
splits:
- name: train
num_bytes: 28393556.512301013
num_examples: 70
- name: validation
num_bytes: 2641091.359375
num_examples: 7
- name: test
num_bytes: 1779922.857142857
num_examples: 5
download_size: 31476812
dataset_size: 32814570.72881887
---
# Dataset Card for "DocLayNet-tiny"
Tiny set for unit tests, based on https://huggingface.co/datasets/pierreguillou/DocLayNet-small.
It covers roughly 0.1% of DocLayNet in total.
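A minimal loading sketch (assuming the Hugging Face `datasets` library; the split sizes are taken from the metadata above):
```python
from datasets import load_dataset

# All three splits together hold only ~82 pages, so the whole
# dataset loads quickly inside a unit test.
tiny = load_dataset("miikatoi/DocLayNet-tiny")
assert len(tiny["validation"]) == 7 and len(tiny["test"]) == 5
```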
| 1,598 | [
[
-0.03717041015625,
-0.0299072265625,
0.0074615478515625,
0.0025043487548828125,
-0.0021209716796875,
-0.024749755859375,
0.01393890380859375,
0.01739501953125,
0.0712890625,
0.0177001953125,
-0.05023193359375,
-0.020050048828125,
-0.001850128173828125,
-0.02... |
dim/panorama_prompts_10k | 2023-09-25T15:16:40.000Z | [
"region:us"
] | dim | null | null | 0 | 10 | 2023-09-25T15:16:34 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30478073
num_examples: 11024
download_size: 15784032
dataset_size: 30478073
---
# Dataset Card for "panorama_prompts_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.059722900390625,
-0.0116424560546875,
0.01477813720703125,
0.054534912109375,
-0.02569580078125,
-0.01690673828125,
0.0211181640625,
-0.007122039794921875,
0.062042236328125,
0.032958984375,
-0.06341552734375,
-0.04779052734375,
-0.048248291015625,
-0.000... |
mindchain/wikitext2 | 2023-09-26T19:13:55.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"languag... | mindchain | null | null | 0 | 10 | 2023-09-26T19:13:23 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: WikiText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
- config_name: wikitext-103-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1295579
num_examples: 4358
- name: train
num_bytes: 545142639
num_examples: 1801350
- name: validation
num_bytes: 1154755
num_examples: 3760
download_size: 190229076
dataset_size: 547592973
- config_name: wikitext-2-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1270951
num_examples: 4358
- name: train
num_bytes: 10918134
num_examples: 36718
- name: validation
num_bytes: 1134127
num_examples: 3760
download_size: 4475746
dataset_size: 13323212
- config_name: wikitext-103-raw-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1305092
num_examples: 4358
- name: train
num_bytes: 546501673
num_examples: 1801350
- name: validation
num_bytes: 1159292
num_examples: 3760
download_size: 191984949
dataset_size: 548966057
- config_name: wikitext-2-raw-v1
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1305092
num_examples: 4358
- name: train
num_bytes: 11061733
num_examples: 36718
- name: validation
num_bytes: 1159292
num_examples: 3760
download_size: 4721645
dataset_size: 13526117
---
# Dataset Card for "wikitext"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843)
- **Point of Contact:** [Stephen Merity](mailto:smerity@salesforce.com)
- **Size of downloaded dataset files:** 391.41 MB
- **Size of the generated dataset:** 1.12 GB
- **Total amount of disk used:** 1.52 GB
### Dataset Summary
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over
110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation
and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models
that can take advantage of long term dependencies.
Each subset comes in two different variants:
- Raw (for character-level work) contains the raw tokens, before the addition of the <unk> (unknown) tokens.
- Non-raw (for word-level work) contains only the tokens in its vocabulary (wiki.train.tokens, wiki.valid.tokens, and wiki.test.tokens).
The out-of-vocabulary tokens have been replaced with the <unk> token.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### wikitext-103-raw-v1
- **Size of downloaded dataset files:** 191.98 MB
- **Size of the generated dataset:** 549.42 MB
- **Total amount of disk used:** 741.41 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" The gold dollar or gold one @-@ dollar piece was a coin struck as a regular issue by the United States Bureau of the Mint from..."
}
```
#### wikitext-103-v1
- **Size of downloaded dataset files:** 190.23 MB
- **Size of the generated dataset:** 548.05 MB
- **Total amount of disk used:** 738.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
#### wikitext-2-raw-v1
- **Size of downloaded dataset files:** 4.72 MB
- **Size of the generated dataset:** 13.54 MB
- **Total amount of disk used:** 18.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" The Sinclair Scientific Programmable was introduced in 1975 , with the same case as the Sinclair Oxford . It was larger than t..."
}
```
#### wikitext-2-v1
- **Size of downloaded dataset files:** 4.48 MB
- **Size of the generated dataset:** 13.34 MB
- **Total amount of disk used:** 17.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\" Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### wikitext-103-raw-v1
- `text`: a `string` feature.
#### wikitext-103-v1
- `text`: a `string` feature.
#### wikitext-2-raw-v1
- `text`: a `string` feature.
#### wikitext-2-v1
- `text`: a `string` feature.
### Data Splits
| name | train |validation|test|
|-------------------|------:|---------:|---:|
|wikitext-103-raw-v1|1801350| 3760|4358|
|wikitext-103-v1 |1801350| 3760|4358|
|wikitext-2-raw-v1 | 36718| 3760|4358|
|wikitext-2-v1 | 36718| 3760|4358|
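A minimal loading sketch (assuming the Hugging Face `datasets` library; any configuration name from the table above can be substituted):
```python
from datasets import load_dataset

# Pick any configuration from the table above, e.g. the raw
# variants for character-level work.
dataset = load_dataset("wikitext", "wikitext-2-v1")

print(dataset)                      # DatasetDict with train/validation/test
print(dataset["train"][4]["text"])  # a single line of pre-tokenized text
```
Each record holds a single `text` string, so full articles span several consecutive rows.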
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/) and the GNU Free Documentation License (GFDL), matching the license metadata above.
### Citation Information
```
@misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | 9,573 | [
[
-0.044677734375,
-0.038116455078125,
0.0113525390625,
0.0172271728515625,
-0.01003265380859375,
-0.0031566619873046875,
-0.020294189453125,
-0.0443115234375,
0.0430908203125,
0.033355712890625,
-0.0572509765625,
-0.055877685546875,
-0.03985595703125,
0.00544... |
kewu93/natural_images_small | 2023-09-27T05:42:22.000Z | [
"region:us"
] | kewu93 | null | null | 0 | 10 | 2023-09-27T05:42:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2325900.3395232814
num_examples: 50
download_size: 2333116
dataset_size: 2325900.3395232814
---
# Dataset Card for "natural_images_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 504 | [
[
-0.05767822265625,
-0.03082275390625,
0.01186370849609375,
0.006381988525390625,
-0.03704833984375,
-0.02191162109375,
0.0007271766662597656,
-0.0279693603515625,
0.06854248046875,
0.0343017578125,
-0.050506591796875,
-0.044036865234375,
-0.03143310546875,
0... |
Vishal24/function_calling | 2023-09-27T09:44:38.000Z | [
"region:us"
] | Vishal24 | null | null | 2 | 10 | 2023-09-27T07:18:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ABC-iRobotics/oe_dataset | 2023-10-05T19:25:48.000Z | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:robotics",
"task_ids:instance-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:machine-generated",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"vision",
"image segme... | ABC-iRobotics | An instance segmentation dataset for robotic manipulation in a tabletop environment.
The dataset incorporates real and synthetic images for testing sim-to-real model transfer after fine-tuning. | @ARTICLE{10145828,
author={Károly, Artúr István and Tirczka, Sebestyén and Gao, Huijun and Rudas, Imre J. and Galambos, Péter},
journal={IEEE Transactions on Cybernetics},
title={Increasing the Robustness of Deep Learning Models for Object Segmentation: A Framework for Blending Automatically Annotated Real and Synthetic Data},
year={2023},
volume={},
number={},
pages={1-14},
doi={10.1109/TCYB.2023.3276485}} | 1 | 10 | 2023-09-27T14:58:22 | ---
language:
- en
license: gpl-3.0
tags:
- vision
- image segmentation
- instance segmentation
- object detection
- synthetic
- sim-to-real
annotations_creators:
- machine-generated
pretty_name: OE Dataset
size_categories:
- 1K<n<10K
task_categories:
- object-detection
- image-segmentation
- robotics
task_ids:
- instance-segmentation
- semantic-segmentation
---
# The OE Dataset!

A dataset consisting of synthetic and real images annotated with instance segmentation masks, for testing sim-to-real model performance in robotic manipulation.
### Dataset Summary
The OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. The dataset explicitly marks synthetic samples based on their creation method (either photorealistic synthetic samples or domain randomized samples) to facilitate sim-to-real performance tests on different synthetic datasets.
### Supported Tasks and Leaderboards
The dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.
## Dataset Structure
### Data Instances
The instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 PNG images representing the instance segmentation masks, where each instance is associated with a unique color.
### Data Fields
The data fields are:
1) 'image': 1920x1080x3 PNG image
2) 'mask': 1920x1080x4 PNG image
### Data Splits
The dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.
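As a rough sketch of decoding the color-coded masks (assumptions: the Hugging Face `datasets` library and NumPy are available, and the default configuration exposes a `train` split; field names follow the Data Fields list above):
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("ABC-iRobotics/oe_dataset", split="train")
sample = ds[0]

mask = np.array(sample["mask"])  # H x W x 4 RGBA instance mask
# Each instance carries a unique color, so per-instance binary masks
# fall out of grouping pixels by RGBA value (one group will be the
# background color and would normally be filtered out).
colors = np.unique(mask.reshape(-1, mask.shape[-1]), axis=0)
instance_masks = [(mask == c).all(axis=-1) for c in colors]
print(f"{len(instance_masks)} color groups in the first mask")
```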
## Dataset Creation
### Curation Rationale
The dataset was created to provide a testbed for examining the effects of fine-tuning instance segmentation models on synthetic data (using various sim-to-real approaches).
### Source Data
The data is generated using two methods:
- Real images are recorded using a robotic setup and automatically annotated using the method proposed in [[1]](https://ieeexplore.ieee.org/abstract/document/9922852)
- Synthetic samples are generated using Blender and annotated using the [Blender Annotation Tool (BAT)](https://github.com/ABC-iRobotics/blender_annotation_tool)
### Citation Information
OE Dataset:
```bibtex
@ARTICLE{10145828,
author={Károly, Artúr István and Tirczka, Sebestyén and Gao, Huijun and Rudas, Imre J. and Galambos, Péter},
journal={IEEE Transactions on Cybernetics},
title={Increasing the Robustness of Deep Learning Models for Object Segmentation: A Framework for Blending Automatically Annotated Real and Synthetic Data},
year={2023},
volume={},
number={},
pages={1-14},
doi={10.1109/TCYB.2023.3276485}}
```
Automatically annotating real images with instance segmentation masks using a robotic arm:
```bibtex
@INPROCEEDINGS{9922852,
author={Károly, Artúr I. and Károly, Ármin and Galambos, Péter},
booktitle={2022 IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC)},
title={Automatic Generation and Annotation of Object Segmentation Datasets Using Robotic Arm},
year={2022},
volume={},
number={},
pages={000063-000068},
doi={10.1109/ICCC202255925.2022.9922852}}
```
Synthetic dataset generation and annotation method:
```bibtex
@INPROCEEDINGS{9780790,
author={Károly, Artúr I. and Galambos, Péter},
booktitle={2022 IEEE 20th Jubilee World Symposium on Applied Machine Intelligence and Informatics (SAMI)},
title={Automated Dataset Generation with Blender for Deep Learning-based Object Segmentation},
year={2022},
volume={},
number={},
pages={000329-000334},
doi={10.1109/SAMI54271.2022.9780790}}
```
Other related publications:
```bibtex
@INPROCEEDINGS{10029564,
author={Károly, Artúr I. and Tirczka, Sebestyén and Piricz, Tamás and Galambos, Péter},
booktitle={2022 IEEE 22nd International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics (CINTI-MACRo)},
title={Robotic Manipulation of Pathological Slides Powered by Deep Learning and Classical Image Processing},
year={2022},
volume={},
number={},
pages={000387-000392},
doi={10.1109/CINTI-MACRo57952.2022.10029564}}
```
```bibtex
@Article{app13010525,
AUTHOR = {Károly, Artúr István and Galambos, Péter},
TITLE = {Task-Specific Grasp Planning for Robotic Assembly by Fine-Tuning GQCNNs on Automatically Generated Synthetic Data},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {1},
ARTICLE-NUMBER = {525},
URL = {https://www.mdpi.com/2076-3417/13/1/525},
ISSN = {2076-3417},
ABSTRACT = {In modern robot applications, there is often a need to manipulate previously unknown objects in an unstructured environment. The field of grasp-planning deals with the task of finding grasps for a given object that can be successfully executed with a robot. The predicted grasps can be evaluated according to certain criteria, such as analytical metrics, similarity to human-provided grasps, or the success rate of physical trials. The quality of a grasp also depends on the task which will be carried out after the grasping is completed. Current task-specific grasp planning approaches mostly use probabilistic methods, which utilize categorical task encoding. We argue that categorical task encoding may not be suitable for complex assembly tasks. This paper proposes a transfer-learning-based approach for task-specific grasp planning for robotic assembly. The proposed method is based on an automated pipeline that quickly and automatically generates a small-scale task-specific synthetic grasp dataset using Graspit! and Blender. This dataset is utilized to fine-tune pre-trained grasp quality convolutional neural networks (GQCNNs). The aim is to train GQCNNs that can predict grasps which do not result in a collision when placing the objects. Consequently, this paper focuses on the geometric feasibility of the predicted grasps and does not consider the dynamic effects. The fine-tuned GQCNNs are evaluated using the Moveit! Task Constructor motion planning framework, which enables the automated inspection of whether the motion planning for a task is feasible given a predicted grasp and, if not, which part of the task is responsible for the failure. Our results suggest that fine-tuning GQCNN models can result in superior grasp-planning performance (0.9 success rate compared to 0.65) in the context of an assembly task. Our method can be used to rapidly attain new task-specific grasp policies for flexible robotic assembly applications.},
DOI = {10.3390/app13010525}
}
``` | 6,883 | [
[
-0.03448486328125,
-0.0640869140625,
0.025146484375,
-0.033782958984375,
-0.00966644287109375,
-0.004364013671875,
0.00431060791015625,
-0.058013916015625,
0.0173492431640625,
0.040130615234375,
-0.059478759765625,
-0.04541015625,
-0.04052734375,
0.003112792... |
erhwenkuo/train_0.5m-chinese-zhtw | 2023-09-27T15:59:00.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"alpaca",
"fine-tune",
"region:us"
] | erhwenkuo | null | null | 0 | 10 | 2023-09-27T15:55:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 265980267
num_examples: 519255
download_size: 183812396
dataset_size: 265980267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- zh
tags:
- alpaca
- fine-tune
size_categories:
- 100K<n<1M
---
# Dataset Card for "train_0.5m-chinese-zhtw"
## Contents
Contains roughly 500k Chinese instruction-following examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
"instruction": "給定一個文字輸入,將其中的所有數字加1。\n“明天的會議在9點開始,記得準時到達。 ”\n",
"input": "",
"output": "「明天的會議在10點開始,記得準時到達。 ”"
}
```
(English gloss: the instruction asks the model to add 1 to every number in the text, so "the meeting starts at 9" becomes "the meeting starts at 10".)
### Fields:
```
instruction: the instruction text
input: the input (empty for all records in this dataset)
output: the expected output
```
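A minimal loading sketch (assuming the standard `datasets` loader; the `train` split name comes from the configuration above):
```python
from datasets import load_dataset

ds = load_dataset("erhwenkuo/train_0.5m-chinese-zhtw", split="train")

record = ds[0]
print(record["instruction"])  # the Traditional Chinese instruction
print(record["output"])       # the expected response
```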
## Usage Restrictions
This dataset, and any derivative produced from it, may be used for research purposes only; it must not be used commercially, nor for any other purpose that could bring harm to society.
This dataset does not represent the position, interests, or ideas of any party, and is unrelated to any kind of claim by any group. The project assumes no responsibility for any damage or dispute arising from the use of this dataset.
[
-0.0239105224609375,
-0.02264404296875,
0.00396728515625,
0.03179931640625,
-0.040252685546875,
-0.03826904296875,
0.0010766983032226562,
-0.01284027099609375,
0.0207061767578125,
0.0275115966796875,
-0.0643310546875,
-0.064453125,
-0.034912109375,
-0.005924... |
tuxmx/nfl_bets_scores | 2023-09-28T03:57:29.000Z | [
"region:us"
] | tuxmx | null | null | 0 | 10 | 2023-09-28T03:56:32 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
aswin1906/countries-inflation | 2023-09-30T11:05:59.000Z | [
"task_categories:tabular-regression",
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | aswin1906 | null | null | 2 | 10 | 2023-09-30T10:54:30 | ---
license: apache-2.0
task_categories:
- tabular-regression
- text-classification
- text-generation
language:
- en
pretty_name: Countries by Inflation rate of 2022
size_categories:
- n<1K
---
# Dataset Summary
Inflation is a critical economic indicator that reflects the overall increase in prices of goods and services within an economy over a specific period. Understanding inflation trends on a global scale is crucial for economists, policymakers, investors, and businesses. This dataset provides comprehensive insights into the inflation rates of various countries for the year 2022. The data is sourced from reputable international organizations and government reports, making it a valuable resource for economic analysis and research.
This dataset includes four essential columns (a short loading sketch follows this list):
1. Countries: The names of countries for which inflation data is recorded. Each row represents a specific country.
1. Inflation, 2022: The inflation rate for each country in the year 2022. Inflation rates are typically expressed as a percentage and indicate the average increase in prices for that year.
1. Global Rank: The rank of each country based on its inflation rate in 2022. Countries with the highest inflation rates will have a lower rank, while those with lower inflation rates will have a higher rank.
1. Available Data: A binary indicator (Yes/No) denoting whether complete and reliable data for inflation in 2022 is available for a particular country. This column helps users identify the data quality and coverage.
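A minimal sketch of working with these columns (assumptions: the table is exposed as a single `train` split via the Hugging Face `datasets` library, and the column names match the list above exactly):
```python
from datasets import load_dataset

df = load_dataset("aswin1906/countries-inflation", split="train").to_pandas()

# Keep only the rows flagged as having complete 2022 data,
# then order them by global inflation rank.
reliable = df[df["Available Data"] == "Yes"].sort_values("Global Rank")
print(reliable[["Countries", "Inflation, 2022"]].head(10))
```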
## Potential Use Cases
**Economic Analysis:** Researchers and economists can use this dataset to analyze inflation trends globally, identify countries with high or low inflation rates, and make comparisons across regions.
**Investment Decisions:** Investors and financial analysts can incorporate inflation data into their risk assessments and investment strategies.
**Business Planning:** Companies operating in multiple countries can assess the impact of inflation on their costs and pricing strategies, helping them make informed decisions.
## Data Accuracy:
Efforts have been made to ensure the accuracy and reliability of the data; however, users are encouraged to cross-reference this dataset with official sources for critical decision-making processes.
## Updates:
This dataset will be periodically updated to include the latest available inflation data, making it an ongoing resource for tracking global inflation trends. | 2,455 | [
[
-0.036285400390625,
-0.0390625,
0.002254486083984375,
0.037109375,
-0.0187225341796875,
-0.027130126953125,
0.0156707763671875,
-0.050567626953125,
0.007480621337890625,
0.041656494140625,
-0.0254669189453125,
-0.0144195556640625,
-0.037353515625,
0.00071763... |
aswin1906/llama2-sql-instruct-2k | 2023-09-30T11:34:46.000Z | [
"region:us"
] | aswin1906 | null | null | 0 | 10 | 2023-09-30T11:33:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 998694
num_examples: 2000
download_size: 192228
dataset_size: 998694
---
# Dataset Card for "llama2-sql-instruct-2k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 | [
[
-0.00965118408203125,
-0.01505279541015625,
0.0150909423828125,
0.0265960693359375,
-0.0400390625,
0.0079345703125,
0.0340576171875,
-0.0165863037109375,
0.04852294921875,
0.0443115234375,
-0.051788330078125,
-0.044219970703125,
-0.034454345703125,
-0.020294... |
jscode13/mars-data | 2023-10-01T01:54:12.000Z | [
"region:us"
] | jscode13 | null | null | 0 | 10 | 2023-10-01T01:53:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Mxode/StackOverflow-QA-C-Language-40k | 2023-10-02T10:30:22.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | Mxode | null | null | 1 | 10 | 2023-10-02T10:28:14 | ---
license: apache-2.0
language:
- en
tags:
- code
task_categories:
- question-answering
size_categories:
- 10K<n<100K
---
This is a collection of ~40k Q&A pairs about the **C language** from Stack Overflow. The data has been given an initial cleaning pass, and each answer is the question's **Accepted Answer**.
Every entry is **<1000** in length.
The questions and answers are organized in a **one-line** format. A sample record is shown below:
```json
{
"question": "```\nFILE* file = fopen(some file)\n\npcap_t* pd = pcap_fopen_offline(file)\n\npcap_close(pd)\n\nfclose(file)\n```\n\nThis code occurs double free error.\n\nCould you explain about this happening?\n\nMy Guess is that pd and file pointers are sharing some datas.\n",
"answer": "As the documentation says, thepcap_closefunction closes the files associated with thepcap_tstructure passed to it. Closing the file again withfcloseis an error.\n"
}
``` | 893 | [
[
-0.0102386474609375,
-0.050323486328125,
0.033447265625,
0.042236328125,
-0.0306549072265625,
0.029144287109375,
0.0072021484375,
-0.022735595703125,
0.01006317138671875,
0.049560546875,
-0.01654052734375,
-0.02593994140625,
-0.0269012451171875,
0.0007944107... |
Falah/military_drawing_descriptions | 2023-10-03T08:35:38.000Z | [
"region:us"
] | Falah | null | null | 0 | 10 | 2023-10-03T08:23:51 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 163051
num_examples: 1000
download_size: 18457
dataset_size: 163051
---
# Dataset Card for "military_drawing_descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 370 | [
[
-0.05853271484375,
-0.0115509033203125,
0.02691650390625,
0.019073486328125,
-0.0202178955078125,
0.0016651153564453125,
0.027618408203125,
-0.01220703125,
0.053070068359375,
0.045684814453125,
-0.07177734375,
-0.05181884765625,
-0.05218505859375,
-0.0176239... |
datazeit/gpt_target_group_v1-1 | 2023-10-03T13:06:28.000Z | [
"region:us"
] | datazeit | null | null | 0 | 10 | 2023-10-03T11:54:37 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11289432
num_examples: 4452
download_size: 0
dataset_size: 11289432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gpt_target_group_v1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 520 | [
[
-0.0321044921875,
-0.025115966796875,
0.01446533203125,
0.010986328125,
-0.0271759033203125,
-0.00881195068359375,
0.03106689453125,
0.0031948089599609375,
0.055145263671875,
0.026519775390625,
-0.08001708984375,
-0.0609130859375,
-0.060394287109375,
-0.0293... |
vsarathy/nl-robotics-semantic-parsing-info_structure-30k-context | 2023-10-03T14:35:20.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 10 | 2023-10-03T14:34:56 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Malmika/ict_dataset | 2023-10-03T15:01:44.000Z | [
"region:us"
] | Malmika | null | null | 1 | 10 | 2023-10-03T15:01:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
MegPaulson/Melanoma_resized | 2023-10-05T14:14:41.000Z | [
"region:us"
] | MegPaulson | null | null | 0 | 10 | 2023-10-04T16:10:59 | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 11692204.0
num_examples: 26
download_size: 11702241
dataset_size: 11692204.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Melanoma_resized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 521 | [
[
-0.031890869140625,
-0.0149383544921875,
0.017333984375,
0.0100250244140625,
-0.0303802490234375,
0.0014476776123046875,
0.00890350341796875,
-0.0127410888671875,
0.06689453125,
0.0523681640625,
-0.054779052734375,
-0.07135009765625,
-0.040374755859375,
-0.0... |
adityarra07/czech_train_data | 2023-10-04T18:09:04.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 10 | 2023-10-04T18:08:37 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 669027003.0330192
num_examples: 12613
- name: test
num_bytes: 26521327.322326932
num_examples: 500
download_size: 658874865
dataset_size: 695548330.3553461
---
# Dataset Card for "czech_train_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.035858154296875,
-0.01050567626953125,
0.0250396728515625,
0.0253448486328125,
-0.0276336669921875,
-0.0011386871337890625,
-0.0057830810546875,
-0.008514404296875,
0.043975830078125,
0.034637451171875,
-0.070068359375,
-0.06427001953125,
-0.034759521484375,
... |
adityarra07/czech_test | 2023-10-04T18:09:08.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 10 | 2023-10-04T18:09:04 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 53042654.644653864
num_examples: 1000
download_size: 52259185
dataset_size: 53042654.644653864
---
# Dataset Card for "czech_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 484 | [
[
-0.041412353515625,
-0.03173828125,
0.0156707763671875,
0.0211029052734375,
-0.026763916015625,
-0.000006735324859619141,
-0.006145477294921875,
-0.0157928466796875,
0.0445556640625,
0.03594970703125,
-0.0694580078125,
-0.07208251953125,
-0.0289459228515625,
... |
Hack90/ncbi_genbank_part_2 | 2023-10-07T00:09:54.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 10 | 2023-10-05T01:33:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 20552040218
num_examples: 10205
download_size: 6137836807
dataset_size: 20552040218
---
# Dataset Card for "ncbi_genbank_part_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.034912109375,
-0.0236968994140625,
0.0158233642578125,
0.01551055908203125,
-0.0296173095703125,
0.0155487060546875,
0.03656005859375,
-0.01375579833984375,
0.050140380859375,
0.035491943359375,
-0.04638671875,
-0.049285888671875,
-0.0362548828125,
-0.012... |
elenahuang/primary-sector-top-1k | 2023-10-05T12:55:32.000Z | [
"region:us"
] | elenahuang | null | null | 0 | 10 | 2023-10-05T12:55:29 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8620437
num_examples: 1000
download_size: 4571154
dataset_size: 8620437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "primary-sector-top-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 449 | [
[
-0.044647216796875,
-0.0221099853515625,
0.0037975311279296875,
0.0233612060546875,
-0.0265960693359375,
0.003009796142578125,
0.025848388671875,
0.01381683349609375,
0.061431884765625,
0.04547119140625,
-0.07379150390625,
-0.06182861328125,
-0.0450439453125,
... |
vsarathy/nl-robotics-semantic-parsing-info_structure-10k-context-TEST | 2023-10-05T13:43:07.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 10 | 2023-10-05T13:42:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
daspartho/agree_disagree | 2023-10-05T13:46:44.000Z | [
"region:us"
] | daspartho | null | null | 1 | 10 | 2023-10-05T13:46:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: statement
dtype: string
- name: reply
dtype: string
- name: sentiment
dtype: int64
splits:
- name: train
num_bytes: 267030
num_examples: 1660
download_size: 113328
dataset_size: 267030
---
# Dataset Card for "agree_disagree"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 515 | [
[
-0.053497314453125,
-0.022186279296875,
0.01506805419921875,
0.0277557373046875,
-0.0127716064453125,
-0.00817108154296875,
0.0028972625732421875,
-0.0168304443359375,
0.06298828125,
0.0286102294921875,
-0.050750732421875,
-0.04632568359375,
-0.057373046875,
... |
Intuit-GenSRF/jigsaw-toxic-comment-train-fr | 2023-10-05T19:27:43.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 10 | 2023-10-05T19:27:39 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 104185684
num_examples: 223394
download_size: 63513621
dataset_size: 104185684
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-train-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 488 | [
[
-0.03802490234375,
-0.00926971435546875,
0.007244110107421875,
0.0215911865234375,
-0.033111572265625,
0.0007829666137695312,
0.02203369140625,
-0.00926971435546875,
0.062103271484375,
0.0179290771484375,
-0.05755615234375,
-0.031494140625,
-0.04486083984375,
... |
Intuit-GenSRF/jigsaw-multilingual-train-unique | 2023-10-06T01:48:06.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 10 | 2023-10-06T01:48:04 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 24247216
num_examples: 60299
download_size: 15570943
dataset_size: 24247216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-multilingual-train-unique"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
[
-0.0249786376953125,
-0.00739288330078125,
-0.005008697509765625,
0.037872314453125,
-0.038360595703125,
0.01363372802734375,
0.0010890960693359375,
-0.01105499267578125,
0.06298828125,
0.025634765625,
-0.054779052734375,
-0.04803466796875,
-0.0430908203125,
... |
BEE-spoke-data/coedit-reworded-deduped | 2023-10-16T20:09:41.000Z | [
"license:apache-2.0",
"arxiv:2305.09857",
"region:us"
] | BEE-spoke-data | null | null | 0 | 10 | 2023-10-06T23:54:17 | ---
license: apache-2.0
dataset_info:
- config_name: dedup-by-target
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23629242
num_examples: 79943
download_size: 11836738
dataset_size: 23629242
- config_name: dedup-input
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23457166
num_examples: 79293
download_size: 11795306
dataset_size: 23457166
- config_name: default
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23629242
num_examples: 79943
download_size: 11836739
dataset_size: 23629242
configs:
- config_name: dedup-by-target
data_files:
- split: train
path: dedup-by-target/train-*
- config_name: dedup-input
data_files:
- split: train
path: dedup-input/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
source_datasets: chargoddard/coedit-reworded
---
# BEE-spoke-data/coedit-reworded-deduped
MinHash deduplication applied on the `target` column. Source data from [coedit-reworded](https://hf.co/chargoddard/coedit-reworded)
## load
```python
from datasets import load_dataset
dataset = load_dataset("BEE-spoke-data/coedit-reworded-deduped", revision="refs/convert/parquet")
dataset
```
output:
```python
DatasetDict({
train: Dataset({
features: ['task', 'id', 'original_instruction', 'instruction', 'input', 'output'],
num_rows: 79943
})
})
```
## Citation
Original dataset courtesy of Grammarly:
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2,373 | [
[
0.001659393310546875,
-0.052032470703125,
0.01165008544921875,
0.0289306640625,
-0.037017822265625,
-0.0113067626953125,
-0.01983642578125,
-0.004360198974609375,
0.0105743408203125,
0.04254150390625,
-0.04217529296875,
-0.057281494140625,
-0.05029296875,
0.... |
BBuf/chid | 2023-10-07T06:33:11.000Z | [
"region:us"
] | BBuf | null | null | 0 | 10 | 2023-10-07T06:33:01 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: candidates
sequence: string
- name: content
dtype: string
- name: answer
dtype: int64
splits:
- name: train
num_bytes: 88466
num_examples: 202
- name: validation
num_bytes: 87327
num_examples: 202
download_size: 140651
dataset_size: 175793
---
# Dataset Card for "chid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 514 | [
[
-0.04144287109375,
-0.028778076171875,
0.01097869873046875,
0.021575927734375,
-0.0245819091796875,
0.009033203125,
0.023223876953125,
-0.0263214111328125,
0.0595703125,
0.0257415771484375,
-0.060211181640625,
-0.05584716796875,
-0.034149169921875,
-0.010391... |
Fraol/TrainDedupedRefDatasetWMetricFinal1 | 2023-10-08T04:25:28.000Z | [
"region:us"
] | Fraol | null | null | 0 | 10 | 2023-10-08T04:25:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
- name: cbo
dtype: float64
- name: wmc
dtype: float64
- name: lcom*
dtype: float64
- name: loc
dtype: float64
- name: astc2
dtype: string
- name: source_after
dtype: string
- name: cbo_after
dtype: float64
- name: wmc_after
dtype: float64
- name: lcom*_after
dtype: float64
- name: loc_after
dtype: float64
- name: astc1
dtype: string
- name: issue_name
dtype: string
- name: issue_localize
dtype: string
splits:
- name: train
num_bytes: 418141332
num_examples: 15000
- name: test
num_bytes: 80590478
num_examples: 3000
download_size: 113829036
dataset_size: 498731810
---
# Dataset Card for "TrainDedupedRefDatasetWMetricFinal1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,318 | [
[
-0.03948974609375,
-0.0004329681396484375,
0.0018529891967773438,
0.03057861328125,
-0.012420654296875,
0.0050048828125,
0.0291900634765625,
0.0026760101318359375,
0.05682373046875,
0.035980224609375,
-0.08062744140625,
-0.0400390625,
-0.03839111328125,
-0.0... |
hk-kaden-kim/uzh-hs23-etsp-eval-multi-base-bar | 2023-10-08T10:59:19.000Z | [
"region:us"
] | hk-kaden-kim | null | null | 0 | 10 | 2023-10-08T10:47:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 5362648.0
num_examples: 100
download_size: 0
dataset_size: 5362648.0
---
# Dataset Card for "uzh-hs23-etsp-eval-multi-base-bar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 407 | [
[
-0.05145263671875,
-0.028839111328125,
0.0157318115234375,
0.0228118896484375,
-0.0247650146484375,
0.02996826171875,
0.00395965576171875,
0.005382537841796875,
0.050506591796875,
0.040771484375,
-0.043212890625,
-0.054473876953125,
-0.0218048095703125,
-0.0... |
hk-kaden-kim/uzh-hs23-etsp-eval-multi-subplot-bar | 2023-10-08T10:59:52.000Z | [
"region:us"
] | hk-kaden-kim | null | null | 0 | 10 | 2023-10-08T10:47:21 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 6192425.0
num_examples: 100
download_size: 6134847
dataset_size: 6192425.0
---
# Dataset Card for "uzh-hs23-etsp-eval-multi-subplot-bar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 416 | [
[
-0.05047607421875,
-0.029205322265625,
0.0188140869140625,
0.0164947509765625,
-0.039764404296875,
0.031341552734375,
0.0183258056640625,
0.0013399124145507812,
0.060302734375,
0.02783203125,
-0.057281494140625,
-0.046417236328125,
-0.020416259765625,
-0.016... |
kelzla/klz-ds3 | 2023-10-09T00:26:51.000Z | [
"region:us"
] | kelzla | null | null | 0 | 10 | 2023-10-09T00:25:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rchan26/english_char_split | 2023-10-10T22:40:41.000Z | [
"region:us"
] | rchan26 | null | null | 0 | 10 | 2023-10-10T22:40:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: word
dtype: string
- name: language
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 5314782
num_examples: 37863
- name: test
num_bytes: 1979650
num_examples: 14129
- name: validation
num_bytes: 2613902
num_examples: 18649
download_size: 2205306
dataset_size: 9908334
---
# Dataset Card for "english_char_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 873 | [
[
-0.036376953125,
-0.027862548828125,
0.00762939453125,
0.01555633544921875,
-0.0173492431640625,
0.0211639404296875,
-0.0016298294067382812,
-0.0276031494140625,
0.07012939453125,
0.03656005859375,
-0.056854248046875,
-0.053680419921875,
-0.042388916015625,
... |
namespace-Pt/qrecc-corpus | 2023-10-12T03:17:17.000Z | [
"region:us"
] | namespace-Pt | null | null | 2 | 10 | 2023-10-11T17:20:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 84244312900
num_examples: 54573064
download_size: 21571487893
dataset_size: 84244312900
---
# Dataset Card for "qrecc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 480 | [
[
-0.039520263671875,
0.0046844482421875,
0.0166168212890625,
0.00146484375,
-0.0124359130859375,
0.0088653564453125,
0.0296173095703125,
-0.003753662109375,
0.051116943359375,
0.0305633544921875,
-0.0643310546875,
-0.055633544921875,
-0.0291900634765625,
-0.0... |
semanticword-user/autotrain-dataset-2 | 2023-10-12T21:33:32.000Z | [
"region:us"
] | semanticword-user | null | null | 0 | 10 | 2023-10-12T01:49:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sehyun66/News-sentiments | 2023-10-12T12:48:49.000Z | [
"region:us"
] | sehyun66 | null | null | 0 | 10 | 2023-10-12T12:32:56 | ---
dataset_info:
- config_name: bertplus
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 130253804
num_examples: 316086
download_size: 73025646
dataset_size: 130253804
- config_name: debert
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 130884482
num_examples: 316086
download_size: 73648726
dataset_size: 130884482
- config_name: distill
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 131086592
num_examples: 316086
download_size: 71723929
dataset_size: 131086592
- config_name: finbert
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 131074564
num_examples: 316086
download_size: 73670360
dataset_size: 131074564
configs:
- config_name: bertplus
data_files:
- split: default
path: bertplus/default-*
- config_name: debert
data_files:
- split: default
path: debert/default-*
- config_name: distill
data_files:
- split: default
path: distill/default-*
- config_name: finbert
data_files:
- split: default
path: finbert/default-*
---
# Dataset Card for "News-sentiments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,780 | [
[
-0.051788330078125,
-0.01690673828125,
0.0164947509765625,
0.0262908935546875,
-0.0251922607421875,
-0.003253936767578125,
0.0009098052978515625,
-0.00275421142578125,
0.07830810546875,
0.0205078125,
-0.05767822265625,
-0.072021484375,
-0.042510986328125,
-0... |
AI-Secure/DecodingTrust | 2023-10-26T21:50:21.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2306.11698",
"region:us"
] | AI-Secure | null | null | 1 | 10 | 2023-10-12T22:26:17 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
- question-answering
- text-generation
- text2text-generation
language:
- en
pretty_name: DecodingTrust
size_categories:
- 10K<n<100K
arxiv: 2306.11698
configs:
- config_name: toxicity
data_files:
- split: realtoxicityprompts.nontoxic
path: "toxicity/user_prompts/nontoxic.jsonl"
- split: realtoxicityprompts.toxic
path: "toxicity/user_prompts/toxic.jsonl"
- split: toxic.gpt3.5
path: "toxicity/user_prompts/toxic.gpt3.5.jsonl"
- split: toxic.gpt4
path: "toxicity/user_prompts/toxic.gpt4.jsonl"
- config_name: adv_demonstration
data_files:
- split: counterfactual.snliPremiseCf
path: adv_demonstration/counterfactual/snli_premise_cf/42.jsonl
- split: counterfactual.snliHypothesisCf
path: adv_demonstration/counterfactual/snli_hypothesis_cf/42.jsonl
- split: counterfactual.controlRaisingCf
path: adv_demonstration/counterfactual/control_raising_cf/42.jsonl
- split: counterfactual.irregularFormCf
path: adv_demonstration/counterfactual/irregular_form_cf/42.jsonl
- split: counterfactual.mainVerbCf
path: adv_demonstration/counterfactual/main_verb_cf/42.jsonl
- split: counterfactual.syntacticCategoryCf
path: adv_demonstration/counterfactual/syntactic_category_cf/42.jsonl
- split: spurious.PP.entailBias
path: adv_demonstration/spurious/PP/entail-bias/42.jsonl
- split: spurious.PP.nonEntailBias
path: adv_demonstration/spurious/PP/non-entail-bias/42.jsonl
- split: spurious.adverb.entailBias
path: adv_demonstration/spurious/adverb/entail-bias/42.jsonl
- split: spurious.adverb.nonEntailBias
path: adv_demonstration/spurious/adverb/non-entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.entailBias
path: adv_demonstration/spurious/embedded_under_verb/entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.nonEntailBias
path: adv_demonstration/spurious/embedded_under_verb/non-entail-bias/42.jsonl
- split: spurious.lRelativeClause.entailBias
path: adv_demonstration/spurious/l_relative_clause/entail-bias/42.jsonl
- split: spurious.lRelativeClause.nonEntailBias
path: adv_demonstration/spurious/l_relative_clause/non-entail-bias/42.jsonl
- split: spurious.passive.entailBias
path: adv_demonstration/spurious/passive/entail-bias/42.jsonl
- split: spurious.passive.nonEntailBias
path: adv_demonstration/spurious/passive/non-entail-bias/42.jsonl
- split: spurious.sRelativeClause.entailBias
path: adv_demonstration/spurious/s_relative_clause/entail-bias/42.jsonl
- split: spurious.sRelativeClause.nonEntailBias
path: adv_demonstration/spurious/s_relative_clause/non-entail-bias/42.jsonl
- split: backdoor.sst2.setup1BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_cacc/42.jsonl
- split: backdoor.sst2.setup1BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_asr/42.jsonl
- split: backdoor.sst2.setup2BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_cacc/42.jsonl
- split: backdoor.sst2.setup2BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_asr/42.jsonl
- split: backdoor.sst2.setup3BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_cacc/42.jsonl
- split: backdoor.sst2.setup3BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_asr/42.jsonl
- split: backdoor.sst2.setup1AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup1AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_asr/42.jsonl
- split: backdoor.sst2.setup2AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup2AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_asr/42.jsonl
- split: backdoor.sst2.setup3AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup3AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_asr/42.jsonl
- split: backdoor.sst2.setup1SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup1SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup2SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup2SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup3SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup3SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup1StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup1StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup2StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup2StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup3StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup3StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_asr/42.jsonl
- config_name: stereotype
data_files:
- split: stereotype
path: "stereotype/dataset/stereotype_bias_data.jsonl"
- config_name: adv-glue-plus-plus
data_files:
- split: sst2
path: "adv-glue-plus-plus/data/sst2.jsonl"
- split: qqp
path: "adv-glue-plus-plus/data/qqp.jsonl"
- split: mnli
path: "adv-glue-plus-plus/data/mnli.jsonl"
- split: mnli_mismatched
path: "adv-glue-plus-plus/data/mnli-mm.jsonl"
- split: qnli
path: "adv-glue-plus-plus/data/qnli.jsonl"
- split: rte
path: "adv-glue-plus-plus/data/rte.jsonl"
- config_name: machine_ethics
data_files:
- split: morality.train
path: "machine_ethics/cm_train.jsonl"
- split: morality.test
path: "machine_ethics/cm_test.jsonl"
- split: jiminy.train
path: "machine_ethics/jiminy_train.jsonl"
- split: jiminy.test
path: "machine_ethics/jiminy_test.jsonl"
- config_name: privacy
data_files:
- split: enron.context
path: "privacy/enron_data/context.jsonl"
- split: enron.email2name
path: "privacy/enron_data/email2name.jsonl"
- split: enron.one_shot_non_domain
path: "privacy/enron_data/one_shot_non_domain.jsonl"
- split: enron.one_shot
path: "privacy/enron_data/one_shot.jsonl"
- split: enron.two_shot_non_domain
path: "privacy/enron_data/two_shot_non_domain.jsonl"
- split: enron.two_shot
path: "privacy/enron_data/two_shot.jsonl"
- split: enron.five_shot_non_domain
path: "privacy/enron_data/five_shot_non_domain.jsonl"
- split: enron.five_shot
path: "privacy/enron_data/five_shot.jsonl"
- config_name: fairness
data_files:
- split: adult.zero_shot.br_0.0
path: "fairness/fairness_data/adult_0_200_test_base_rate_0.0.jsonl"
- split: adult.zero_shot.br_0.5
path: "fairness/fairness_data/adult_0_200_test_base_rate_0.5.jsonl"
- split: adult.zero_shot.br_1.0
path: "fairness/fairness_data/adult_0_200_test_base_rate_1.0.jsonl"
- split: adult.few_shot.tr_br_0.0
path: "fairness/fairness_data/adult_32_200_train_base_rate_0.0.jsonl"
- split: adult.few_shot.tr_br_0.5
path: "fairness/fairness_data/adult_32_200_train_base_rate_0.5.jsonl"
- split: adult.few_shot.tr_br_1.0
path: "fairness/fairness_data/adult_32_200_train_base_rate_1.0.jsonl"
- split: adult.few_shot.num_train_0
path: "fairness/fairness_data/adult_0_200_train_br_0.0_test_br_0.5.jsonl"
- split: adult.few_shot.num_train_16
path: "fairness/fairness_data/adult_16_200_train_br_0.0_test_br_0.5.jsonl"
- split: adult.few_shot.num_train_32
path: "fairness/fairness_data/adult_32_200_train_br_0.0_test_br_0.5.jsonl"
- split: crime.zero_shot.br_0.0
path: "fairness/fairness_data/crime_0_300_test_base_rate_0.0.jsonl"
- split: crime.zero_shot.br_0.5
path: "fairness/fairness_data/crime_0_300_test_base_rate_0.5.jsonl"
- split: crime.zero_shot.br_1.0
path: "fairness/fairness_data/crime_0_300_test_base_rate_1.0.jsonl"
- config_name: ood
data_files:
- split: style
path: "ood/style.jsonl"
- split: knowledge
path: "ood/knowledge.jsonl"
---
# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
## Overview
This repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks associated with deploying state-of-the-art Large Language Models (LLMs). See our paper for details.
[**DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models**](https://arxiv.org/abs/2306.11698)
*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*
https://arxiv.org/pdf/2306.11698.pdf
This project is organized around the following **eight** primary areas of trustworthiness:
1. Toxicity
2. Stereotype and Bias
3. Adversarial Robustness
4. Out-of-Distribution Robustness
5. Privacy
6. Robustness to Adversarial Demonstrations
7. Machine Ethics
8. Fairness
## Getting Started
To evaluate with the DecodingTrust dataset, please install the DecodingTrust package as follows:
### (Conda +) Pip
For now, we suggest installing DecodingTrust by cloning our repository and installing it in editable mode. This will keep the data, code, and configurations in the same place.
```bash
git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
pip install -e .
```
Please note that this will install PyTorch with `pip`. If your system does not have a `CUDA` version compatible with the PyTorch `pip` wheel, install `PyTorch` with `Conda` first, as shown below.
```bash
conda create --name dt-test python=3.9 pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
It is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it with our data.
```bash
conda create --name dt-test python=3.9
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
### Support for the `ppc64le` Architecture
We also support the `ppc64le` architecture of IBM Power-9 platforms. To install on this platform, please first make sure you have the following `conda` channels so that we can utilize pre-built packages.
```
--add channels 'defaults' # lowest priority
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/'
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/'
--add channels 'https://opence.mit.edu'
--add channels 'https://ftp.osuosl.org/pub/open-ce/current/'
--add channels 'conda-forge' # highest priority
```
Then, install the following pre-built packages.
```bash
mamba create --name dt-test python==3.9 pytorch=2.0.1 torchvision=0.15.2 spacy=3.5.3 scipy=1.10.1 fairlearn~=0.9.0 scikit-learn~=1.1.2 pandas~=2.0.3 pyarrow~=11.0.0 rust -c conda-forge
```
Finally, install DecodingTrust with `pip` as usual.
### Docker / Singularity
To use DecodingTrust with docker, simply pull the following docker image.
```bash
sudo docker pull danielz01/decoding-trust
docker run -it \
-v /path/on/host:/path/in/container \
--gpus all \
    danielz01/decoding-trust [arg1 arg2 ...]
```
To use it through Singularity or Apptainer container environments on HPC systems, simply run the following.
```bash
module load singularity # Change it to whatever module name your singularity / apptainer environment was given
singularity pull decoding-trust-v1.0.sif docker://danielz01/decoding-trust
singularity exec --nv --bind /path/on/host:/path/in/container decoding-trust-v1.0.sif [arg1 arg2]
```
We will also have a container build for `ppc64le` platforms soon. Stay tuned!
### Notes
+ Each of the eight areas has its own subdirectory containing the respective code and README.
+ Follow the specific `README`: Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.
## [Important] Candidate models
In our benchmark, to ensure consistent conclusions and results, we currently focus mainly on evaluating the following two OpenAI models:
- `gpt-3.5-turbo-0301`
- `gpt-4-0314`
**Note: we use `gpt-3.5-turbo-0301` (with timestamp), released in March, instead of `gpt-3.5-turbo`, to guard against model evolution and ensure reproducibility.**
Currently, we support evaluating all causal LLMs **hosted on Hugging Face** or hosted locally. Specifically, we have tested the following open LLMs:
- `Llama-v2-7B-Chat`
- `Vicuna-7B`
- `MPT-7B`
- `Falcon-7B`
- `Alpaca-7B`
- `RedPajama-INCITE-7B-Instruct`
## Tutorial
We have provided a [Tutorial](Tutorial.md) to help you walk through the usage of the API to evaluate different trustworthiness perspectives and LLMs.
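If you only need the benchmark data itself (rather than the evaluation pipeline), the configs declared in the YAML header of this card can be loaded directly with the `datasets` library. The snippet below is a minimal, illustrative sketch; it assumes this repository's Hub id is `AI-secure/DecodingTrust`, so adjust the id if yours differs.
```python
# Minimal sketch: load one DecodingTrust config/split directly from the Hub.
# Assumption: the Hub repo id is "AI-secure/DecodingTrust"; the "ood" config
# and its "style"/"knowledge" splits come from the YAML header of this card.
from datasets import load_dataset

ood_style = load_dataset("AI-secure/DecodingTrust", name="ood", split="style")

print(len(ood_style))  # number of OOD style-transformed examples
print(ood_style[0])    # inspect one raw JSONL record
```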
## Useful tips
- Please first run your experiments with the `++dry_run=True` flag to check the input / output format, and use `gpt-3.5-turbo-0301` to check the generation since it has lower cost.
- We suggest saving the responses from OpenAI.
## File usage
- `main.py` provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration.
- `chat.py` provides robust APIs for creating requests to OpenAI **Chat Completion** models and Huggingface autoregressive LLMs. We recommend implementing experiments based on this file. If you think `chat.py` is not good enough and want to make modifications, please let @acphile and @boxinw know.
- `utils.py` provides auxiliary functions.
For other files, please refer to each subdirectory for more information.
## License
This project is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/legalcode) - see the LICENSE file for details.
## Citation
Please cite the paper as follows if you use the data or code from DecodingTrust:
```
@inproceedings{wang2023decodingtrust,
title={DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models},
author={Wang, Boxin and Chen, Weixin and Pei, Hengzhi and Xie, Chulin and Kang, Mintong and Zhang, Chenhui and Xu, Chejian and Xiong, Zidi and Dutta, Ritik and Schaeffer, Rylan and others},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023}
}
```
## Contact
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to boxinw2@illinois.edu.
Thank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future. | 15,566 | [
[
-0.02020263671875,
-0.048553466796875,
0.04046630859375,
0.01320648193359375,
-0.0207061767578125,
-0.00946044921875,
-0.0023956298828125,
-0.040130615234375,
-0.003589630126953125,
0.0171966552734375,
-0.014495849609375,
-0.045989990234375,
-0.0726318359375,
... |
Open-Orca/SlimOrca-Dedup | 2023-11-01T01:33:00.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"license:mit",
"code",
"art",
"music",
"legal",
"finance",
"biology",
"chemistry",
"arxiv:2306.02707",
"arxiv:2301.13688",
"region:us"
] | Open-Orca | null | null | 2 | 10 | 2023-10-13T16:45:49 | ---
license: mit
task_categories:
- text-classification
- question-answering
- text-generation
tags:
- code
- art
- music
- legal
- finance
- biology
- chemistry
pretty_name: SlimOrca Dedup
size_categories:
- 100K<n<1M
---
----
# Overview
----
"SlimOrca Dedup" is a deduplicated, unfiltered subset of the SlimOrca dataset, excluding RLHF instances, resulting in 363k unique examples.
# Key Features
- Removal of RLHF instances.
- Deduplication using MinHash and Jaccard similarity techniques (a rough sketch is given below).
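As a rough illustration of that deduplication step, the sketch below keeps an example only if no previously kept example exceeds a Jaccard-similarity threshold under a MinHash-LSH index. It assumes the `datasketch` library; the tokenization, `num_perm`, and threshold values are placeholders, since the exact parameters used to build this dataset are not documented here.
```python
# Illustrative MinHash + Jaccard-threshold dedup; parameters are placeholders.
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():  # toy word-level shingles
        m.update(token.encode("utf8"))
    return m

def dedup(texts):
    lsh = MinHashLSH(threshold=0.8, num_perm=128)  # approx. Jaccard >= 0.8
    kept = []
    for i, text in enumerate(texts):
        m = minhash_of(text)
        if not lsh.query(m):            # no near-duplicate kept so far
            lsh.insert(str(i), m)
            kept.append(text)
    return kept

print(dedup(["the cat sat on the mat", "the cat sat on a mat", "hello world"]))
```
MinHash makes the pairwise Jaccard comparison approximate but near-linear, which is what makes deduplicating hundreds of thousands of examples tractable.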
# Demo Models
Note: These models were trained on the full SlimOrca dataset, not the deduplicated, unfiltered version.
* https://huggingface.co/openaccess-ai-collective/jackalope-7b
* https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca
----
# Dataset format
----
**Basic Structure**
This dataset uses basic ShareGPT formatting. An example and explanation of the schema are below:
```json
{
"conversations": [
{"from": "system", "value": "You are an AI assistant..."},
{"from": "human", "value": "Write an article based on this..."},
{"from": "gpt", "value": "Title: Tragedy Strikes in Sydney..."}
]
}
```
**Message Formatting**
- **"from"**: A string indicating the sender of the message. Possible senders are "system", "human", and "gpt".
- **"value"**: A string containing the message or instruction from the sender.
**Message roles**
- **System:** The system provides instructions or guidelines for the task to the large language model (LLM).
- **Human:** The human provides prompts or queries for the AI model to respond to.
- **GPT:** The language model generates responses or content based on the prompts or queries provided by the human. Messages from this role only ever follow messages from the human role.
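To make the schema concrete, here is a minimal loading sketch (it assumes the default `train` split):
```python
# Minimal sketch of consuming the schema above; field names ("conversations",
# "from", "value") follow the example shown earlier. Assumes a "train" split.
from datasets import load_dataset

ds = load_dataset("Open-Orca/SlimOrca-Dedup", split="train")

example = ds[0]
for message in example["conversations"]:
    print(f'{message["from"]}: {message["value"][:80]}')
```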
----
# Citation
----
```bibtex
@misc{SlimOrcaDedup,
title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/unaidedelf87777/SlimOrca-dedup-unfiltered/}
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` | 2,956 | [
[
-0.036895751953125,
-0.070068359375,
0.0086517333984375,
-0.01161956787109375,
0.0023174285888671875,
-0.013885498046875,
-0.0209808349609375,
-0.039459228515625,
0.0189056396484375,
0.034332275390625,
-0.0645751953125,
-0.03759765625,
-0.03399658203125,
0.0... |
sordonia/facts-text-davinci-003_clen128_maxD500_maxC-1 | 2023-10-14T07:09:12.000Z | [
"region:us"
] | sordonia | null | null | 0 | 10 | 2023-10-14T07:08:59 | ## model_name: text-davinci-003
## max_contexts_per_subject: -1
## max_documents_per_subject: 500
## max_context_length: 128
| 125 | [
[
-0.032135009765625,
-0.0380859375,
0.056396484375,
0.02911376953125,
-0.042633056640625,
-0.034637451171875,
0.0205078125,
0.02032470703125,
0.0020294189453125,
0.033935546875,
-0.055816650390625,
-0.034423828125,
-0.0693359375,
0.00952911376953125,
-0.0... |
HomoLiang/ADL_HW1 | 2023-10-14T13:40:10.000Z | [
"region:us"
] | HomoLiang | null | null | 0 | 10 | 2023-10-14T13:39:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sordonia/id-maxD500 | 2023-10-14T17:01:50.000Z | [
"region:us"
] | sordonia | null | null | 0 | 10 | 2023-10-14T17:01:37 | ## max_context_length: 128
## max_documents_per_subject: 500
| 61 | [
[
-0.03765869140625,
-0.0252685546875,
0.057861328125,
0.072021484375,
-0.037078857421875,
-0.040313720703125,
-0.015960693359375,
0.00475311279296875,
0.011566162109375,
0.043487548828125,
-0.0186614990234375,
-0.04656982421875,
-0.06768798828125,
0.010437011... |
omarc/partial-asr | 2023-10-15T19:15:21.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"partial-audio-transcripts",
"automatic-speech-transcription",
"whipser-small.en",
"region:us"
] | omarc | LibriSpeech is a corpus of approximately 1000 hours of read English speech with sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | 0 | 10 | 2023-10-14T20:33:44 | ---
license: mit
language:
- en
tags:
- partial-audio-transcripts
- automatic-speech-transcription
- whipser-small.en
pretty_name: Partially Removed 3-Best ASR Transcripts
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,613 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
Falah/character_prompts_arabic | 2023-10-15T06:45:09.000Z | [
"region:us"
] | Falah | null | null | 0 | 10 | 2023-10-15T06:45:08 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 5947578
num_examples: 10000
download_size: 686117
dataset_size: 5947578
---
# Dataset Card for "character_prompts_arabic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 369 | [
[
-0.04736328125,
-0.02362060546875,
0.0173797607421875,
0.0189361572265625,
-0.014739990234375,
0.0084381103515625,
0.01507568359375,
-0.0039825439453125,
0.0545654296875,
0.031280517578125,
-0.06854248046875,
-0.0762939453125,
-0.04425048828125,
0.0136795043... |
Falah/character_prompts_arabic_best | 2023-10-15T08:01:50.000Z | [
"region:us"
] | Falah | null | null | 0 | 10 | 2023-10-15T08:01:49 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 6877577
num_examples: 10000
download_size: 1004618
dataset_size: 6877577
---
# Dataset Card for "character_prompts_arabic_best"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 375 | [
[
-0.05029296875,
-0.01535797119140625,
0.0163726806640625,
0.01294708251953125,
-0.01959228515625,
0.0066375732421875,
0.0118560791015625,
0.0004744529724121094,
0.053314208984375,
0.0203094482421875,
-0.05560302734375,
-0.0809326171875,
-0.05029296875,
0.020... |
olaaaiap/SG-SS-dataset-editado | 2023-10-15T10:04:26.000Z | [
"region:us"
] | olaaaiap | null | null | 0 | 10 | 2023-10-15T10:04:04 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
khalidalt/flores_text | 2023-10-16T21:45:59.000Z | [
"region:us"
] | khalidalt | null | null | 0 | 10 | 2023-10-16T21:45:54 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: URL
dtype: string
- name: domain
dtype: string
- name: topic
dtype: string
- name: has_image
dtype: int32
- name: has_hyperlink
dtype: int32
- name: sentence_arb_Arab
dtype: string
- name: sentence_eng_Latn
dtype: string
- name: text
dtype: string
splits:
- name: dev
num_bytes: 816795
num_examples: 997
download_size: 435355
dataset_size: 816795
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "flores_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 730 | [
[
-0.020599365234375,
-0.0222320556640625,
0.0208740234375,
0.0301513671875,
-0.0106353759765625,
-0.0050811767578125,
-0.002765655517578125,
-0.0216827392578125,
0.06591796875,
0.036407470703125,
-0.05462646484375,
-0.052947998046875,
-0.03863525390625,
0.002... |
pbaoo2705/biomedqa_processed_eval | 2023-10-17T09:59:53.000Z | [
"region:us"
] | pbaoo2705 | null | null | 0 | 10 | 2023-10-17T00:49:22 | ---
dataset_info:
features:
- name: 'Unnamed: 0.1'
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 347583
num_examples: 100
download_size: 124060
dataset_size: 347583
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biomedqa_processed_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 873 | [
[
-0.026885986328125,
-0.032958984375,
0.033538818359375,
0.005512237548828125,
-0.01049041748046875,
0.02734375,
0.02667236328125,
-0.004711151123046875,
0.056671142578125,
0.038116455078125,
-0.0491943359375,
-0.057037353515625,
-0.032745361328125,
0.0000759... |
gayathrimanoj/dataset_shell_alpaca | 2023-10-18T06:07:02.000Z | [
"region:us"
] | gayathrimanoj | null | null | 1 | 10 | 2023-10-18T06:06:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |