id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
mstz/wall_following | 2023-04-16T18:03:59.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<5K",
"language:en",
"license:cc",
"wall_following",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_wall-following_robot_navigation_data_194,
author = {Freire,Ananda, Veloso,Marcus & Barreto,Guilherme},
title = {{Wall-Following Robot Navigation Data}},
year = {2010},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C57C8W}}
} | 0 | 7 | 2023-04-14T15:49:57 | ---
language:
- en
tags:
- wall_following
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: WallFollowing
size_categories:
- 1K<n<5K
task_categories:
- tabular-classification
configs:
- wall_following
license: cc
---
# WallFollowing
The [WallFollowing dataset](https://archive-beta.ics.uci.edu/dataset/194/wall+following+robot+navigation+data) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wall_following | Multiclass classification.| |
| wall_following_0 | Binary classification. | Is the instance of class 0? |
| wall_following_1 | Binary classification. | Is the instance of class 1? |
| wall_following_2 | Binary classification. | Is the instance of class 2? |
| wall_following_3 | Binary classification. | Is the instance of class 3? | | 1,100 | [
[
-0.03265380859375,
-0.028778076171875,
0.0178070068359375,
0.0236053466796875,
0.00970458984375,
-0.0018663406372070312,
0.01338958740234375,
-0.000011801719665527344,
0.0195770263671875,
0.042724609375,
-0.04937744140625,
-0.0609130859375,
-0.03472900390625,
... |
mstz/arcene | 2023-04-17T08:46:30.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"arcene",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_arcene_167,
author = {Guyon,Isabelle, Gunn,Steve, Ben-Hur,Asa & Dror,Gideon},
title = {{Arcene}},
year = {2008},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C58P55}}
} | 0 | 7 | 2023-04-17T08:36:34 | ---
language:
- en
tags:
- arcene
- tabular_classification
- binary_classification
- UCI
pretty_name: Arcene
size_categories:
- n<1K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- arcene
---
# Arcene
The [Arcene dataset](https://archive-beta.ics.uci.edu/dataset/167/arcene) from the [UCI repository](https://archive-beta.ics.uci.edu/).
| 440 | [
[
-0.031951904296875,
-0.002925872802734375,
0.02386474609375,
0.00385284423828125,
0.015716552734375,
0.003631591796875,
0.0198974609375,
-0.0053253173828125,
0.038604736328125,
0.06475830078125,
-0.04443359375,
-0.045074462890625,
-0.019683837890625,
-0.0085... |
mstz/dexter | 2023-04-20T10:23:41.000Z | [
"task_categories:tabular-classification",
"language:en",
"dexter",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_dexter_168,
author = {Guyon,Isabelle, Gunn,Steve, Ben-Hur,Asa & Dror,Gideon},
title = {{Dexter}},
year = {2008},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C5P898}}
} | 0 | 7 | 2023-04-17T10:21:58 | ---
language:
- en
tags:
- dexter
- tabular_classification
- binary_classification
- UCI
pretty_name: Dexter
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dexter
---
# Dexter
The [Dexter dataset](https://archive-beta.ics.uci.edu/dataset/168/dexter) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** |
|-----------------------|---------------------------|
| dexter | Binary classification.|
| 601 | [
[
-0.0249481201171875,
-0.007965087890625,
0.01340484619140625,
0.022369384765625,
-0.0202789306640625,
0.00909423828125,
0.016143798828125,
-0.01267242431640625,
0.04486083984375,
0.03839111328125,
-0.031494140625,
-0.051971435546875,
-0.049591064453125,
0.00... |
mstz/gisette | 2023-04-17T10:55:16.000Z | [
"task_categories:tabular-classification",
"language:en",
"gisette",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | @misc{misc_gisette_170,
author = {Guyon,Isabelle, Gunn,Steve, Ben-Hur,Asa & Dror,Gideon},
title = {{Gisette}},
year = {2008},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C5HP5B}}
} | 0 | 7 | 2023-04-17T10:43:21 | ---
language:
- en
tags:
- gisette
- tabular_classification
- binary_classification
pretty_name: Gisette
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- gisette
---
# Gisette
The [Gisette dataset](https://archive-beta.ics.uci.edu/dataset/170/gisette) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| gisette | Binary classification.| |
| 681 | [
[
-0.03253173828125,
-0.001873016357421875,
0.0191497802734375,
0.01297760009765625,
-0.0210418701171875,
-0.006954193115234375,
0.0047454833984375,
-0.029388427734375,
0.0281982421875,
0.032745361328125,
-0.01641845703125,
-0.07135009765625,
-0.06939697265625,
... |
mstz/sydt | 2023-04-18T08:27:15.000Z | [
"task_categories:tabular-classification",
"language:en",
"sydt",
"tabular_classification",
"binary_classification",
"synthetic",
"region:us"
] | mstz | null | null | 0 | 7 | 2023-04-18T08:25:12 | ---
language:
- en
tags:
- sydt
- tabular_classification
- binary_classification
- synthetic
pretty_name: Sydt
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- sydt
---
# Sydt
Synthetic dataset. | 295 | [
[
-0.01666259765625,
-0.03338623046875,
0.0165252685546875,
0.0333251953125,
-0.0300750732421875,
0.02532958984375,
0.0167236328125,
0.007518768310546875,
0.046478271484375,
0.0404052734375,
-0.065185546875,
-0.0252685546875,
-0.025146484375,
0.024185180664062... |
jxu124/visdial | 2023-05-20T19:18:49.000Z | [
"license:cc-by-4.0",
"region:us"
] | jxu124 | null | null | 0 | 7 | 2023-04-18T10:06:36 | ---
license: cc-by-4.0
dataset_info:
features:
- name: caption
dtype: string
- name: dialog
sequence:
sequence: string
- name: image_path
dtype: string
- name: global_image_id
dtype: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 77657548
num_examples: 123287
- name: test
num_bytes: 3495490
num_examples: 8000
- name: validation
num_bytes: 1408883
num_examples: 2064
download_size: 34814702
dataset_size: 82561921
---
Usage:
```python
from dataclasses import dataclass
import datasets
# load and path setting
ds_visdial = datasets.load_dataset('jxu124/visdial')
path_map = {
"coco/train2014": f"/datasets/coco/train2014",
"coco/val2014": f"/datasets/coco/val2014",
"visdial/VisualDialog_test2018": f"/datasets/visdial/VisualDialog_test2018",
"visdial/VisualDialog_val2018": f"/datasets/visdial/VisualDialog_val2018"
}
# apply to your datasets
@dataclass
class ReplaceImagePath():
    path_map: dict  # e.g. {"coco/train2014": "/datasets/coco/train2014"}

    def __call__(self, features):
        # Rewrite each known path prefix to its local directory.
        for k, v in self.path_map.items():
            features['image'] = features['image'].replace(k, v)
        return features

ds_visdial = ds_visdial.map(ReplaceImagePath(path_map=path_map)).cast_column("image", datasets.Image())
``` | 1,286 | [
[
-0.0217437744140625,
-0.0293426513671875,
0.00930023193359375,
-0.0008792877197265625,
-0.0217742919921875,
-0.0035800933837890625,
0.0013866424560546875,
-0.0121612548828125,
0.0031299591064453125,
0.04998779296875,
-0.0244140625,
-0.0279693603515625,
-0.038085... |
alpayariyak/MATH_Instruction_Format | 2023-04-19T02:27:52.000Z | [
"region:us"
] | alpayariyak | null | null | 2 | 7 | 2023-04-19T02:27:44 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 9836383
num_examples: 12500
download_size: 4859969
dataset_size: 9836383
---
# Dataset Card for "MATH_Instruction_Format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 442 | [
[
-0.028564453125,
-0.039520263671875,
0.004138946533203125,
0.022674560546875,
0.0035247802734375,
-0.01180267333984375,
0.00360870361328125,
0.0322265625,
0.045654296875,
0.026336669921875,
-0.059814453125,
-0.05816650390625,
-0.041107177734375,
-0.032958984... |
dirtycomputer/Toxic_Comment_Classification_Challenge | 2023-04-19T07:04:33.000Z | [
"region:us"
] | dirtycomputer | null | null | 1 | 7 | 2023-04-19T07:00:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mattymchen/mr | 2023-04-19T15:20:03.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language:en",
"region:us"
] | mattymchen | null | null | 0 | 7 | 2023-04-19T14:44:35 | ---
language:
- en
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 1352524
num_examples: 10662
download_size: 883903
dataset_size: 1352524
---
# Dataset Card for "mr"
## Dataset Description
Movie review dataset from SentEval.
## Data Fields
- `text`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
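As a quick illustration of these fields, a minimal sketch using the field names from the `dataset_info` above (`text`, `label`); the sample review itself is invented:

```python
# Map the integer sentiment labels to the human-readable names
# described in this card: 0 -> negative, 1 -> positive.
label_names = {0: "negative", 1: "positive"}

def describe(example: dict) -> str:
    """Render one review as '[sentiment] text'."""
    return f"[{label_names[example['label']]}] {example['text']}"

sample = {"text": "A quietly moving portrait of small-town life.", "label": 1}
print(describe(sample))  # [positive] A quietly moving portrait of small-town life.
```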
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 688 | [
[
-0.044403076171875,
-0.03472900390625,
-0.003902435302734375,
0.0017032623291015625,
-0.033050537109375,
0.00930023193359375,
0.0059051513671875,
0.006805419921875,
0.0631103515625,
0.036956787109375,
-0.07568359375,
-0.048004150390625,
-0.045196533203125,
0... |
roemmele/ablit | 2023-05-08T16:26:23.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2302.06579",
"region:us"
] | roemmele | This dataset contains abridged versions of 10 classic English literature books,
aligned with their original versions on various passage levels. The abridgements were written and made publicly available by Emma Laybourn: http://www.englishliteratureebooks.com/classicnovelsabridged.html. This is the first known dataset for NLP research that focuses on the abridgement task. | @inproceedings{roemmele2023ablit,
title={AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature},
author={Roemmele, Melissa and Shaffer, Kyle and Olsen, Katrina and Wang, Yiyi and DeNeefe, Steve},
booktitle = {Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume},
publisher = {Association for Computational Linguistics},
year={2023}
} | 0 | 7 | 2023-04-20T19:50:35 | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
---
# Dataset Card for AbLit
## Dataset Description
- **Homepage:** https://github.com/roemmele/AbLit
- **Repository:** https://github.com/roemmele/AbLit
- **Paper:** https://arxiv.org/pdf/2302.06579.pdf
- **Point of Contact:** melissa@roemmele.io
### Dataset Summary
The AbLit dataset contains **ab**ridged versions of 10 classic English **lit**erature books, aligned with their original versions on various passage levels.
The abridgements were written and made publicly available by Emma Laybourn [here](http://www.englishliteratureebooks.com/classicnovelsabridged.html).
This is the first known dataset for NLP research that focuses on the abridgement task.
See the paper for a detailed description of the dataset, as well as the results of several modeling experiments. The GitHub repo also provides more extensive ways to interact with the data beyond what is provided here.
### Languages
English
## Dataset Structure
Each passage in the original version of a book chapter is aligned with its corresponding passage in the abridged version. These aligned pairs are available for various passage sizes: sentences, paragraphs, and multi-paragraph "chunks". The passage size is specified when loading the dataset. There are train/dev/test splits for items of each size.
| Passage Size | Description | # Train | # Dev | # Test |
| --------------------- | ------------- | ------- | ------- | ------- |
| chapters | Each passage is a single chapter | 808 | 10 | 50
| sentences | Each passage is a sentence delimited by the NLTK sentence tokenizer | 122,219 | 1,143 | 10,431 |
| paragraphs | Each passage is a paragraph delimited by a line break | 37,227 | 313 | 3,125 |
| chunks-10-sentences | Each passage consists of up to X=10 number of sentences, which may span more than one paragraph. To derive chunks with other lengths X, see GitHub repo above | 14,857 | 141 | 1,264
#### Example Usage
To load aligned paragraphs:
```python
from datasets import load_dataset
data = load_dataset("roemmele/ablit", "paragraphs")
```
### Data Fields
- original: passage text in the original version
- abridged: passage text in the abridged version
- book: title of book containing passage
- chapter: title of chapter containing passage
## Dataset Creation
### Curation Rationale
Abridgement is the task of making a text easier to understand while preserving its linguistic qualities. Abridgements are different from typical summaries: whereas summaries abstractively describe the original text, abridgements simplify the original primarily through a process of extraction. We present this dataset to promote further research on modeling the abridgement process.
### Source Data
The author Emma Laybourn wrote abridged versions of classic English literature books available through Project Gutenberg. She has also provided her abridgements for free on her [website](http://www.englishliteratureebooks.com/classicnovelsabridged.html). This is how she describes her work: “This is a collection of famous novels which have been shortened and slightly simplified for the general reader. These are not summaries; each is half to two-thirds of the original length. I’ve selected works that people often find daunting because of their density or complexity: the aim is to make them easier to read, while keeping the style intact.”
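Laybourn's description implies each abridgement is roughly half to two-thirds of its original's length. A minimal sketch of checking that ratio for one aligned pair (the passage pair below is invented for illustration, not drawn from AbLit):

```python
# Hedged sketch: estimate how much shorter an abridged passage is.
# The passage pair below is invented for illustration; it is not from AbLit.
def abridgement_ratio(original: str, abridged: str) -> float:
    """Word count of the abridged passage relative to the original."""
    return len(abridged.split()) / len(original.split())

original = ("It is a truth universally acknowledged, that a single man in "
            "possession of a good fortune, must be in want of a wife.")
abridged = "A single man in possession of a good fortune must want a wife."

print(f"abridged/original word ratio: {abridgement_ratio(original, abridged):.2f}")
# abridged/original word ratio: 0.57
```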
#### Initial Data Collection and Normalization
We obtained the original and abridged versions of the books from the respective websites.
#### Who are the source language producers?
Emma Laybourn
### Annotations
#### Annotation process
We designed a procedure for automatically aligning passages between the original and abridged version of each chapter. We conducted a human evaluation to verify these alignments had high accuracy. The training split of the dataset has ~99% accuracy. The dev and test splits of the dataset were fully human-validated to ensure 100% accuracy. See the paper for further explanation.
#### Who are the annotators?
The alignment accuracy evaluation was conducted by the authors of the paper, who have expertise in linguistics and NLP.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset will promote more research on the authoring process for producing abridgements, including models for automatically generating abridgements. Because it is a labor-intensive writing task, there are relatively few abridged versions of books. Systems that automatically produce abridgements could vastly expand the number of abridged versions of books and thus increase their readership.
### Discussion of Biases
We present this dataset to introduce abridgement as an NLP task, but these abridgements are scoped to one small set of texts associated with a specific domain and author. There are significant practical reasons for this limited scope. In particular, in contrast to the books in AbLit, most recently published books are not included in publicly accessible datasets due to copyright restrictions, and the same restrictions typically apply to any abridgements of these books. For this reason, AbLit consists of British English literature from the 18th and 19th centuries. Some of the linguistic properties of these original books do not generalize to other types of English texts that would be beneficial to abridge. Moreover, the narrow cultural perspective reflected in these books is certainly not representative of the diverse modern population. Readers may find some content offensive.
### Dataset Curators
The curators are the authors of the paper.
### Licensing Information
cc-by-sa-4.0
### Citation Information
Roemmele, Melissa, Kyle Shaffer, Katrina Olsen, Yiyi Wang, and Steve DeNeefe. "AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature." Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2023).
| 6,087 | [
[
-0.0128936767578125,
-0.053741455078125,
0.00424957275390625,
0.005092620849609375,
-0.0386962890625,
-0.027923583984375,
-0.0214691162109375,
-0.036163330078125,
0.00792694091796875,
0.0687255859375,
-0.052764892578125,
-0.03668212890625,
-0.01493072509765625,
... |
CM/codexglue_codetrans | 2023-04-27T23:09:43.000Z | [
"region:us"
] | CM | null | null | 0 | 7 | 2023-04-22T01:07:30 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: java
dtype: string
- name: cs
dtype: string
splits:
- name: train
num_bytes: 4372641
num_examples: 10300
- name: validation
num_bytes: 226407
num_examples: 500
- name: test
num_bytes: 418587
num_examples: 1000
download_size: 0
dataset_size: 5017635
---
# Dataset Card for "codexglue_codetrans"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 542 | [
[
-0.038818359375,
0.0022716522216796875,
0.007549285888671875,
0.0230865478515625,
-0.012359619140625,
0.01690673828125,
0.00493621826171875,
-0.0142822265625,
0.057403564453125,
0.045806884765625,
-0.053009033203125,
-0.07525634765625,
-0.04095458984375,
-0.... |
DeadPixels/DPhi_Sprint_25_Flowers | 2023-04-29T10:34:03.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-2.0",
"region:us"
] | DeadPixels | null | null | 0 | 7 | 2023-04-29T10:25:36 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': daisy
'1': dandelion
'2': rose
'3': sunflower
'4': tulip
splits:
- name: train
num_bytes: 123964921.405
num_examples: 2589
- name: test
num_bytes: 47588262
num_examples: 864
- name: validation
num_bytes: 47493769
num_examples: 864
download_size: 237386772
dataset_size: 219046952.405
license: cc-by-2.0
task_categories:
- image-classification
pretty_name: 'Data Sprint #25: Flower Recognition Datas'
size_categories:
- 1K<n<10K
---
# Dataset Card for "DPhi_Sprint_25_Flowers"
All images in this archive are licensed under the Creative Commons By-Attribution License, available at:
https://creativecommons.org/licenses/by/2.0/
The photographers are listed in LICENSE.txt, thanks to all of them for making their work available.
However, you will observe that the image file names in this archive differ from those originally provided; the file names were changed solely for the purpose of the data sprint.
[
0.001354217529296875,
0.002536773681640625,
0.0169830322265625,
0.045379638671875,
-0.039306640625,
0.003879547119140625,
0.01262664794921875,
-0.0355224609375,
-0.0060272216796875,
0.041534423828125,
-0.0928955078125,
-0.04400634765625,
-0.025787353515625,
... |
CCOM/pianos_mel | 2023-10-10T05:42:10.000Z | [
"task_categories:audio-classification",
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"music",
"art",
"arxiv:2310.04722",
"region:us"
] | CCOM | pianos_mel is a mel spectrogram dataset of piano sounds.
It consists of 8 kinds of pianos, including
PearlRiver, YoungChang, Steinway-T, Hsinghai, Kawai, Steinway, Kawai-G and Yamaha.
Data was annotated by students from the China Conservatory of Music (CCMUSIC) in Beijing. | @article{CSMT2023HEPSQ,
title = {A Holistic Evaluation of Piano Sound Quality},
author = {Monan Zhou, Shangda Wu, Shaohua Ji, Zijin Li, Wei Li*},
journal = {Springer},
year = {2023},
url = {https://github.com/george-chou/Piano-Classification}
} | 2 | 7 | 2023-05-01T05:45:31 | ---
license: mit
task_categories:
- audio-classification
- image-classification
language:
- en
tags:
- music
- art
pretty_name: Pianos
size_categories:
- 10K<n<100K
---
# Dataset card for pianos_mel
## Dataset Description
- **Homepage:** [CCOM/pianos_mel](https://huggingface.co/datasets/CCOM/pianos_mel)
- **Repository:** `git@hf.co:datasets/CCOM/pianos_mel`
- **Paper:** [A Holistic Evaluation of Piano Sound Quality](https://arxiv.org/pdf/2310.04722.pdf)
- **Leaderboard:** [arxiv:2310.04722](https://arxiv.org/abs/2310.04722)
- **Point of Contact:** CNN, ERB, piano sound quality, audio classification
## Dataset Summary
This dataset aims to develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. Unlike previous studies that focused on the effect of piano performance techniques on sound quality, this study evaluates the inherent sound quality of different pianos. To derive quality evaluation systems, the study uses subjective questionnaires based on a piano sound quality dataset. The method selects the optimal piano classification models by comparing the fine-tuning results of different pre-trained Convolutional Neural Network (CNN) models. To improve the interpretability of the models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The results reveal that musically trained individuals are better able to distinguish between the sound quality differences of different pianos. The best fine-tuned CNN backbone achieves a high accuracy of 98.3% as the piano classifier. However, the dataset is limited, and the audio is sliced to increase its quantity, resulting in a lack of diversity and balance, so we use focal loss to reduce the impact of data imbalance.
To optimize the method, the dataset will be expanded, or few-shot learning techniques will be employed in future research.
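The summary above cites focal loss as the remedy for class imbalance. A hedged sketch of the idea (this is the standard binary focal loss of Lin et al., not necessarily the exact implementation used here):

```python
import math

def focal_loss(p: float, y: int, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the predicted probability of the positive class and y is the true
    label in {0, 1}. Well-classified examples (p_t close to 1) are
    down-weighted by the (1 - p_t)**gamma modulating factor, so rare,
    hard examples dominate the loss.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less than an uncertain one.
print(focal_loss(0.9, 1) < focal_loss(0.6, 1))  # True
```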
## Supported Tasks and Leaderboards
Audio classification, pitch detection, etc
## Languages
English
## Usage
```python
from datasets import load_dataset
data = load_dataset("CCOM/pianos_mel")
trainset = data['train']
validset = data['validation']
testset = data['test']
labels = trainset.features['label'].names
for item in trainset:
print('image: ', item['image'].convert('RGB'))
print('label name: ' + labels[item['label']])
for item in validset:
print('image: ', item['image'].convert('RGB'))
print('label name: ' + labels[item['label']])
for item in testset:
print('image: ', item['image'].convert('RGB'))
print('label name: ' + labels[item['label']])
```
## Maintenance
```bash
git clone git@hf.co:datasets/CCOM/pianos_mel
```
## Dataset Structure
### Data Instances
.jsonl, .zip(.jpg, .csv)
### Data Fields
piano sound mel, sound quality, pitch
### Data Splits
Train, validation, test
## Dataset Creation
### Curation Rationale
Promoting the development of AI in the music industry
### Source Data
#### Initial Data Collection and Normalization
Monan Zhou, Shangda Wu, Shaohua Ji, Zhaorui Liu
#### Who are the source language producers?
Composers of the songs in dataset
### Annotations
#### Annotation process
1. Record different piano sounds
2. Annotate sound files with quality labels
#### Who are the annotators?
Annotators from CCMUSIC, CCOM and Xinghai Conservatory of Music
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
All are piano songs
### Other Known Limitations
Samples are not balanced enough
## Additional Information
### Dataset Curators
Monan Zhou
### Licensing Information
```
MIT License
Copyright (c) CCOM
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@misc{zhou2023holistic,
title={A Holistic Evaluation of Piano Sound Quality},
author={Monan Zhou and Shangda Wu and Shaohua Ji and Zijin Li and Wei Li},
year={2023},
eprint={2310.04722},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
### Contributions
Develop a holistic evaluation method for piano sound quality to assist in purchasing decisions. | 5,134 | [
[
-0.053436279296875,
-0.0423583984375,
-0.0025348663330078125,
0.02301025390625,
-0.0249481201171875,
-0.015777587890625,
-0.04498291015625,
-0.0286712646484375,
0.004581451416015625,
0.040557861328125,
-0.052642822265625,
-0.0762939453125,
-0.0198211669921875,
... |
frncscp/patacon-730 | 2023-05-04T01:51:07.000Z | [
"region:us"
] | frncscp | null | null | 0 | 7 | 2023-05-04T01:50:38 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Patacon-False
'1': Patacon-True
splits:
- name: train
num_bytes: 114865007.0
num_examples: 874
- name: validation
num_bytes: 18290064.0
num_examples: 143
- name: test
num_bytes: 59447780.0
num_examples: 442
download_size: 192218294
dataset_size: 192602851.0
---
# Dataset Card for "patacon-730"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 612 | [
[
-0.0357666015625,
-0.00897216796875,
0.011016845703125,
0.0223541259765625,
-0.0391845703125,
-0.007343292236328125,
0.01287841796875,
-0.0107574462890625,
0.067138671875,
0.05340576171875,
-0.052581787109375,
-0.0496826171875,
-0.03656005859375,
-0.00128746... |
birkhoffg/folktables-acs-income | 2023-05-08T19:31:11.000Z | [
"task_categories:tabular-classification",
"size_categories:1M<n<10M",
"language:en",
"adult",
"region:us"
] | birkhoffg | null | null | 1 | 7 | 2023-05-08T19:07:24 | ---
dataset_info:
features:
- name: AGEP
dtype: float64
- name: COW
dtype: float64
- name: SCHL
dtype: float64
- name: MAR
dtype: float64
- name: OCCP
dtype: float64
- name: POBP
dtype: float64
- name: RELP
dtype: float64
- name: WKHP
dtype: float64
- name: SEX
dtype: float64
- name: RAC1P
dtype: float64
- name: STATE
dtype: string
- name: YEAR
dtype: int64
- name: PINCP
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 808018860
num_examples: 7345626
- name: test
num_bytes: 269339730
num_examples: 2448543
download_size: 197308481
dataset_size: 1077358590
task_categories:
- tabular-classification
language:
- en
tags:
- adult
size_categories:
- 1M<n<10M
---
# Dataset Card for "folktables-acs-income"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 989 | [
[
-0.0277252197265625,
-0.01064300537109375,
0.00829315185546875,
-0.0013399124145507812,
-0.0172882080078125,
0.0206451416015625,
0.01476287841796875,
-0.01556396484375,
0.080810546875,
0.03216552734375,
-0.057220458984375,
-0.046173095703125,
-0.03790283203125,
... |
0x22almostEvil/reasoning-gsm-qna-oa | 2023-05-13T15:43:31.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"QnA",
"math",
"programming",
"region:us"
] | 0x22almostEvil | null | null | 2 | 7 | 2023-05-13T15:09:16 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- QnA
- math
- programming
size_categories:
- 1K<n<10K
---
# Dataset Card for GSM QnA reasoning with ~8.8K entries.
### Dataset Summary
Contains a Parquet file with a list of instructions and answers.
Each row consists of
* INSTRUCTION
* RESPONSE
* SOURCE
* METADATA (json with language).
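A minimal sketch of consuming one row (the row values below are invented; only the column names INSTRUCTION, RESPONSE, SOURCE, METADATA and the JSON-encoded METADATA field come from this card):

```python
import json

# Illustrative row; only the column layout is taken from this card.
row = {
    "INSTRUCTION": "Natalia sold clips to 48 of her friends. How many ...?",
    "RESPONSE": "She sold 48 clips.",
    "SOURCE": "gsm8k",
    "METADATA": '{"language": "en"}',
}

metadata = json.loads(row["METADATA"])  # METADATA is stored as a JSON string
print(metadata["language"])  # en
```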
### Original Datasets are available here:
* https://huggingface.co/datasets/gsm8k
* https://huggingface.co/datasets/reasoning-machines/gsm-hard | 506 | [
[
-0.039459228515625,
-0.0222625732421875,
0.03741455078125,
0.004852294921875,
-0.02740478515625,
-0.012786865234375,
0.01502227783203125,
0.0086669921875,
0.019012451171875,
0.059356689453125,
-0.03961181640625,
-0.056060791015625,
-0.0274505615234375,
0.005... |
Fraol/Py150-processed | 2023-05-19T23:58:41.000Z | [
"region:us"
] | Fraol | null | null | 1 | 7 | 2023-05-17T20:23:00 | ---
dataset_info:
features:
- name: repository_path
dtype: string
- name: code
dtype: string
splits:
- name: train
num_bytes: 726142896.0
num_examples: 120000
- name: val
num_bytes: 90767862.0
num_examples: 15000
- name: test
num_bytes: 90767862.0
num_examples: 15000
download_size: 343675742
dataset_size: 907678620.0
---
# Dataset Card for "Py150-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset Creation
The original dataset is at https://www.sri.inf.ethz.ch/py150.
# Citation Information
@article{raychev2016probabilistic,
title={Probabilistic model for code with decision trees},
author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
journal={ACM SIGPLAN Notices},
volume={51},
number={10},
pages={731--747},
year={2016},
publisher={ACM New York, NY, USA}
} | 947 | [
[
0.0013666152954101562,
-0.042022705078125,
0.0268096923828125,
0.03271484375,
-0.00012165307998657227,
-0.0175933837890625,
-0.008453369140625,
-0.0230712890625,
0.03131103515625,
0.0362548828125,
-0.049163818359375,
-0.03790283203125,
-0.026611328125,
0.001... |
silk-road/chinese-dolly-15k | 2023-05-22T00:26:02.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | silk-road | null | null | 15 | 7 | 2023-05-22T00:18:48 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
- text-generation
language:
- zh
- en
size_categories:
- 10K<n<100K
---
Chinese-Dolly-15k is the Dolly instruction dataset translated into Chinese by the Luotuo team.
The last 49 entries failed to translate because they exceeded the translation length limit; we recommend deleting them or translating them manually.
The original dataset, 'databricks/databricks-dolly-15k', is an open-source dataset of instruction-following records generated by thousands of Databricks employees across several behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
Under the Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0) license, this dataset may be used for any academic or commercial purpose.
We will continue to release more datasets to HF, including:
- [ ] A Chinese translation of Coco Caption
- [x] A Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [x] Augmented open QA data
- [x] A Chinese translation of WizardLM
- [x] A Chinese translation of MMC4
If you are also preparing these datasets, feel free to contact us so we can avoid duplicating costs.
# Luotuo (骆驼): Open-Source Chinese Large Language Models
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
The Luotuo project is an open-source Chinese large language model project, comprising a series of language models, initiated by [Ziang Leng](https://blairleng.github.io) @ SenseTime, Qiyuan Chen @ Central China Normal University, and Cheng Li @ SenseTime.
The Luotuo project is **not** an official SenseTime product.
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
```
| 1,246 | [
[
-0.01284027099609375,
-0.0673828125,
-0.0057830810546875,
0.0433349609375,
-0.029327392578125,
-0.007465362548828125,
0.0051116943359375,
-0.0126800537109375,
0.01824951171875,
0.033172607421875,
-0.03985595703125,
-0.054534912109375,
-0.0263214111328125,
0.... |
muhrafli/heart-diseases | 2023-05-22T08:57:31.000Z | [
"region:us"
] | muhrafli | null | null | 0 | 7 | 2023-05-22T08:56:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mcimpoi/dtd_split_1 | 2023-05-22T12:42:00.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"texture",
"computer-vision",
"region:us"
] | mcimpoi | null | null | 0 | 7 | 2023-05-22T10:17:50 | ---
license: cc-by-4.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': banded
'1': blotchy
'2': braided
'3': bubbly
'4': bumpy
'5': chequered
'6': cobwebbed
'7': cracked
'8': crosshatched
'9': crystalline
'10': dotted
'11': fibrous
'12': flecked
'13': freckled
'14': frilly
'15': gauzy
'16': grid
'17': grooved
'18': honeycombed
'19': interlaced
'20': knitted
'21': lacelike
'22': lined
'23': marbled
'24': matted
'25': meshed
'26': paisley
'27': perforated
'28': pitted
'29': pleated
'30': polka-dotted
'31': porous
'32': potholed
'33': scaly
'34': smeared
'35': spiralled
'36': sprinkled
'37': stained
'38': stratified
'39': striped
'40': studded
'41': swirly
'42': veined
'43': waffled
'44': woven
'45': wrinkled
'46': zigzagged
splits:
- name: train
num_bytes: 226313270.04
num_examples: 1880
- name: test
num_bytes: 172035822
num_examples: 1880
- name: validation
num_bytes: 222278767.48
num_examples: 1880
download_size: 629315160
dataset_size: 620627859.52
task_categories:
- image-classification
language:
- en
tags:
- texture
- computer-vision
pretty_name: Describable Textures Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for Describable Textures Dataset (DTD)
## Dataset Description
- Homepage: https://www.robots.ox.ac.uk/~vgg/data/dtd/
- Repository: https://github.com/mcimpoi/deep-fbanks
- Paper: https://openaccess.thecvf.com/content_cvpr_2014/html/Cimpoi_Describing_Textures_in_2014_CVPR_paper.html
- Leaderboard: https://paperswithcode.com/sota/image-classification-on-dtd
### Dataset Summary
A texture classification dataset consisting of 47 categories with 120 images per class.
### Data Splits
Equally split into train, val, and test. The original paper proposed 10 splits; recent works (e.g., BYOL, arXiv:2006.07733) use only the first split.
### Licensing Information
Not defined at https://www.robots.ox.ac.uk/~vgg/data/dtd/
### Citation Information
@InProceedings{cimpoi14describing,
Author = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi},
Title = {Describing Textures in the Wild},
Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})},
Year = {2014}}
| 2,771 | [
[
-0.039886474609375,
-0.047088623046875,
0.0156402587890625,
0.0557861328125,
-0.05645751953125,
0.012939453125,
-0.00853729248046875,
-0.031951904296875,
0.01361846923828125,
0.035888671875,
-0.025543212890625,
-0.06494140625,
-0.041656494140625,
-0.00681304... |
jlh/home-credit-example-raw | 2023-05-26T02:29:12.000Z | [
"region:us"
] | jlh | null | null | 0 | 7 | 2023-05-26T02:29:10 | ---
dataset_info:
features:
- name: SK_ID_CURR
dtype: int64
- name: TARGET
dtype: int64
- name: NAME_CONTRACT_TYPE
dtype: string
- name: CODE_GENDER
dtype: string
- name: FLAG_OWN_CAR
dtype: string
- name: FLAG_OWN_REALTY
dtype: string
- name: CNT_CHILDREN
dtype: int64
- name: AMT_INCOME_TOTAL
dtype: float64
- name: AMT_CREDIT
dtype: float64
- name: AMT_ANNUITY
dtype: float64
- name: AMT_GOODS_PRICE
dtype: float64
- name: NAME_TYPE_SUITE
dtype: string
- name: NAME_INCOME_TYPE
dtype: string
- name: NAME_EDUCATION_TYPE
dtype: string
- name: NAME_FAMILY_STATUS
dtype: string
- name: NAME_HOUSING_TYPE
dtype: string
- name: REGION_POPULATION_RELATIVE
dtype: float64
- name: DAYS_BIRTH
dtype: int64
- name: DAYS_EMPLOYED
dtype: int64
- name: DAYS_REGISTRATION
dtype: float64
- name: DAYS_ID_PUBLISH
dtype: int64
- name: OWN_CAR_AGE
dtype: float64
- name: FLAG_MOBIL
dtype: int64
- name: FLAG_EMP_PHONE
dtype: int64
- name: FLAG_WORK_PHONE
dtype: int64
- name: FLAG_CONT_MOBILE
dtype: int64
- name: FLAG_PHONE
dtype: int64
- name: FLAG_EMAIL
dtype: int64
- name: OCCUPATION_TYPE
dtype: string
- name: CNT_FAM_MEMBERS
dtype: float64
- name: REGION_RATING_CLIENT
dtype: int64
- name: REGION_RATING_CLIENT_W_CITY
dtype: int64
- name: WEEKDAY_APPR_PROCESS_START
dtype: string
- name: HOUR_APPR_PROCESS_START
dtype: int64
- name: REG_REGION_NOT_LIVE_REGION
dtype: int64
- name: REG_REGION_NOT_WORK_REGION
dtype: int64
- name: LIVE_REGION_NOT_WORK_REGION
dtype: int64
- name: REG_CITY_NOT_LIVE_CITY
dtype: int64
- name: REG_CITY_NOT_WORK_CITY
dtype: int64
- name: LIVE_CITY_NOT_WORK_CITY
dtype: int64
- name: ORGANIZATION_TYPE
dtype: string
- name: EXT_SOURCE_1
dtype: float64
- name: EXT_SOURCE_2
dtype: float64
- name: EXT_SOURCE_3
dtype: float64
- name: APARTMENTS_AVG
dtype: float64
- name: BASEMENTAREA_AVG
dtype: float64
- name: YEARS_BEGINEXPLUATATION_AVG
dtype: float64
- name: YEARS_BUILD_AVG
dtype: float64
- name: COMMONAREA_AVG
dtype: float64
- name: ELEVATORS_AVG
dtype: float64
- name: ENTRANCES_AVG
dtype: float64
- name: FLOORSMAX_AVG
dtype: float64
- name: FLOORSMIN_AVG
dtype: float64
- name: LANDAREA_AVG
dtype: float64
- name: LIVINGAPARTMENTS_AVG
dtype: float64
- name: LIVINGAREA_AVG
dtype: float64
- name: NONLIVINGAPARTMENTS_AVG
dtype: float64
- name: NONLIVINGAREA_AVG
dtype: float64
- name: APARTMENTS_MODE
dtype: float64
- name: BASEMENTAREA_MODE
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MODE
dtype: float64
- name: YEARS_BUILD_MODE
dtype: float64
- name: COMMONAREA_MODE
dtype: float64
- name: ELEVATORS_MODE
dtype: float64
- name: ENTRANCES_MODE
dtype: float64
- name: FLOORSMAX_MODE
dtype: float64
- name: FLOORSMIN_MODE
dtype: float64
- name: LANDAREA_MODE
dtype: float64
- name: LIVINGAPARTMENTS_MODE
dtype: float64
- name: LIVINGAREA_MODE
dtype: float64
- name: NONLIVINGAPARTMENTS_MODE
dtype: float64
- name: NONLIVINGAREA_MODE
dtype: float64
- name: APARTMENTS_MEDI
dtype: float64
- name: BASEMENTAREA_MEDI
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MEDI
dtype: float64
- name: YEARS_BUILD_MEDI
dtype: float64
- name: COMMONAREA_MEDI
dtype: float64
- name: ELEVATORS_MEDI
dtype: float64
- name: ENTRANCES_MEDI
dtype: float64
- name: FLOORSMAX_MEDI
dtype: float64
- name: FLOORSMIN_MEDI
dtype: float64
- name: LANDAREA_MEDI
dtype: float64
- name: LIVINGAPARTMENTS_MEDI
dtype: float64
- name: LIVINGAREA_MEDI
dtype: float64
- name: NONLIVINGAPARTMENTS_MEDI
dtype: float64
- name: NONLIVINGAREA_MEDI
dtype: float64
- name: FONDKAPREMONT_MODE
dtype: string
- name: HOUSETYPE_MODE
dtype: string
- name: TOTALAREA_MODE
dtype: float64
- name: WALLSMATERIAL_MODE
dtype: string
- name: EMERGENCYSTATE_MODE
dtype: string
- name: OBS_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: OBS_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DAYS_LAST_PHONE_CHANGE
dtype: float64
- name: FLAG_DOCUMENT_2
dtype: int64
- name: FLAG_DOCUMENT_3
dtype: int64
- name: FLAG_DOCUMENT_4
dtype: int64
- name: FLAG_DOCUMENT_5
dtype: int64
- name: FLAG_DOCUMENT_6
dtype: int64
- name: FLAG_DOCUMENT_7
dtype: int64
- name: FLAG_DOCUMENT_8
dtype: int64
- name: FLAG_DOCUMENT_9
dtype: int64
- name: FLAG_DOCUMENT_10
dtype: int64
- name: FLAG_DOCUMENT_11
dtype: int64
- name: FLAG_DOCUMENT_12
dtype: int64
- name: FLAG_DOCUMENT_13
dtype: int64
- name: FLAG_DOCUMENT_14
dtype: int64
- name: FLAG_DOCUMENT_15
dtype: int64
- name: FLAG_DOCUMENT_16
dtype: int64
- name: FLAG_DOCUMENT_17
dtype: int64
- name: FLAG_DOCUMENT_18
dtype: int64
- name: FLAG_DOCUMENT_19
dtype: int64
- name: FLAG_DOCUMENT_20
dtype: int64
- name: FLAG_DOCUMENT_21
dtype: int64
- name: AMT_REQ_CREDIT_BUREAU_HOUR
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_DAY
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_WEEK
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_MON
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_QRT
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_YEAR
dtype: float64
splits:
- name: raw
num_bytes: 10681044
num_examples: 10000
download_size: 1985577
dataset_size: 10681044
---
# Dataset Card for "home-credit-example-raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 5,974 | [
[
-0.0211944580078125,
-0.03314208984375,
0.00899505615234375,
0.00927734375,
-0.012603759765625,
0.011505126953125,
0.00809478759765625,
0.0038051605224609375,
0.0299072265625,
0.031707763671875,
-0.048675537109375,
-0.06378173828125,
-0.0142059326171875,
-0.... |
datatab/SrpWikiDataset | 2023-06-03T23:56:04.000Z | [
"task_categories:text-generation",
"language:sr",
"license:apache-2.0",
"region:us"
] | datatab | null | null | 1 | 7 | 2023-06-03T23:20:59 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 468569155
num_examples: 3796604
download_size: 257869459
dataset_size: 468569155
license: apache-2.0
task_categories:
- text-generation
language:
- sr
pretty_name: Serbian Wiki Dataset
---
---
# Dataset Card for "Serbian Wiki Dataset"
---
> **The dataset contains text from Wikipedia articles in Serbian (obtained in early 2020), totaling 477,473 articles, as well as some of WikiSource.**
- The dataset is made up of TXT files.
- [Fixed and used from: **JeRTeh/SrpWiki**](https://huggingface.co/datasets/JeRTeh/SrpWiki) | 636 | [
[
-0.034515380859375,
-0.031524658203125,
0.00807952880859375,
0.0029506683349609375,
-0.040313720703125,
-0.0179595947265625,
-0.00814056396484375,
-0.048187255859375,
0.0511474609375,
0.03472900390625,
-0.06982421875,
-0.0384521484375,
-0.042694091796875,
0.... |
d0rj/hh-rlhf-ru | 2023-06-05T13:53:03.000Z | [
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:Anthropic/hh-rlhf",
"language:ru",
"license:mit",
"human-feedback",
"ChatGPT",
"reward",
"region:us"
] | d0rj | null | null | 2 | 7 | 2023-06-05T13:39:37 | ---
language_creators:
- translated
language:
- ru
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
pretty_name: HH for RLHF (ru)
source_datasets:
- Anthropic/hh-rlhf
license: mit
tags:
- human-feedback
- ChatGPT
- reward
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 573845356.0
num_examples: 160800
- name: test
num_bytes: 30792414.0
num_examples: 8552
download_size: 281014419
dataset_size: 604637770.0
---
# Dataset Card for "hh-rlhf-ru"
This is a translated version of the [Anthropic/hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) into Russian.
| 694 | [
[
0.0013675689697265625,
-0.041656494140625,
0.0013265609741210938,
0.0036983489990234375,
-0.057159423828125,
0.007038116455078125,
0.0114898681640625,
-0.03765869140625,
0.0511474609375,
0.03204345703125,
-0.0723876953125,
-0.0582275390625,
-0.0308380126953125,
... |
atom-in-the-universe/vggsound | 2023-06-06T15:23:29.000Z | [
"region:us"
] | atom-in-the-universe | null | null | 0 | 7 | 2023-06-05T14:17:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ruanchaves/visual7w-gpt | 2023-06-14T15:34:49.000Z | [
"region:us"
] | ruanchaves | null | null | 0 | 7 | 2023-06-06T12:02:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zachgitt/comedy-transcripts | 2023-06-08T21:39:54.000Z | [
"size_categories:n<1K",
"language:en",
"art",
"region:us"
] | zachgitt | null | null | 1 | 7 | 2023-06-08T21:26:43 | ---
language:
- en
tags:
- art
pretty_name: comedy_transcripts
size_categories:
- n<1K
---
### Dataset Summary
This is a dataset of stand-up comedy transcripts. It was scraped from
https://scrapsfromtheloft.com/stand-up-comedy-scripts/ and all terms of use
apply. The transcripts are offered to the public as a contribution to education
and scholarship, and for the private, non-profit use of the academic community. | 419 | [
[
0.011932373046875,
-0.0243988037109375,
0.0152435302734375,
0.018310546875,
-0.01678466796875,
0.00612640380859375,
0.0126190185546875,
0.0189666748046875,
0.07806396484375,
0.056243896484375,
-0.05975341796875,
-0.030487060546875,
-0.04180908203125,
0.01408... |
tathagataraha/ficle | 2023-07-18T11:00:53.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"span",
"explanation",
"arxiv:2306.08872",
"region:us"
] | tathagataraha | null | null | 3 | 7 | 2023-06-11T07:37:34 | ---
dataset_info:
features:
- name: Claim
dtype: string
- name: Context
dtype: string
- name: Source
dtype: string
- name: Source Indices
dtype: string
- name: Relation
dtype: string
- name: Relation Indices
dtype: string
- name: Target
dtype: string
- name: Target Indices
dtype: string
- name: Inconsistent Claim Component
dtype: string
- name: Inconsistent Context-Span
dtype: string
- name: Inconsistent Context-Span Indices
dtype: string
- name: Inconsistency Type
dtype: string
- name: Fine-grained Inconsistent Entity-Type
dtype: string
- name: Coarse Inconsistent Entity-Type
dtype: string
splits:
- name: train
num_bytes: 2657091
num_examples: 6443
- name: validation
num_bytes: 333142
num_examples: 806
- name: test
num_bytes: 332484
num_examples: 806
download_size: 1784422
dataset_size: 3322717
task_categories:
- token-classification
- text-classification
- text-generation
language:
- en
pretty_name: FICLE
size_categories:
- 1K<n<10K
license: gpl-3.0
tags:
- span
- explanation
---
# FICLE Dataset
The dataset can be loaded and utilized through the following:
```python
from datasets import load_dataset
ficle_data = load_dataset("tathagataraha/ficle")
```
# Dataset card for FICLE
## Dataset Description
* **GitHub Repo:** https://github.com/blitzprecision/FICLE
* **Paper:**
* **Point of Contact:**
### Dataset Summary
The FICLE dataset is a derivative of the FEVER dataset, which is a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia.
These claims were then verified without knowledge of the original sentences they were derived from. Each sample in the FEVER dataset consists of a claim sentence, a context sentence extracted from a Wikipedia URL as evidence, and a type label indicating whether the claim is supported, refuted, or lacks sufficient information.
### Languages
The FICLE Dataset contains only English.
## Dataset Structure
### Data Fields
* `Claim (string)`: A statement or proposition relating to the consistency or inconsistency of certain facts or information.
* `Context (string)`: The surrounding information or background against which the claim is being evaluated or compared. It provides additional details or evidence that can support or challenge the claim.
* `Source (string)`: It is the linguistic chunk containing the entity lying to the left of the main verb/relating chunk.
* `Source Indices (string)`: Source indices refer to the specific indices or positions within the source string that indicate the location of the relevant information.
* `Relation (string)`: It is the linguistic chunk containing the verb/relation at the core of the identified inconsistency.
* `Relation Indices (string)`: Relation indices indicate the specific indices or positions within the relation string that highlight the location of the relevant information.
* `Target (string)`: It is the linguistic chunk containing the entity lying to the right of the main verb/relating chunk.
* `Target Indices (string)`: Target indices represent the specific indices or positions within the target string that indicate the location of the relevant information.
* `Inconsistent Claim Component (string)`: The inconsistent claim component refers to a specific linguistic chunk within the claim that is identified as inconsistent with the context. It helps identify which part of the claim triple is problematic in terms of its alignment with the surrounding information.
* `Inconsistent Context-Span (string)`: A span or portion marked within the context sentence that is found to be inconsistent with the claim. It highlights a discrepancy or contradiction between the information in the claim and the corresponding context.
* `Inconsistent Context-Span Indices (string)`: The specific indices or location within the context sentence that indicate the inconsistent span.
* `Inconsistency Type (string)`: The category or type of inconsistency identified in the claim and context.
* `Fine-grained Inconsistent Entity-Type (string)`: The specific detailed category or type of entity causing the inconsistency within the claim or context. It provides a more granular classification of the entity associated with the inconsistency.
* `Coarse Inconsistent Entity-Type (string)`: The broader or general category or type of entity causing the inconsistency within the claim or context. It provides a higher-level classification of the entity associated with the inconsistency.
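Since every span field above is paired with a string-valued index field, resolving a span reduces to slicing the sentence. The exact index format is not documented here, so the sketch below assumes a hypothetical `"start:end"` character-offset string; the helper name and example annotation are illustrative only.

```python
# Minimal sketch of resolving a span field from its index field.
# Assumption: indices are a "start:end" character-offset string into the
# sentence (this format is hypothetical, not confirmed by the card).

def resolve_span(text: str, indices: str) -> str:
    """Slice `text` using a 'start:end' character-index string."""
    start, end = (int(i) for i in indices.split(":"))
    return text[start:end]

context = "Albert Einstein was born in Ulm in 1879."
# Hypothetical annotation pointing at the inconsistent span "Ulm".
span_indices = "28:31"
print(resolve_span(context, span_indices))  # -> Ulm
```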
### Data Splits
The FICLE dataset comprises a total of 8,055 samples in the English language, each representing different instances of inconsistencies.
These inconsistencies are categorized into five types: Taxonomic Relations (4,842 samples), Negation (1,630 samples), Set Based (642 samples), Gradable (526 samples), and Simple (415 samples).
Within the dataset, there are six possible components that contribute to the inconsistencies found in the claim sentences.
These components are distributed as follows: Target-Head (3,960 samples), Target-Modifier (1,529 samples), Relation-Head (951 samples), Relation-Modifier (1,534 samples), Source-Head (45 samples), and Source-Modifier (36 samples).
The dataset is split into `train`, `validation`, and `test`.
* `train`: 6.44k rows
* `validation`: 806 rows
* `test`: 806 rows
## Dataset Creation
### Curation Rationale
We propose a linguistically enriched dataset to help detect inconsistencies and explain them.
To this end, the broad requirements are to locate where the inconsistency is present between a claim and a context and to have a classification scheme for better explainability.
### Data Collection and Preprocessing
The FICLE dataset is derived from the FEVER dataset using the following
processing steps. FEVER (Fact Extraction and VERification) consists of
185,445 claims generated by altering sentences extracted from Wikipedia and
subsequently verified without knowledge of the sentences they were derived from.
Every sample in the FEVER dataset contains the claim sentence, an evidence (or
context) sentence from a Wikipedia URL, and a type label ('supports', 'refutes', or
'not enough info'). Of these, we leverage only the samples with the 'refutes' label
to build our dataset.
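The selection of 'refutes' samples can be sketched in plain Python. The records below are toy stand-ins, not actual FEVER rows, and the label spelling is an assumption:

```python
# Toy stand-ins for FEVER samples: each has a claim and one of the three
# FEVER labels (the label strings here are assumed, not taken from FEVER).
samples = [
    {"claim": "Paris is in France.", "label": "SUPPORTS"},
    {"claim": "Paris is in Spain.", "label": "REFUTES"},
    {"claim": "Paris has nice cafes.", "label": "NOT ENOUGH INFO"},
]

# Keep only the refuted claims, as done when building FICLE.
refuted = [s for s in samples if s["label"] == "REFUTES"]
print(len(refuted))  # -> 1
```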
### Annotations
You can see the annotation guidelines [here](https://github.com/blitzprecision/FICLE/blob/main/ficle_annotation_guidelines.pdf).
In order to provide detailed explanations for inconsistencies, extensive annotations were conducted for each sample in the FICLE dataset. The annotation process involved two iterations, with each iteration focusing on different aspects of the dataset.
In the first iteration, the annotations were primarily "syntactic-oriented." These fields included identifying the inconsistent claim fact triple, marking inconsistent context spans, and categorizing the six possible inconsistent claim components.
The second iteration of annotations concentrated on "semantic-oriented" aspects. Annotators labeled semantic fields for each sample, such as the type of inconsistency, coarse inconsistent entity types, and fine-grained inconsistent entity types.
This stage aimed to capture the semantic nuances and provide a deeper understanding of the inconsistencies present in the dataset.
The annotation process was carried out by a group of four annotators, two of whom are also authors of the dataset. The annotators possess a strong command of the English language and hold Bachelor's degrees in Computer Science, specializing in computational linguistics.
Their expertise in the field ensured accurate and reliable annotations. The annotators' ages range from 20 to 22 years, indicating their familiarity with contemporary language usage and computational linguistic concepts.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Citation Information
```
@misc{raha2023neural,
title={Neural models for Factual Inconsistency Classification with Explanations},
author={Tathagata Raha and Mukund Choudhary and Abhinav Menon and Harshit Gupta and KV Aditya Srivatsa and Manish Gupta and Vasudeva Varma},
year={2023},
eprint={2306.08872},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contact | 8,459 | [
[
-0.023681640625,
-0.070068359375,
0.0035228729248046875,
0.022918701171875,
0.005077362060546875,
-0.007343292236328125,
-0.0200958251953125,
-0.043487548828125,
0.0230865478515625,
0.02435302734375,
-0.02911376953125,
-0.03271484375,
-0.046600341796875,
0.0... |
zachary-shah/musdb18-spec-pix2pix-test | 2023-06-11T15:21:15.000Z | [
"region:us"
] | zachary-shah | null | null | 0 | 7 | 2023-06-11T15:21:14 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 18297334.0
num_examples: 196
download_size: 18266177
dataset_size: 18297334.0
---
# Dataset Card for "musdb18-spec-pix2pix-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 548 | [
[
-0.05712890625,
-0.00872039794921875,
0.0166168212890625,
0.01885986328125,
-0.0184173583984375,
-0.004199981689453125,
0.015899658203125,
-0.011993408203125,
0.048492431640625,
0.0230865478515625,
-0.06463623046875,
-0.0306243896484375,
-0.03936767578125,
-... |
mattymchen/refinedweb-3m | 2023-06-12T06:01:04.000Z | [
"region:us"
] | mattymchen | null | null | 2 | 7 | 2023-06-12T05:58:49 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7834920949
num_examples: 3000000
download_size: 4904877808
dataset_size: 7834920949
---
# Dataset Card for "refinedweb-3m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 367 | [
[
-0.05035400390625,
-0.0219879150390625,
0.014404296875,
0.0200653076171875,
-0.01532745361328125,
-0.0043182373046875,
0.0185089111328125,
-0.02227783203125,
0.044464111328125,
0.045135498046875,
-0.056854248046875,
-0.06292724609375,
-0.029205322265625,
-0.... |
RikRaes/common_voice_13_0_validated | 2023-06-13T13:38:37.000Z | [
"region:us"
] | RikRaes | null | null | 0 | 7 | 2023-06-13T09:24:24 | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accents
dtype: string
- name: variant
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: validated
num_bytes: 3134671952.746
num_examples: 86798
download_size: 2624065513
dataset_size: 3134671952.746
---
# Dataset Card for "common_voice_13_0_validated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 785 | [
[
-0.0377197265625,
-0.024322509765625,
0.006839752197265625,
0.0333251953125,
-0.0184478759765625,
-0.00882720947265625,
-0.004528045654296875,
-0.00836181640625,
0.04156494140625,
0.0361328125,
-0.06689453125,
-0.061248779296875,
-0.032806396484375,
0.001434... |
asoria/nell | 2023-06-14T14:41:25.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",... | asoria | This dataset provides version 1115 of the belief
extracted by CMU's Never Ending Language Learner (NELL) and version
1110 of the candidate belief extracted by NELL. See
http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information
extraction system that attempts to read the Clueweb09 of 500 million
web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/) and general
web searches.
The dataset has 4 configurations: nell_belief, nell_candidate,
nell_belief_sentences, and nell_candidate_sentences. nell_belief is
certainties of belief are lower. The two sentences config extracts the
CPL sentence patterns filled with the applicable 'best' literal string
for the entities filled into the sentence patterns. And also provides
sentences found using web searches containing the entities and
relationships.
There are roughly 21M entries for nell_belief_sentences, and 100M
sentences for nell_candidate_sentences. | @inproceedings{mitchell2015,
added-at = {2015-01-27T15:35:24.000+0100},
author = {Mitchell, T. and Cohen, W. and Hruscha, E. and Talukdar, P. and Betteridge, J. and Carlson, A. and Dalvi, B. and Gardner, M. and Kisiel, B. and Krishnamurthy, J. and Lao, N. and Mazaitis, K. and Mohammad, T. and Nakashole, N. and Platanios, E. and Ritter, A. and Samadi, M. and Settles, B. and Wang, R. and Wijaya, D. and Gupta, A. and Chen, X. and Saparov, A. and Greaves, M. and Welling, J.},
biburl = {https://www.bibsonomy.org/bibtex/263070703e6bb812852cca56574aed093/hotho},
booktitle = {AAAI},
description = {Papers by William W. Cohen},
interhash = {52d0d71f6f5b332dabc1412f18e3a93d},
intrahash = {63070703e6bb812852cca56574aed093},
keywords = {learning nell ontology semantic toread},
note = {: Never-Ending Learning in AAAI-2015},
timestamp = {2015-01-27T15:35:24.000+0100},
title = {Never-Ending Learning},
url = {http://www.cs.cmu.edu/~wcohen/pubs.html},
year = 2015
} | 2 | 7 | 2023-06-14T14:41:01 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
- fact-checking-retrieval
paperswithcode_id: nell
pretty_name: Never Ending Language Learning (NELL)
tags:
- relation-extraction
- text-to-structured
- text-to-tabular
dataset_info:
- config_name: nell_belief
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: iteration_of_promotion
dtype: string
- name: score
dtype: string
- name: source
dtype: string
- name: entity_literal_strings
dtype: string
- name: value_literal_strings
dtype: string
- name: best_entity_literal_string
dtype: string
- name: best_value_literal_string
dtype: string
- name: categories_for_entity
dtype: string
- name: categories_for_value
dtype: string
- name: candidate_source
dtype: string
splits:
- name: train
num_bytes: 4592559704
num_examples: 2766079
download_size: 929107246
dataset_size: 4592559704
- config_name: nell_candidate
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: iteration_of_promotion
dtype: string
- name: score
dtype: string
- name: source
dtype: string
- name: entity_literal_strings
dtype: string
- name: value_literal_strings
dtype: string
- name: best_entity_literal_string
dtype: string
- name: best_value_literal_string
dtype: string
- name: categories_for_entity
dtype: string
- name: categories_for_value
dtype: string
- name: candidate_source
dtype: string
splits:
- name: train
num_bytes: 23497433060
num_examples: 32687353
download_size: 2687057812
dataset_size: 23497433060
- config_name: nell_belief_sentences
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: score
dtype: string
- name: sentence
dtype: string
- name: count
dtype: int32
- name: url
dtype: string
- name: sentence_type
dtype: string
splits:
- name: train
num_bytes: 4459368426
num_examples: 21031531
download_size: 929107246
dataset_size: 4459368426
- config_name: nell_candidate_sentences
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: score
dtype: string
- name: sentence
dtype: string
- name: count
dtype: int32
- name: url
dtype: string
- name: sentence_type
dtype: string
splits:
- name: train
num_bytes: 20058197787
num_examples: 100866414
download_size: 2687057812
dataset_size: 20058197787
config_names:
- nell_belief
- nell_belief_sentences
- nell_candidate
- nell_candidate_sentences
---
# Dataset Card for Never Ending Language Learning (NELL)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://rtw.ml.cmu.edu/rtw/
- **Repository:**
http://rtw.ml.cmu.edu/rtw/
- **Paper:**
Never-Ending Learning.
T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, J. Welling. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2015
### Dataset Summary
This dataset provides version 1115 of the beliefs
extracted by CMU's Never Ending Language Learner (NELL) and version
1110 of the candidate beliefs extracted by NELL. See
http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information
extraction system that attempts to read the Clueweb09 collection of
500 million web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/)
as well as general web searches.
The dataset has 4 configurations: nell_belief, nell_candidate,
nell_belief_sentences, and nell_candidate_sentences. nell_belief
contains the beliefs NELL has promoted as true, while nell_candidate
contains candidate beliefs whose certainties are lower. The two
sentences configurations extract the CPL sentence patterns filled with
the applicable 'best' literal string for the entities filled into the
sentence patterns, and also provide sentences found via web searches
containing the entities and relationships.
There are roughly 21M entries in nell_belief_sentences and 100M
sentences in nell_candidate_sentences.
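As a rough sketch, a configuration can be validated by name before loading (the dataset id `nell` below is an assumption; adjust it to wherever this card is hosted):

```python
# The four configurations described above.
NELL_CONFIGS = [
    "nell_belief",
    "nell_candidate",
    "nell_belief_sentences",
    "nell_candidate_sentences",
]

def check_config(name):
    """Validate a configuration name before passing it to load_dataset."""
    if name not in NELL_CONFIGS:
        raise ValueError(f"unknown NELL configuration: {name!r}")
    return name

# Hypothetical usage (requires the `datasets` library and network access):
# from datasets import load_dataset
# ds = load_dataset("nell", check_config("nell_belief_sentences"), split="train")

print(check_config("nell_belief_sentences"))
```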
From the NELL website:
- **Research Goal**
To build a never-ending machine learning system that acquires the ability to extract structured information from unstructured web pages. If successful, this will result in a knowledge base (i.e., a relational database) of structured information that mirrors the content of the Web. We call this system NELL (Never-Ending Language Learner).
- **Approach**
The inputs to NELL include (1) an initial ontology defining hundreds of categories (e.g., person, sportsTeam, fruit, emotion) and relations (e.g., playsOnTeam(athlete,sportsTeam), playsInstrument(musician,instrument)) that NELL is expected to read about, and (2) 10 to 15 seed examples of each category and relation.
Given these inputs, plus a collection of 500 million web pages and access to the remainder of the web through search engine APIs, NELL runs 24 hours per day, continuously, to perform two ongoing tasks:
Extract new instances of categories and relations. In other words, find noun phrases that represent new examples of the input categories (e.g., "Barack Obama" is a person and politician), and find pairs of noun phrases that correspond to instances of the input relations (e.g., the pair "Jason Giambi" and "Yankees" is an instance of the playsOnTeam relation). These new instances are added to the growing knowledge base of structured beliefs.
Learn to read better than yesterday. NELL uses a variety of methods to extract beliefs from the web. These are retrained, using the growing knowledge base as a self-supervised collection of training examples. The result is a semi-supervised learning method that couples the training of hundreds of different extraction methods for a wide range of categories and relations. Much of NELL’s current success is due to its algorithm for coupling the simultaneous training of many extraction methods.
For more information, see: http://rtw.ml.cmu.edu/rtw/resources
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en, and perhaps some others
## Dataset Structure
### Data Instances
There are four configurations for the dataset: nell_belief, nell_candidate, nell_belief_sentences, nell_candidate_sentences.
nell_belief and nell_candidate defines:
```
{'best_entity_literal_string': 'Aspect Medical Systems',
'best_value_literal_string': '',
'candidate_source': '%5BSEAL-Iter%3A215-2011%2F02%2F26-04%3A27%3A09-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-From%3ACategory%3Abiotechcompany-using-KB+http%3A%2F%2Fwww.unionegroup.com%2Fhealthcare%2Fmfg_info.htm+http%3A%2F%2Fwww.conventionspc.com%2Fcompanies.html%2C+CPL-Iter%3A1103-2018%2F03%2F08-15%3A32%3A34-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-grant+support+from+_%092%09research+support+from+_%094%09unrestricted+educational+grant+from+_%092%09educational+grant+from+_%092%09research+grant+support+from+_%091%09various+financial+management+positions+at+_%091%5D',
'categories_for_entity': 'concept:biotechcompany',
'categories_for_value': 'concept:company',
'entity': 'concept:biotechcompany:aspect_medical_systems',
'entity_literal_strings': '"Aspect Medical Systems" "aspect medical systems"',
'iteration_of_promotion': '1103',
'relation': 'generalizations',
'score': '0.9244426550775064',
'source': 'MBL-Iter%3A1103-2018%2F03%2F18-01%3A35%3A42-From+ErrorBasedIntegrator+%28SEAL%28aspect_medical_systems%2Cbiotechcompany%29%2C+CPL%28aspect_medical_systems%2Cbiotechcompany%29%29',
'value': 'concept:biotechcompany',
'value_literal_strings': ''}
```
nell_belief_sentences, nell_candidate_sentences defines:
```
{'count': 4,
'entity': 'biotechcompany:aspect_medical_systems',
'relation': 'generalizations',
'score': '0.9244426550775064',
'sentence': 'research support from [[ Aspect Medical Systems ]]',
'sentence_type': 'CPL',
'url': '',
'value': 'biotechcompany'}
```
### Data Fields
For the nell_belief and nell_candidate configurations (field descriptions from http://rtw.ml.cmu.edu/rtw/faq):
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept; it is not the literal string of characters seen by NELL in some text source, nor does it indicate the category membership of that concept
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* iteration_of_promotion: The point in NELL's life at which this category or relation instance was promoted to one that NELL believes to be true. This is a non-negative integer indicating the number of iterations of bootstrapping NELL had gone through.
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* source: A summary of the provenance for the belief indicating the set of learning subcomponents (CPL, SEAL, etc.) that had submitted this belief as being potentially true.
* entity_literal_strings: The set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Entity column.
* value_literal_strings: For relations, the set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Value column. For categories, this should be empty but may contain something spurious.
* best_entity_literal_string: Of the set of strings in the entity_literal_strings column, the one string that best describes the concept.
* best_value_literal_string: The same, but for value_literal_strings.
* categories_for_entity: The full set of categories (which may be empty) to which NELL believes the concept indicated in the Entity column to belong.
* categories_for_value: For relations, the full set of categories (which may be empty) to which NELL believes the concept indicated in the Value column to belong. For categories, this should be empty but may contain something spurious.
* candidate_source: A free-form amalgamation of more specific provenance information describing the justification(s) NELL has for possibly believing this category or relation instance.
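The `source` and `candidate_source` fields are percent-encoded, as in the example instance above. A minimal sketch of decoding them with the standard library (the string below is a shortened copy of the example's `source` value):

```python
from urllib.parse import unquote_plus

source = ("MBL-Iter%3A1103-2018%2F03%2F18-01%3A35%3A42-From+ErrorBasedIntegrator"
          "+%28SEAL%28aspect_medical_systems%2Cbiotechcompany%29%2C+CPL"
          "%28aspect_medical_systems%2Cbiotechcompany%29%29")

# unquote_plus turns %XX escapes back into characters and '+' into spaces.
decoded = unquote_plus(source)
print(decoded)
```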
For the nell_belief_sentences and nell_candidate_sentences, we have extracted the underlying sentences, sentence count and URLs and provided a shortened version of the entity, relation and value field by removing the string "concept:" and "candidate:". There are two types of sentences, 'CPL' and 'OE', which are generated by two of the modules of NELL, pattern matching and open web searching, respectively. There may be duplicates. The configuration is as follows:
* entity: The Entity part of the (Entity, Relation, Value) tripple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept
* relation: The Relation part of the (Entity, Relation, Value) tripple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) tripple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* sentence: the raw sentence. For 'CPL'-type sentences, the entity and value are surrounded by "[[" and "]]"; 'OE'-type sentences have no such markers.
* url: the URL, if any, from which this sentence was extracted
* count: the count for this sentence
* sentence_type: either 'CPL' or 'OE'
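For 'CPL' sentences, the bracketed spans can be pulled out with a small regular expression; a sketch (not part of the dataset's own tooling):

```python
import re

# "[[ ... ]]" marks the entity/value spans in CPL-type sentences.
SPAN = re.compile(r"\[\[\s*(.*?)\s*\]\]")

def extract_spans(sentence):
    """Return the marked spans and the sentence with the markers removed."""
    spans = SPAN.findall(sentence)
    plain = SPAN.sub(lambda m: m.group(1), sentence)
    return spans, plain

spans, plain = extract_spans("research support from [[ Aspect Medical Systems ]]")
print(spans, "|", plain)
```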
### Data Splits
There are no splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created over many years of running the NELL system on web data.
### Source Data
#### Initial Data Collection and Normalization
See the research paper on NELL. NELL searches a subset of the web
(Clueweb09) and the open web using various open information extraction
algorithms, including pattern matching.
#### Who are the source language producers?
The NELL authors at Carnegie Mellon University, plus data from Clueweb09 and the open web.
### Annotations
#### Annotation process
The various open information extraction modules of NELL.
#### Who are the annotators?
Machine annotated.
### Personal and Sensitive Information
Unknown, but the data likely contains names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines learn to read and understand the web.
### Discussion of Biases
Since the data is gathered from the web, there is likely to be biased text and relationships.
[More Information Needed]
### Other Known Limitations
The relationships and concepts gathered by NELL are not 100% accurate, and there may be errors (perhaps as high as a 30% error rate).
See https://en.wikipedia.org/wiki/Never-Ending_Language_Learning
We did not 'tag' the entity and value in the 'OE' sentences, and this might be an extension in the future.
## Additional Information
### Dataset Curators
The authors of NELL at Carnegie Mellon University
### Licensing Information
There does not appear to be a license on http://rtw.ml.cmu.edu/rtw/resources. The data is made available by CMU on the web.
### Citation Information
```
@inproceedings{mitchell2015,
  added-at = {2015-01-27T15:35:24.000+0100},
  author = {Mitchell, T. and Cohen, W. and Hruscha, E. and Talukdar, P. and Betteridge, J. and Carlson, A. and Dalvi, B. and Gardner, M. and Kisiel, B. and Krishnamurthy, J. and Lao, N. and Mazaitis, K. and Mohammad, T. and Nakashole, N. and Platanios, E. and Ritter, A. and Samadi, M. and Settles, B. and Wang, R. and Wijaya, D. and Gupta, A. and Chen, X. and Saparov, A. and Greaves, M. and Welling, J.},
  biburl = {https://www.bibsonomy.org/bibtex/263070703e6bb812852cca56574aed093/hotho},
  booktitle = {AAAI},
  description = {Papers by William W. Cohen},
  interhash = {52d0d71f6f5b332dabc1412f18e3a93d},
  intrahash = {63070703e6bb812852cca56574aed093},
  keywords = {learning nell ontology semantic toread},
  note = {: Never-Ending Learning in AAAI-2015},
  timestamp = {2015-01-27T15:35:24.000+0100},
  title = {Never-Ending Learning},
  url = {http://www.cs.cmu.edu/~wcohen/pubs.html},
  year = 2015
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. | 16,347 | [
[
-0.00829315185546875,
-0.053314208984375,
0.031341552734375,
0.0013093948364257812,
-0.0006213188171386719,
-0.003692626953125,
-0.0029296875,
-0.01387786865234375,
0.0250396728515625,
0.014495849609375,
-0.039642333984375,
-0.078125,
-0.03814697265625,
0.01... |
HachiML/databricks-dolly-15k-ja-alpaca-format | 2023-08-13T01:22:14.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | HachiML | null | null | 0 | 7 | 2023-06-15T11:28:56 | ---
license: cc-by-sa-3.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 17831534
num_examples: 15015
download_size: 9365745
dataset_size: 17831534
---
This dataset converts "databricks-dolly-15k-ja" — which was created by automatically translating "databricks-dolly-15k" into Japanese — into an input-and-output (Alpaca) format.
This dataset is licensed under CC BY SA 3.0
Last Update : 2023-06-15
databricks-dolly-15k-ja
https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k
https://github.com/databrickslabs/dolly/tree/master/data | 772 | [
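As a sketch of consuming the input/output format for causal-LM fine-tuning, the two fields can simply be concatenated (the sample record below is illustrative, not taken from the dataset):

```python
def to_training_text(record):
    """Concatenate the prompt ('input') and completion ('output') fields."""
    return record["input"] + record["output"]

example = {
    "input": "### Instruction:\nExplain X\n\n### Response:\n",
    "output": "X is ...",
}
print(to_training_text(example))
```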
[
-0.01247406005859375,
-0.041748046875,
0.0024127960205078125,
0.03790283203125,
-0.03240966796875,
0.004940032958984375,
0.0183563232421875,
0.001506805419921875,
0.029571533203125,
0.058990478515625,
-0.08514404296875,
-0.0384521484375,
-0.037322998046875,
... |
alxfgh/ChEMBL_Drug_Instruction_Tuning | 2023-06-24T03:22:42.000Z | [
"task_categories:question-answering",
"language:en",
"region:us"
] | alxfgh | null | null | 1 | 7 | 2023-06-15T19:46:49 | ---
task_categories:
- question-answering
language:
- en
pretty_name: ChEMBL Drug Instruction Tuning
---
# Dataset Card for ChEMBL Drug Instruction Tuning
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,654 | [
[
-0.01462554931640625,
-0.036041259765625,
0.00847625732421875,
0.007808685302734375,
-0.02294921875,
0.0155792236328125,
-0.00206756591796875,
-0.0110626220703125,
0.031463623046875,
0.048858642578125,
-0.07098388671875,
-0.09002685546875,
-0.0465087890625,
... |
ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split | 2023-06-17T21:33:36.000Z | [
"region:us"
] | ehartford | null | null | 20 | 7 | 2023-06-17T18:55:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dmayhem93/agieval-gaokao-biology | 2023-06-18T17:16:57.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 7 | 2023-06-18T12:47:19 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 159178
num_examples: 210
download_size: 94276
dataset_size: 159178
license: mit
---
# Dataset Card for "agieval-gaokao-biology"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,837 | [
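Each record pairs a `query` with a list of `choices` and one or more `gold` indices into that list; a sketch of recovering the gold answer text (the record below is illustrative, not taken from the dataset):

```python
def gold_answers(record):
    """Map the gold indices to their answer strings."""
    return [record["choices"][i] for i in record["gold"]]

record = {
    "query": "Which organelle carries out photosynthesis?",
    "choices": ["(A) mitochondrion", "(B) chloroplast", "(C) ribosome", "(D) nucleus"],
    "gold": [1],
}
print(gold_answers(record))
```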
[
-0.01141357421875,
-0.0406494140625,
0.006015777587890625,
0.0193939208984375,
-0.023712158203125,
-0.01125335693359375,
0.01215362548828125,
-0.0311126708984375,
0.0111083984375,
0.0289154052734375,
-0.04400634765625,
-0.04254150390625,
-0.04052734375,
0.01... |
dmayhem93/agieval-gaokao-geography | 2023-06-18T17:19:48.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 7 | 2023-06-18T12:48:09 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 116612
num_examples: 199
download_size: 52868
dataset_size: 116612
license: mit
---
# Dataset Card for "agieval-gaokao-geography"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,839 | [
[
-0.0301055908203125,
-0.035064697265625,
0.018218994140625,
0.0286407470703125,
-0.024200439453125,
-0.0159454345703125,
-0.00023174285888671875,
-0.0272216796875,
0.004726409912109375,
0.047454833984375,
-0.04290771484375,
-0.05499267578125,
-0.04315185546875,
... |
dmayhem93/agieval-gaokao-history | 2023-06-18T17:20:33.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 7 | 2023-06-18T12:48:28 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 120008
num_examples: 235
download_size: 78981
dataset_size: 120008
license: mit
---
# Dataset Card for "agieval-gaokao-history"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,837 | [
[
-0.0193939208984375,
-0.042327880859375,
0.00948333740234375,
0.016326904296875,
-0.023681640625,
-0.01166534423828125,
0.0161590576171875,
-0.0267791748046875,
-0.0013399124145507812,
0.041290283203125,
-0.0513916015625,
-0.04010009765625,
-0.035430908203125,
... |
dmayhem93/agieval-gaokao-mathqa | 2023-06-18T17:21:09.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 0 | 7 | 2023-06-18T12:48:39 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 140041
num_examples: 351
download_size: 62472
dataset_size: 140041
license: mit
---
# Dataset Card for "agieval-gaokao-mathqa"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,836 | [
[
-0.019317626953125,
-0.039031982421875,
-0.005504608154296875,
0.0247039794921875,
-0.0184478759765625,
-0.006694793701171875,
0.0177764892578125,
-0.0245819091796875,
-0.002079010009765625,
0.035614013671875,
-0.053375244140625,
-0.036773681640625,
-0.035949707... |
anyspeech/doreco | 2023-06-20T23:15:40.000Z | [
"region:us"
] | anyspeech | null | null | 0 | 7 | 2023-06-20T22:54:21 | ---
dataset_info:
features:
- name: words
sequence: string
- name: phones
sequence: string
- name: filename
dtype: string
- name: language
dtype: string
- name: audio
dtype: audio
splits:
- name: nort2641
num_bytes: 62230893.0
num_examples: 442
- name: ana1239
num_bytes: 42852964.0
num_examples: 442
- name: apah1238
num_bytes: 4893478.0
num_examples: 64
- name: arap1274
num_bytes: 100238950.0
num_examples: 783
- name: bain1259
num_bytes: 46462067.0
num_examples: 463
- name: beja1238
num_bytes: 41707331.0
num_examples: 559
- name: bora1263
num_bytes: 39668897.0
num_examples: 281
- name: cabe1245
num_bytes: 43389700.0
num_examples: 375
- name: cash1254
num_bytes: 84443120.0
num_examples: 736
- name: daaki_port1286
num_bytes: 25799472.0
num_examples: 250
- name: dolg1241
num_bytes: 79383187.0
num_examples: 584
- name: even1259
num_bytes: 76546038.0
num_examples: 614
- name: goro1270
num_bytes: 32647382.0
num_examples: 345
- name: jeha1242
num_bytes: 52285285.0
num_examples: 451
- name: jeju1234
num_bytes: 2911567.0
num_examples: 35
- name: kaka1265
num_bytes: 68118487.0
num_examples: 513
- name: kama1351
num_bytes: 96608483.0
num_examples: 837
- name: komn1238
num_bytes: 31373903.0
num_examples: 377
- name: light_warlpiri_ligh1234
num_bytes: 53717542.0
num_examples: 482
- name: mojeno_trinitario_trin178
num_bytes: 74795313.0
num_examples: 412
- name: ngal1292
num_bytes: 9478115.0
num_examples: 120
- name: nisv1234
num_bytes: 65014890.0
num_examples: 651
- name: nngg1234
num_bytes: 14217341.0
num_examples: 166
- name: nort2875
num_bytes: 60030363.0
num_examples: 672
- name: north_alta_nort2875
num_bytes: 60031531.0
num_examples: 672
- name: orko1234
num_bytes: 24863337.0
num_examples: 276
- name: pnar1238
num_bytes: 33981487.0
num_examples: 128
- name: resi1247
num_bytes: 146842670.0
num_examples: 840
- name: ruul1235
num_bytes: 37365906.0
num_examples: 372
- name: sadu1234
num_bytes: 17483638.0
num_examples: 198
- name: sanz1248
num_bytes: 20058675.0
num_examples: 129
- name: savo1255
num_bytes: 94909030.0
num_examples: 572
- name: sout2856
num_bytes: 41663125.0
num_examples: 213
- name: stan1290
num_bytes: 36355445.0
num_examples: 411
- name: sumi1235
num_bytes: 16441364.0
num_examples: 187
- name: svan1243
num_bytes: 64642203.0
num_examples: 423
- name: taba1259
num_bytes: 24147643.0
num_examples: 181
- name: teop1238
num_bytes: 75408373.0
num_examples: 795
- name: texi1237
num_bytes: 9029606.0
num_examples: 106
- name: tsim1256
num_bytes: 32547837.0
num_examples: 361
- name: urum1249
num_bytes: 42911916.0
num_examples: 289
- name: vera1241
num_bytes: 66218151.0
num_examples: 582
- name: warl1254
num_bytes: 108201039.0
num_examples: 926
- name: yong1270
num_bytes: 34901432.0
num_examples: 257
- name: yuca1254
num_bytes: 39947340.0
num_examples: 360
download_size: 2225663503
dataset_size: 2236766516.0
---
# Dataset Card for "doreco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,503 | [
[
-0.0307159423828125,
-0.00656890869140625,
0.019012451171875,
0.01316070556640625,
-0.00879669189453125,
0.0006833076477050781,
0.0147857666015625,
-0.02001953125,
0.0577392578125,
0.043060302734375,
-0.050506591796875,
-0.04754638671875,
-0.042694091796875,
... |
richardr1126/spider-context-instruct | 2023-07-18T17:47:59.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"text-to-sql",
"SQL",
"Spider",
"fine-tune",
"region:us"
] | richardr1126 | null | null | 1 | 7 | 2023-06-21T04:01:28 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
pretty_name: Spider Context Instruct
tags:
- text-to-sql
- SQL
- Spider
- fine-tune
dataset_info:
features:
- name: db_id
dtype: string
- name: text
dtype: string
---
# Dataset Card for Spider Context Instruct
### Dataset Summary
Spider is a large-scale, complex, and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to fine-tune LLMs in a `### Instruction:` and `### Response:` format with database context.
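A sketch of splitting a record's text back into its instruction and response halves (the delimiter strings come from the format described above; the sample string is illustrative):

```python
def split_example(text):
    """Split a '### Instruction:' / '### Response:' formatted string."""
    instruction, _, response = text.partition("### Response:")
    instruction = instruction.replace("### Instruction:", "", 1).strip()
    return instruction, response.strip()

sample = ("### Instruction:\nCREATE TABLE singer(id int, name text)\n"
          "How many singers are there?\n\n"
          "### Response:\nSELECT count(*) FROM singer")
inst, resp = split_example(sample)
print(inst)
print(resp)
```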
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
``` | 1,368 | [
[
-0.002147674560546875,
-0.034637451171875,
0.0184326171875,
0.005168914794921875,
-0.0135498046875,
0.008880615234375,
-0.00457763671875,
-0.0330810546875,
0.0321044921875,
0.0209808349609375,
-0.048553466796875,
-0.06158447265625,
-0.037353515625,
0.0376892... |
Patt/HellaSwag_TH_drop | 2023-07-20T15:26:47.000Z | [
"language:th",
"language:en",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | 0 | 7 | 2023-06-22T09:10:40 | ---
language:
- th
- en
---
# Dataset Card for HellaSwag_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [hellaswag](https://huggingface.co/datasets/hellaswag), produced with Google Translate and scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to estimate the quality of each Thai translation.
The score was penalized by the length of the original text relative to the translated text; rows where any score was < 0.5 were dropped.
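The exact penalty is not spelled out in this card; one plausible sketch — encoder similarity scaled down by the length ratio between the original and translated text — is:

```python
def penalized_score(similarity, len_original, len_translated):
    """Scale an encoder similarity by how far the two lengths diverge.

    This is an illustrative guess at the penalty, not the card authors' code.
    """
    ratio = min(len_original, len_translated) / max(len_original, len_translated)
    return similarity * ratio

# A heavily truncated translation is pushed below the 0.5 cutoff:
print(penalized_score(0.9, 100, 100))  # no penalty
print(penalized_score(0.9, 100, 30))   # strong penalty
```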
### Languages
- EN
- TH | 485 | [
[
-0.026458740234375,
-0.040008544921875,
0.009246826171875,
0.0225067138671875,
-0.059844970703125,
-0.00785064697265625,
-0.0299072265625,
-0.01361083984375,
0.02099609375,
0.047607421875,
-0.06524658203125,
-0.0777587890625,
-0.05255126953125,
0.02789306640... |
Patt/MultiRC_TH_drop | 2023-07-20T15:26:22.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | 0 | 7 | 2023-06-22T13:20:37 | ---
task_categories:
- text-classification
language:
- en
- th
dataset_info:
features:
- name: paragraph
dtype: string
- name: paragraph_TH
dtype: string
- name: question
dtype: string
- name: question_TH
dtype: string
- name: answer
dtype: string
- name: answer_TH
dtype: string
- name: idx
struct:
- name: answer
dtype: int64
- name: paragraph
dtype: int64
- name: question
dtype: int64
- name: label
dtype: int64
- name: score_paragraph
dtype: float64
- name: score_question
dtype: float64
- name: score_answer
dtype: float64
splits:
- name: train
num_bytes: 133061823
num_examples: 23520
- name: validation
num_bytes: 22534453
num_examples: 4212
- name: test
num_bytes: 42757726
num_examples: 8272
download_size: 5756232
dataset_size: 198354002
---
# Dataset Card for MultiRC_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [multirc](https://huggingface.co/datasets/super_glue/viewer/multirc), produced with Google Translate and scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to estimate the quality of each Thai translation.
The score was penalized by the length of the original text relative to the translated text; rows where any score was < 0.66 were dropped.
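A sketch of the drop rule as described — keep a row only if all three scores clear the threshold (plain Python over illustrative rows, not the authors' pipeline):

```python
THRESHOLD = 0.66

def keep(row):
    """True if none of the three translation scores fall below the threshold."""
    scores = (row["score_paragraph"], row["score_question"], row["score_answer"])
    return min(scores) >= THRESHOLD

rows = [
    {"score_paragraph": 0.91, "score_question": 0.88, "score_answer": 0.72},
    {"score_paragraph": 0.91, "score_question": 0.40, "score_answer": 0.72},
]
kept = [r for r in rows if keep(r)]
print(len(kept))  # the second row is dropped
```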
[
-0.041534423828125,
-0.04119873046875,
0.0012340545654296875,
0.0154876708984375,
-0.038665771484375,
0.0098114013671875,
-0.035247802734375,
-0.0115509033203125,
0.033599853515625,
0.034088134765625,
-0.06463623046875,
-0.055328369140625,
-0.041961669921875,
... |
Falah/skin-cancer | 2023-07-02T12:41:06.000Z | [
"region:us"
] | Falah | null | null | 0 | 7 | 2023-06-24T14:29:49 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': benign
'1': malignant
splits:
- name: train
num_bytes: 146274097.953
num_examples: 2637
download_size: 136183890
dataset_size: 146274097.953
---
# Skin Cancer Dataset
This dataset contains skin cancer images labeled as benign (class 0) or malignant (class 1). It can be used for various skin cancer classification tasks, such as training image recognition, machine learning, and deep learning models.
## Class Labels
The dataset consists of two class labels:
- Class 0: Benign
- Class 1: Malignant
## Number of Rows
The dataset contains 2,637 rows, each corresponding to a unique skin cancer image.
## Usage
To load this dataset using the Hugging Face library, you can utilize the `load_dataset` function as follows:
```python
from datasets import load_dataset
dataset = load_dataset("Falah/skin-cancer", split="train")
```
This code will load the dataset with the training split and return an object that allows you to access the dataset's features, labels, and other relevant information.
Example code to access the dataset and obtain the class names:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Falah/skin-cancer", split="train")

# Access the class names (the "label" feature is a ClassLabel)
class_names = dataset.features["label"].names

# Print each class code with its name
for code, name in enumerate(class_names):
    print(f"'{code}': {name}")
```
The above code will print the class names along with their corresponding codes, as specified in the dataset.
Please note that you need to have the Hugging Face `datasets` library installed (`pip install datasets`) in order to use the `load_dataset` function.
## License
The dataset is provided under an unspecified license. Please refer to the dataset source or contact the dataset owner, Falah, for more information about the licensing details.
## Citation
If you use this dataset in your work or research, please consider citing it as:
```
@misc{Falah/skin-cancer,
title={Skin Cancer Dataset},
author={Falah},
year={2023},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/datasets/Falah/skin-cancer}}
}
```
| 2,241 | [
[
-0.0093841552734375,
-0.0293426513671875,
-0.00942230224609375,
0.00937652587890625,
-0.006191253662109375,
-0.01059722900390625,
0.00983428955078125,
-0.0260009765625,
0.016265869140625,
0.043212890625,
-0.033782958984375,
-0.06561279296875,
-0.04052734375,
... |
FreedomIntelligence/alpaca-gpt4-french | 2023-08-06T08:09:08.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | 0 | 7 | 2023-06-26T08:17:53 | ---
license: apache-2.0
---
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 152 | [
[
-0.0284271240234375,
-0.0214385986328125,
-0.000301361083984375,
0.01971435546875,
-0.004512786865234375,
0.004093170166015625,
-0.0194091796875,
-0.0303192138671875,
0.0289154052734375,
0.033966064453125,
-0.0643310546875,
-0.032958984375,
-0.012969970703125,
... |
FreedomIntelligence/alpaca-gpt4-spanish | 2023-08-06T08:11:10.000Z | [
"region:us"
] | FreedomIntelligence | null | null | 2 | 7 | 2023-06-26T08:19:08 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 124 | [
[
-0.0284271240234375,
-0.0214385986328125,
-0.000301361083984375,
0.01971435546875,
-0.004512786865234375,
0.004093170166015625,
-0.0194091796875,
-0.0303192138671875,
0.0289154052734375,
0.033966064453125,
-0.0643310546875,
-0.032958984375,
-0.012969970703125,
... |
notrichardren/elem_tf | 2023-06-28T12:58:02.000Z | [
"region:us"
] | notrichardren | null | null | 0 | 7 | 2023-06-27T18:40:23 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Topic
dtype: string
- name: Question
dtype: string
- name: Correct
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 229401
num_examples: 2310
download_size: 102669
dataset_size: 229401
---
# Dataset Card for "elem_tf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 506 | [
[
-0.03564453125,
-0.0345458984375,
0.0102081298828125,
0.0142974853515625,
-0.0197906494140625,
0.00864410400390625,
0.0124359130859375,
-0.016845703125,
0.0670166015625,
0.03216552734375,
-0.0609130859375,
-0.07598876953125,
-0.044677734375,
-0.009765625,
... |
aisyahhrazak/ms-rotikaya | 2023-06-29T03:54:09.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | 0 | 7 | 2023-06-27T21:33:42 | ---
language:
- ms
---
Roti Kaya articles scraped on 27.6.2023 | 63 | [
[
-0.01161956787109375,
-0.034027099609375,
0.00940704345703125,
0.05767822265625,
-0.041351318359375,
-0.014190673828125,
0.044219970703125,
-0.054718017578125,
0.06494140625,
0.07244873046875,
-0.0296630859375,
-0.002750396728515625,
-0.03192138671875,
0.016... |
Ibrahim-Alam/Tweet_Sentiment_pos_neg | 2023-06-29T03:19:58.000Z | [
"region:us"
] | Ibrahim-Alam | null | null | 0 | 7 | 2023-06-29T03:19:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
FreedomIntelligence/evol-instruct-arabic | 2023-08-06T08:11:34.000Z | [
"region:us"
] | FreedomIntelligence | null | null | 1 | 7 | 2023-06-30T03:42:46 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 124 | [
[
-0.0284271240234375,
-0.0214385986328125,
-0.000301361083984375,
0.01971435546875,
-0.004512786865234375,
0.004093170166015625,
-0.0194091796875,
-0.0303192138671875,
0.0289154052734375,
0.033966064453125,
-0.0643310546875,
-0.032958984375,
-0.012969970703125,
... |
aisyahhrazak/ms-majalahsains | 2023-07-03T00:47:04.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | 0 | 7 | 2023-07-02T02:35:04 | ---
language:
- ms
---
About
- Scraped articles from https://www.majalahsains.com/
- Data scraped on 1.7.2023
Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...], "tags": [......,.....,]}
``` | 217 | [
[
-0.04388427734375,
-0.07928466796875,
0.00899505615234375,
0.01377105712890625,
-0.030487060546875,
0.01580810546875,
0.0207672119140625,
-0.0006918907165527344,
0.041259765625,
0.041900634765625,
-0.0673828125,
-0.041473388671875,
-0.018524169921875,
0.0195... |
aisyahhrazak/ms-melakahariini | 2023-07-03T00:47:31.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | 0 | 7 | 2023-07-02T02:36:38 | ---
language:
- ms
---
About
- Scraped articles from https://www.melakahariini.my/
- Data scraped on 1.7.2023
Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...]}
``` | 192 | [
[
-0.044464111328125,
-0.07806396484375,
0.00844573974609375,
0.0166015625,
-0.048370361328125,
0.00485992431640625,
0.011932373046875,
-0.00807952880859375,
0.0572509765625,
0.0523681640625,
-0.05279541015625,
-0.0506591796875,
-0.0264434814453125,
0.02076721... |
aisyahhrazak/ms-malaysiakini-my | 2023-07-03T00:49:22.000Z | [
"language:ms",
"region:us"
] | aisyahhrazak | null | null | 0 | 7 | 2023-07-02T16:09:52 | ---
language:
- ms
---
About
- Scraped articles from https://www.malaysiakini.com/my
- Not including other domains (page.malaysiakini/newslab.malaysiakini)
- Data scraped on 2.7.2023
Dataset Format
```
{"url": "...", "headline": "...", "content": [...,...]}
``` | 263 | [
[
-0.034454345703125,
-0.07073974609375,
0.0133056640625,
0.025848388671875,
-0.0281829833984375,
0.01161956787109375,
0.00916290283203125,
-0.003936767578125,
0.042144775390625,
0.03594970703125,
-0.05694580078125,
-0.051910400390625,
-0.0283966064453125,
0.0... |
NomaDamas/Ko-StrategyQA | 2023-07-07T06:04:35.000Z | [
"region:us"
] | NomaDamas | null | null | 6 | 7 | 2023-07-05T12:23:09 | # Ko-StrategyQA
This dataset is the Korean version of [StrategyQA](https://allenai.org/data/strategyqa).
All questions and paragraphs from the original dataset were translated using [DeepL](https://www.deepl.com/translator).
## Dataset Description
This dataset is the Korean version of [StrategyQA](https://allenai.org/data/strategyqa).
StrategyQA is a dataset that collects only multi-hop questions in the field of open-domain question answering.
Open-domain question answering (ODQA) is the task of building AI models that correctly answer questions about general knowledge, without a specific domain.
Multi-hop questions are questions that require finding two or more facts in two or more paragraphs in order to answer.
With this dataset, you can measure how well a system automatically retrieves, from a paragraph corpus, the multiple paragraphs needed to solve multi-hop questions.
You can also measure how well language models, such as large language models (LLMs), answer multi-hop questions correctly.
The dataset consists only of questions that can be answered with yes or no.
SARI, the metric for question decomposition, cannot yet be measured on this dataset.
## Evaluation
Evaluation code is available in this [repo](https://github.com/edai-club/KoPrivateGPT/tree/main/evaluate/strategyQA). Accuracy and Recall@10 are supported.
## Files
- ```ko-strategyqa_full.json``` : contains all questions, descriptions, used paragraphs, used facts, and decomposed questions.
- ```ko-strategyqa_train.json``` : the train set of the full dataset. **Warning!** This train set differs from the official train set; it was split arbitrarily by the Ko-StrategyQA authors.
- ```ko-strategyqa_dev.json``` : the dev set of the full dataset, for cross validation. **Warning!** This dev set differs from the official dev set; it was split arbitrarily by the Ko-StrategyQA authors.
- ```ko-strategyqa_test.json``` : the Korean version of the test questions for submission to the official StrategyQA [leaderboard](https://leaderboard.allenai.org/strategyqa/submissions/public).
- ```ko-strategyqa_paragraphs.csv``` : all paragraphs.
- ```ko-strategyqa_paragraphs.parquet``` : all paragraphs as a parquet file.
This dataset is the Korean version of [StrategyQA](https://allenai.org/data/strategyqa).
We translated all questions and paragraphs into Korean using [DeepL](https://www.deepl.com/translator).
## Overview
This dataset is the Korean version of StrategyQA. StrategyQA is a multi-hop question dataset for open-domain question answering (ODQA).
To answer the questions in this dataset, a model must combine multiple facts from multiple paragraphs.
You can measure the performance of a retriever system on multi-hop questions.
All questions are True/False questions, so you can measure a model's performance by accuracy.
In the Korean version, you cannot yet measure SARI, the metric for question decomposition.
## Dataset Files
- ```ko-strategyqa_full.json``` : all questions, descriptions, decompositions, facts, and evidence.
- ```ko-strategyqa_train.json``` : train set from the full dataset. **Warning!** Our split is not the official train set of StrategyQA; questions may differ from the official StrategyQA train dataset.
- ```ko-strategyqa_dev.json``` : dev set from the full dataset. **Warning!** Our split is not the official dev (validation) set of StrategyQA; questions may differ from the official StrategyQA dev dataset.
- ```ko-strategyqa_test.json``` : test questions from the official StrategyQA [leaderboard](https://leaderboard.allenai.org/strategyqa/submissions/public).
- ```ko-strategyqa_paragraphs.csv``` : all paragraphs (contexts).
- ```ko-strategyqa_paragraphs.parquet``` : all paragraphs (contexts) as a parquet file.
## Evaluation
You can evaluate this dataset at this [repo](https://github.com/edai-club/KoPrivateGPT/tree/main/evaluate/strategyQA). We support Recall@10 and Accuracy metrics.
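Since every question is answered True or False, accuracy reduces to exact match. The following is an illustrative sketch, not the evaluation code from the linked repo:

```python
def accuracy(predictions, golds) -> float:
    # Exact-match accuracy over yes/no answers.
    assert len(predictions) == len(golds)
    correct = sum(p == g for p, g in zip(predictions, golds))
    return correct / len(golds)
```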
## License
Apache 2.0 license
| 3,230 | [
[
-0.031097412109375,
-0.047119140625,
0.02923583984375,
0.0245361328125,
-0.017425537109375,
0.01142120361328125,
-0.0016241073608398438,
-0.007137298583984375,
0.036590576171875,
0.0254058837890625,
-0.049835205078125,
-0.04742431640625,
-0.03631591796875,
0... |
richardr1126/spider-natsql-context-instruct | 2023-07-06T15:25:36.000Z | [
"source_datasets:spider",
"language:en",
"license:cc-by-4.0",
"sql",
"spider",
"natsql",
"text-to-sql",
"sql finetune",
"arxiv:1809.08887",
"arxiv:2109.05153",
"region:us"
] | richardr1126 | null | null | 0 | 7 | 2023-07-06T15:24:08 | ---
language:
- en
license:
- cc-by-4.0
source_datasets:
- spider
tags:
- sql
- spider
- natsql
- text-to-sql
- sql finetune
dataset_info:
features:
- name: db_id
dtype: string
- name: text
dtype: string
---
# Dataset Card for Spider NatSQL Context Instruct
### Dataset Summary
[Spider](https://arxiv.org/abs/1809.08887) is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset was created to finetune LLMs on the Spider dataset with database context using NatSQL.
### NatSQL
[NatSQL](https://arxiv.org/abs/2109.05153) is an intermediate representation for SQL that simplifies the queries and reduces the mismatch between
natural language and SQL. NatSQL preserves the core functionalities of SQL, but removes some clauses and keywords
that are hard to infer from natural language descriptions. NatSQL also makes schema linking easier by reducing the
number of schema items to predict. NatSQL can be easily converted to executable SQL queries and can improve the
performance of text-to-SQL models.
### Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
### Languages
The text in the dataset is in English.
### Licensing Information
The spider dataset is licensed under
the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)
### Citation
```
@article{yu2018spider,
title={Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author={Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal={arXiv preprint arXiv:1809.08887},
year={2018}
}
```
```
@inproceedings{gan-etal-2021-natural-sql,
title = "Natural {SQL}: Making {SQL} Easier to Infer from Natural Language Specifications",
author = "Gan, Yujian and
Chen, Xinyun and
Xie, Jinxia and
Purver, Matthew and
Woodward, John R. and
Drake, John and
Zhang, Qiaofu",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.174",
doi = "10.18653/v1/2021.findings-emnlp.174",
pages = "2030--2042",
}
``` | 2,605 | [
[
-0.0167236328125,
-0.0526123046875,
0.01219940185546875,
0.0166168212890625,
-0.021942138671875,
0.01551055908203125,
-0.0174407958984375,
-0.0469970703125,
0.037994384765625,
0.040618896484375,
-0.033355712890625,
-0.047210693359375,
-0.0261077880859375,
0.... |
Atom007/mc4-japanese-data | 2023-07-09T15:04:14.000Z | [
"task_categories:conversational",
"language:ja",
"license:apache-2.0",
"region:us"
] | Atom007 | null | null | 0 | 7 | 2023-07-09T14:56:56 | ---
license: apache-2.0
task_categories:
- conversational
language:
- ja
---
Reference https://huggingface.co/datasets/mc4 | 123 | [
[
-0.038909912109375,
-0.0039825439453125,
0.027740478515625,
0.01422119140625,
0.007640838623046875,
-0.0082550048828125,
0.0303802490234375,
-0.0250396728515625,
0.04034423828125,
0.0462646484375,
-0.06768798828125,
-0.042755126953125,
-0.024017333984375,
0.... |
BigSuperbPrivate/NoiseDetectionGaussian_VoxcelebMusan | 2023-07-12T12:05:24.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 7 | 2023-07-10T23:29:32 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 3088904904.0
num_examples: 24000
- name: validation
num_bytes: 671579798.0
num_examples: 5218
- name: test
num_bytes: 1254610620.0
num_examples: 9748
download_size: 5004286185
dataset_size: 5015095322.0
---
# Dataset Card for "NoiseDetectiongaussian_VoxcelebMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 636 | [
[
-0.05169677734375,
-0.0277862548828125,
0.0229644775390625,
0.021636962890625,
-0.01206207275390625,
-0.00007176399230957031,
0.00955963134765625,
-0.016357421875,
0.042755126953125,
0.023284912109375,
-0.064697265625,
-0.05780029296875,
-0.02886962890625,
-... |
BigSuperbPrivate/ReverberationDetectionSmallRoom_VoxcelebRirsNoises | 2023-07-12T16:42:23.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 7 | 2023-07-11T04:07:59 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 3088676087.0
num_examples: 24000
- name: validation
num_bytes: 671529456.0
num_examples: 5218
- name: test
num_bytes: 1254515820.0
num_examples: 9748
download_size: 4999935933
dataset_size: 5014721363.0
---
# Dataset Card for "ReverberationDetectionsmallroom_VoxcelebRirsNoises"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 650 | [
[
-0.05426025390625,
-0.0130462646484375,
0.00283050537109375,
0.037261962890625,
-0.0091400146484375,
-0.00215911865234375,
0.0019292831420898438,
-0.00019693374633789062,
0.042449951171875,
0.037506103515625,
-0.071533203125,
-0.05950927734375,
-0.01768493652343... |
BigSuperbPrivate/SpeechDetection_Aishell1Train | 2023-07-17T22:07:03.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 7 | 2023-07-14T05:16:03 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: file2
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 17446748199.188
num_examples: 120418
- name: validation
num_bytes: 2087003488.92
num_examples: 14331
download_size: 19206609847
dataset_size: 19533751688.108
---
# Dataset Card for "SpeechDetection_AISHELL1Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 608 | [
[
-0.03167724609375,
-0.01509857177734375,
-0.002910614013671875,
0.014617919921875,
-0.0018978118896484375,
0.006458282470703125,
0.0095062255859375,
-0.01369476318359375,
0.050567626953125,
0.0212249755859375,
-0.0648193359375,
-0.054534912109375,
-0.05029296875... |
PedroCJardim/QASports | 2023-10-27T18:36:32.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"sports",
"open-domain-qa",
"extractive-qa",
"region:us"
] | PedroCJardim | null | null | 2 | 7 | 2023-07-14T17:28:19 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- sports
- open-domain-qa
- extractive-qa
size_categories:
- 1M<n<10M
pretty_name: QASports
---
### Dataset Summary
QASports is the first large sports-themed question answering dataset, containing over 1.5 million questions and answers about 54k preprocessed wiki pages. Its documents come from the wikis of three of the most popular sports in the world: soccer, American football, and basketball. Each sport can be downloaded individually as a subset, with train, test, and validation splits, or all three can be downloaded together.
- 🎲 Complete dataset: https://osf.io/n7r23/
- 🔧 Processing scripts: https://github.com/leomaurodesenv/qasports-dataset-scripts/
### Supported Tasks and Leaderboards
Extractive Question Answering.
### Languages
English.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"answer": {
"offset": [42,44],
"text": "16"
},
"context": "The following is a list of squads for all 16 national teams competing at the Copa América Centenario. Each national team had to submit a squad of 23 players, 3 of whom must be goalkeepers. The provisional squads were announced on 4 May 2016. A final selection was provided to the organisers on 20 May 2016." ,
"qa_id": "61200579912616854316543272456523433217",
"question": "How many national teams competed at the Copa América Centenario?",
"context_id": "171084087809998484545703642399578583178",
"context_title": "Copa América Centenario squads | Football Wiki | Fandom",
"url": "https://football.fandom.com/wiki/Copa_Am%C3%A9rica_Centenario_squads"
}
```
### Data Fields
The data fields are the same among all splits.
- `qa_id`: a `string` feature.
- `context_id`: a `string` feature.
- `context_title`: a `string` feature.
- `url`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answer`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `offset`: a list of 2 `int32` values giving the answer's start and end character positions in `context`.
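Reading the example instance above, the answer offsets behave as zero-based `[start, end)` character indices into `context`. This slicing convention is inferred from the example, not separately documented:

```python
sample = {
    "context": (
        "The following is a list of squads for all 16 national teams "
        "competing at the Copa América Centenario."
    ),
    "answer": {"offset": [42, 44], "text": "16"},
}

# Slice the context with the answer offsets to recover the answer text.
start, end = sample["answer"]["offset"]
span = sample["context"][start:end]
print(span)  # -> 16
```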
### Citation
```
@inproceedings{jardim:2023:qasports-dataset,
author={Pedro Calciolari Jardim and Leonardo Mauro Pereira Moraes and Cristina Dutra Aguiar},
title = {{QASports}: A Question Answering Dataset about Sports},
booktitle = {Proceedings of the Brazilian Symposium on Databases: Dataset Showcase Workshop},
address = {Belo Horizonte, MG, Brazil},
url = {https://github.com/leomaurodesenv/qasports-dataset-scripts},
publisher = {Brazilian Computer Society},
pages = {1-12},
year = {2023}
}
```
| 2,649 | [
[
-0.055267333984375,
-0.040130615234375,
0.03125,
0.026458740234375,
-0.0240020751953125,
0.016021728515625,
0.006221771240234375,
-0.02752685546875,
0.041015625,
0.015838623046875,
-0.0670166015625,
-0.04583740234375,
-0.02825927734375,
0.034820556640625,
... |
dipudl/hc3-and-gpt-wiki-intro-with-perplexity | 2023-07-20T19:23:00.000Z | [
"region:us"
] | dipudl | null | null | 0 | 7 | 2023-07-16T09:38:19 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: source
dtype: string
- name: label
dtype: int64
- name: perplexity
dtype: float64
splits:
- name: train
num_bytes: 396594042.354058
num_examples: 330344
- name: test
num_bytes: 20925699.0
num_examples: 17387
download_size: 251965361
dataset_size: 417519741.354058
---
# Dataset Card for "hc3-and-gpt-wiki-intro-with-perplexity"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 610 | [
[
-0.043243408203125,
-0.027252197265625,
0.031585693359375,
0.009063720703125,
-0.020965576171875,
-0.0122528076171875,
0.0225372314453125,
-0.006847381591796875,
0.032196044921875,
0.0227508544921875,
-0.06280517578125,
-0.053619384765625,
-0.041107177734375,
... |
alexshengzhili/SciCapInstructed-graph-only-stage1 | 2023-07-17T15:53:12.000Z | [
"region:us"
] | alexshengzhili | null | null | 0 | 7 | 2023-07-17T14:59:25 | ---
dataset_info:
features:
- name: image_file
dtype: string
- name: id
dtype: string
- name: caption
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: first_mention
dtype: string
- name: response
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: forty2seventy
num_bytes: 546502923
num_examples: 105606
- name: first_twenty
num_bytes: 363824537
num_examples: 70404
- name: twenty_to_forty
num_bytes: 364128099
num_examples: 70403
- name: seventy2ninty
num_bytes: 364417544
num_examples: 70403
- name: ninty2onehundred
num_bytes: 181984295
num_examples: 35202
download_size: 921991197
dataset_size: 1820857398
---
# Dataset Card for "SciCapInstructed-graph-only-stage1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,021 | [
[
-0.0230865478515625,
-0.0215606689453125,
0.01320648193359375,
0.031829833984375,
-0.01264190673828125,
0.0157623291015625,
0.04302978515625,
-0.00348663330078125,
0.07598876953125,
0.03515625,
-0.086669921875,
-0.06365966796875,
-0.045562744140625,
-0.01615... |
alexshengzhili/SciGraphQA-295K-train | 2023-08-08T05:59:29.000Z | [
"license:mit",
"arxiv:2308.03349",
"region:us"
] | alexshengzhili | null | null | 4 | 7 | 2023-07-17T19:48:13 | ---
license: mit
dataset_info:
features:
- name: image_file
dtype: string
- name: id
dtype: string
- name: caption
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: first_mention
dtype: string
- name: response
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: q_a_pairs
sequence:
sequence: string
splits:
- name: train
num_bytes: 1586351961.3841674
num_examples: 295602
download_size: 770588612
dataset_size: 1586351961.3841674
---
# Dataset Card for SciGraphQA-295K-train
## Dataset Description
- **Homepage:** https://github.com/findalexli/SciGraphQA
- **Repository:** https://huggingface.co/datasets/alexshengzhili/SciGraphQA-295K-train
- **Paper:** https://arxiv.org/abs/2308.03349
- **Leaderboard:** N/A
- **Point of Contact:** Alex Li (alex.shengzhi@gmail.com)
### Dataset Summary
SciGraphQA is a large-scale synthetic multi-turn question-answering dataset for scientific graphs. It contains 295K samples of open-vocabulary multi-turn question-answering dialogues about graphs from 290K academic papers. The dataset was created by using the Palm-2 API to generate dialogues conditioned on rich textual context, including paper titles, abstracts, captions, and the paragraphs mentioning each figure.
### Supported Tasks and Leaderboards
- Scientific graph question answering
- Visual question answering
- Multi-modal reasoning
Please see our paper for the leaderboard.
### Languages
English
## Dataset Structure
### Data Instances
Each data instance contains:
- Paper title
- Paper abstract
- Figure caption
- Paragraph mentioning the figure
- Multi-turn question-answer conversation (2.23 turns on average)
### Data Fields
- `title`: Paper title
- `abstract`: Paper abstract
- `caption`: Figure caption
- `paragraph`: Paragraph mentioning the figure
- `questions`: List of question strings
- `answers`: List of answer strings
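Because `questions` and `answers` are parallel lists, a multi-turn dialogue can be reconstructed by zipping them. The instance below is hypothetical, for illustration only:

```python
# A made-up instance mirroring the questions/answers fields described above.
instance = {
    "questions": [
        "What does the x-axis of the figure represent?",
        "Which curve converges faster?",
    ],
    "answers": [
        "The number of training steps.",
        "The blue curve.",
    ],
}

# Pair each question with its answer, in turn order.
dialogue = [f"Q: {q}\nA: {a}" for q, a in zip(instance["questions"], instance["answers"])]
print("\n".join(dialogue))
```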
### Data Splits
- Training data: 295K samples
- Validation data: N/A
- Test data: 3K samples
## Dataset Creation
### Curation Rationale
This dataset was created to provide a large-scale benchmark for training and evaluating multi-modal models on scientific graph question answering.
### Source Data
Figures, captions, paragraphs and metadata were sourced from 290K academic papers on ArXiv focused on Computer Science and Machine Learning.
#### Initial Data Collection and Normalization
Figures were extracted using PDFFigures 2.0. Captions and paragraphs were extracted using regular expressions and heuristic rules.
#### Who are the source language producers?
The source data consists of academic papers written in English by researchers in computer science and machine learning.
### Annotations
#### Annotation process
The multi-turn question-answer dialogues were generated using the Palm-2 conversational API conditioned on the sourced data context. The quality was validated by rating a subset with GPT-4.
#### Who are the annotators?
The dialogues were automatically generated by Palm-2, a large language model developed by Google.
### Personal and Sensitive Information
The source academic papers may contain limited personal information about the authors such as name, affiliation, email. No other personal or sensitive information is included in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset presents minimal social risks since it contains only synthetic dialogues about scientific graphs and related metadata sourced from public academic papers.
### Discussion of Biases
The dialogues reflect the characteristics and limitations of the Palm-2 system used to generate them. There may also be biases inherent in the academic source material.
### Other Known Limitations
The dataset focuses specifically on computer science and machine learning papers. Performance on scientific graphs from other domains may differ.
## Additional Information
### Dataset Curators
Shengzhi Li, Nima Tajbakhsh
### Licensing Information
This dataset is licensed under the MIT license.
### Citation Information
```
@misc{li2023scigraphqa,
title={SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs},
author={Shengzhi Li and Nima Tajbakhsh},
year={2023},
eprint={2308.03349},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
We welcome contributions to improve the dataset! Please open an issue or pull request on the GitHub repository. | 4,687 | [
[
-0.020965576171875,
-0.06439208984375,
0.0216827392578125,
-0.0017080307006835938,
-0.01502227783203125,
0.0244903564453125,
0.00634765625,
-0.0280609130859375,
0.02947998046875,
0.028564453125,
-0.047698974609375,
-0.04443359375,
-0.02734375,
0.026046752929... |
Guilherme34/Cabrita-lora-ptbr | 2023-07-20T12:44:35.000Z | [
"region:us"
] | Guilherme34 | null | null | 3 | 7 | 2023-07-20T12:43:29 | It's not my dataset; I'm just posting it here. | 43 | [
[
-0.03338623046875,
-0.0386962890625,
0.00200653076171875,
0.040985107421875,
-0.01276397705078125,
-0.00492095947265625,
0.0177459716796875,
0.012481689453125,
0.06854248046875,
0.029693603515625,
-0.057220458984375,
-0.04144287109375,
-0.045745849609375,
0.... |
AhmedBou/Arabic_Quotes | 2023-09-07T15:54:26.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ar",
"license:apache-2.0",
"region:us"
] | AhmedBou | null | null | 2 | 7 | 2023-07-22T14:01:00 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- ar
size_categories:
- 1K<n<10K
---
# Arabic Quotes Dataset




## Overview
The **Arabic Quotes Dataset** is an open-source collection of 5900+ quotes in the Arabic language, accompanied by up to three tags for each quote.
The dataset is suitable for various Natural Language Processing (NLP) tasks, such as text classification and tagging.
## Data Description
- Contains 5900+ quotes with up to three associated tags per quote.
- All quotes and tags are in Arabic.
## Use Cases
- Text Classification: Classify quotes into predefined categories.
- Tagging: Assign relevant labels or themes to quotes.
- Sentiment Analysis: Analyze sentiment expressed in quotes.
- Language Modeling: Train models to generate Arabic quotes.
- Information Retrieval: Retrieve quotes relevant to specific topics.
## License
The "Arabic Quotes" dataset is distributed under the Apache License 2.0. Feel free to use it for any purpose, giving appropriate credit to the original source.
**Github Repository:** https://github.com/BoulahiaAhmed/Arabic-Quotes-Dataset
## Data Format
The dataset is available in CSV format. Each row represents a quote with its associated tags. Example structure:
```
quote,tags
"أنا لا أبالي برأي الناس، أنا لست عبدًا لتقييماتهم.","[حرية, تحفيز, قوة]"
"الصمت هو أكبر إجابة.", "[سكوت, حكمة]"
...
```
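A minimal way to read rows in this layout and split the bracketed tag list. The tag-string format (`"[tag1, tag2]"`) is inferred from the example above:

```python
import csv
import io

# Inline sample mirroring the CSV layout shown above.
raw = 'quote,tags\n"الصمت هو أكبر إجابة.","[سكوت, حكمة]"\n'

reader = csv.DictReader(io.StringIO(raw))
row = next(reader)

# Strip the surrounding brackets and split on commas to recover the tag list.
tags = [t.strip() for t in row["tags"].strip("[]").split(",")]
print(row["quote"], tags)
```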
--- | 1,710 | [
[
-0.0214691162109375,
-0.031890869140625,
0.0008091926574707031,
0.027679443359375,
-0.037384033203125,
0.01242828369140625,
-0.010284423828125,
-0.0210113525390625,
-0.0030689239501953125,
0.035858154296875,
-0.03985595703125,
-0.084716796875,
-0.037445068359375... |
PeterBrendan/Ads_Creative_Ad_Copy_Programmatic | 2023-07-26T18:51:34.000Z | [
"license:mit",
"region:us"
] | PeterBrendan | null | null | 0 | 7 | 2023-07-26T18:48:12 | ---
license: mit
---
### Dataset Summary
The Programmatic Ad Creatives dataset contains 7097 samples of online programmatic ad creatives along with their ad sizes. The dataset includes 8 unique ad sizes, such as (300, 250), (728, 90), (970, 250), (300, 600), (160, 600), (970, 90), (336, 280), and (320, 50). The dataset is in a tabular format and represents a random sample from Project300x250.com's complete creative data set. It is primarily used for training and evaluating natural language processing models in the context of advertising creatives.
### Supported Tasks
This dataset supports a range of tasks, including language modeling, text generation, and text augmentation. The full dataset has been utilized to fine-tune open-source models for creative ad copy. We hope this dataset will inspire contributors to join [Project 300x250](https://www.Project300x250.com) in creating open-source alternatives to Google and Meta, ensuring the existence of independent advertising.
### Languages
The dataset primarily consists of English language text.
### Dataset Structure
#### Data Fields
The dataset contains the following fields:
- 'text': Represents the text collected from the programmatic ad creative.
- 'dimensions': Represents the dimensions of the creative ad size.
#### Data Splits
The data is not split into separate subsets; it is provided as a whole.
## Dataset Creation
### Curation Rationale
The dataset of online programmatic ad creatives was curated to serve as a valuable resource for researchers and developers. It provides a unique collection of advertising creative text that is typically only available within walled gardens. The dataset aims to foster the development of independent advertising alternatives to Google and Meta, particularly in the field of AI, by promoting open-source solutions in the advertising domain.
### Source Data
The data is generated from a vast collection of programmatic creative images hosted by [Project 300x250](https://www.Project300x250.com). The text was extracted from each creative image.
## Dataset Use
### Use Cases
The dataset can be used for various tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation. Initially, the dataset has been utilized to fine-tune open-source models using programmatic ad text to generate unique ad copy. These models were created to inspire ad creatives and provide a starting point for developing effective marketing content.
### Usage Caveats
As this dataset is a sampled subset, it is recommended to regularly check for updates and improvements or reach out to the author for access to the full dataset.
| 2,708 | [
[
-0.02490234375,
-0.04168701171875,
0.0182647705078125,
0.032379150390625,
0.006519317626953125,
0.009246826171875,
-0.03497314453125,
-0.0197906494140625,
0.023956298828125,
0.05548095703125,
-0.05841064453125,
-0.04705810546875,
-0.01352691650390625,
-0.001... |
TrainingDataPro/russian-marketplace-reviews-e-commerce-dataset | 2023-09-14T16:39:15.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"code",
"region:us"
] | TrainingDataPro | null | null | 1 | 7 | 2023-07-28T12:05:16 | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
language:
- en
tags:
- finance
- code
---
# Russian Marketplace Reviews E-Commerce Dataset
The **Russian Marketplace Reviews E-Commerce Dataset** is a comprehensive collection of data curated from a popular e-commerce platform. It contains a vast number of *reviews, along with the date, time, and rating of each review*, offering valuable insights into consumer preferences and behaviors in the Russian marketplace.
This dataset encompasses a wide range of products across different categories, including *electronics, appliances, clothing, cosmetics, home goods, and more*. It is also valuable for sentiment analysis and opinion mining. Researchers can leverage the labeled review ratings to train models that classify reviews into *positive, negative, or neutral* sentiments.
### The dataset's possible applications:
- recommendation systems
- sentiment analysis algorithms
- consumer behavior analysis
- customer satisfaction analysis
- marketing and advertising

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=russian-marketplace-reviews-e-commerce-dataset) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .xlsx
includes the following information:
- **product_url**: link to the product,
- **product_title**: title of the product,
- **user_nickname**: nickname of the comment's author,
- **comment_date**: date of the comment,
- **comment_stars**: number of stars given to the product,
- **comment_text**: text of the comment,
- **comment_likes_count**: number of likes on the comment,
- **comment_dislikes_count**: number of dislikes on the comment
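As a starting point for the sentiment-analysis use case mentioned above, the `comment_stars` field can be mapped to coarse sentiment labels. A minimal sketch; the 1-5 star scale and the thresholds are assumptions for illustration, not part of the dataset specification:

```python
def stars_to_sentiment(stars: int) -> str:
    # Hypothetical thresholds on a 1-5 star scale: these cut-offs are an
    # assumption for illustration, not part of the dataset specification.
    if stars <= 2:
        return "negative"
    if stars == 3:
        return "neutral"
    return "positive"

print([stars_to_sentiment(s) for s in (1, 3, 5)])  # one label per bucket
```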
# Reviews parsing can be tailored to your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=russian-marketplace-reviews-e-commerce-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 2,518 | [
[
-0.0298309326171875,
-0.044464111328125,
0.002628326416015625,
0.0225372314453125,
-0.0294647216796875,
-0.0062713623046875,
-0.00460052490234375,
-0.02459716796875,
0.026519775390625,
0.04278564453125,
-0.0528564453125,
-0.06976318359375,
-0.02301025390625,
... |
Norquinal/claude_multi_instruct_30k | 2023-08-10T01:10:30.000Z | [
"region:us"
] | Norquinal | null | null | 2 | 7 | 2023-08-09T23:19:09 | This dataset is an adaptation of my previous [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset with only the first 30k instruction/response pairs, parsed into an instruct format.
The instructions were generated synthetically using a method that can be tentatively described as "multi-instruct." These instructions consist of numerous discrete tasks that the AI has to work its way through, thereby hopefully increasing its comprehension and awareness of complex instructions.
The topics of the instructions ranged across STEM, Arts & Humanities, Social Knowledge, and General Knowledge.
[
-0.0305328369140625,
-0.0687255859375,
0.01033782958984375,
0.02484130859375,
0.0182342529296875,
-0.00537872314453125,
-0.006221771240234375,
-0.01126861572265625,
0.02288818359375,
0.056640625,
-0.0750732421875,
-0.048126220703125,
-0.03167724609375,
-0.01... |
KhalfounMehdi/MURA | 2023-08-16T17:23:39.000Z | [
"region:us"
] | KhalfounMehdi | null | null | 0 | 7 | 2023-08-11T18:49:40 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3191859573.735
num_examples: 40005
download_size: 3368404383
dataset_size: 3191859573.735
configs:
- config_name: mehdi
data_files:
- split: train
path: data/train-*
- config_name: "KhalfounMehdi--MURA"
data_files:
- split: train
path: data/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "MURA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 659 | [
[
-0.041900634765625,
-0.01898193359375,
0.01342010498046875,
0.00872802734375,
-0.011260986328125,
0.0082855224609375,
0.0268096923828125,
-0.011474609375,
0.0726318359375,
0.031097412109375,
-0.0557861328125,
-0.037872314453125,
-0.043670654296875,
-0.027282... |
ImagenHub/Control_Guided_Image_Generation | 2023-10-05T18:29:07.000Z | [
"arxiv:2310.01596",
"region:us"
] | ImagenHub | null | null | 1 | 7 | 2023-08-13T00:27:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: guide
dtype: image
- name: source
dtype: string
- name: control_type
dtype: string
- name: img_id
dtype: int64
splits:
- name: full
num_bytes: 123906287.0
num_examples: 500
- name: eval
num_bytes: 40786402.0
num_examples: 150
- name: extra
num_bytes: 83119885.0
num_examples: 350
download_size: 245420860
dataset_size: 247812574.0
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
- split: eval
path: data/eval-*
- split: extra
path: data/extra-*
---
# Dataset Card
Dataset in [ImagenHub](arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
author={Max Ku, Tianle Li, Kai Zhang, Yujie Lu, Xingyu Fu, Wenwen Zhuang, Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
``` | 1,092 | [
[
-0.021728515625,
-0.0195159912109375,
0.01116943359375,
-0.003650665283203125,
-0.0413818359375,
-0.049896240234375,
-0.0000928640365600586,
-0.0216522216796875,
-0.01409912109375,
0.03546142578125,
-0.0158233642578125,
-0.054168701171875,
-0.03192138671875,
... |
FreedomIntelligence/sharegpt-japanese | 2023-08-13T16:46:02.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | 0 | 7 | 2023-08-13T16:41:10 | ---
license: apache-2.0
---
Japanese ShareGPT data translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). | 206 | [
[
-0.042938232421875,
-0.037933349609375,
0.03082275390625,
0.029052734375,
-0.0280914306640625,
0.0008420944213867188,
-0.0213165283203125,
-0.034210205078125,
0.0211029052734375,
0.025299072265625,
-0.07427978515625,
-0.0241546630859375,
-0.041839599609375,
... |
serenaz/llama2-medical-meadow | 2023-08-17T01:32:36.000Z | [
"region:us"
] | serenaz | null | null | 0 | 7 | 2023-08-17T01:20:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sartmis1/wikisql-processed | 2023-08-17T14:45:26.000Z | [
"region:us"
] | sartmis1 | null | null | 1 | 7 | 2023-08-17T14:31:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: messages
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 10327196
num_examples: 56355
- name: test
num_bytes: 2917591
num_examples: 15878
- name: validation
num_bytes: 2917591
num_examples: 15878
download_size: 0
dataset_size: 16162378
---
# Dataset Card for "wikisql-processed"
Based out of [wikisql](https://huggingface.co/datasets/wikisql)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 774 | [
[
-0.03424072265625,
-0.0291748046875,
0.0103302001953125,
0.0099334716796875,
-0.01280975341796875,
-0.0037384033203125,
0.00780487060546875,
-0.025054931640625,
0.060333251953125,
0.056793212890625,
-0.0784912109375,
-0.0443115234375,
-0.02362060546875,
-0.0... |
Villian7/Emotions_Data | 2023-08-18T15:16:29.000Z | [
"license:apache-2.0",
"doi:10.57967/hf/1000",
"region:us"
] | Villian7 | null | null | 1 | 7 | 2023-08-18T14:53:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 109428773
num_examples: 1096869
- name: validation
num_bytes: 13025428
num_examples: 133105
- name: test
num_bytes: 13047201
num_examples: 133104
download_size: 77478115
dataset_size: 135501402
license: apache-2.0
---
# Dataset Card for "emotions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 758 | [
[
-0.048858642578125,
-0.01027679443359375,
0.01519012451171875,
0.0237274169921875,
-0.01861572265625,
-0.0007290840148925781,
0.00962066650390625,
-0.015960693359375,
0.0650634765625,
0.0127105712890625,
-0.072998046875,
-0.0552978515625,
-0.042724609375,
-0... |
Ichsan2895/alpaca-gpt4-indonesian | 2023-08-19T13:08:53.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:id",
"license:cc-by-sa-4.0",
"region:us"
] | Ichsan2895 | null | null | 2 | 7 | 2023-08-19T12:03:55 | ---
license: cc-by-sa-4.0
language:
- id
size_categories:
- 10K<n<100K
task_categories:
- question-answering
---
Base model : [FreedomIntelligence/alpaca-gpt4-indonesian](https://huggingface.co/datasets/FreedomIntelligence/alpaca-gpt4-indonesian)
We wrangled the original dataset format into an 'input' & 'output' format. For example:
BEFORE:
```
[ { "from": "human",
"value": "Saranlah slogan untuk kampanye daur ulang\n" },
{ "from": "gpt",
"value": "1. \"Kurangi, gunakan kembali, daur ulang: Bersama untuk masa depan yang lebih hijau.\"\n2. \
"Daur ulanglah hari ini, untuk masa depan yang lebih baik.\"\n3. \"Ubah sampahmu menjadi harta karun - Daur ulang!\"\n4. \
"Daur ulang untuk siklus kehidupan.\"\n5. \"Simpan sumber daya, daur ulang lebih banyak.\""
}
]
```
AFTER:
| input | output |
| ---- | ---- |
| Saranlah slogan untuk kampanye daur ulang\n | 1. "Kurangi, gunakan kembali, daur ulang: Bersama untuk masa depan yang lebih hijau."\n2. "Daur ulanglah hari ini, untuk masa depan yang lebih baik."\n3. "Ubah sampahmu menjadi harta karun - Daur ulang!"\n4. "Daur ulang untuk siklus kehidupan."\n5. "Simpan sumber daya, daur ulang lebih banyak." |
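The conversion described above can be sketched as a small function that walks the `from`/`value` turn list and emits input/output pairs (a minimal sketch based only on the BEFORE structure shown; the actual wrangling may differ):

```python
def conversation_to_pairs(conversation: list[dict]) -> list[dict]:
    # Pair each "human" turn with the "gpt" turn that immediately follows it.
    pairs = []
    for prev, curr in zip(conversation, conversation[1:]):
        if prev["from"] == "human" and curr["from"] == "gpt":
            pairs.append({"input": prev["value"], "output": curr["value"]})
    return pairs

example = [
    {"from": "human", "value": "Saranlah slogan untuk kampanye daur ulang\n"},
    {"from": "gpt", "value": '1. "Kurangi, gunakan kembali, daur ulang ..."'},
]
print(conversation_to_pairs(example))
```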
## CITATION
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
@software{Chen_MultilingualSIFT_Multilingual_Supervised_2023,
author = {Chen, Zhihong and Yan, Shuo and Liang, Juhao and Jiang, Feng and Wu, Xiangbo and Yu, Fei and Chen, Guiming Hardy and Chen, Junying and Zhang, Hongbo and Li, Jianquan and Wan, Xiang and Wang, Benyou},
month = jul,
title = {{MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning}},
url = {https://github.com/FreedomIntelligence/MultilingualSIFT.git},
version = {0.1},
year = {2023}
}
``` | 1,909 | [
[
-0.03521728515625,
-0.050048828125,
0.0181884765625,
0.012115478515625,
-0.030792236328125,
-0.0218505859375,
-0.026458740234375,
-0.01015472412109375,
0.019561767578125,
0.03497314453125,
-0.04608154296875,
-0.052581787109375,
-0.056915283203125,
0.02059936... |
Photolens/MedText-llama-2 | 2023-08-19T18:26:13.000Z | [
"license:cc-by-4.0",
"region:us"
] | Photolens | null | null | 5 | 7 | 2023-08-19T15:18:38 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 971728
num_examples: 1412
download_size: 499669
dataset_size: 971728
---
This is the shuffled version of medtext_1, so the datapoints are in random order and not sorted by category. This is to prevent catastrophic forgetting by category.
This is a medical diagnosis dataset containing over 1000 top-notch, textbook-quality patient presentations and diagnoses/treatments. The 100 most common diseases and the 30 most common injuries that bring people to the hospital are, among others, fully captured in the dataset, with multiple datapoints for each, ranging from mild to complicated to severe. The full list is below. The dataset also contains completions about the nature of the AI itself (that it can never replace a doctor and always emphasizes going to a professional), as well as some nonsensical or doubtful presentations. A model trained on this dataset explicitly tells when it CANNOT answer with confidence or when the presentation is insufficient. This is to prevent hallucinations.
Medtext is a free to use (CC BY 4.0) dataset of over 1000 patient presentations and their diagnosis/treatment plans.
This is original data, converted into uniform datapoints using GPT-4.
We then pulled 10 random examples from the dataset and showed them to 3 different doctors, 2 of them involved and 1 uninvolved, and they all categorized the quality as "textbook quality".
Its contents include:
NOISE/DATA POLLUTION
* Dismissing of non-medical or non-psychological issues
* Specifically asking for more information / admitting that no diagnosis is possible with confidence if the data is insufficient
* Conflicting/contradicting and irrelevant information
* Cases where symptoms point to a seemingly obvious diagnosis but are actually something different
* Information about the model (What are you? What can you do? Are you able to replace a doctor? This is to make the model humble and always emphasize that it can never replace a professional and is just there to do some substitute analysis)
MISC
* Emergency cases / first aid / almost fatal injuries that require emergency surgery
* Injuries from crimes
* Sexual injuries and STDs
* Infant-specific cases
* Gynecological and urological cases
* Genetic anomalies
* Previous medical mishandling
* Abuse/overdosing/misuse of drugs
* Cross side effects of drugs
ANALYSIS
* Textual analysis of blood tests, ultrasound, CT, MRI and X-ray examinations.
INJURIES:
* Sprains and strains
* Fractures
* Contusions (bruises)
* Cuts and lacerations
* Concussions
* Burns
* Dislocations
* Abrasions (scrapes)
* Whiplash injuries
* Eye injuries
* Puncture wounds
* Bites and stings
* Back injuries
* Broken nose
* Knee injuries
* Ankle injuries
* Shoulder injuries
* Wrist injuries
* Chest injuries
* Head injuries
DISEASES:
* Acne
* Allergies
* Alzheimer's Disease
* Anemia
* Angina
* Anxiety Disorders
* Arthritis
* Asthma
* Atherosclerosis
* Athlete's Foot
* Attention Deficit Hyperactivity Disorder (ADHD)
* Autism Spectrum Disorder
* Back Pain
* Bipolar Disorder
* Bronchitis
* Cataracts
* Chickenpox
* Chronic Obstructive Pulmonary Disease (COPD)
* Common Cold
* Conjunctivitis (Pink Eye)
* Constipation
* Coronary Heart Disease
* Cystitis
* Dementia
* Depression
* Diabetes Type 1
* Diabetes Type 2
* Diarrhea
* Diverticulitis
* Dizziness (Vertigo)
* Ear Infections
* Eczema
* Endometriosis
* Erectile Dysfunction
* Fibromyalgia
* Flu (Influenza)
* Food Poisoning
* Gallstones
* Gastroenteritis
* Gastroesophageal Reflux Disease (GERD)
* Gout
* Hay Fever (Allergic Rhinitis)
* Headaches
* Heart Failure
* Hemorrhoids
* Hepatitis B
* Hepatitis C
* Herpes Simplex Virus (HSV)
* High Blood Pressure (Hypertension)
* High Cholesterol (Hypercholesterolemia)
* HIV/AIDS
* Hyperthyroidism (Overactive Thyroid)
* Hypothyroidism (Underactive Thyroid)
* Inflammatory Bowel Disease (Including Crohn's and Ulcerative Colitis)
* Insomnia
* Iron Deficiency Anemia
* Irritable Bowel Syndrome (IBS)
* Kidney Stones
* Lactose Intolerance
* Lyme Disease
* Macular Degeneration
* Malaria
* Menopause
* Migraine
* Multiple Sclerosis
* Obesity
* Osteoarthritis
* Osteoporosis
* Otitis Media (Middle Ear Infection)
* Pancreatitis
* Parkinson's Disease
* Peptic Ulcers
* Periodontal Disease
* Pneumonia
* Polycystic Ovary Syndrome (PCOS)
* Prostate Enlargement (Benign Prostatic Hyperplasia)
* Psoriasis
* Pulmonary Embolism
* Restless Legs Syndrome
* Rheumatoid Arthritis
* Rosacea
* Schizophrenia
* Sciatica
* Scoliosis
* Seasonal Affective Disorder (SAD)
* Sinusitis
* Skin Cancer
* Sleep Apnea
* Strokes
* Tendonitis
* Tonsillitis
* Tuberculosis
* Urinary Tract Infection (UTI)
* Varicose Veins
* Vitiligo
* Yeast Infection (Candidiasis)
* Zika Virus
# Dataset card from [BI55/MedText](https://huggingface.co/datasets/BI55/MedText) | 5,175 | [
[
-0.016387939453125,
-0.0279083251953125,
0.0296630859375,
-0.01280975341796875,
-0.0095672607421875,
-0.01326751708984375,
0.016571044921875,
-0.048614501953125,
0.04608154296875,
0.045166015625,
-0.04150390625,
-0.052001953125,
-0.055572509765625,
0.0092773... |
karanzrk/ielts | 2023-08-19T19:07:34.000Z | [
"region:us"
] | karanzrk | null | null | 0 | 7 | 2023-08-19T19:07:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ziq/RSNA-ATD2023 | 2023-08-31T14:31:16.000Z | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"annotations_creators:other",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:mit",
"reg... | ziq | The dataset is the processed version of Kaggle Competition: RSNA 2023 Abdominal Trauma Detection.
It comprises segmentations of 205 series of CT scans with 5 classes (liver, spleen, right_kidney,
left_kidney, bowel). | @InProceedings{huggingface:dataset,
title = {RSNA-ATD2023},
author = {Yeow Zi Qin},
year = {2023}
} | 0 | 7 | 2023-08-20T09:28:18 | ---
annotations_creators:
- other
language:
- en
language_creators:
- found
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: RSNA-ATD2023
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
tags: []
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
# 📁 Dataset
This dataset comprises only 205 series of CT scans as `.png` files with raw images and raw masks.
Data source: [Kaggle RSNA 2023 Abdominal Trauma Detection](https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data)
# 🚀 Setup
```bash
pip install datasets
```
# 🤩 Feel the Magic
### Load Dataset
```python
from datasets import load_dataset
data = load_dataset('ziq/RSNA-ATD2023')
print(data)
```
```bash
DatasetDict({
train: Dataset({
features: ['patient_id', 'series_id', 'frame_id', 'image', 'mask'],
num_rows: 70291
})
})
```
### Set Labels
```python
labels = ["background", "liver", "spleen", "right_kidney", "left_kidney", "bowel"]
```
### Train Test Split
```python
data = data['train'].train_test_split(test_size=0.2)
```
```python
train, test = data['train'], data['test']
# train[0]['patient_id']
# train[0]['image'] -> PIL Image
# train[0]['mask'] -> PIL Image
```
### Get Image & Segmentation Mask
```python
ids = 3
image = train[ids]['image']  # shape: (512, 512)
mask = train[ids]['mask']    # shape: (512, 512)
```
### Convert mask into np.ndarray
```python
import numpy as np

mask = np.array(mask)
```
### Visualize Image & Mask
```python
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(16, 16))
ax1 = fig.add_subplot(131)
plt.axis('off')
ax1.imshow(image, cmap='gray')
ax2 = fig.add_subplot(132)
plt.axis('off')
ax2.imshow(mask, cmap='gray')
ax3 = fig.add_subplot(133)
ax3.imshow(image*np.where(mask>0,1,0), cmap='gray')
plt.axis('off')
plt.show()
```

### Write Custom Plotting Function
```python
from matplotlib.colors import ListedColormap, BoundaryNorm
colors = ['#02020e', '#520e6d', '#c13a50', '#f57d15', '#fac62c', '#f4f88e'] # inferno
bounds = range(0, len(colors) + 1)
# Define the boundaries for each class in the colormap
cmap, norm = ListedColormap(colors), BoundaryNorm(bounds, len(colors))
# Plot the segmentation mask with the custom colormap
def plot_mask(mask, alpha=1.0):
_, ax = plt.subplots()
cax = ax.imshow(mask, cmap=cmap, norm=norm, alpha=alpha)
cbar = plt.colorbar(cax, cmap=cmap, norm=norm, boundaries=bounds, ticks=bounds)
cbar.set_ticks([])
_labels = [""] + labels
for i in range(1, len(_labels)):
cbar.ax.text(2, -0.5 + i, _labels[i], ha='left', color=colors[i - 1], fontsize=8)
plt.axis('off')
plt.show()
```
### Custom Color
```python
plot_mask(mask)
```

### Plot only one class (e.g. liver)
```python
liver, spleen, right_kidney, left_kidney, bowel = [np.where(mask == i, 1, 0) * i for i in range(1, len(labels))]
plot_mask(liver)
```

| 3,174 | [
[
-0.0445556640625,
-0.01222991943359375,
0.033477783203125,
0.01055908203125,
-0.04583740234375,
-0.004581451416015625,
0.0189361572265625,
-0.0100250244140625,
0.055511474609375,
0.02947998046875,
-0.03228759765625,
-0.050811767578125,
-0.03582763671875,
0.0... |
mHossain/merge_new_para_detection_data_v6 | 2023-08-21T15:46:23.000Z | [
"region:us"
] | mHossain | null | null | 0 | 7 | 2023-08-21T15:46:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 18268704.9
num_examples: 108000
- name: test
num_bytes: 2029856.1
num_examples: 12000
download_size: 9186455
dataset_size: 20298561.0
---
# Dataset Card for "merge_new_para_detection_data_v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 643 | [
[
-0.053314208984375,
-0.0011196136474609375,
0.02490234375,
0.00567626953125,
-0.034698486328125,
-0.0159759521484375,
0.0225067138671875,
-0.019500732421875,
0.046722412109375,
0.031646728515625,
-0.044036865234375,
-0.05963134765625,
-0.052642822265625,
-0.... |
aboonaji/wiki_medical_terms_llam2_format | 2023-08-23T14:03:22.000Z | [
"region:us"
] | aboonaji | null | null | 2 | 7 | 2023-08-23T09:44:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
RuterNorway/OpenOrcaNo-15k | 2023-10-11T06:06:31.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | RuterNorway | null | null | 3 | 7 | 2023-08-23T12:25:07 | ---
language:
- no
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrcaNO
size_categories:
- 10k<n<20k
---
<p><h1>🐋 The OpenOrca Dataset Norwegian! 🐋</h1></p>
This is a subset of 15000 rows of the OpenOrca dataset, translated into Norwegian.
Translation is done with Amazon Translate, and is provided by [Ruter](https://ruter.no) as an artifact from Ruter AI Lab.
## Dataset structure
The dataset is structured in the following way:
```json
{
"instruction": "Norwegian instruction",
"input": "Norwegian input",
"output": "Norwegian output",
"instruction_en": "English instruction",
"input_en": "English input",
"output_en": "English output",
}
```
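Because each row carries both Norwegian and English fields, one record can be split into two monolingual examples. A minimal sketch of that split, assuming only the field names shown in the structure above (the example row here is invented for illustration):

```python
def split_bilingual(row: dict) -> tuple[dict, dict]:
    # Norwegian fields use the base names; English fields carry the "_en"
    # suffix, following the structure shown above.
    keys = ("instruction", "input", "output")
    norwegian = {k: row[k] for k in keys}
    english = {k: row[f"{k}_en"] for k in keys}
    return norwegian, english

row = {
    "instruction": "Oppsummer teksten", "input": "", "output": "",
    "instruction_en": "Summarize the text", "input_en": "", "output_en": "",
}
no, en = split_bilingual(row)
print(no["instruction"], "/", en["instruction"])
```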
## Dataset creation
Please refer to the original [OpenOrca model card](https://huggingface.co/datasets/Open-Orca/OpenOrca) for more information on how the dataset was created.
## License
The dataset is licensed under the MIT license.
<br><br>
<p><h1>🐋 OpenOrca Datasett på Norsk! 🐋</h1></p>
Dette er et utvalg på 15000 rader fra OpenOrca datasettet, oversatt til norsk.
Oversettelsen er gjort med Amazon Translate, og er levert av [Ruter](https://ruter.no) som et produkt fra Ruter AI Lab.
## Datasettstruktur
Datasettet er strukturert på følgende måte:
```json
{
"instruction": "Instruksjon på norsk",
"input": "Inndata på norsk",
"output": "Utdata på norsk",
"instruction_en": "Instruksjon på engelsk",
"input_en": "Engelsk inndata",
"output_en": "Engelsk utdata",
}
```
## Opprettelse av datasett
Vennligst se den originale [OpenOrca modelkortet](https://huggingface.co/datasets/Open-Orca/OpenOrca) for mer informasjon om hvordan datasettet ble opprettet.
## Lisens
Datasettet er lisensiert under MIT-lisensen. | 1,937 | [
[
-0.0233154296875,
-0.0411376953125,
-0.003604888916015625,
0.004825592041015625,
-0.0215301513671875,
-0.03515625,
-0.01055145263671875,
-0.035186767578125,
0.033050537109375,
0.04718017578125,
-0.03076171875,
-0.073486328125,
-0.0248565673828125,
0.00432586... |
AmelieSchreiber/aging_proteins | 2023-08-24T05:53:07.000Z | [
"task_categories:text-classification",
"language:en",
"license:mit",
"esm",
"esm2",
"ESM-2",
"aging proteins",
"protein laguage model",
"biology",
"region:us"
] | AmelieSchreiber | null | null | 0 | 7 | 2023-08-24T05:07:13 | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- esm
- esm2
- ESM-2
- aging proteins
- protein language model
- biology
---
# Description of the Dataset
This is (part of) the dataset used in
[Prediction and characterization of human ageing-related proteins by using machine learning](https://www.nature.com/articles/s41598-018-22240-w).
This can be used to train a binary sequence classifier using protein language models such as [ESM-2](https://huggingface.co/facebook/esm2_t6_8M_UR50D).
Please also see [the github for the paper](https://github.com/kerepesi/aging_ml/blob/master/aging_labels.csv) for more information.
| 655 | [
[
-0.004772186279296875,
-0.03936767578125,
-0.005931854248046875,
-0.00861358642578125,
-0.01983642578125,
0.002124786376953125,
0.0293426513671875,
-0.03436279296875,
0.00012636184692382812,
0.049652099609375,
-0.06494140625,
-0.043914794921875,
-0.0270080566406... |
marianna13/mattermodeling-stackexchange | 2023-08-24T18:46:13.000Z | [
"region:us"
] | marianna13 | null | null | 0 | 7 | 2023-08-24T18:44:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mwinn99/biovdb_1000 | 2023-08-28T22:09:14.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"biology",
"region:us"
] | mwinn99 | null | null | 0 | 7 | 2023-08-24T21:06:02 | ---
license: cc-by-4.0
task_categories:
- tabular-classification
pretty_name: Biovdb
size_categories:
- n<1k
- 1K<n<10K
viewer: false
tags:
- biology
---
# Biovdb
Test set of ~1000 samples from GEO.
| 203 | [
[
-0.047576904296875,
-0.0361328125,
0.020111083984375,
0.01544189453125,
0.003574371337890625,
0.01399993896484375,
0.041961669921875,
0.004207611083984375,
0.05218505859375,
0.05694580078125,
-0.04840087890625,
-0.047882080078125,
-0.01947021484375,
0.020980... |
mandeepbagga/phone-laptop-description | 2023-09-01T06:58:15.000Z | [
"region:us"
] | mandeepbagga | null | null | 0 | 7 | 2023-09-01T05:47:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
marianna13/physics-stackexchange | 2023-09-19T10:34:24.000Z | [
"region:us"
] | marianna13 | null | null | 0 | 7 | 2023-09-02T11:16:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AdithyaSK/Avalon_instruction_30k | 2023-09-02T13:24:22.000Z | [
"region:us"
] | AdithyaSK | null | null | 0 | 7 | 2023-09-02T13:24:02 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 18435074
num_examples: 29655
download_size: 9047078
dataset_size: 18435074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Avalon_instruction_30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 453 | [
[
-0.034637451171875,
-0.005756378173828125,
0.007488250732421875,
0.0439453125,
0.00101470947265625,
-0.01436614990234375,
0.0223388671875,
-0.0021114349365234375,
0.0433349609375,
0.058258056640625,
-0.066162109375,
-0.06890869140625,
-0.0287017822265625,
-0... |
if001/aozorabunko-clean-sin | 2023-09-04T05:02:32.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | if001 | null | null | 0 | 7 | 2023-09-04T04:22:55 | ---
language:
- ja
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- text-classification
dataset_info:
features:
- name: text
dtype: string
- name: footnote
dtype: string
- name: meta
struct:
- name: 作品ID
dtype: string
- name: 作品名
dtype: string
- name: 作品名読み
dtype: string
- name: ソート用読み
dtype: string
- name: 副題
dtype: string
- name: 副題読み
dtype: string
- name: 原題
dtype: string
- name: 初出
dtype: string
- name: 分類番号
dtype: string
- name: 文字遣い種別
dtype: string
- name: 作品著作権フラグ
dtype: string
- name: 公開日
dtype: timestamp[s]
- name: 最終更新日
dtype: timestamp[s]
- name: 図書カードURL
dtype: string
- name: 人物ID
dtype: string
- name: 姓
dtype: string
- name: 名
dtype: string
- name: 姓読み
dtype: string
- name: 名読み
dtype: string
- name: 姓読みソート用
dtype: string
- name: 名読みソート用
dtype: string
- name: 姓ローマ字
dtype: string
- name: 名ローマ字
dtype: string
- name: 役割フラグ
dtype: string
- name: 生年月日
dtype: string
- name: 没年月日
dtype: string
- name: 人物著作権フラグ
dtype: string
- name: 底本名1
dtype: string
- name: 底本出版社名1
dtype: string
- name: 底本初版発行年1
dtype: string
- name: 入力に使用した版1
dtype: string
- name: 校正に使用した版1
dtype: string
- name: 底本の親本名1
dtype: string
- name: 底本の親本出版社名1
dtype: string
- name: 底本の親本初版発行年1
dtype: string
- name: 底本名2
dtype: string
- name: 底本出版社名2
dtype: string
- name: 底本初版発行年2
dtype: string
- name: 入力に使用した版2
dtype: string
- name: 校正に使用した版2
dtype: string
- name: 底本の親本名2
dtype: string
- name: 底本の親本出版社名2
dtype: string
- name: 底本の親本初版発行年2
dtype: string
- name: 入力者
dtype: string
- name: 校正者
dtype: string
- name: テキストファイルURL
dtype: string
- name: テキストファイル最終更新日
dtype: timestamp[s]
- name: テキストファイル符号化方式
dtype: string
- name: テキストファイル文字集合
dtype: string
- name: テキストファイル修正回数
dtype: string
- name: XHTML/HTMLファイルURL
dtype: string
- name: XHTML/HTMLファイル最終更新日
dtype: timestamp[s]
- name: XHTML/HTMLファイル符号化方式
dtype: string
- name: XHTML/HTMLファイル文字集合
dtype: string
- name: XHTML/HTMLファイル修正回数
dtype: string
---
This is a fork of
https://huggingface.co/datasets/globis-university/aozorabunko-clean,
filtered by
row["meta"]["文字遣い種別"] == "新字新仮名" | 2,626 | [
[
-0.0208587646484375,
-0.05816650390625,
0.00688934326171875,
-0.019927978515625,
-0.0484619140625,
0.002399444580078125,
0.0199127197265625,
-0.0185089111328125,
0.0802001953125,
0.05517578125,
-0.052764892578125,
-0.0540771484375,
-0.040283203125,
-0.001035... |
AlexWortega/habr_qa_sbs | 2023-09-04T09:49:31.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:apache-2.0",
"code",
"finance",
"region:us"
] | AlexWortega | null | null | 3 | 7 | 2023-09-04T09:38:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: best
dtype: string
- name: bad
dtype: string
splits:
- name: train
num_bytes: 119263751
num_examples: 102558
download_size: 66726288
dataset_size: 119263751
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- ru
tags:
- code
- finance
pretty_name: habr_qa_sbs
size_categories:
- 10K<n<100K
---
# Habr sbs qa
The dataset is based on the habr qa site; the best answer is the one with likes, and the worst is the one with the fewest likes.
The dataset was collected by [Love.Death.Transformers.](https://t.me/lovedeathtransformers) and [Дата-Утренник](https://t.me/data_morning)
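A minimal sketch of how such side-by-side (best, bad) pairs could be built from answers ranked by like count; the field names here are illustrative assumptions, not the actual habr qa schema:

```python
# Hypothetical sketch: build a side-by-side (best, bad) pair per question
# from answers ranked by like count. Field names ("text", "likes") are
# illustrative assumptions, not the actual habr qa schema.
def make_sbs_pair(question, answers):
    """answers: list of {"text": str, "likes": int}"""
    ranked = sorted(answers, key=lambda a: a["likes"])
    return {
        "question": question,
        "best": ranked[-1]["text"],  # answer with the most likes
        "bad": ranked[0]["text"],    # answer with the fewest likes
    }

pair = make_sbs_pair(
    "How do I parse JSON in Python?",
    [{"text": "use json.loads", "likes": 12}, {"text": "use a regex", "likes": 0}],
)
```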
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 913 | [
[
-0.02392578125,
-0.043792724609375,
0.004230499267578125,
0.0428466796875,
-0.0396728515625,
0.0199737548828125,
0.035858154296875,
-0.0135650634765625,
0.0682373046875,
0.0273284912109375,
-0.056396484375,
-0.040435791015625,
-0.0266571044921875,
-0.0108489... |
minwook/imgKoNovel | 2023-09-04T19:15:46.000Z | [
"region:us"
] | minwook | null | null | 0 | 7 | 2023-09-04T13:59:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nampdn-ai/mini-CoT-Collection | 2023-09-05T00:21:39.000Z | [
"region:us"
] | nampdn-ai | null | null | 6 | 7 | 2023-09-05T00:13:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Tasfiul/Agricultural-dataset | 2023-09-06T19:45:34.000Z | [
"region:us"
] | Tasfiul | null | null | 2 | 7 | 2023-09-06T19:44:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rombodawg/LimitlessCodeTraining | 2023-10-19T16:28:59.000Z | [
"license:mit",
"region:us"
] | rombodawg | null | null | 12 | 7 | 2023-09-07T04:10:53 | ---
license: mit
---
_________________
----- BREAK THROUGH YOUR LIMITS -----
_________________

LimitlessCodeTraining is the direct sequel to Megacodetraining, which is now called Legacy_MegaCodeTraining200k.
This dataset is just over 646k lines of pure refined coding data.
It is the pinnacle of open source code training. It is the combination of the Megacode training dataset filtered by shahules786 (shoutout to him) and the bigcode commitpackft dataset I converted to alpaca format.
The datasets that were used to create this dataset are linked below:
- https://huggingface.co/datasets/rombodawg/Rombodawgs_commitpackft_Evolinstruct_Converted
- https://huggingface.co/datasets/shahules786/megacode-best | 851 | [
[
-0.0621337890625,
-0.0207366943359375,
0.005527496337890625,
0.0155181884765625,
-0.03729248046875,
-0.0179595947265625,
-0.006809234619140625,
-0.030792236328125,
0.022430419921875,
0.0701904296875,
-0.07080078125,
-0.015533447265625,
-0.050384521484375,
0.... |
MilanHrab/Kosice_nlp_speech2class | 2023-09-07T17:54:23.000Z | [
"region:us"
] | MilanHrab | null | null | 0 | 7 | 2023-09-07T12:25:18 | ---
dataset_info:
features:
- name: name_of_record
dtype: string
- name: speech_array
sequence: float64
- name: sampling_rate
dtype: int64
- name: label
dtype: string
splits:
- name: train
num_bytes: 1473550702
num_examples: 5600
download_size: 1117840025
dataset_size: 1473550702
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Kosice_nlp_speech2class"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 590 | [
[
-0.029998779296875,
-0.0214080810546875,
0.0122833251953125,
0.0060882568359375,
-0.017364501953125,
-0.0038890838623046875,
-0.0252227783203125,
-0.0241241455078125,
0.042510986328125,
0.044464111328125,
-0.0462646484375,
-0.053009033203125,
-0.045745849609375,... |
ASSERT-KTH/megadiff-single-function | 2023-09-12T10:08:06.000Z | [
"size_categories:10K<n<100K",
"language:code",
"arxiv:2108.04631",
"region:us"
] | ASSERT-KTH | null | null | 0 | 7 | 2023-09-12T10:05:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: diff
dtype: string
- name: is_single_chunk
dtype: bool
- name: is_single_function
dtype: bool
- name: buggy_function
dtype: string
- name: fixed_function
dtype: string
splits:
- name: train
num_bytes: 1624059115.752317
num_examples: 72393
download_size: 546172221
dataset_size: 1624059115.752317
language:
- code
pretty_name: megadiff
size_categories:
- 10K<n<100K
---
# Megadiff, a dataset of source code changes
Contains only single-function diffs.
If you use Megadiff, please cite the following technical report:
"[Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size](http://arxiv.org/pdf/2108.04631)". Technical Report 2108.04631, Arxiv; 2021.
```
@techreport{megadiff,
TITLE = {{Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size}},
AUTHOR = {Martin Monperrus and Matias Martinez and He Ye and Fernanda Madeiral and Thomas Durieux and Zhongxing Yu},
URL = {http://arxiv.org/pdf/2108.04631},
INSTITUTION = {Arxiv},
NUMBER = {2108.04631},
YEAR = {2021},
}
``` | 1,202 | [
[
-0.03851318359375,
-0.0249481201171875,
0.024993896484375,
0.0306854248046875,
-0.037384033203125,
-0.0290069580078125,
0.01052093505859375,
-0.0050811767578125,
0.02044677734375,
0.04437255859375,
-0.0230865478515625,
-0.032012939453125,
-0.0467529296875,
0... |
huangyt/FINETUNE4 | 2023-09-16T06:02:11.000Z | [
"license:openrail",
"region:us"
] | huangyt | null | null | 0 | 7 | 2023-09-15T16:22:29 | ---
license: openrail
---

# 📔 **DATASET**
| **Dataset** | Class | Number of Questions |
| ------- | ----------------------------------------------------------------- | ------------------------ |
| **FLAN_CoT(zs)** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense | 8000 |
| **Prm800k** | Reasoning 、 MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense 、 MATH 、 ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA | 5957 |
| **ARB** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense 、 Text_Understanding 、 Reasoning | 802 |
| **SAT** | Text_Understanding 、 Reasoning 、 MATH | 426 |
| **GRE、GMAT** | Reasoning 、 MATH | 254 |
| **AMC、AIME** | Reasoning 、 MATH | 1000 |
| **LSAT** | Reasoning 、 LAW | 1009 |
| **Gaokao-biology** | Comprehensive | 210 |
| **Gaokao-chemistry** | Comprehensive | 207 |
| **Gaokao-chinese** | Comprehensive | 246 |
| **Gaokao-english** | Comprehensive | 306 |
| **Gaokao-geography** | Comprehensive | 199 |
| **Gaokao-mathcloze** | Comprehensive | 118 |
| **Gaokao-mathqa** | Comprehensive | 351 |
| **Gaokao-physics** | Comprehensive | 200 |
| **LogiQA** | Reasoning | 651 |
| **LeetCode** | Reasoning 、 Code | 2359 |
# 📌 **Method**
## *Improving the dataset*
Based on the "Textbooks Are All You Need" paper, we want to try fine-tuning using advanced questions.
## *Dataset Format Definition*
Use "instruction、input、output" tend to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. This format of dataset is often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
```
{
"input": "",
"output": "",
"instruction": ""
}
```
- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)
We only extract the 'zs_opt' from COT and categorize each task.
- ### SAT、GRE、GMAT、AMC、AIME、LSAT
We configure the input for datasets such as GRE, GMAT, SAT etc. as "Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation." Meanwhile, the math input is set as "Please provide the answer along with a corresponding explanation based on the given question." Moreover, the questions are arranged in ascending order of difficulty. This is done because, according to the ORCA paper, they started training the model using GPT-3.5 and later transitioned to GPT-4: to prevent the student model from acquiring knowledge beyond its scope and thereby delivering suboptimal results, a progressive learning strategy was utilized. This approach was found to be effective; therefore, for datasets like AMC and AIME, which have various difficulty levels, I have arranged the questions to embody this gradual, progressive learning technique.
Furthermore, their question and options are combined to form the instruction, and the label and solution are merged to become the output.
Lastly, for the LSAT dataset, since it doesn't involve step-by-step processes, the passage is transformed into instruction, while the combination of the question and options serves as the input, and the label represents the output.
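As a rough sketch, the conversion described above for GRE/GMAT/SAT-style items (question and options merged into the instruction, label and solution merged into the output) might look like the following; the input field names ("question", "options", "label", "solution") are assumptions for illustration, not the card's actual schema:

```python
# Hypothetical sketch of the conversion described above: question + options
# become the instruction, label + solution become the output. The source field
# names ("question", "options", "label", "solution") are illustrative assumptions.
def convert_item(item):
    instruction = item["question"] + "\nOptions: " + " ".join(item["options"])
    return {
        "input": ("Please read the question and options carefully, then select "
                  "the most appropriate answer and provide the corresponding "
                  "explanation."),
        "output": f"{item['label']},solution:{item['solution']}",
        "instruction": instruction,
    }

converted = convert_item({
    "question": "If 2x + 3 = 11, what is x?",
    "options": ["(A) 3", "(B) 4", "(C) 5"],
    "label": "(B)",
    "solution": "2x = 8, so x = 4.",
})
```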
- ### Gaokao
Most of the inputs are configured by us:
"Please read and understand the requirements and content of the question carefully, and then choose the option that best fits the description of the question or best answers the question from the options provided."
Only gaokao-mathcloze uses a different input:
"Please read and comprehend the requirements and content of the question carefully. Gradually ponder upon it and present the most appropriate answer based on your judgment."
- ### LeetCode
Input configuration:
"Analyze the problem description and constraints, then develop a step-by-step Python function to generate the expected output based on the given inputs. Include brief explanations at each step to illustrate your solution process."
- ### LogiQA
Only a general format conversion is performed.
- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
## *Sampling Algorithms*
Since the flan_v2 cot dataset includes tasks like:
- cot_esnli
- cot_strategyqa
- cot_qasc
- stream_qed
- cot_gsm8k
- cot_ecqa
- cot_creak
- stream_aqua
To ensure this dataset contains diverse high-quality data, we first select zs_opt questions. Then, we keep only questions whose output length meets or exceeds the average length. This step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.
```py
import json
import random
with open("cot_ORIGINAL.json", "r") as f:
abc = json.load(f)
# --- part1 ---
zsopt_data = [] # "zs_opt"
for i in abc :
if i["template_type"] == "zs_opt":
zsopt_data.append(i)
# --- part2 ---
output_lengths = [len(i["targets"]) for i in zsopt_data]
average_length = sum(output_lengths) / len(output_lengths) # average length
filtered_data = []
for a in zsopt_data:
if len(a["targets"]) >= average_length:
filtered_data.append(a) # output length need to >= average_length
class_counts = {} # Count the number of samples for each class
for a in filtered_data:
task_name = a["task_name"]
if task_name in class_counts:
class_counts[task_name] += 1
else:
class_counts[task_name] = 1
# --- part3 ---
total_samples = 8000 # we plan to select a total of 8000 samples
sample_ratios = {}
for task_name, count in class_counts.items():
sample_ratios[task_name] = count / len(filtered_data)
sample_sizes = {}
for task_name, sample_ratio in sample_ratios.items():
sample_sizes[task_name] = round(sample_ratio * total_samples)
stratified_samples = {} # Perform stratified sampling for each class
for task_name, sample_size in sample_sizes.items():
class_samples = []
for data in filtered_data:
if data["task_name"] == task_name:
class_samples.append(data)
selected_samples = random.sample(class_samples, sample_size)
stratified_samples[task_name] = selected_samples
final_samples = [] # Convert to the specified format
for task_name, samples in stratified_samples.items():
for sample in samples:
final_samples.append(
{
"input": "", # use ""
"output": sample["targets"], # output
"instruction": sample["inputs"], # question
}
)
with open("cot_change.json", "w") as f:
json.dump(final_samples, f, indent=2)
```
MATH problems arranged according to LEVEL
```py
import json
with open("math-json.json", "r", encoding="utf-8") as f:
data_list = json.load(f)
sorted_data = sorted(data_list, key=lambda x: x["other"]["level"])
output_data = [
{
"input": "Please provide the answer along with a corresponding explanation based on the given question.",
"output": f"{item['answer']},solution:{item['other']['solution']}",
"instruction": item["question"],
}
for item in sorted_data
]
with open("math_convert.json", "w", encoding="utf-8") as output_file:
json.dump(output_data, output_file, ensure_ascii=False, indent=4)
``` | 10,211 | [
[
-0.04034423828125,
-0.057281494140625,
0.028106689453125,
0.0011243820190429688,
-0.0291290283203125,
-0.0133209228515625,
-0.0220489501953125,
-0.01117706298828125,
0.0113372802734375,
0.04254150390625,
-0.05535888671875,
-0.04937744140625,
-0.0357666015625,
... |
indiejoseph/ted-transcriptions-cantonese | 2023-09-18T19:49:07.000Z | [
"region:us"
] | indiejoseph | null | null | 1 | 7 | 2023-09-18T19:49:04 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1569597
num_examples: 249
download_size: 1066997
dataset_size: 1569597
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ted-transcriptions-cantonese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.01131439208984375,
-0.034149169921875,
0.0162200927734375,
0.0372314453125,
-0.0177154541015625,
0.005962371826171875,
0.0005354881286621094,
-0.0037097930908203125,
0.06878662109375,
0.041473388671875,
-0.05242919921875,
-0.0606689453125,
-0.0367431640625,
... |
godoyj/temario | 2023-09-19T01:37:27.000Z | [
"region:us"
] | godoyj | null | null | 0 | 7 | 2023-09-19T01:28:46 | language:
- pt
task_categories:
- summarization
not official | 65 | [
[
-0.0114288330078125,
-0.0285186767578125,
0.010009765625,
0.058563232421875,
-0.044921875,
0.036865234375,
-0.01343536376953125,
0.01322174072265625,
0.05126953125,
0.03533935546875,
-0.047515869140625,
-0.01861572265625,
-0.04864501953125,
0.04205322265625,... |
eswardivi/Tam_MSA | 2023-09-19T06:33:58.000Z | [
"region:us"
] | eswardivi | null | null | 0 | 7 | 2023-09-19T06:22:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 79205685.0
num_examples: 64
download_size: 78906043
dataset_size: 79205685.0
---
# Dataset Card for "Tam_MSA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 572 | [
[
-0.034820556640625,
-0.0318603515625,
0.021484375,
0.0216522216796875,
-0.0276641845703125,
-0.00458526611328125,
0.04315185546875,
-0.008819580078125,
0.0732421875,
0.039306640625,
-0.060150146484375,
-0.042327880859375,
-0.0411376953125,
-0.007827758789062... |
NexaAIDev/opensource_model_images_new_text | 2023-09-21T23:20:34.000Z | [
"region:us"
] | NexaAIDev | null | null | 0 | 7 | 2023-09-20T19:10:20 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: text_blip
dtype: string
splits:
- name: train
num_bytes: 2293613435.125
num_examples: 33959
download_size: 2241674834
dataset_size: 2293613435.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "opensource_model_images_new_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.0287628173828125,
-0.0236968994140625,
0.0220184326171875,
0.01497650146484375,
-0.0266571044921875,
-0.014434814453125,
0.006622314453125,
-0.0192108154296875,
0.041595458984375,
0.052886962890625,
-0.044830322265625,
-0.0672607421875,
-0.048095703125,
-... |
Lei-USYD/datasets_medical | 2023-09-21T09:41:17.000Z | [
"region:us"
] | Lei-USYD | null | null | 0 | 7 | 2023-09-21T08:59:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |