id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
gfbati/AjwaOrMedjool | 2023-10-09T07:47:47.000Z | [
"task_categories:image-classification",
"task_categories:tabular-classification",
"language:ar",
"language:en",
"license:cc-by-4.0",
"doi:10.57967/hf/1116",
"region:us"
] | gfbati | null | null | 1 | 5 | 2023-09-15T10:08:08 | ---
license: cc-by-4.0
task_categories:
- image-classification
- tabular-classification
language:
- ar
- en
---
The dataset contains three subsets:
1) a dataset containing hand-crafted features to classify two types of organic dates (Ajwa or Medjool);
2) a dataset containing tabular data with features created automatically using deep learning to classify the two organic date types (Ajwa or Medjool);
3) a dataset for images of Ajwa and Medjool.
This study is considered the first Arabic-language work to use shallow machine learning and deep learning to build accurate models for classifying organic Saudi dates. It enables scholars, researchers, and developers to create machine learning applications for classifying Saudi dates in various forms, such as websites, mobile apps, microcontrollers, tiny machine learning, and Internet of Things applications.
Please cite the following paper: Bati GF. Ajwa or Medjool: a binary balanced dataset to teach machine
learning. Journal of Information Studies & Technology 2023:2.12.
https://doi.org/10.5339/jist.2023.12
Ajwa or Medjool is a class-balanced binary dataset for classifying organic Saudi dates, consisting of three subsets:
the first contains tabular data with hand-crafted features for classifying the organic dates (Ajwa or Medjool);
the second collects tabular data with features generated automatically using deep learning for classifying the organic dates (Ajwa or Medjool);
and the third collects images of Ajwa and Medjool dates.
It is also the first Arabic-language research to use classical machine learning and deep learning to build high-performing models for classifying organic Saudi dates without coding, enabling students, researchers, and developers to build machine learning applications for classifying Saudi dates in various forms, such as websites, mobile apps, microcontrollers, Internet of Things applications, and tiny machine learning.
Please cite the following paper when using the dataset: Bati GF. Ajwa or Medjool: a binary balanced dataset to teach machine
learning. Journal of Information Studies & Technology 2023:2.12.
https://doi.org/10.5339/jist.2023.12
Arabic-language videos explaining the dataset:
https://youtu.be/bPYHOYo4_Tw?feature=shared&t=1418
https://youtu.be/ADOuweANc5I?feature=shared&t=5775
https://youtu.be/PThKbc1kTSM?feature=shared&t=3253 | 2,218 | [
[
-0.044830322265625,
-0.052398681640625,
0.014495849609375,
0.00677490234375,
-0.0304107666015625,
0.00417327880859375,
0.01145172119140625,
-0.039306640625,
0.003185272216796875,
0.04437255859375,
-0.041259765625,
-0.055572509765625,
-0.049285888671875,
0.01... |
adityarra07/master_test | 2023-09-16T17:03:26.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 5 | 2023-09-16T16:39:41 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 337025121.8032651
num_examples: 2000
download_size: 330351099
dataset_size: 337025121.8032651
---
# Dataset Card for "master_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 484 | [
[
-0.05108642578125,
-0.03118896484375,
-0.0002887248992919922,
0.0175018310546875,
-0.004871368408203125,
0.00020587444305419922,
0.018341064453125,
0.007080078125,
0.053802490234375,
0.024261474609375,
-0.06451416015625,
-0.044403076171875,
-0.03515625,
-0.0... |
duckaiml/Polylingual_Id | 2023-10-12T16:58:37.000Z | [
"license:other",
"region:us"
] | duckaiml | \ | 0 | 5 | 2023-09-16T17:33:13 | ---
license: other
---
# Polylingual Indonesia Dataset/Model Card
## Description
Polylingual Indonesia is a diverse dataset composed of a collection of publicly available data and some self-crawled data in the Indonesian language. The data originates from various sources including news articles, books, forums, and stories. This dataset is processed and filtered through specific techniques detailed in the sections below.
## Prerequisites
To run and utilize this dataset, make sure you have the `zstandard` package installed in your environment.
## Format
The dataset is distributed as JSON Lines (JSONL) files compressed with zstandard.
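Because the shards are zstandard-compressed JSON Lines, a minimal reader can decouple decompression from line-by-line parsing. The sketch below assumes the `zstandard` package mentioned under Prerequisites; the filename `shard.jsonl.zst` and the `text` field are illustrative only, not taken from this dataset.

```python
import io
import json


def read_jsonl(stream):
    """Yield one record per non-empty line of a JSON Lines text stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)


# For a real .jsonl.zst shard you would wrap the raw file with zstandard, e.g.:
#   import zstandard
#   with open("shard.jsonl.zst", "rb") as fh:  # hypothetical filename
#       reader = zstandard.ZstdDecompressor().stream_reader(fh)
#       stream = io.TextIOWrapper(reader, encoding="utf-8")
#       records = list(read_jsonl(stream))

# Self-contained demo on an uncompressed in-memory stream:
sample = io.StringIO('{"text": "halo"}\n{"text": "dunia"}\n')
records = list(read_jsonl(sample))
print(len(records))  # 2
```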
## Dataset Details
### Filtering
The dataset undergoes a filtration process using a specific filter from BigScience. Detailed information about the filter used can be found here: [BigScience Filter](https://drive.google.com/file/d/1cCJ8sWE88TRLDAa3eHLmXO4JlkR2QzLY/view?usp=sharing).
### Data Sources
The data originates from various sources, and each folder in the dataset represents the source from where the original data came. Here are the details of each folder/source:
#### 1. HPLT_filtered
- Link: [HPLT Project](https://hplt-project.org/datasets/v1)
- Source: Internet Archive snapshots WIDE15 and WIDE17, and CC-MAIN-2022-40
#### 2. Mc4-und-id
- Data is filtered from the undefined-language segment of MC4 (c4-und) using FastText and the BigScience filters.
- Sample: [MC4 Sample](https://huggingface.co/datasets/allenai/c4/blob/mC4310/multilingual/c4-und.00000-00001-00002-00003-00004-00005-00006-00007.json.gz)
#### 3. Indonesia-Crawl
This folder contains a collection of Common Crawl data and self-crawled data specific to the Indonesian language, accumulated from various snapshots. The data is divided into several sections:
- **Mc4 original(dedup)**: Ranging from snapshot CC-2013-20 to CC-2020-34.
- **Kopi-CC(dedup)**: This covers data from snapshot CC-2020-34 to CC-2023-06. More details can be found [here](https://huggingface.co/datasets/acul3/KoPI-CC) (Note: Last snapshot to be uploaded).
- **KoPI-CC_News**: This includes the Common Crawl News Dataset ranging from the year 2016 to 2022. Detailed information can be accessed [here](https://huggingface.co/datasets/acul3/KoPI-CC) (Note: Last snapshot to be uploaded).
- **Self-Crawled Data**: data crawled from various platforms, including news sites, story sites, forums, and others.
## Usage
Install the `zstandard` package first, then load the configs you need:
```python
from datasets import load_dataset
hplt = load_dataset('duckaiml/Polylingual_Id','hplt') #hplt only
mc4_und = load_dataset('duckaiml/Polylingual_Id','mc4_und') #mc4_und only
indonesia_crawl = load_dataset('duckaiml/Polylingual_Id','indonesia_crawl') #indonesia_crawl only
load_dataset('duckaiml/Polylingual_Id','full') #load all
```
## Limitation/Issue
- Although some of the source data was already deduplicated separately, duplicates may still be found because the dataset has not been deduplicated as a whole.
## Contributing
Feel free to contribute to the dataset by adding more diverse sources or helping in enhancing the filtration process.
## License
The data is collected from public sources, and it's recommended to refer to the original data sources for information on licensing and usage restrictions.
## Contact
For any queries or contributions to the dataset, please feel free to reach out (contact information to be added).
| 3,363 | [
[
-0.042205810546875,
-0.03533935546875,
0.016204833984375,
0.0197601318359375,
-0.029998779296875,
-0.0059051513671875,
-0.015960693359375,
-0.031768798828125,
0.0389404296875,
0.05352783203125,
-0.037628173828125,
-0.04736328125,
-0.0301055908203125,
0.04730... | |
BEE-spoke-data/SYSK-Transcripts | 2023-10-10T20:27:51.000Z | [
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"region:us"
] | BEE-spoke-data | null | null | 0 | 5 | 2023-09-17T21:38:31 | ---
license: cc-by-4.0
size_categories:
- 1K<n<10K
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: title
dtype: string
- name: link
dtype: string
- name: desc
dtype: string
- name: summary
dtype: string
- name: pubDate
dtype: string
- name: pubFormatted
dtype: string
- name: enc_len
dtype: string
- name: enc_type
dtype: string
- name: audio_url
dtype: string
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 72117648
num_examples: 1965
download_size: 40830798
dataset_size: 72117648
- config_name: segmented
features:
- name: id
dtype: string
- name: title
dtype: string
- name: link
dtype: string
- name: desc
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 69521425.32519084
num_examples: 1911
download_size: 40832338
dataset_size: 69521425.32519084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: segmented
data_files:
- split: train
path: segmented/train-*
---
# Dataset Card for "SYSK_Transcripts"
Transcriptions + Summaries of _Stuff You Should Know_. DOI 10.17605/OSF.IO/VM9NT
```python
Dataset({
features: ['id', 'title', 'link', 'desc', 'summary', 'pubDate', 'pubFormatted', 'enc_len', 'enc_type', 'audio_url', 'transcript'],
num_rows: 1965
})
```
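The `segmented` config stores shorter `text` records derived from the transcripts. The method used to build it is not documented here, but word-bounded chunking is a common preprocessing step for long podcast transcripts; the sketch below is only an illustration of that idea, with an assumed `max_words` parameter.

```python
def segment(text, max_words=300):
    """Split text into word-bounded chunks of at most max_words words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


# 650 words -> two full chunks plus a 50-word remainder.
chunks = segment("word " * 650)
print([len(c.split()) for c in chunks])  # [300, 300, 50]
```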
## citation
```
@article{https://doi.org/10.17605/osf.io/vm9nt,
doi = {10.17605/OSF.IO/VM9NT},
url = {https://osf.io/vm9nt/},
author = {Pierson, Britt},
keywords = {audio processing, audio timestamp, corpus, dataset, natural language processing, nlp, podcast, podcasts trancripts, podcasts transcript, podcast transcript, podcast transcripts, sentiment analysis, transcript, transcripts, transcripts of podcast, transcripts of podcasts},
title = {"Stuff You Should Know" Podcast Transcripts - Full Dataset with Transcript of All Episodes (SYSK_Transcripts)},
publisher = {Open Science Framework},
year = {2022},
copyright = {CC-By Attribution 4.0 International}
}
``` | 2,176 | [
[
-0.01392364501953125,
-0.0261077880859375,
0.012939453125,
0.0256805419921875,
-0.0279388427734375,
-0.0081024169921875,
-0.0257415771484375,
0.001735687255859375,
0.041259765625,
0.019622802734375,
-0.048797607421875,
-0.0745849609375,
-0.03179931640625,
0.... |
hungeni/amrutaDB | 2023-09-18T11:11:08.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"language:vi",
"language:hi",
"license:other",
"region:us"
] | hungeni | null | null | 0 | 5 | 2023-09-18T11:02:51 | ---
license: other
task_categories:
- text-generation
language:
- en
- vi
- hi
size_categories:
- 1K<n<10K
---
This dataset was cloned from amruta.org for training LLMs.
Contact: hungbui@sahajayoga.edu.vn
By the grace of Our H.H. Shri Mataji Nirmala Devi | 250 | [
[
0.009796142578125,
-0.0289306640625,
-0.01800537109375,
0.01416778564453125,
-0.01116943359375,
0.0198516845703125,
0.0156402587890625,
-0.006435394287109375,
0.06451416015625,
0.0458984375,
-0.0633544921875,
-0.006221771240234375,
-0.02093505859375,
-0.0023... |
spyropoulos/products | 2023-09-18T11:54:21.000Z | [
"region:us"
] | spyropoulos | null | null | 0 | 5 | 2023-09-18T11:53:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fia24/banel_including_pos_training_dataset_90 | 2023-09-18T12:36:47.000Z | [
"region:us"
] | fia24 | null | null | 0 | 5 | 2023-09-18T12:36:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: translation
struct:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 1386207
num_examples: 18105
- name: test
num_bytes: 155599
num_examples: 2012
download_size: 621202
dataset_size: 1541806
---
# Dataset Card for "banel_including_pos_training_dataset_90"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 667 | [
[
-0.03326416015625,
-0.0011091232299804688,
0.002742767333984375,
0.020263671875,
-0.007694244384765625,
-0.01335906982421875,
0.0200653076171875,
0.004852294921875,
0.035308837890625,
0.03228759765625,
-0.054351806640625,
-0.057342529296875,
-0.04095458984375,
... |
mertkarabacak/NCDB-Meningioma | 2023-09-18T19:25:32.000Z | [
"region:us"
] | mertkarabacak | null | null | 0 | 5 | 2023-09-18T19:25:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
changjacHp/lol_champions_abilities | 2023-09-21T06:25:51.000Z | [
"region:us"
] | changjacHp | null | null | 0 | 5 | 2023-09-21T06:25:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dharun2049/autotrain-data-skinnnnnnn | 2023-09-21T22:51:24.000Z | [
"task_categories:image-classification",
"region:us"
] | dharun2049 | null | null | 0 | 5 | 2023-09-21T22:49:57 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: skinnnnnnn
## Dataset Description
This dataset has been automatically processed by AutoTrain for project skinnnnnnn.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<224x224 RGB PIL image>",
"target": 0
},
{
"image": "<224x224 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['benign', 'malignant'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 397 |
| valid | 101 |
| 944 | [
[
-0.0202789306640625,
0.0080413818359375,
-0.00592041015625,
0.011749267578125,
-0.0246429443359375,
0.02325439453125,
-0.004428863525390625,
-0.032257080078125,
0.0026149749755859375,
0.0439453125,
-0.045806884765625,
-0.060150146484375,
-0.0281829833984375,
... |
mychen76/receipt_cord_ocr_v2 | 2023-09-22T22:21:40.000Z | [
"region:us"
] | mychen76 | null | null | 0 | 5 | 2023-09-22T22:20:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: parsed_data
dtype: string
- name: raw_data
dtype: string
splits:
- name: train
num_bytes: 119205560.0
num_examples: 800
- name: test
num_bytes: 15152937.0
num_examples: 100
- name: valid
num_bytes: 15152937.0
num_examples: 100
download_size: 147437931
dataset_size: 149511434.0
---
# Dataset Card for "receipt_cord_ocr_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.0216064453125,
-0.008209228515625,
0.020477294921875,
-0.01349639892578125,
-0.02777099609375,
-0.00705718994140625,
0.032867431640625,
-0.028472900390625,
0.046234130859375,
0.0352783203125,
-0.032958984375,
-0.03692626953125,
-0.0400390625,
-0.016921997... |
Mireu-Lab/CIC-IDS | 2023-10-07T11:30:24.000Z | [
"task_categories:feature-extraction",
"code",
"region:us"
] | Mireu-Lab | null | null | 0 | 5 | 2023-09-23T16:13:47 | ---
task_categories:
- feature-extraction
tags:
- code
---
# CIC-IDS
This dataset collects multiple network traffic traces containing attacks.
The contents of the dataset are as follows.
## Data
The attack types are as follows.
- DDoS
- Web_Attack_–_Brute_Force
- Infiltration
- DoS_GoldenEye
- DoS_Hulk
- Heartbleed
- Bot
- DoS_Slowhttptest
- Web_Attack_–_XSS
- DoS_slowloris
- FTP-Patator
- SSH-Patator
- Web_Attack_–_Sql_Injection
- PortScan
The percentage of attack attempts is as follows.

Detailed Attack Rate Chart
<img src="./image-20230926152655774.png" alt="image-20230926152655774" style="zoom:40%;" />

In addition, the dataset is composed of the following files.
| File Name | the manner of attack | weight of attack (%) |
| ----------------------------------------------------------- | ------------------------------------------------------------ | ------------- |
| Friday-WorkingHours-Afternoon-DDos.pcap_ISCX.csv | DDoS | 56 |
| Tuesday-WorkingHours.pcap_ISCX.csv | FTP-Patator, SSH-Patator | 3 |
| Friday-WorkingHours-Afternoon-PortScan.pcap_ISCX.csv | PortScan | 55 |
| Thursday-WorkingHours-Afternoon-Infilteration.pcap_ISCX.csv | Infiltration | 0.01 |
| Wednesday-workingHours.pcap_ISCX.csv | DoS_Hulk, DoS_Slowhttptest, DoS_GoldenEye, Heartbleed, DoS_slowloris | 36 |
| Friday-WorkingHours-Morning.pcap_ISCX.csv | Bot | 1.02 |
| Thursday-WorkingHours-Morning-WebAttacks.pcap_ISCX.csv      | Web_Attack_–_XSS, Web_Attack_–_Brute_Force, Web_Attack_–_Sql_Injection | 1.27          |
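The per-file attack weights above can be reproduced by counting the `Label` column of each CSV. A self-contained sketch using the stdlib `csv` module (the tiny inline sample is made up for illustration; the real MachineLearningCSV files have dozens of flow-feature columns, and the label column name may carry leading whitespace):

```python
import csv
import io
from collections import Counter


def label_percentages(csv_text, label_field="Label"):
    """Return the percentage of rows carrying each label in a CSV string."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    counts = Counter(row[label_field].strip() for row in rows)
    total = sum(counts.values())
    return {label: round(100 * n / total, 2) for label, n in counts.items()}


# Made-up two-column sample in the shape of the labelled flow CSVs.
sample = "Flow Duration,Label\n120,BENIGN\n80,DDoS\n95,DDoS\n40,BENIGN\n"
print(label_percentages(sample))  # {'BENIGN': 50.0, 'DDoS': 50.0}
```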
- License
The CICIDS2017 dataset consists of labeled network flows, including full packet payloads in pcap format and the corresponding profiles. The labeled flows (GeneratedLabelledFlows.zip) and CSV files for machine and deep learning purposes (MachineLearningCSV.zip) are publicly available for researchers. If you use this dataset, you should cite the related paper, which outlines the details of the dataset and its underlying principles:
Iman Sharafaldin, Arash Habibi Lashkari, and Ali A. Ghorbani, “Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization”, 4th International Conference on Information Systems Security and Privacy (ICISSP), Portugal, January 2018 | 2,892 | [
[
-0.0396728515625,
-0.039764404296875,
0.0129852294921875,
0.0292816162109375,
-0.018280029296875,
0.0180816650390625,
0.01187896728515625,
-0.00988006591796875,
0.037567138671875,
0.043792724609375,
-0.0313720703125,
-0.06695556640625,
-0.046173095703125,
0.... |
sarahpann/gsm8k_small_test | 2023-09-23T20:21:36.000Z | [
"region:us"
] | sarahpann | null | null | 0 | 5 | 2023-09-23T20:18:53 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 109109
num_examples: 200
download_size: 64934
dataset_size: 109109
---
# Dataset Card for "gsm8k_small_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 473 | [
[
-0.05078125,
-0.00920867919921875,
0.01535797119140625,
0.00499725341796875,
-0.022705078125,
-0.0190277099609375,
0.01457977294921875,
0.0017652511596679688,
0.053253173828125,
0.025604248046875,
-0.052764892578125,
-0.0440673828125,
-0.0360107421875,
-0.00... |
ASR-HypR/AISHELL1_withLM | 2023-09-24T15:04:06.000Z | [
"region:us"
] | ASR-HypR | null | null | 0 | 5 | 2023-09-24T13:36:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: ref
dtype: string
- name: hyps
sequence: string
- name: ctc_score
sequence: float64
- name: att_score
sequence: float64
- name: lm_score
sequence: float64
- name: utt_id
dtype: string
- name: score
sequence: float64
splits:
- name: train
num_bytes: 572977340
num_examples: 120098
- name: test
num_bytes: 34410820
num_examples: 7176
- name: dev
num_bytes: 67924134
num_examples: 14326
download_size: 355095107
dataset_size: 675312294
---
# Dataset Card for "AISHELL1_withLM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 882 | [
[
-0.0242767333984375,
-0.010986328125,
0.0128631591796875,
0.01074981689453125,
-0.0205841064453125,
0.00972747802734375,
0.035675048828125,
-0.01515960693359375,
0.055633544921875,
0.043487548828125,
-0.055816650390625,
-0.0679931640625,
-0.050872802734375,
... |
larryvrh/belle_resampled_78K_CN | 2023-09-26T05:01:58.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"region:us"
] | larryvrh | null | null | 1 | 5 | 2023-09-25T02:35:33 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: id
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 197491538
num_examples: 78000
download_size: 108247334
dataset_size: 197491538
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- 10K<n<100K
---
# Dataset Card for "belle_resampled_78K_CN"
A heuristically filtered version of [BelleGroup/train_3.5M_CN](https://huggingface.co/datasets/BelleGroup/train_3.5M_CN), with obvious errors and low-quality content removed from the corpus.
It was then uniformly sampled: the 6,000 longest conversations were extracted from each category.
## Examples
```
{
"id":"66623028",
"conversations":[
{
"from":"human",
"value":"归纳介绍三种不同类型的电影题材"
},
{
"from":"assistant",
"value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。"
}
]
}
{
"id":"18205437",
"conversations":[
{
"from":"human",
"value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱"
},
{
"from":"assistant",
"value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。"
},
{
"from":"human",
"value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。"
},
{
"from":"assistant",
"value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。"
},
{
"from":"human",
"value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?"
},
{
"from":"assistant",
"value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。"
}
]
}
```
### Fields:
```
id: the data id
conversations: the data content, given in conversation form, including both multi-turn and single-turn dialogues
category: the data category
``` | 2,509 | [
[
-0.038330078125,
-0.04083251953125,
0.00186920166015625,
0.0229339599609375,
-0.03143310546875,
-0.0270538330078125,
0.00804901123046875,
-0.0266876220703125,
0.0618896484375,
0.04901123046875,
-0.043426513671875,
-0.046844482421875,
-0.045928955078125,
0.02... |
NusaCrowd/korpus_nusantara | 2023-09-26T12:31:37.000Z | [
"language:ind",
"language:jav",
"language:xdy",
"language:bug",
"language:sun",
"language:mad",
"language:bjn",
"language:bbc",
"language:msa",
"language:min",
"license:unknown",
"machine-translation",
"region:us"
] | NusaCrowd | This parallel corpus was collected from several studies, assignments, and thesis of
students of the Informatics Study Program, Tanjungpura University. Some of the corpus
are used in the translation machine from Indonesian to local languages http://nustor.untan.ac.id/cammane/.
This corpus can be used freely for research purposes by citing the paper
https://ijece.iaescore.com/index.php/IJECE/article/download/20046/13738.
The dataset is a combination of multiple machine translation works from the author,
Herry Sujaini, covering Indonesian to 25 local dialects in Indonesia. Since not all
dialects have ISO 639-3 standard codes, as agreed with Pak Herry, we decided to
group the dataset into the closest language family, i.e.: Javanese, Dayak, Buginese,
Sundanese, Madurese, Banjar, Batak Toba, Khek, Malay, Minangkabau, and Tiociu. | @article{sujaini2020improving,
title={Improving the role of language model in statistical machine translation (Indonesian-Javanese)},
author={Sujaini, Herry},
journal={International Journal of Electrical and Computer Engineering},
volume={10},
number={2},
pages={2102},
year={2020},
publisher={IAES Institute of Advanced Engineering and Science}
} | 0 | 5 | 2023-09-26T11:15:53 | ---
license: unknown
tags:
- machine-translation
language:
- ind
- jav
- xdy
- bug
- sun
- mad
- bjn
- bbc
- msa
- min
---
# korpus_nusantara
This parallel corpus was collected from several studies, assignments, and theses of
students of the Informatics Study Program, Tanjungpura University. Some of the corpus
is used in the machine translation system from Indonesian to local languages at http://nustor.untan.ac.id/cammane/.
This corpus can be used freely for research purposes by citing the paper
https://ijece.iaescore.com/index.php/IJECE/article/download/20046/13738.
The dataset is a combination of multiple machine translation works from the author,
Herry Sujaini, covering Indonesian to 25 local dialects in Indonesia. Since not all
dialects have ISO 639-3 standard codes, as agreed with Pak Herry, we decided to
group the dataset into the closest language family, i.e.: Javanese, Dayak, Buginese,
Sundanese, Madurese, Banjar, Batak Toba, Khek, Malay, Minangkabau, and Tiociu.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{sujaini2020improving,
title={Improving the role of language model in statistical machine translation (Indonesian-Javanese)},
author={Sujaini, Herry},
journal={International Journal of Electrical and Computer Engineering},
volume={10},
number={2},
pages={2102},
year={2020},
publisher={IAES Institute of Advanced Engineering and Science}
}
```
## License
Unknown
## Homepage
[https://github.com/herrysujaini/korpusnusantara](https://github.com/herrysujaini/korpusnusantara)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | 1,767 | [
[
-0.0362548828125,
-0.019012451171875,
0.00933074951171875,
0.024688720703125,
-0.01282501220703125,
-0.01444244384765625,
-0.0382080078125,
-0.01404571533203125,
0.022064208984375,
0.0662841796875,
-0.01055145263671875,
-0.0390625,
-0.03228759765625,
0.06457... |
sc3069/zx | 2023-09-27T09:47:20.000Z | [
"region:us"
] | sc3069 | null | null | 0 | 5 | 2023-09-26T12:27:13 | ---
dataset_info:
features:
- name: input
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 10329536
num_examples: 350
download_size: 1991265
dataset_size: 10329536
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "zx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 465 | [
[
-0.0457763671875,
0.0012865066528320312,
0.033843994140625,
0.01067352294921875,
-0.024566650390625,
-0.00214385986328125,
0.0213470458984375,
-0.011505126953125,
0.0572509765625,
0.03192138671875,
-0.0780029296875,
-0.06768798828125,
-0.0362548828125,
-0.01... |
amitness/mlrs-pos-mt | 2023-09-26T16:27:07.000Z | [
"region:us"
] | amitness | null | null | 0 | 5 | 2023-09-26T16:05:32 | ---
dataset_info:
features:
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADV
'2': COMP
'3': CONJ_CORD
'4': CONJ_SUB
'5': DEF
'6': FOC
'7': FUT
'8': GEN
'9': GEN_DEF
'10': GEN_PRON
'11': HEMM
'12': INT
'13': KIEN
'14': LIL
'15': LIL_DEF
'16': LIL_PRON
'17': NEG
'18': NOUN
'19': NOUN_PROP
'20': NUM_CRD
'21': NUM_FRC
'22': NUM_ORD
'23': NUM_WHD
'24': PART_ACT
'25': PART_PASS
'26': PREP
'27': PREP_DEF
'28': PREP_PRON
'29': PROG
'30': PRON_DEM
'31': PRON_DEM_DEF
'32': PRON_INDEF
'33': PRON_INT
'34': PRON_PERS
'35': PRON_PERS_NEG
'36': PRON_REC
'37': PRON_REF
'38': QUAN
'39': VERB
'40': VERB_PSEU
'41': X_ABV
'42': X_BOR
'43': X_DIG
'44': X_ENG
'45': X_FOR
'46': X_PUN
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 1443609
num_examples: 4935
- name: validation
num_bytes: 234214
num_examples: 616
- name: test
num_bytes: 212745
num_examples: 616
download_size: 0
dataset_size: 1890568
---
# Dataset Card for "mlrs-pos-mt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,624 | [
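Since `pos_tags` is stored as a sequence of integer class ids, decoding ids back to tag names only needs the label inventory above. With 🤗 `datasets`, `ds.features["pos_tags"].feature.int2str(i)` should do this directly; the offline sketch below hard-codes the first twenty ids from the card for illustration.

```python
# First 20 ids of the 47-tag inventory listed in this card's class_label names.
POS_NAMES = ["ADJ", "ADV", "COMP", "CONJ_CORD", "CONJ_SUB", "DEF", "FOC", "FUT",
             "GEN", "GEN_DEF", "GEN_PRON", "HEMM", "INT", "KIEN", "LIL", "LIL_DEF",
             "LIL_PRON", "NEG", "NOUN", "NOUN_PROP"]


def decode_tags(tag_ids):
    """Map integer class ids back to their POS tag names."""
    return [POS_NAMES[i] for i in tag_ids]


print(decode_tags([18, 0, 19]))  # ['NOUN', 'ADJ', 'NOUN_PROP']
```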
[
-0.0406494140625,
-0.0189971923828125,
0.01555633544921875,
0.010040283203125,
-0.0161895751953125,
-0.016571044921875,
0.0226593017578125,
0.005096435546875,
0.053924560546875,
0.0408935546875,
-0.0521240234375,
-0.05108642578125,
-0.055145263671875,
-0.003... |
pixel-coping/pubmed_derived | 2023-10-06T02:26:15.000Z | [
"language:en",
"region:us"
] | pixel-coping | null | null | 0 | 5 | 2023-09-28T11:45:25 | ---
configs:
- config_name: default
data_files:
- split: pubmed
path: data/pubmed-*
- split: nonbiomedical
path: data/nonbiomedical-*
- split: counterfactual
path: data/counterfactual-*
- split: casual
path: data/casual-*
- split: rap
path: data/rap-*
dataset_info:
features:
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
splits:
- name: pubmed
num_bytes: 1166668
num_examples: 1000
- name: nonbiomedical
num_bytes: 1141909
num_examples: 1000
- name: counterfactual
num_bytes: 1179347
num_examples: 991
- name: casual
num_bytes: 1205949
num_examples: 1000
- name: rap
num_bytes: 1252260
num_examples: 1000
download_size: 3357032
dataset_size: 5946133
language:
- en
---
# A corpus of rewritten pubmed abstracts
This corpus contains a 1k example subset from the [pubmed](https://huggingface.co/datasets/pubmed) corpus and various rewritten versions. The rewritten versions change one aspect of the original text and keep other aspects unchanged as much as possible.
- **Paper:** [Dissecting learning and forgetting in language model finetuning](link pending)
Another corpus of rewritten general text is provided here: [c4_derived](https://huggingface.co/datasets/pixel-coping/c4_derived)
### Data Splits
- pubmed: a 1k example subset of the original pubmed corpus
- nonbiomedical: the main topic of the text changed to a nonbiomedical topic
- counterfactual: factual knowledge in the text replaced with incorrect facts
- casual: the style of the text changed to a casual style
- rap: the style of the text changed to a rap style
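Because each rewrite is meant to change one aspect of the original while preserving the rest, a crude surface-overlap check between paired texts can be informative. A sketch using bag-of-words Jaccard similarity (the two strings below are invented examples, not rows from this corpus):

```python
def jaccard(a, b):
    """Bag-of-words Jaccard similarity between two strings."""
    wa = {w.strip(".,").lower() for w in a.split()}
    wb = {w.strip(".,").lower() for w in b.split()}
    return len(wa & wb) / len(wa | wb)


original = "The trial enrolled 120 patients with hypertension."
casual = "So this trial got 120 folks with hypertension on board."
print(round(jaccard(original, casual), 2))  # 0.31
```

A style rewrite that keeps the facts should retain the content words ("trial", "120", "hypertension") while swapping most of the connective vocabulary, which is exactly the pattern the overlap score surfaces.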
## Dataset Creation
Text is generated by ChatGPT with corresponding prompts. Refer to the paper for the instructions used to generate the text in each derived subset.
Please check the terms and conditions of pubmed data [here](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Citation Information
```
pending
``` | 2,449 | [
[
0.00543212890625,
-0.06402587890625,
0.04412841796875,
0.009613037109375,
-0.015350341796875,
0.003345489501953125,
-0.00402069091796875,
-0.0208282470703125,
0.04791259765625,
0.053985595703125,
-0.0469970703125,
-0.054901123046875,
-0.031707763671875,
0.02... |
skaltenp/textworld_turn_top_demonstrations_no_drop | 2023-09-29T09:55:38.000Z | [
"region:us"
] | skaltenp | null | null | 0 | 5 | 2023-09-28T15:56:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: demonstration
sequence:
sequence: string
- name: moves
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 9260283
num_examples: 2640
- name: valid
num_bytes: 453898
num_examples: 132
- name: test
num_bytes: 1343379
num_examples: 268
download_size: 1932762
dataset_size: 11057560
---
# Dataset Card for "textworld_turn_top_demonstrations_no_drop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 793 | [
[
-0.047332763671875,
-0.03302001953125,
0.00771331787109375,
0.02325439453125,
-0.01462554931640625,
-0.0028018951416015625,
0.005313873291015625,
0.00640106201171875,
0.036285400390625,
0.040313720703125,
-0.08587646484375,
-0.0633544921875,
-0.05218505859375,
... |
adityarra07/ATC_5 | 2023-09-28T22:36:31.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 5 | 2023-09-28T22:36:27 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 113637638.0
num_examples: 504
download_size: 112879747
dataset_size: 113637638.0
---
# Dataset Card for "ATC_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 429 | [
[
-0.044281005859375,
0.0025959014892578125,
0.026275634765625,
0.00986480712890625,
-0.022613525390625,
0.010833740234375,
0.037933349609375,
-0.020782470703125,
0.053741455078125,
0.030670166015625,
-0.052337646484375,
-0.07452392578125,
-0.036834716796875,
... |
AnikaBasu/CyberbullyingDataset | 2023-09-29T17:59:07.000Z | [
"region:us"
] | AnikaBasu | null | null | 1 | 5 | 2023-09-29T03:35:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Honaker/eurosat_dataset | 2023-10-24T13:01:03.000Z | [
"task_categories:image-classification",
"license:mit",
"region:us"
] | Honaker | null | null | 0 | 5 | 2023-09-29T15:07:13 | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AnnualCrop
'1': Forest
'2': HerbaceousVegetation
'3': Highway
'4': Industrial
'5': Pasture
'6': PermanentCrop
'7': Residential
'8': River
'9': SeaLake
splits:
- name: train
num_bytes: 70666516.8
num_examples: 21600
- name: validation
num_bytes: 8700747.8
num_examples: 2700
- name: test
num_bytes: 8631409.1
num_examples: 2700
download_size: 93886033
dataset_size: 87998673.69999999
task_categories:
- image-classification
---
# Dataset Card for EuroSat
## Table of Contents
- [How to Use](#How-to-Use)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- How to use in Python
```py
from datasets import load_dataset
train_data = load_dataset("Honaker/eurosat_dataset", split="train")
```
## Dataset Description
- **Homepage:** https://zenodo.org/record/7711810#.ZAm3k-zMKEA
### Dataset Summary
EuroSat is an image classification dataset of satellite imagery with 10 different classes. There are over 27,000 labeled images.
## Dataset Structure
The dataset is structured as follows:
```py
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 21600
})
validation: Dataset({
features: ['image', 'labels'],
num_rows: 2700
})
test: Dataset({
features: ['image', 'labels'],
num_rows: 2700
})
})
```
### Data Instances
An example of the data for one image is:
```py
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=64x64>,
'labels': 0
}
```
With the type of each field being defined as:
```py
{
'image': <PIL.JpegImagePlugin.JpegImageFile>,
  'labels': Integer
}
```
### Data Fields
The dataset has the following fields:
- 'image': the satellite image, loaded as a `PIL` image object
- 'labels': the label of the satellite image as an integer
### Data Splits
| | Train | Validation | Test |
|----------------|--------|------------|------|
| Images         | 21600  | 2700       | 2700 |
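Since the integer `labels` field is just an index into the class list given in the metadata above, decoding a label is a one-liner. A minimal sketch (the class list is copied from this card; the helper name is our own):

```python
# Class names copied from the EuroSat metadata above; the integer
# 'labels' value is an index into this list, so order matters.
EUROSAT_CLASSES = [
    "AnnualCrop", "Forest", "HerbaceousVegetation", "Highway", "Industrial",
    "Pasture", "PermanentCrop", "Residential", "River", "SeaLake",
]

def decode_label(label_id: int) -> str:
    """Map an integer 'labels' value to its class name."""
    return EUROSAT_CLASSES[label_id]

# The example instance above has 'labels': 0, i.e. "AnnualCrop".
assert decode_label(0) == "AnnualCrop"
```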
## Additional Information
### Licensing Information
EuroSat is licensed under the MIT license.
[
-0.05596923828125,
-0.019561767578125,
0.0159149169921875,
0.014373779296875,
-0.0308837890625,
-0.023101806640625,
-0.0101318359375,
-0.025299072265625,
-0.0018358230590820312,
0.031463623046875,
-0.03619384765625,
-0.05303955078125,
-0.038177490234375,
0.0... |
decoy4600/sgm-shiro1 | 2023-09-29T16:02:56.000Z | [
"region:us"
] | decoy4600 | null | null | 0 | 5 | 2023-09-29T15:52:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
shelvin94/Darija_translation | 2023-09-29T16:38:45.000Z | [
"task_categories:translation",
"language:en",
"language:ar",
"region:us"
] | shelvin94 | null | null | 1 | 5 | 2023-09-29T16:14:53 | ---
task_categories:
- translation
language:
- en
- ar
pretty_name: Darija_translation
---
This dataset is multilingual, pairing English with transcribed Moroccan Arabic (Darija).
| 176 | [
[
-0.0208587646484375,
-0.02764892578125,
-0.01345062255859375,
0.0679931640625,
-0.0269927978515625,
0.0309600830078125,
-0.00875091552734375,
-0.01259613037109375,
0.0484619140625,
0.0576171875,
-0.018310546875,
-0.035552978515625,
-0.04791259765625,
0.03674... |
DamarJati/GreenLabel-Waste-Types | 2023-09-30T06:54:29.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | DamarJati | null | null | 1 | 5 | 2023-09-29T22:14:22 | ---
viewer: true
task_categories:
- text-classification
language:
- en
pretty_name: GreenLabel-Waste-Types
size_categories:
- 10K<n<100K
---
Original Datasets: https://www.kaggle.com/datasets/techsash/waste-classification-data?select=DATASET | 242 | [
[
-0.0032215118408203125,
0.01418304443359375,
0.028411865234375,
-0.02618408203125,
0.002300262451171875,
-0.01023101806640625,
0.0235443115234375,
0.005764007568359375,
0.03369140625,
0.06866455078125,
-0.05450439453125,
-0.0302581787109375,
-0.0190277099609375,... |
MaxReynolds/Lee_Souder_RocketLauncher | 2023-09-30T01:57:33.000Z | [
"region:us"
] | MaxReynolds | null | null | 0 | 5 | 2023-09-29T23:50:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 279829.0
num_examples: 28
download_size: 0
dataset_size: 279829.0
---
# Dataset Card for "Lee_Souder_RocketLauncher"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 480 | [
[
-0.01548004150390625,
-0.0087890625,
-0.0031833648681640625,
0.021392822265625,
0.016998291015625,
0.0179290771484375,
0.021514892578125,
0.0238800048828125,
0.04937744140625,
0.0279083251953125,
-0.07354736328125,
-0.047027587890625,
-0.032379150390625,
-0.... |
nfliu/decontextualization | 2023-10-01T00:32:58.000Z | [
"region:us"
] | nfliu | null | null | 0 | 5 | 2023-10-01T00:32:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: example_id
dtype: int64
- name: original_sentence
dtype: string
- name: page_title
dtype: string
- name: section_title
sequence: string
- name: paragraph_text
dtype: string
- name: sentence_start_byte_offset
dtype: int64
- name: sentence_end_byte_offset
dtype: int64
- name: article_url
dtype: string
- name: annotations
list:
- name: category
dtype: string
- name: decontextualized_sentence
dtype: string
- name: example_id
dtype: int64
- name: original_sentence
dtype: string
- name: worker_id
dtype: int64
splits:
- name: train
num_bytes: 15224065
num_examples: 11290
- name: validation
num_bytes: 5315183
num_examples: 1945
- name: test
num_bytes: 5359001
num_examples: 1945
download_size: 13617475
dataset_size: 25898249
---
# Dataset Card for "decontextualization"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,255 | [
[
-0.0362548828125,
-0.0183258056640625,
0.00662994384765625,
0.0256500244140625,
-0.020294189453125,
0.0024261474609375,
0.0132904052734375,
-0.02740478515625,
0.058258056640625,
0.01354217529296875,
-0.048980712890625,
-0.0673828125,
-0.047119140625,
-0.0125... |
Spiderman01/Domestic_violence_info_support_fromposts | 2023-10-02T10:22:42.000Z | [
"region:us"
] | Spiderman01 | null | null | 0 | 5 | 2023-10-02T08:03:26 | ---
dataset_info:
features:
- name: train
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 945794
num_examples: 273
download_size: 527319
dataset_size: 945794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Domestic_violence_info_support_fromposts"
This is a dataset of posts from domestic violence victims; each entry includes the content of the post and the kinds of information support it seeks.
There are 14 kinds of information-support needs in total:\
(1) Shelters/ DV center/ Agency\
(2) Legal\
(3) Childbearing\
(4) Police\
(5) Wound assessment/record\
(6) DV report procedure/Documentation\
(7) Safety planning\
(8) Finance\
(9) Housing\
(10) Healthcare information (counselling, psychiatrist, doctor etc.)\
(11) DV survivors’ network/ (Online) support groups\
(12) DV knowledge\
(13) Communication\
(14) Miscellaneous (Other)
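Since a single post can express several support needs at once, a multi-hot vector over the 14 categories is a natural target encoding. A hedged sketch (category names shortened from the list above; the helper is our own, not part of the dataset):

```python
# Shortened names for the 14 information-support categories listed above.
CATEGORIES = [
    "Shelters/DV center/Agency", "Legal", "Childbearing", "Police",
    "Wound assessment/record", "DV report procedure/Documentation",
    "Safety planning", "Finance", "Housing", "Healthcare information",
    "DV survivors' network/support groups", "DV knowledge",
    "Communication", "Miscellaneous",
]

def multi_hot(labels: list[str]) -> list[int]:
    """Encode a post's support-need labels as a 14-dimensional multi-hot vector."""
    return [1 if name in labels else 0 for name in CATEGORIES]

vector = multi_hot(["Legal", "Housing"])  # positions 1 and 8 set to 1
```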
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,063 | [
[
-0.018157958984375,
-0.049346923828125,
0.0262908935546875,
0.032257080078125,
-0.0263214111328125,
-0.00511932373046875,
0.0364990234375,
0.01091766357421875,
0.015655517578125,
0.035064697265625,
-0.06597900390625,
-0.050811767578125,
-0.05413818359375,
0.... |
Sviluppo/test02 | 2023-10-03T07:46:26.000Z | [
"region:us"
] | Sviluppo | null | null | 0 | 5 | 2023-10-02T13:54:41 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,732 | [
[
-0.038177490234375,
-0.02984619140625,
-0.0036067962646484375,
0.027130126953125,
-0.0323486328125,
0.0037822723388671875,
-0.01727294921875,
-0.02020263671875,
0.049041748046875,
0.04046630859375,
-0.0634765625,
-0.08062744140625,
-0.052947998046875,
0.0020... |
nairaxo/shimaore-asr | 2023-10-02T18:19:28.000Z | [
"region:us"
] | nairaxo | null | null | 0 | 5 | 2023-10-02T18:15:40 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: path
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 375585328.0
num_examples: 787
download_size: 373013374
dataset_size: 375585328.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "shikomori-asr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 556 | [
[
-0.032012939453125,
-0.0159454345703125,
0.0169219970703125,
0.0169677734375,
-0.0139923095703125,
-0.0019426345825195312,
0.01100921630859375,
-0.01067352294921875,
0.050750732421875,
0.0380859375,
-0.051025390625,
-0.05377197265625,
-0.059600830078125,
-0.... |
ayoubkirouane/arxiv-physics | 2023-10-02T19:00:27.000Z | [
"region:us"
] | ayoubkirouane | null | null | 0 | 5 | 2023-10-02T19:00:24 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 21113244.0
num_examples: 30231
download_size: 11217441
dataset_size: 21113244.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-physics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 488 | [
[
-0.04266357421875,
-0.01033782958984375,
0.0217132568359375,
0.0100860595703125,
-0.01345062255859375,
-0.00353240966796875,
0.036712646484375,
-0.004913330078125,
0.04571533203125,
0.0200653076171875,
-0.038055419921875,
-0.03912353515625,
-0.03887939453125,
... |
nguyenthanhdo/viettel_v3.2 | 2023-10-03T08:52:34.000Z | [
"region:us"
] | nguyenthanhdo | null | null | 0 | 5 | 2023-10-03T08:52:17 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: translated
dtype: bool
- name: output_len
dtype: int64
- name: source
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 327564182.0
num_examples: 100000
download_size: 157982995
dataset_size: 327564182.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "viettel_v3.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 638 | [
[
-0.027984619140625,
-0.0106201171875,
0.018096923828125,
0.0182342529296875,
-0.0293426513671875,
-0.02374267578125,
0.0278472900390625,
-0.01763916015625,
0.04058837890625,
0.055389404296875,
-0.04986572265625,
-0.0634765625,
-0.03875732421875,
-0.019042968... |
rohanbalkondekar/generate_json_long | 2023-10-05T11:58:38.000Z | [
"region:us"
] | rohanbalkondekar | null | null | 0 | 5 | 2023-10-03T12:24:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DialogueCharacter/english_general_instruction_with_reward_score_judged_by_13B_llama2 | 2023-10-29T03:49:25.000Z | [
"region:us"
] | DialogueCharacter | null | null | 0 | 5 | 2023-10-03T14:34:05 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: reward_score
dtype: float64
splits:
- name: train
num_bytes: 3053305957
num_examples: 1006809
download_size: 1633060464
dataset_size: 3053305957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "general_instruction_with_reward_score_judged_by_13B_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 576 | [
[
-0.01544189453125,
-0.0237274169921875,
0.0245513916015625,
0.038909912109375,
-0.02685546875,
-0.003597259521484375,
0.0205535888671875,
-0.00797271728515625,
0.051300048828125,
0.0228424072265625,
-0.05426025390625,
-0.070068359375,
-0.05908203125,
-0.0138... |
shossain/qa-no-pad-16384 | 2023-10-15T03:21:50.000Z | [
"region:us"
] | shossain | null | null | 0 | 5 | 2023-10-04T04:56:35 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 691233451
num_examples: 14119
download_size: 181098407
dataset_size: 691233451
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "qa-no-pad-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 538 | [
[
-0.04864501953125,
-0.01535797119140625,
0.0250244140625,
0.0258941650390625,
-0.0290069580078125,
0.0046234130859375,
0.045989990234375,
0.0052337646484375,
0.07623291015625,
0.036712646484375,
-0.049530029296875,
-0.05120849609375,
-0.025543212890625,
-0.0... |
Mxode/University-News-Instruction-Zh | 2023-10-04T09:43:25.000Z | [
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"news",
"campus",
"region:us"
] | Mxode | null | null | 0 | 5 | 2023-10-04T09:09:32 | ---
license: apache-2.0
task_categories:
- zero-shot-classification
- summarization
- text-generation
tags:
- news
- campus
size_categories:
- 100K<n<1M
language:
- zh
---
Campus news from several universities, roughly 65k × 3 (task types) entries. **Lightly de-identified: author names and similar details have been masked as far as possible.** The data has already been organized into instruction form, in the following format:
```json
{
"id": <id>,
"category": "(title_summarize|news_classify|news_generate)",
    "instruction": <the concrete instruction for the task>,
    "input": <empty>,
    "output": <the output corresponding to the instruction>
}
```
There are three task types in total: **title summarization**, **column classification**, and **news generation**. They are generated by combining the title, column, and body text from the news metadata, so the data is guaranteed to be accurate. Each field's content has been normalized to a **single-line** format. Samples of the three task types follow:
```json
// Title summarization
{
"id": 22106,
"category": "title_summarize",
"instruction": "请你给下面的新闻取一则标题:\n点击图片观看视频 科技创新潜质人才是海院自主选拔的一种新尝试,根据规定,凡在高中阶段获得省级科技创新大赛一等奖或以上的应届高中毕业生,都可申请报名。今年苏浙沪三地共有300多名学生,申请以科技创新潜质人才身份入学海城海院。经过专家甄选,今天上午,有82名学生参加了考试。 上午的面试不考语数外,考生要靠自己的科技成果来证明自己是具有科技创新潜质的人才。记者发现,学生们带来的创新成果五花八门,涉及生物、微电子等多种前沿学科。经过上午的筛选,下午只剩下一半的学生进入了特色测试部分,这部分仍不考语数外,重点考察学生的发散性思维,及兴趣爱好和潜质,海院将根据测试结果,最终确定获得科技创新潜质人才的自主选拔认定名单,进入名单的考生,高考成绩只要达到当地的一本线,就能被海院录取。",
"input": "",
"output": "该新闻的标题可以是:《不考语数外 海院自主选拔创新潜质人才》"
}
```
```json
// Column classification
{
"id": 134781,
"category": "news_classify",
"instruction": "根据下面的新闻,归纳出它的分类:\n## 英国格拉斯哥大学医学院副院长访问海城海院生命学院\n\n11月20日,英国格拉斯哥大学(University of Glasgow,以下简称格大)医学院副院长Chris Loughrey一行访问海城海院生命科学技术学院,生命学院副院长张晓君出席会议,魏芳、王晓雷、徐沁等多位教师与会参加交流。格大和海城海院同为国际大学交流组织Universitas 21的成员,在生命和医学领域合作紧密。双方互相介绍了两校在生命科学领域的学科专业、课程体系、学生培养等发展情况。在相互交流中,双方希望在这个宏观的合作框架下,逐步增进相互了解,在人才培养、科学研究等方面形成更多更实质性的合作。格拉斯哥大学是英联邦地区建校历史最悠久的四所大学之一。研究经费及教育质量均排名英国前十,出过7位诺奖获得者及3位英国首相。格大下设四大学院,其中医学院有教职员工2000余人,学生5000余人,医学院下设七个研究中心,涉及分子/细胞/系统生物学、植物、微生物、免疫、癌症、生物多样性、神经科学、心血管医学等领域。",
"input": "",
"output": "本则新闻的分类栏目是:综合新闻"
}
```
```json
// News generation
{
"id": 61402,
"category": "news_generate",
"instruction": "帮我写一则综合新闻栏目的新闻,标题是机械与动力工程学院举办心理健康月系列活动之“与压力一路同行”",
"input": "",
"output": "《机械与动力工程学院举办心理健康月系列活动之“与压力一路同行”》:\n\n5月27日,机械与动力工程学院心理健康月系列活动之“与压力一路同行”讲座在高田会堂举行。活动特邀海城铁路学院兼职心理咨询师李毅作为主讲嘉宾,与大家分享如何正确认识压力以及积极应对压力 。本次活动由机械与动力工程学院王睿瑞主持。相较于受到心理教育较为全面的本科生同学,研究生同学面临更多的现实问题,更加需要心理健康的知识,来实现压力与情绪的合理管理。李毅首先结合心理学知识,从生理学、心理学等多个角度带领同学们认识了压力的本质,并指出并非所有压力都是有害的,负性的慢性压力对人产生危害,合理规避有害的压力尤为重要,关键是找到自己的最佳压力水平。李毅进一步讲述了心理压力与认知、情绪、行为之间的关系。通过对“情绪的身体地图”的说明,使同学们更加深刻地体会到了“情绪丰富了体验,情绪述说着需求,情绪激发出行为,情绪联结了他人”的含义。最后,李毅从时间管理、简化生活、认知需求、活在当下、友善待人等方面给出了应对压力的建议,并寄语大家“与并不完美的自我共处,与压力一路同行”。机械与动力工程学院心理健康月系列活动已举办4年,覆盖2700余名研究生,力求以多样的活动形式帮助学生和导师了解更多的心理知识,引导大家关爱自己、关心他人。"
}
```
Length quantiles of the data (length of `instruction` + `output`) are also given here, so the set can be trimmed as needed:
```
count 196101.000000
mean 1059.199912
std 571.623822
min 32.000000
0% 32.000000
5% 369.000000
10% 471.000000
15% 539.000000
20% 596.000000
25% 649.000000
30% 700.000000
35% 753.000000
40% 806.000000
45% 862.000000
50% 920.000000
55% 983.000000
60% 1053.000000
65% 1132.000000
70% 1225.000000
75% 1337.000000
80% 1473.000000
85% 1652.000000
90% 1898.000000
95% 2270.000000
max 3054.000000
``` | 3,012 | [
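A quantile table like the one above can be used directly for pruning: pick a cutoff quantile and drop everything longer. A minimal sketch using the nearest-rank method on toy data (the `examples` list is a stand-in, not the real dataset):

```python
# Toy stand-in for the real instruction data; only the length logic matters.
examples = [
    {"instruction": "i" * n, "output": "o" * n} for n in (10, 50, 100, 500, 2000)
]

lengths = sorted(len(e["instruction"]) + len(e["output"]) for e in examples)
# Nearest-rank 95th percentile, analogous to the "95%" row in the table above.
cutoff = lengths[int(0.95 * (len(lengths) - 1))]

trimmed = [
    e for e in examples
    if len(e["instruction"]) + len(e["output"]) <= cutoff
]
```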
[
-0.054931640625,
-0.036285400390625,
0.0251617431640625,
0.003387451171875,
-0.024810791015625,
-0.00942230224609375,
-0.018768310546875,
-0.0134429931640625,
0.032379150390625,
0.0144195556640625,
-0.02294921875,
-0.0455322265625,
-0.0386962890625,
0.010177... |
legacy107/qa_wikipedia_sentence_transformer_negative_farming | 2023-10-04T13:45:59.000Z | [
"region:us"
] | legacy107 | null | null | 0 | 5 | 2023-10-04T13:45:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: negatives
sequence: string
- name: positive
dtype: string
splits:
- name: train
num_bytes: 147665416
num_examples: 27742
- name: test
num_bytes: 18591659
num_examples: 3468
- name: validation
num_bytes: 18443101
num_examples: 3458
download_size: 37917812
dataset_size: 184700176
---
# Dataset Card for "qa_wikipedia_sentence_transformer_negative_farming"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 920 | [
[
-0.036285400390625,
-0.0247650146484375,
0.01165008544921875,
0.00704193115234375,
-0.00940704345703125,
-0.0230865478515625,
0.01146697998046875,
-0.01092529296875,
0.056243896484375,
0.03076171875,
-0.06634521484375,
-0.052520751953125,
-0.0294952392578125,
... |
derekiya/sql-create-context-llama2-78k | 2023-10-05T00:03:58.000Z | [
"region:us"
] | derekiya | null | null | 0 | 5 | 2023-10-04T15:13:55 | This is dataset contain (78k samples) of the excellent [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context/viewer/default/train) and changed to [derekiya/sql-create-context-llama2-78k](https://huggingface.co/datasets/derekiya/sql-create-context-llama2-78k/viewer/default/train) dataset,
processed to match Llama 2's prompt format as described in this article.
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) | 536 | [
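For reference, a minimal sketch of the single-turn Llama 2 chat template the processing is assumed to follow; the sample question/answer pair is illustrative, and the exact special-token handling may differ from what this dataset uses:

```python
# Hedged sketch of Llama 2's single-turn chat prompt template.
def to_llama2_prompt(question: str, answer: str) -> str:
    return f"<s>[INST] {question} [/INST] {answer} </s>"

row = to_llama2_prompt(
    "How many heads of the departments are older than 56?",
    "SELECT COUNT(*) FROM head WHERE age > 56",
)
```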
[
-0.0215911865234375,
-0.045135498046875,
0.02020263671875,
0.03570556640625,
-0.046783447265625,
-0.011810302734375,
-0.00799560546875,
-0.005641937255859375,
0.04364013671875,
0.04010009765625,
-0.081298828125,
-0.034820556640625,
-0.0159912109375,
0.014762... |
ewre324/appy-llama2-1k | 2023-10-06T13:28:59.000Z | [
"region:us"
] | ewre324 | null | null | 0 | 5 | 2023-10-04T16:06:49 | ---
dataset_info:
features:
- name: prompt
dtype: large_string
- name: main_topic
dtype: large_string
- name: subtopic
dtype: large_string
- name: adjective
dtype: large_string
- name: action_verb
dtype: large_string
- name: scenario
dtype: large_string
- name: target_audience
dtype: large_string
- name: programming_language
dtype: large_string
- name: common_sense_topic
dtype: large_string
- name: idx
dtype: int64
- name: response
dtype: large_string
- name: text
dtype: string
splits:
- name: train
num_bytes: 236790880
num_examples: 100000
download_size: 100584419
dataset_size: 236790880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "appy-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 944 | [
[
-0.034271240234375,
-0.0094146728515625,
0.005130767822265625,
0.0301055908203125,
-0.032073974609375,
0.002819061279296875,
0.041717529296875,
-0.01291656494140625,
0.07501220703125,
0.038726806640625,
-0.057525634765625,
-0.0523681640625,
-0.0406494140625,
... |
LLMGlobalyTest/categories-11k | 2023-10-04T16:42:01.000Z | [
"region:us"
] | LLMGlobalyTest | null | null | 0 | 5 | 2023-10-04T16:40:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
autoevaluate/autoeval-eval-medical_questions_pairs-default-d0c070-68078145610 | 2023-10-04T17:31:43.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 5 | 2023-10-04T17:30:06 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- medical_questions_pairs
eval_info:
task: summarization
model: ARTeLab/it5-summarization-ilpost
metrics: []
dataset_name: medical_questions_pairs
dataset_config: default
dataset_split: train
col_mapping:
text: question_1
target: question_2
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: medical_questions_pairs
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@halmj](https://huggingface.co/halmj) for evaluating this model. | 874 | [
[
-0.032257080078125,
-0.01495361328125,
0.0210113525390625,
0.00891876220703125,
-0.00742340087890625,
-0.0128631591796875,
0.01336669921875,
-0.0299072265625,
0.02423095703125,
0.0287322998046875,
-0.06317138671875,
-0.0171356201171875,
-0.048187255859375,
0... |
Intuit-GenSRF/toxigen-test-annotated | 2023-10-05T01:50:16.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 5 | 2023-10-05T01:50:15 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 103023
num_examples: 940
download_size: 64072
dataset_size: 103023
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toxigen-test-annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 483 | [
[
-0.04205322265625,
-0.01328277587890625,
0.0167999267578125,
0.02850341796875,
-0.00896453857421875,
-0.0089263916015625,
0.0088958740234375,
-0.015045166015625,
0.040283203125,
0.032012939453125,
-0.053466796875,
-0.06597900390625,
-0.041259765625,
-0.00142... |
mrabhi0505/instruction_output_dataset | 2023-10-05T11:38:37.000Z | [
"region:us"
] | mrabhi0505 | null | null | 0 | 5 | 2023-10-05T11:37:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jfrei/GPTNERMED | 2023-10-08T22:05:18.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"bio",
"biomedical",
"medical",
"c... | jfrei | GPTNERMED is a novel open synthesized dataset and neural named-entity-recognition (NER) model for German texts in medical natural language processing (NLP). | @article{FREI2023104478,
title = {Annotated dataset creation through large language models for non-english medical NLP},
journal = {Journal of Biomedical Informatics},
volume = {145},
pages = {104478},
year = {2023},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2023.104478},
url = {https://www.sciencedirect.com/science/article/pii/S1532046423001995},
author = {Johann Frei and Frank Kramer},
keywords = {Natural language processing, Information extraction, Named entity recognition, Data augmentation, Knowledge distillation, Medication detection},
abstract = {Obtaining text datasets with semantic annotations is an effortful process, yet crucial for supervised training in natural language processing (NLP). In general, developing and applying new NLP pipelines in domain-specific contexts for tasks often requires custom-designed datasets to address NLP tasks in a supervised machine learning fashion. When operating in non-English languages for medical data processing, this exposes several minor and major, interconnected problems such as the lack of task-matching datasets as well as task-specific pre-trained models. In our work, we suggest to leverage pre-trained large language models for training data acquisition in order to retrieve sufficiently large datasets for training smaller and more efficient models for use-case-specific tasks. To demonstrate the effectiveness of your approach, we create a custom dataset that we use to train a medical NER model for German texts, GPTNERMED, yet our method remains language-independent in principle. Our obtained dataset as well as our pre-trained models are publicly available at https://github.com/frankkramer-lab/GPTNERMED.}
} | 0 | 5 | 2023-10-05T12:39:54 | ---
annotations_creators:
- machine-generated
language:
- de
language_creators:
- machine-generated
license: []
multilinguality:
- monolingual
pretty_name: GPTNERMED
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bio
- biomedical
- medical
- clinical
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# GPTNERMED Dataset for German medical NER entities
## Dataset Description
- **Repository:** https://github.com/frankkramer-lab/GPTNERMED
- **Paper:** https://doi.org/10.1016/j.jbi.2023.104478
- **ArXiv-Preprint:** https://arxiv.org/abs/2208.14493
## Dataset Summary
This dataset contains the synthetic German sentences with annotated entities (`Medikation`, `Dosis`, `Diagnose`) from the GPTNERMED project.
The sentences as well as the annotations are **not** manually validated by medical professionals and therefore this dataset is **no** gold standard dataset.
The dataset consists of 9,845 sentences (121,027 tokens by SpaCy Tokenizer, 245,107 tokens by the GPT tokenizer) with the following labels:
| Label | Count | #Tokens (SpaCy) |
| --- | --- | -- |
| Medikation | 9868 | 10138 |
| Dosis | 7547 | 15845 |
| Diagnose | 5996 | 7656 |
## Dataset Structure
The train/test/dev-split (80%, 10%, 10%) of the data loader is as follows:\
`<-- train: 0.8 --><-- test: 0.1 --><-- dev: 0.1 -->`\
The splits are selected arbitrarily, as the data loader requires a split configuration. All sample sentences are homogeneous in origin, however, so the data could equally be split differently.
Every sample is a sentence with its text (property `sentence`) and its corresponding NER labels (property `ner_labels` / List of labels).\
Every NER label entry has a char-wise start and stop index (property `start`, `stop`) and a label class (property `ner_class`).
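The char-wise offsets make entity extraction a simple string slice. A minimal sketch on an invented sample in the schema described above (the sentence and offsets are our own, not taken from the dataset):

```python
# Invented sample following the schema above: char-wise start/stop
# offsets plus a ner_class for each label.
sample = {
    "sentence": "Der Patient erhielt 500 mg Ibuprofen gegen Migraene.",
    "ner_labels": [
        {"start": 20, "stop": 26, "ner_class": "Dosis"},
        {"start": 27, "stop": 36, "ner_class": "Medikation"},
        {"start": 43, "stop": 51, "ner_class": "Diagnose"},
    ],
}

# Recover (class, surface string) pairs by slicing the sentence.
entities = [
    (label["ner_class"], sample["sentence"][label["start"]:label["stop"]])
    for label in sample["ner_labels"]
]
# entities == [("Dosis", "500 mg"), ("Medikation", "Ibuprofen"),
#              ("Diagnose", "Migraene")]
```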
### Citation Information
If you like our work, cite our paper and give us a star on GitHub.\
(See the links above)
| 1,934 | [
[
-0.0271759033203125,
-0.05474853515625,
0.0330810546875,
0.00804901123046875,
-0.01059722900390625,
-0.02435302734375,
-0.01337432861328125,
-0.022674560546875,
0.025726318359375,
0.033966064453125,
-0.038848876953125,
-0.044891357421875,
-0.047821044921875,
... |
teragron/reviews | 2023-10-09T23:55:54.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"finance",
"region:us"
] | teragron | null | null | 0 | 5 | 2023-10-05T13:32:32 | ---
license: mit
language:
- en
tags:
- finance
pretty_name: review_me
size_categories:
- 1M<n<10M
task_categories:
- text-generation
---
The following packages are necessary to compile the model in C:
```bash
sudo apt install gcc-7
```
```bash
sudo apt-get install build-essential
```
```python
# Download the 20 dataset chunks (Jupyter/Colab shell escape; {i} is interpolated)
for i in range(1, 21):
    !wget https://huggingface.co/datasets/teragron/reviews/resolve/main/chunk_{i}.bin
```
```bash
git clone https://github.com/karpathy/llama2.c.git
```
```bash
cd llama2.c
```
```bash
pip install -r requirements.txt
```
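A stdlib-only sketch for inspecting a downloaded chunk. It assumes the `chunk_{i}.bin` files follow the llama2.c pretokenized format (a flat little-endian uint16 token array, as used for `data/TinyStories_all_data`); that format is a guess from the path below, not a documented guarantee, and the round-trip example uses a synthetic file:

```python
import struct
from pathlib import Path

def read_token_chunk(path):
    # Interpret the file as a flat array of little-endian uint16 token ids
    # (assumed llama2.c pretokenization layout).
    data = Path(path).read_bytes()
    n = len(data) // 2
    return list(struct.unpack(f"<{n}H", data))

# Round-trip a tiny synthetic chunk instead of a real download.
Path("chunk_demo.bin").write_bytes(struct.pack("<4H", 1, 9737, 42, 2))
tokens = read_token_chunk("chunk_demo.bin")
print(tokens)  # [1, 9737, 42, 2]
```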
Path: data/TinyStories_all_data | 570 | [
[
-0.018218994140625,
-0.044677734375,
0.04931640625,
0.0113983154296875,
-0.0171966552734375,
0.0007376670837402344,
0.026519775390625,
-0.025177001953125,
0.0214996337890625,
0.034271240234375,
-0.04168701171875,
-0.043975830078125,
-0.04620361328125,
-0.011... |
slaqrichi/processed_Cosmic_dataset_V3_inst_splitted | 2023-10-05T14:01:46.000Z | [
"region:us"
] | slaqrichi | null | null | 0 | 5 | 2023-10-05T14:01:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 77676.57894736843
num_examples: 85
- name: test
num_bytes: 4569.210526315789
num_examples: 5
- name: valid
num_bytes: 4569.210526315789
num_examples: 5
download_size: 42971
dataset_size: 86815.0
---
# Dataset Card for "processed_Cosmic_dataset_V3_inst_splitted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 690 | [
[
-0.036834716796875,
-0.030426025390625,
0.0254974365234375,
0.0217742919921875,
-0.0379638671875,
0.01139068603515625,
0.0210723876953125,
-0.01708984375,
0.060943603515625,
0.056060791015625,
-0.066650390625,
-0.036590576171875,
-0.040740966796875,
-0.00704... |
Fraol/RunMetrics | 2023-10-05T15:59:41.000Z | [
"region:us"
] | Fraol | null | null | 0 | 5 | 2023-10-05T15:59:22 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: ref_status
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
splits:
- name: train
num_bytes: 2296248627
num_examples: 385811
download_size: 480698181
dataset_size: 2296248627
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "RunMetrics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 752 | [
[
-0.046112060546875,
-0.005451202392578125,
0.0102081298828125,
0.01427459716796875,
-0.0036258697509765625,
0.0193023681640625,
0.019012451171875,
-0.00177764892578125,
0.060699462890625,
0.037261962890625,
-0.0733642578125,
-0.043426513671875,
-0.02497863769531... |
librarian-bots/arxiv_abstracts | 2023-10-12T09:26:52.000Z | [
"region:us"
] | librarian-bots | null | null | 0 | 5 | 2023-10-05T18:42:24 | ---
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
- name: url
dtype: string
- name: category
dtype: string
- name: prediction
dtype: string
- name: probability
dtype: float64
- name: arxiv_id
dtype: string
splits:
- name: train
num_bytes: 715878
num_examples: 500
download_size: 411327
dataset_size: 715878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv_abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 663 | [
[
-0.043426513671875,
-0.0112457275390625,
0.0235748291015625,
0.01001739501953125,
-0.022918701171875,
-0.00016105175018310547,
0.032745361328125,
-0.012603759765625,
0.057708740234375,
0.029083251953125,
-0.0292816162109375,
-0.06396484375,
-0.053680419921875,
... |
joey234/sst2_affix_pos | 2023-10-23T02:28:30.000Z | [
"region:us"
] | joey234 | null | null | 0 | 5 | 2023-10-05T23:27:32 | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: words_with_affixes
sequence: string
splits:
- name: validation
num_bytes: 8687
num_examples: 55
download_size: 9498
dataset_size: 8687
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "sst2_affix_pos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 640 | [
[
-0.0185394287109375,
-0.0195159912109375,
0.0169677734375,
0.0181427001953125,
-0.03466796875,
-0.01105499267578125,
0.0288543701171875,
0.0017614364624023438,
0.0540771484375,
0.0304718017578125,
-0.046173095703125,
-0.042083740234375,
-0.04766845703125,
-0... |
Falah/Islamic_forest_image_prompts | 2023-10-06T07:54:16.000Z | [
"region:us"
] | Falah | null | null | 0 | 5 | 2023-10-06T07:54:15 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 3517467
num_examples: 10000
download_size: 151517
dataset_size: 3517467
---
# Dataset Card for "Islamic_forest_image_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 373 | [
[
-0.05047607421875,
-0.030303955078125,
0.0127716064453125,
0.026611328125,
-0.0285186767578125,
-0.0009822845458984375,
0.0164794921875,
-0.00988006591796875,
0.03564453125,
0.030731201171875,
-0.07354736328125,
-0.059173583984375,
-0.04449462890625,
-0.0022... |
lafnac/sl-dataset | 2023-10-06T10:09:07.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:ar",
"license:afl-3.0",
"region:us"
] | lafnac | null | null | 0 | 5 | 2023-10-06T10:01:10 | ---
license: afl-3.0
task_categories:
- text-classification
language:
- ar
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,638 | [
[
-0.038177490234375,
-0.02984619140625,
-0.0036067962646484375,
0.027130126953125,
-0.0323486328125,
0.0037822723388671875,
-0.01727294921875,
-0.02020263671875,
0.049041748046875,
0.04046630859375,
-0.0634765625,
-0.08062744140625,
-0.052947998046875,
0.0020... |
carnival13/massive_val_DA2_tokenized | 2023-10-06T13:41:11.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 5 | 2023-10-06T13:41:06 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 16518290
num_examples: 24160
download_size: 3770585
dataset_size: 16518290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_val_DA2_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.03399658203125,
-0.043914794921875,
0.00909423828125,
0.018890380859375,
-0.0175628662109375,
0.00005626678466796875,
0.02752685546875,
-0.00545501708984375,
0.0631103515625,
0.04156494140625,
-0.039154052734375,
-0.045806884765625,
-0.05584716796875,
-0.... |
Intuit-GenSRF/tweet-eval-hate-es | 2023-10-06T20:05:33.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 5 | 2023-10-06T20:05:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 3680715
num_examples: 9000
download_size: 2493022
dataset_size: 3680715
---
# Dataset Card for "tweet_eval-hate-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 566 | [
[
-0.0294036865234375,
-0.03875732421875,
0.0122833251953125,
0.021728515625,
-0.0084991455078125,
0.0300750732421875,
0.008087158203125,
0.00673675537109375,
0.07037353515625,
0.0191497802734375,
-0.055145263671875,
-0.06805419921875,
-0.05877685546875,
-0.02... |
Intuit-GenSRF/hate-speech-offensive-es | 2023-10-06T22:24:18.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 5 | 2023-10-06T22:24:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 7320813
num_examples: 24783
download_size: 4759804
dataset_size: 7320813
---
# Dataset Card for "hate_speech_offensive-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 573 | [
[
-0.036163330078125,
-0.0361328125,
-0.0055694580078125,
0.01142120361328125,
-0.01140594482421875,
0.00566864013671875,
-0.00177001953125,
-0.02008056640625,
0.055938720703125,
0.0256500244140625,
-0.047393798828125,
-0.05767822265625,
-0.0665283203125,
-0.0... |
ContextualAI/hellaswag | 2023-10-06T23:57:13.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 5 | 2023-10-06T23:56:54 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 9610103
num_examples: 10042
- name: test
num_bytes: 7885767
num_examples: 10003
download_size: 10451785
dataset_size: 17495870
---
# Dataset Card for "hellaswag"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 498 | [
[
-0.0421142578125,
-0.019989013671875,
0.0142974853515625,
0.01415252685546875,
-0.0298919677734375,
-0.00846099853515625,
0.0193939208984375,
-0.0077362060546875,
0.05914306640625,
0.040740966796875,
-0.0721435546875,
-0.06866455078125,
-0.049560546875,
-0.0... |
Fraol/DedupedRefDatasetWMetricF | 2023-10-07T01:04:15.000Z | [
"region:us"
] | Fraol | null | null | 0 | 5 | 2023-10-07T01:03:46 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: ref_status
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
- name: cbo
dtype: float64
- name: wmc
dtype: float64
- name: lcom*
dtype: float64
- name: loc
dtype: float64
splits:
- name: train
num_bytes: 2308835214
num_examples: 385811
download_size: 482442415
dataset_size: 2308835214
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DedupedRefDatasetWMetricF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 901 | [
[
-0.0439453125,
-0.0160980224609375,
0.013641357421875,
0.0233612060546875,
-0.01316070556640625,
0.00991058349609375,
0.03509521484375,
-0.01015472412109375,
0.061767578125,
0.0384521484375,
-0.07354736328125,
-0.045196533203125,
-0.0460205078125,
-0.0065460... |
gayathrimanoj/dataset_cpp | 2023-10-07T09:06:39.000Z | [
"region:us"
] | gayathrimanoj | null | null | 1 | 5 | 2023-10-07T09:06:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Intuit-GenSRF/joangaes-depression-es | 2023-10-07T12:00:23.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 5 | 2023-10-07T12:00:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_es
dtype: string
splits:
- name: train
num_bytes: 31578828
num_examples: 25828
download_size: 19244935
dataset_size: 31578828
---
# Dataset Card for "joangaes-depression-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 612 | [
[
-0.036651611328125,
-0.0207366943359375,
0.0266571044921875,
0.0298919677734375,
-0.0125274658203125,
-0.01523590087890625,
0.01255035400390625,
0.002140045166015625,
0.08013916015625,
0.0335693359375,
-0.06951904296875,
-0.0650634765625,
-0.05828857421875,
... |
Buffett/ntuadl_hw1 | 2023-10-07T12:59:32.000Z | [
"region:us"
] | Buffett | null | null | 0 | 5 | 2023-10-07T12:39:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
layoric/tiny-codes-alpaca-csharp | 2023-10-08T02:45:13.000Z | [
"region:us"
] | layoric | null | null | 0 | 5 | 2023-10-08T02:33:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: main_topic
dtype: string
- name: subtopic
dtype: string
- name: adjective
dtype: string
- name: action_verb
dtype: string
- name: scenario
dtype: string
- name: target_audience
dtype: string
- name: programming_language
dtype: string
- name: common_sense_topic
dtype: string
- name: idx
dtype: int64
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 326727978
num_examples: 125478
download_size: 126103184
dataset_size: 326727978
---
# Dataset Card for "tiny-codes-alpaca-csharp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 909 | [
[
-0.04345703125,
-0.006572723388671875,
0.01422119140625,
0.023895263671875,
-0.0227203369140625,
0.003200531005859375,
0.001926422119140625,
-0.0147857666015625,
0.07061767578125,
0.0177764892578125,
-0.04791259765625,
-0.0494384765625,
-0.03759765625,
-0.01... |
Prisha290/Dataset_onlycorrect | 2023-10-08T05:58:51.000Z | [
"region:us"
] | Prisha290 | null | null | 0 | 5 | 2023-10-08T05:57:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rongrong77/ADL_HW1 | 2023-10-08T06:52:58.000Z | [
"region:us"
] | rongrong77 | null | null | 0 | 5 | 2023-10-08T06:47:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Ronal999/finance-alpaca-demo | 2023-10-08T12:51:43.000Z | [
"region:us"
] | Ronal999 | null | null | 1 | 5 | 2023-10-08T12:51:39 | ---
dataset_info:
features:
- name: text
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 825832
num_examples: 690
download_size: 456544
dataset_size: 825832
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finance-alpaca-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 587 | [
[
-0.044525146484375,
-0.026763916015625,
0.00701904296875,
0.028076171875,
-0.02301025390625,
-0.0023250579833984375,
0.0206298828125,
-0.01213836669921875,
0.06744384765625,
0.033355712890625,
-0.06585693359375,
-0.05841064453125,
-0.035125732421875,
-0.0325... |
lollox/math_dataset_50k | 2023-10-08T14:51:58.000Z | [
"region:us"
] | lollox | null | null | 0 | 5 | 2023-10-08T14:04:54 |
---
task_categories:
- question-answering
--- | 42 | [
[
-0.034942626953125,
-0.051605224609375,
0.0207977294921875,
0.047393798828125,
-0.017120361328125,
0.00588226318359375,
0.033050537109375,
0.005279541015625,
0.01500701904296875,
0.06585693359375,
-0.0533447265625,
-0.0035552978515625,
-0.0246429443359375,
0... |
AryanNsc/spacehubdataset | 2023-10-08T16:41:27.000Z | [
"region:us"
] | AryanNsc | null | null | 0 | 5 | 2023-10-08T16:15:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hanifabdlh/quac-merged | 2023-10-09T02:15:54.000Z | [
"region:us"
] | hanifabdlh | null | null | 0 | 5 | 2023-10-09T02:15:37 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 271212149
num_examples: 482055
download_size: 142626540
dataset_size: 271212149
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 574 | [
[
-0.047943115234375,
-0.00872039794921875,
0.0322265625,
-0.00408172607421875,
-0.0233917236328125,
0.0223541259765625,
0.034088134765625,
-0.0180206298828125,
0.0570068359375,
0.031829833984375,
-0.04705810546875,
-0.045379638671875,
-0.0305633544921875,
-0.... |
d4un/training-bias | 2023-10-09T05:12:26.000Z | [
"region:us"
] | d4un | null | null | 0 | 5 | 2023-10-09T03:15:54 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is purely in English.
Some of the responses were generated by ChatGPT.
### Discussion of Biases
This dataset intentionally contains gender- and job-related biases that mirror biases existing in society,
for the research purpose of examining the effects these biases have on the model. The creators do not endorse these biases.
| 1,029 | [
[
-0.022064208984375,
-0.037567138671875,
-0.00844573974609375,
0.03277587890625,
-0.0258941650390625,
-0.006511688232421875,
0.00130462646484375,
-0.0205078125,
0.046630859375,
0.028411865234375,
-0.067138671875,
-0.06439208984375,
-0.052886962890625,
-0.0049... |
kelzla/ds_test1 | 2023-10-09T05:35:13.000Z | [
"region:us"
] | kelzla | null | null | 0 | 5 | 2023-10-09T05:22:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
kelzla/ds_test2 | 2023-10-09T07:14:01.000Z | [
"region:us"
] | kelzla | null | null | 0 | 5 | 2023-10-09T07:02:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Falah/coloring_book_animals | 2023-10-09T09:09:03.000Z | [
"region:us"
] | Falah | null | null | 0 | 5 | 2023-10-09T09:09:02 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 284135
num_examples: 1000
download_size: 3100
dataset_size: 284135
---
# Dataset Card for "coloring_book_animals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 361 | [
[
-0.04638671875,
0.00405120849609375,
-0.01273345947265625,
0.023193359375,
-0.021209716796875,
-0.007617950439453125,
0.01470184326171875,
-0.0313720703125,
0.048919677734375,
0.030914306640625,
-0.056671142578125,
-0.05401611328125,
-0.0355224609375,
-0.006... |
Globaly/clases21k | 2023-10-09T15:49:09.000Z | [
"region:us"
] | Globaly | null | null | 0 | 5 | 2023-10-09T15:36:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ContextualAI/tiny-nq_open | 2023-10-09T19:42:33.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 5 | 2023-10-09T19:42:30 | ---
dataset_info:
features:
- name: query
dtype: string
- name: gold_generation
sequence: string
splits:
- name: dev
num_bytes: 7565
num_examples: 100
download_size: 7451
dataset_size: 7565
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-nq_open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 472 | [
[
-0.0312042236328125,
-0.01690673828125,
0.0110626220703125,
-0.00821685791015625,
-0.01303863525390625,
-0.01349639892578125,
0.0087127685546875,
0.00499725341796875,
0.05316162109375,
0.027191162109375,
-0.059600830078125,
-0.040863037109375,
-0.01495361328125,... |
FinGPT/fingpt-fineval | 2023-10-10T06:45:52.000Z | [
"region:us"
] | FinGPT | null | null | 0 | 5 | 2023-10-10T06:44:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 441991
num_examples: 1056
- name: test
num_bytes: 117516
num_examples: 265
download_size: 269193
dataset_size: 559507
---
# Dataset Card for "fingpt-fineval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 612 | [
[
-0.056488037109375,
-0.029083251953125,
0.0103302001953125,
-0.0029506683349609375,
-0.0341796875,
-0.015777587890625,
0.0160980224609375,
-0.0150146484375,
0.043487548828125,
0.04547119140625,
-0.04864501953125,
-0.04547119140625,
-0.033355712890625,
-0.022... |
Linyuyu/zhanglunwen | 2023-10-12T10:41:33.000Z | [
"region:us"
] | Linyuyu | null | null | 0 | 5 | 2023-10-10T07:04:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
relaxtraffic/ggenerator2 | 2023-10-10T09:19:12.000Z | [
"region:us"
] | relaxtraffic | null | null | 0 | 5 | 2023-10-10T09:16:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
johannes-garstenauer/pooling_net_embeddings_dim_16_masked_dataset_1p | 2023-10-10T09:50:55.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 5 | 2023-10-10T09:50:52 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 51148
num_examples: 673
download_size: 61004
dataset_size: 51148
---
# Dataset Card for "pooling_net_embeddings_dim_16_masked_dataset_1p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 422 | [
[
-0.0521240234375,
-0.0162200927734375,
-0.0175018310546875,
0.03094482421875,
-0.0243988037109375,
0.00551605224609375,
0.0305023193359375,
-0.0025310516357421875,
0.07501220703125,
0.043365478515625,
-0.048736572265625,
-0.0479736328125,
-0.045562744140625,
... |
atulsinghphd/e2r-finetune-data1 | 2023-10-10T12:18:08.000Z | [
"region:us"
] | atulsinghphd | null | null | 0 | 5 | 2023-10-10T12:18:07 | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 120837
num_examples: 430
download_size: 26024
dataset_size: 120837
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e2r-finetune-data1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.046295166015625,
-0.03515625,
0.004505157470703125,
0.00040078163146972656,
-0.017486572265625,
-0.025115966796875,
0.0179901123046875,
-0.00818634033203125,
0.054779052734375,
0.02276611328125,
-0.058258056640625,
-0.02978515625,
-0.03704833984375,
-0.01... |
mponty/code_champs_solutions | 2023-10-10T15:34:13.000Z | [
"region:us"
] | mponty | null | null | 1 | 5 | 2023-10-10T14:43:15 | ---
dataset_info:
features:
- name: submission_id
dtype: string
- name: problem_id
dtype: string
- name: date
dtype: int64
- name: language
dtype: string
- name: verdict
dtype: string
- name: cpu_time
dtype: int64
- name: memory
dtype: int64
- name: code
dtype: string
- name: source
dtype: string
- name: testcount
dtype: int64
- name: lenght
dtype: int64
splits:
- name: train
num_bytes: 48699691541
num_examples: 34994861
download_size: 18591747965
dataset_size: 48699691541
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_champs_solutions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 827 | [
[
-0.0413818359375,
-0.013275146484375,
0.007442474365234375,
0.0075225830078125,
-0.003936767578125,
0.019287109375,
0.0093994140625,
0.005580902099609375,
0.05096435546875,
0.033660888671875,
-0.056243896484375,
-0.059112548828125,
-0.032379150390625,
-0.004... |
si3mshady/aws_whitepapers | 2023-10-10T20:15:26.000Z | [
"region:us"
] | si3mshady | null | null | 0 | 5 | 2023-10-10T20:13:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Coldog2333/tiage | 2023-10-11T06:27:05.000Z | [
"size_categories:n<1K",
"language:en",
"license:mit",
"dialogue segmentation",
"region:us"
] | Coldog2333 | \ | \ | 0 | 5 | 2023-10-11T06:15:49 | ---
license: mit
language:
- en
tags:
- dialogue segmentation
size_categories:
- n<1K
---
# Dataset Card for TIAGE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/HuiyuanXie/tiage](https://github.com/HuiyuanXie/tiage)
- **Repository:** [https://github.com/HuiyuanXie/tiage](https://github.com/HuiyuanXie/tiage)
- **Paper:** TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages

English
## Dataset Structure
### Data Instances
```
{
"dial_data": {
"tiage": [
{
"dial_id": "tiage_dial_001",
"turns": [
{
"da": "",
"role": "user",
"turn_id": 0,
"utterance": "hello , how are you doing tonight ?",
"topic_id": 0,
"segmentation_label": 0
},
...
{
"da": "",
"role": "user",
"turn_id": 15,
"utterance": "i bet it is oh i could not",
"topic_id": 4,
"segmentation_label": 1
}
],
...
}
]
}
```
### Data Fields
#### Dialogue-Level
+ `dial_id`: ID of a dialogue;
+ `turns`: All utterances of a dialogue.
#### Utterance-Level
+ `da`: Dialogue Act annotation (empty here);
+ `role`: speaker role (alternating user/agent by default);
+ `turn_id`: ID of an utterance;
+ `utterance`: text of the utterance;
+ `topic_id`: ID (order) of the current topic;
+ `segmentation_label`: 1 if the utterance is the last turn of a topic segment, 0 otherwise.
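The fields above suffice to recover topic segments from a dialogue. A minimal sketch (not an official loader; the inline `turns` below are illustrative, not taken from the dataset):

```python
def split_into_segments(turns):
    # A turn with segmentation_label == 1 marks the end of a topic,
    # so it closes the current segment.
    segments, current = [], []
    for turn in turns:
        current.append(turn["utterance"])
        if turn["segmentation_label"] == 1:
            segments.append(current)
            current = []
    if current:  # trailing turns without a closing label
        segments.append(current)
    return segments

# Illustrative turns mirroring the instance format shown above.
turns = [
    {"utterance": "hello , how are you doing tonight ?", "segmentation_label": 0},
    {"utterance": "pretty good , just got back from a hike", "segmentation_label": 1},
    {"utterance": "do you have any pets ?", "segmentation_label": 0},
]
print(split_into_segments(turns))
```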
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT
### Citation Information
```
@article{xie2021tiage,
title={TIAGE: A Benchmark for Topic-Shift Aware Dialog Modeling},
author={Xie, Huiyuan and Liu, Zhenghao and Xiong, Chenyan and Liu, Zhiyuan and Copestake, Ann},
journal={arXiv preprint arXiv:2109.04562},
year={2021}
}
```
### Contributions
+ Thanks to [@HuiyuanXie](https://github.com/HuiyuanXie/) for collecting this dataset.
+ Thanks to [@Coldog2333](https://github.com/Coldog2333) for adding this dataset. | 4,019 | [
[
-0.031646728515625,
-0.067138671875,
0.0166015625,
-0.0029430389404296875,
-0.0197601318359375,
0.0019779205322265625,
-0.007350921630859375,
-0.0081024169921875,
0.02520751953125,
0.042633056640625,
-0.0867919921875,
-0.07049560546875,
-0.040924072265625,
0... |
arifzanko/conversation_kelantan | 2023-10-11T06:50:24.000Z | [
"region:us"
] | arifzanko | null | null | 0 | 5 | 2023-10-11T06:24:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tanvirsrbd1/vary_merge_dataset_filter_number1 | 2023-10-11T06:45:43.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | 0 | 5 | 2023-10-11T06:45:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 3301248
num_examples: 5960
download_size: 1069863
dataset_size: 3301248
---
# Dataset Card for "vary_merge_dataset_filter_number1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 498 | [
[
-0.0684814453125,
-0.006153106689453125,
0.005832672119140625,
0.01496124267578125,
-0.03839111328125,
-0.0124969482421875,
0.02642822265625,
-0.00693511962890625,
0.050048828125,
0.052764892578125,
-0.076416015625,
-0.05133056640625,
-0.05255126953125,
-0.0... |
timestap/fighter_jet_captions | 2023-10-11T19:09:27.000Z | [
"region:us"
] | timestap | null | null | 0 | 5 | 2023-10-11T19:09:17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4591975.0
num_examples: 25
download_size: 4584088
dataset_size: 4591975.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fighter_jet_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 483 | [
[
-0.043975830078125,
-0.00838470458984375,
0.0119781494140625,
0.02020263671875,
-0.027862548828125,
0.03558349609375,
0.018035888671875,
0.0086212158203125,
0.051513671875,
0.04290771484375,
-0.0606689453125,
-0.04168701171875,
-0.043304443359375,
-0.0118408... |
smangrul/hf-stack-peft | 2023-10-12T06:43:30.000Z | [
"region:us"
] | smangrul | null | null | 0 | 5 | 2023-10-12T06:43:27 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1280407
num_examples: 158
download_size: 424682
dataset_size: 1280407
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf-stack-peft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.046966552734375,
-0.036651611328125,
0.0158233642578125,
0.0281982421875,
-0.004833221435546875,
0.0174102783203125,
0.030914306640625,
-0.003910064697265625,
0.053070068359375,
0.0484619140625,
-0.0504150390625,
-0.041717529296875,
-0.032470703125,
-0.01... |
renumics/spotlight-textvqa-enrichment | 2023-10-13T10:32:18.000Z | [
"region:us"
] | renumics | null | null | 0 | 5 | 2023-10-12T14:03:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_id.embedding
sequence: float32
length: 2
- name: question.embedding
sequence: float32
length: 2
- name: image.embedding
sequence: float32
length: 2
- name: flickr_original_url.embedding
sequence: float32
length: 2
- name: flickr_300k_url.embedding
sequence: float32
length: 2
- name: set_name.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 1660896
num_examples: 34602
- name: validation
num_bytes: 240000
num_examples: 5000
- name: test
num_bytes: 275232
num_examples: 5734
download_size: 3028800
dataset_size: 2176128
---
# Dataset Card for "spotlight-textvqa-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,043 | [
[
-0.035003662109375,
-0.0158233642578125,
0.0165557861328125,
0.0203399658203125,
-0.01316070556640625,
0.0175933837890625,
0.0082855224609375,
-0.00972747802734375,
0.0467529296875,
0.037384033203125,
-0.05908203125,
-0.052337646484375,
-0.037200927734375,
-... |
kakooch/ganjoor-processed | 2023-10-14T06:21:52.000Z | [
"language:fa",
"license:gpl-2.0",
"region:us"
] | kakooch | null | null | 0 | 5 | 2023-10-12T14:04:32 | ---
name: Persian Poetry Dataset
description: |
This dataset contains a rich collection of Persian poems along with metadata about the poets and the verses.
The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.
The dataset is split into a training set and a test set, with 90% of the verses of each poem for each poet used for training and 10% used for testing.
license: gpl-2.0
url: https://github.com/ganjoor/desktop/releases/tag/v2.81
citation: |
Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project.
Available at: https://huggingface.co/datasets/persian_poetry
size: "Custom"
language:
- fa
splits:
train:
description: "This split contains 90% of the verses of each poem for each poet and is used for training."
test:
description: "This split contains 10% of the verses of each poem for each poet and is used for testing."
---
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a rich collection of Persian poems along with metadata about the poets and the verses. The data spans various poets and their poems, and includes the verses with associated information about their position within each poem.
### Data Collection
- **Data Collection Source:** The data is sourced from the [Ganjoor project](https://github.com/ganjoor/). The specific database file can be found in the [releases section](https://github.com/ganjoor/desktop/releases/tag/v2.81) of their GitHub repository.
- **Time Period:** Oct-12-2023
- **Collection Methods:** The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into multiple tables, notably `poem`, `poet`, and `verse` tables which contain information about the poems, poets, and verses respectively. The tables are linked through various ID fields that allow the data to be connected and queried together.
- **Poem Table:**
- `id`: The unique identifier of a poem.
- `cat_id`: A category identifier linking to poet information.
- `title`: The title of the poem.
- `url`: A URL associated with the poem.
- **Poet Table:**
- `id`: The unique identifier of a poet.
- `name`: The name of the poet.
- `cat_id`: A category identifier.
- `description`: A textual description or biography of the poet.
- **Verse Table:**
- `poem_id`: Identifier linking the verse to a particular poem.
- `vorder`: Order of the verse within the poem.
- `position`: Position of the verse, used to determine if two verses form a hemistich.
- `text`: The text of the verse.
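As a sketch of how these tables might be queried together once the Ganjoor database file is downloaded, the snippet below builds a tiny in-memory copy of the schema and joins it. The sample rows are invented, and joining `poem.cat_id` to `poet.cat_id` is an assumption based on the linkage described above:

```python
import sqlite3

# In-memory stand-in for the poem / poet / verse tables described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE poet  (id INTEGER, name TEXT, cat_id INTEGER, description TEXT);
CREATE TABLE poem  (id INTEGER, cat_id INTEGER, title TEXT, url TEXT);
CREATE TABLE verse (poem_id INTEGER, vorder INTEGER, position INTEGER, text TEXT);
INSERT INTO poet  VALUES (1, 'Sample Poet', 10, 'bio');
INSERT INTO poem  VALUES (100, 10, 'Sample Poem Title', 'http://example.org');
INSERT INTO verse VALUES (100, 0, 0, 'First part of a hemistich');
INSERT INTO verse VALUES (100, 1, 1, 'Second part of a hemistich');
""")

# Join verses back to their poem and poet, in verse order.
rows = conn.execute("""
SELECT poet.name, poem.title, verse.vorder, verse.text
FROM verse
JOIN poem ON verse.poem_id = poem.id
JOIN poet ON poem.cat_id = poet.cat_id
ORDER BY verse.vorder
""").fetchall()
for name, title, vorder, text in rows:
    print(name, title, vorder, text)
```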
### Data Example
```json
{
"poet": "Sample Poet",
"title": "Sample Poem Title",
"content": [
{
"hemistich": {
"verse0": "First part of a hemistich",
"verse1": "Second part of a hemistich"
}
},
{
"verse": {"text": "A standalone verse"}
}
]
}
```
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Poem generation
- Poet identification
- Style analysis
### Challenges & Limitations
- The dataset does not contain verses longer than 100 characters.
- Some poems may contain verses that form hemistichs, which are represented with a specific structure in the dataset.
### License
GPL-2.0 (GNU General Public License), inherited from the original source
## Additional Information
### Citation
```
Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project. Available at: https://huggingface.co/datasets/persian_poetry
```
### Dataset Link
[Download the dataset from Hugging Face](https://huggingface.co/datasets/persian_poetry)
### Contact
Email: [kakooch@gmail.com](mailto:kakooch@gmail.com) | GitHub: [kakooch](https://github.com/kakooch)
---
*This README was generated by Kakooch.*
| 3,996 | [
[
-0.029510498046875,
-0.038726806640625,
0.0197296142578125,
0.0260467529296875,
-0.0282745361328125,
0.0002130270004272461,
-0.03350830078125,
-0.0240020751953125,
0.0350341796875,
0.047332763671875,
-0.0443115234375,
-0.0733642578125,
-0.043182373046875,
0.... |
rouabelgacem/autotrain-data-nlp-bert-ner-testing | 2023-10-12T14:53:16.000Z | [
"region:us"
] | rouabelgacem | null | null | 0 | 5 | 2023-10-12T14:44:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/xlmr_test_10shot | 2023-10-12T15:00:02.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 5 | 2023-10-12T14:59:28 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 948566820
num_examples: 900000
download_size: 204962722
dataset_size: 948566820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xlmr_test_10shot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.03814697265625,
-0.01800537109375,
0.01678466796875,
0.002231597900390625,
-0.007518768310546875,
0.01064300537109375,
0.02459716796875,
0.0096282958984375,
0.04168701171875,
0.028228759765625,
-0.0511474609375,
-0.042510986328125,
-0.0565185546875,
-0.00... |
surathisin/nvse-dataset-test-1 | 2023-10-13T06:24:23.000Z | [
"region:us"
] | surathisin | null | null | 0 | 5 | 2023-10-12T15:23:30 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/eval_tag_nq_dev_v11_first | 2023-10-12T15:36:30.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 5 | 2023-10-12T15:26:50 | ---
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
- name: titles
dtype: string
splits:
- name: train
num_bytes: 3340
num_examples: 10
- name: validation
num_bytes: 2403269
num_examples: 6515
download_size: 1389023
dataset_size: 2406609
---
# Dataset Card for "eval_tag_nq_dev_v11_first"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 722 | [
[
-0.046539306640625,
-0.0212554931640625,
0.006084442138671875,
0.00626373291015625,
-0.01251220703125,
0.0273590087890625,
0.0282745361328125,
0.002193450927734375,
0.0633544921875,
0.03302001953125,
-0.05072021484375,
-0.053558349609375,
-0.025848388671875,
... |
hippocrates/MedNLI_test | 2023-10-18T19:46:58.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 5 | 2023-10-12T15:48:06 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 4828284
num_examples: 11232
- name: valid
num_bytes: 606323
num_examples: 1395
- name: test
num_bytes: 605516
num_examples: 1422
download_size: 0
dataset_size: 6040123
---
# Dataset Card for "MedNLI_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.037078857421875,
-0.019805908203125,
0.0042572021484375,
0.0025348663330078125,
-0.00511932373046875,
-0.01050567626953125,
0.017059326171875,
-0.0122833251953125,
0.06353759765625,
0.025421142578125,
-0.0628662109375,
-0.045440673828125,
-0.020965576171875,
... |
kakooch/persian-poetry-qa | 2023-10-14T10:22:45.000Z | [
"language:fa",
"license:gpl-2.0",
"region:us"
] | kakooch | null | null | 0 | 5 | 2023-10-12T16:38:52 | ---
name: Persian Poetry QA Dataset
description: |
This dataset is structured in a question-answering format derived from a rich collection of Persian poems along with metadata about the poets and the verses.
It is designed to be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as Question Answering, Text Generation, Language Modeling, and Style Analysis.
license: gpl-2.0
url: https://github.com/ganjoor/desktop/releases/tag/v2.81
citation: |
Persian Poetry QA Dataset. Collected by Kakooch from the Ganjoor Project.
Available at: https://huggingface.co/datasets/persian_poetry
size: "Custom"
language:
- fa
splits:
train:
description: "This split contains Persian poems structured for QA, where each row asks for a sample poem from a specific poet with the poem or verse as the answer."
validation:
      description: "This split contains a random selection of 1% of the Persian poems in the original dataset."
features:
context:
description: "A static string which is 'Persian Poetry or شعر فارسی'."
type: "string"
question:
description: "A string that asks for a sample poem from a specific poet in the format 'یک نمونه از شعر [POET_NAME]'."
type: "string"
answer:
description: "Text of a hemistich or verse."
type: "string"
answer_start:
description: "The starting character index of `answer` within `context` (Note: this is always -1 in the current dataset as `answer` is not a substring of `context`)."
type: "int32"
configs:
- config_name: default
data_files:
- split: train
path: poems-qa.csv
---
# Persian Poetry Dataset
## Dataset Description
### Overview
This dataset contains a collection of Persian poems structured in a question-answering format. The dataset is derived from various Persian poets and their poems, providing a rich source for exploring Persian poetry in a structured manner suitable for machine learning applications, especially in natural language processing tasks like question answering.
### Data Collection
- **Data Collection Source:** The data is sourced from the [Ganjoor project](https://github.com/ganjoor/). The specific database file can be found in the [releases section](https://github.com/ganjoor/desktop/releases/tag/v2.81) of their GitHub repository.
- **Time Period:** Oct-12-2023
- **Collection Methods:** The data was collected by downloading the raw database file from the Ganjoor project's GitHub repository.
### Data Structure
The dataset is structured into a CSV file with the following columns:
- `context`: A static string which is "Persian Poetry or شعر فارسی".
- `question`: A string that asks for a sample poem from a specific poet in the format "یک نمونه از شعر [POET_NAME]".
- `answer`: Text of a hemistich or verse; the verses of a hemistich are tab-separated.
- `answer_start`: The starting character index of `answer` within `context` (Note: this is always -1 in the current dataset as `answer` is not a substring of `context`).
### Data Example
```plaintext
context,question,answer,answer_start
Persian Poetry,یک نمونه از شعر صائب تبریزی,خار نتواند گرفتن دامن ریگ روان رهنورد شوق، افسردن نمی داند که چیست,-1
```
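As a sketch of how such a row might be parsed, the snippet below reads one illustrative (non-Persian, invented) row with the same column layout and splits the tab-separated verses:

```python
import csv
import io

# One illustrative row shaped like the CSV example above;
# the answer column holds two tab-separated verses.
sample = (
    "context,question,answer,answer_start\n"
    "Persian Poetry,a sample question,first verse\tsecond verse,-1\n"
)

reader = csv.DictReader(io.StringIO(sample))
row = next(reader)
verses = row["answer"].split("\t")
print(verses)                # ['first verse', 'second verse']
print(row["answer_start"])   # '-1' (a string; cast to int if needed)
```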
## Dataset Usage
### Use Cases
This dataset can be utilized for various Natural Language Processing and analysis tasks related to Persian poetry, such as:
- Question Answering
- Text Generation
- Language Modeling
- Style Analysis
### Challenges & Limitations
- The `answer_start` field is always -1 as the `answer` is not a substring of `context`. Depending on your use-case, you might need to adjust how `context` and `answer_start` are determined.
- The dataset does not contain verses longer than 100 characters.
### License
GPL-2.0 (GNU General Public License), inherited from the original Ganjoor project
## Additional Information
### Citation
```
Persian Poetry Dataset. Collected by Kakooch from the Ganjoor Project. Available at: https://huggingface.co/datasets/persian_poetry
```
### Dataset Link
[Download the dataset from Hugging Face](https://huggingface.co/datasets/persian_poetry)
### Contact
Email: [kakooch@gmail.com](mailto:kakooch@gmail.com) | GitHub: [kakooch](https://github.com/kakooch)
---
*This README was generated by Kakooch.*
| 4,326 | [
[
-0.0257720947265625,
-0.0614013671875,
0.01934814453125,
0.0180816650390625,
-0.031280517578125,
-0.01326751708984375,
-0.037200927734375,
-0.0211639404296875,
0.02972412109375,
0.037811279296875,
-0.052764892578125,
-0.05731201171875,
-0.041778564453125,
0.... |
Lexington120/Test_Semantic_Search | 2023-10-12T22:05:58.000Z | [
"region:us"
] | Lexington120 | null | null | 0 | 5 | 2023-10-12T18:45:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
daishen/legal-ar | 2023-10-14T07:33:12.000Z | [
"region:us"
] | daishen | null | null | 0 | 5 | 2023-10-13T01:22:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/rbrt_eval | 2023-10-13T16:10:23.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 5 | 2023-10-13T16:10:18 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 18920775
num_examples: 11590
download_size: 6002960
dataset_size: 18920775
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 603 | [
[
-0.037017822265625,
-0.04931640625,
0.00696563720703125,
0.0142364501953125,
-0.01361083984375,
0.0228729248046875,
0.0043792724609375,
-0.0077972412109375,
0.040802001953125,
0.032989501953125,
-0.0418701171875,
-0.047637939453125,
-0.034027099609375,
-0.01... |
gg-ai/es-1310-no-demoji-m | 2023-10-13T18:58:20.000Z | [
"region:us"
] | gg-ai | null | null | 0 | 5 | 2023-10-13T18:58:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: clean_text
dtype: string
- name: sent
dtype: int64
splits:
- name: train
num_bytes: 12631455
num_examples: 32431
- name: test
num_bytes: 2492249
num_examples: 6486
- name: val
num_bytes: 609294
num_examples: 1622
download_size: 10520538
dataset_size: 15732998
---
# Dataset Card for "es-1310-no-demoji-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 716 | [
[
-0.033050537109375,
-0.00794219970703125,
0.0169677734375,
0.00234222412109375,
-0.02532958984375,
-0.01325225830078125,
0.0017957687377929688,
0.005756378173828125,
0.08905029296875,
0.042205810546875,
-0.075927734375,
-0.06488037109375,
-0.04241943359375,
... |
sshreyy/donut_test_combined | 2023-10-14T06:39:09.000Z | [
"region:us"
] | sshreyy | null | null | 0 | 5 | 2023-10-14T06:37:53 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 8310825.0
num_examples: 8
- name: test
num_bytes: 3675299.0
num_examples: 4
- name: validation
num_bytes: 6543521.0
num_examples: 6
download_size: 8907499
dataset_size: 18529645.0
---
# Dataset Card for "donut_test_combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 530 | [
[
-0.03082275390625,
-0.0232696533203125,
0.00202178955078125,
0.006046295166015625,
-0.0019741058349609375,
0.01617431640625,
0.0007948875427246094,
-0.00667572021484375,
0.054656982421875,
0.036407470703125,
-0.049102783203125,
-0.0379638671875,
-0.0370178222656... |
carnival13/rbrt_eval_sur | 2023-10-14T11:42:41.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 5 | 2023-10-14T11:42:36 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 13846819
num_examples: 6970
download_size: 3906257
dataset_size: 13846819
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_eval_sur"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 606 | [
[
-0.04559326171875,
-0.043548583984375,
0.004955291748046875,
0.020263671875,
-0.0177764892578125,
0.021331787109375,
0.01190185546875,
-0.005512237548828125,
0.046905517578125,
0.044891357421875,
-0.0423583984375,
-0.052337646484375,
-0.0333251953125,
-0.008... |
carnival13/rbrt_full_uda_large_ep5 | 2023-10-15T03:08:18.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 5 | 2023-10-15T02:40:07 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1219081708
num_examples: 824810
download_size: 422786339
dataset_size: 1219081708
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_full_uda_large_ep5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 624 | [
[
-0.047271728515625,
-0.0207366943359375,
0.024688720703125,
0.0273284912109375,
-0.031219482421875,
-0.00009012222290039062,
0.01139068603515625,
-0.016326904296875,
0.057769775390625,
0.03717041015625,
-0.042144775390625,
-0.057861328125,
-0.0307159423828125,
... |