id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
Kaludi/Customer-Support-Responses | 2023-03-27T23:11:45.000Z | [
"region:us"
] | Kaludi | null | null | 1 | 41 | 2023-03-27T23:11:14 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Pranavkpba2000/skin_cancer_dataset | 2023-05-14T08:47:49.000Z | [
"region:us"
] | Pranavkpba2000 | null | null | 1 | 41 | 2023-05-14T08:40:43 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 9380942753.528
num_examples: 28516
- name: test
num_bytes: 1445202498.285
num_examples: 7105
download_size: 9852696203
dataset_size: 10826145251.813
---
# Dataset Card for "skin_cancer_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 659 | [
[
-0.01209259033203125,
-0.017791748046875,
0.018707275390625,
0.00450897216796875,
-0.01806640625,
0.00760650634765625,
0.030120849609375,
-0.0153350830078125,
0.061553955078125,
0.051239013671875,
-0.0521240234375,
-0.07696533203125,
-0.0445556640625,
-0.029... |
tasksource/tracie | 2023-05-31T08:26:23.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"nli",
"region:us"
] | tasksource | null | null | 1 | 41 | 2023-05-25T07:17:09 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- nli
---
https://github.com/allenai/aristo-leaderboard/tree/master/tracie/data
```
@inproceedings{ZRNKSR21,
author = {Ben Zhou and Kyle Richardson and Qiang Ning and Tushar Khot and Ashish Sabharwal and Dan Roth},
title = {Temporal Reasoning on Implicit Events from Distant Supervision},
booktitle = {NAACL},
year = {2021},
}
``` | 430 | [
[
-0.001842498779296875,
-0.051055908203125,
0.0660400390625,
0.0099639892578125,
-0.00839996337890625,
-0.00016129016876220703,
0.0029735565185546875,
-0.058197021484375,
0.016845703125,
0.01166534423828125,
-0.0692138671875,
-0.057952880859375,
-0.04632568359375... |
clarin-knext/dbpedia-pl-qrels | 2023-06-07T08:12:37.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 41 | 2023-06-06T22:28:53 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
[
-0.015411376953125,
-0.0628662109375,
0.03546142578125,
0.016387939453125,
-0.022186279296875,
-0.0103607177734375,
-0.0115966796875,
-0.034515380859375,
-0.0013065338134765625,
0.0286102294921875,
-0.038299560546875,
-0.04815673828125,
-0.0290069580078125,
... |
causal-lm/gpt4all | 2023-06-25T03:24:10.000Z | [
"region:us"
] | causal-lm | null | null | 2 | 41 | 2023-06-25T03:15:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ibm-nasa-geospatial/multi-temporal-crop-classification | 2023-09-06T19:33:21.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"remote sensing",
"segmentation",
"crop type",
"foundation model",
"doi:10.57967/hf/0955",
"region:us"
] | ibm-nasa-geospatial | null | null | 11 | 41 | 2023-07-27T18:56:57 | ---
license: cc-by-4.0
language:
- en
tags:
- remote sensing
- segmentation
- crop type
- foundation model
size_categories:
- 1K<n<10K
---
# Dataset Card for Multi-Temporal Crop Classification
## Dataset Description
- **Homepage: https://huggingface.co/datasets/ibm-nasa-geospatial/cdl-crops/**
- **Point of Contact: Dr. Hamed Alemohammad (halemohammad@clarku.edu)**
### Dataset Summary
This dataset contains temporal Harmonized Landsat-Sentinel (HLS) imagery of diverse land cover and crop type classes across the Contiguous United States for the year 2022. The target labels are derived from USDA's Cropland Data Layer (CDL). Its primary purpose is training geospatial machine learning models for segmentation.
### Dataset Structure
## TIFF Files
Each GeoTIFF file covers a 224 x 224 pixel area at 30 m spatial resolution. Each input satellite file contains 18 bands: 6 spectral bands for each of three time steps, stacked together. Each mask GeoTIFF contains a single band with the target class for each pixel.
## Band Order
In each input GeoTIFF the following bands are repeated three times for three observations throughout the growing season:
| Channel | Name | HLS S30 Band number |
|---|---|---|
| 1 | Blue | B02 |
| 2 | Green | B03 |
| 3 | Red | B04 |
| 4 | NIR | B8A |
| 5 | SW 1 | B11 |
| 6 | SW 2 | B12 |
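Given this stacking order, the 1-based GeoTIFF band index of a spectral channel at a given time step follows from simple arithmetic. A minimal sketch, assuming the three observations are stacked chronologically as described above:

```python
def band_index(timestep: int, channel: int) -> int:
    """1-based GeoTIFF band index for a spectral channel at a time step.

    timestep: 0, 1, or 2 (early-, mid-, and late-season observation)
    channel:  1..6 (Blue, Green, Red, NIR, SW 1, SW 2)
    """
    if timestep not in (0, 1, 2) or not 1 <= channel <= 6:
        raise ValueError("timestep must be 0-2 and channel 1-6")
    # Each observation contributes a contiguous block of 6 bands.
    return timestep * 6 + channel

# e.g. the Red band (channel 3) of the mid-season observation
mid_red = band_index(1, 3)  # band 9
```

So the final band in the stack, `band_index(2, 6)`, is band 18, matching the 18-band layout above.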
Masks are a single band with values:
0 : "No Data"
1 : "Natural Vegetation"
2 : "Forest"
3 : "Corn"
4 : "Soybeans"
5 : "Wetlands"
6 : "Developed/Barren"
7 : "Open Water"
8 : "Winter Wheat"
9 : "Alfalfa"
10 : "Fallow/Idle Cropland"
11 : "Cotton"
12 : "Sorghum"
13 : "Other"
## Class Distribution
### Training Data Distribution

### Validation Data Distribution

## Data Splits
The 3,854 chips have been randomly split into training (80%) and validation (20%) sets, with the corresponding chip ids recorded in the text files `train_data.txt` and `validation_data.txt`.
## Dataset Creation
### Query and Scene Selection
First, a set of 5,000 chips was defined based on samples from the USDA CDL to ensure representative sampling across the CONUS. Next, for each chip, the corresponding HLS S30 scenes between March and September 2022 were queried, and scenes with low cloud cover were retrieved. Three of these low-cloud scenes were then selected so that each chip has one scene from early in the growing season, one from the middle, and one from toward the end. The three final scenes were reprojected to CDL's projection grid (`EPSG:5070`) using bilinear interpolation.
### Chip Generation
In the final step, the three scenes for each chip were clipped to the bounding box of the chip, and the 18 spectral bands were stacked together. In addition, a quality control step was applied to each chip using the `Fmask` layer of the HLS dataset: any chip containing cloud, cloud shadow, cloud-adjacent pixels, or missing values was discarded. This resulted in 3,854 chips.
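The per-chip quality screen described above can be sketched as follows. The `Fmask` values treated as invalid here are illustrative placeholders only: the actual HLS `Fmask` layer is bit-packed, and the exact decoding used by the authors is not specified in this card.

```python
# Illustrative set of flag values treated as invalid (standing in for
# cloud, cloud shadow, adjacent-to-cloud, and fill). The real HLS Fmask
# encoding is bit-packed; these values are assumptions for the sketch.
INVALID_FMASK_VALUES = {2, 3, 4, 255}

def chip_is_clean(fmask_chip) -> bool:
    """Return True when no pixel in the chip carries an invalid flag.

    fmask_chip is a 2-D sequence of per-pixel Fmask values.
    """
    return not any(
        value in INVALID_FMASK_VALUES
        for row in fmask_chip
        for value in row
    )

clear_chip = [[0, 0], [1, 0]]   # all pixels clear -> chip kept
cloudy_chip = [[0, 2], [0, 0]]  # one flagged pixel -> chip discarded
```

Under this screen, a single flagged pixel is enough to discard the whole 224 x 224 chip, which matches the strict filtering described above.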
### Dataset Download
You can download the data in `.tgz` format from this repository (you need to install [Git Large File Storage](https://git-lfs.com/) for this). The same version of the data is hosted on [Source Cooperative](https://beta.source.coop/repositories/clarkcga/multi-temporal-crop-classification/description) as objects on AWS S3.
### Citation
If this dataset helped your research, please cite `hls-multi-temporal-crop-classification` in your publications. Here is an example BibTeX entry:
```
@misc{hls-multi-temporal-crop-classification,
author = {Cecil, Michael and Kordi, Fatemeh and Li, Hanxi (Steve) and Khallaghi, Sam and Alemohammad, Hamed},
doi = {10.57967/hf/0955},
month = aug,
title = {{HLS Multi Temporal Crop Classification}},
url = {https://huggingface.co/ibm-nasa-geospatial/multi-temporal-crop-classification},
year = {2023}
}
``` | 3,798 | [
[
-0.031494140625,
-0.031951904296875,
0.03570556640625,
0.0115203857421875,
-0.0087432861328125,
0.02972412109375,
-0.0093841552734375,
-0.03973388671875,
-0.0033416748046875,
0.00637054443359375,
-0.04449462890625,
-0.0587158203125,
-0.047210693359375,
-0.00... |
maximuslee07/raqna | 2023-10-11T17:48:21.000Z | [
"region:us"
] | maximuslee07 | null | null | 0 | 41 | 2023-08-10T18:53:46 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 85566
num_examples: 100
download_size: 53421
dataset_size: 85566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "raqna"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 426 | [
[
-0.0374755859375,
-0.01459503173828125,
0.002044677734375,
0.0138092041015625,
-0.01348876953125,
0.0097808837890625,
0.027069091796875,
-0.00768280029296875,
0.06500244140625,
0.032928466796875,
-0.054229736328125,
-0.05108642578125,
-0.0340576171875,
-0.01... |
thesistranslation/distilled-ccmatrix-en-de | 2023-10-03T12:20:34.000Z | [
"language:en",
"language:de",
"region:us"
] | thesistranslation | null | null | 0 | 41 | 2023-08-17T13:44:37 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 7294036621
num_examples: 30000000
download_size: 5135500985
dataset_size: 7294036621
language:
- en
- de
---
# Dataset Card for "distilled-ccmatrix-en-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 494 | [
[
-0.042816162109375,
-0.02557373046875,
0.0262908935546875,
0.0146636962890625,
-0.03533935546875,
0.0259857177734375,
0.00264739990234375,
0.01294708251953125,
0.04974365234375,
0.0254364013671875,
-0.047821044921875,
-0.06072998046875,
-0.058807373046875,
-... |
larryvrh/ShareGPT-Zh_Only | 2023-08-22T08:25:50.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | larryvrh | null | null | 4 | 41 | 2023-08-21T09:57:50 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 69835231
num_examples: 8631
download_size: 32862465
dataset_size: 69835231
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- 1K<n<10K
---
# Dataset Card for "sharegpt"
Combined and filtered from [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) and [zetavg/ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed). | 628 | [
[
-0.044219970703125,
-0.0231170654296875,
0.0267333984375,
0.03045654296875,
-0.03338623046875,
0.007404327392578125,
0.01558685302734375,
-0.026031494140625,
0.046875,
0.04644775390625,
-0.075439453125,
-0.043426513671875,
-0.055877685546875,
-0.008666992187... |
Isaak-Carter/Function_Calling_Private_GG | 2023-10-10T12:35:06.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | Isaak-Carter | null | null | 1 | 41 | 2023-09-02T10:35:38 | ---
license: apache-2.0
task_categories:
- text-generation
- conversational
pretty_name: Function Calling Like A Champ
size_categories:
- 100K<n<1M
---
# Function Recommendation Dataset Readme
## Description
This dataset is based on the "glaiveai/glaive-function-calling" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 conversation snippets between me and my virtual assistant, J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), who exclusively assists me with various tasks by recommending functions and providing relevant information.
Each snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:
1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.
2. The user asks a question or requests assistance related to a specific task.
3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.
4. The user may ask follow-up questions or request additional recommendations.
5. J.O.S.I.E. continues to provide function recommendations and responses as needed.
## Snippet Example
Here is an example snippet from the dataset:
```markdown
### SYSTEM: You are Gökdeniz Gülmez's private assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity) who has access to the following functions to exclusively help Gökdeniz Gülmez, you can use the functions if needed-
{
"name": "recommend_movies",
"description": "Recommend movies based on user preferences",
"parameters": {
"type": "object",
"properties": {
"genre": {
"type": "string",
"description": "The preferred movie genre"
},
"year_range": {
"type": "object",
"properties": {
"start_year": {
"type": "integer",
"description": "The start year of the movie release"
},
"end_year": {
"type": "integer",
"description": "The end year of the movie release"
}
}
}
}
}
}
### Gökdeniz Gülmez: Can you recommend some movies for me?
### J.O.S.I.E.: Sure! I can recommend movies based on your preferences. Please provide me with your preferred movie genre and the year range of the movie release.
### Gökdeniz Gülmez: I enjoy action movies from the 2000s.
### J.O.S.I.E.: <functioncall> {"name": "recommend_movies", "arguments": '{
"genre": "action",
"year_range": {
"start_year": 2000,
"end_year": 2009
}
}'}
### FUNCTION RESPONSE: {"movies": ["The Dark Knight", "Gladiator", "The Bourne Identity", "Kill Bill: Volume 1", "The Matrix"]}
### J.O.S.I.E.: Based on your preferences, I recommend the following movies: "The Dark Knight", "Gladiator", "The Bourne Identity", "Kill Bill: Volume 1", and "The Matrix".
```
In this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the "recommend_movies" function and provides a list of recommended movies as a response.
## Dataset Usage
This dataset can be used for training and fine-tuning Large Language Models (LLMs) such as GPT-3.5 on the task of generating function recommendations in a conversational context. Researchers and developers can use this data to build virtual assistants or chatbots capable of recommending functions and providing relevant information to users based on their requests.
## Citation
If you use this dataset in your research or applications, please cite it as follows:
```
@dataset{your citation here,
title = {Private Function Calling},
author = {Gökdeniz Gülmez},
year = {2023},
publisher = {Gökdeniz Gülmez},
url = {https://huggingface.co/datasets/Isaak-Carter/Function_Calling_Private_GG/tree/main},
}
``` | 4,131 | [
[
-0.031005859375,
-0.0513916015625,
0.0272369384765625,
-0.0005044937133789062,
-0.01934814453125,
-0.0198974609375,
-0.016632080078125,
-0.0126800537109375,
0.032073974609375,
0.0606689453125,
-0.0611572265625,
-0.0582275390625,
-0.0247039794921875,
-0.00843... |
chengli-thu/linghuchong | 2023-09-03T01:57:53.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-4.0",
"arxiv:2308.09597",
"region:us"
] | chengli-thu | null | null | 1 | 41 | 2023-09-03T01:51:46 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- zh
size_categories:
- 1K<n<10K
---
Linghu Chong (令狐冲) data supporting ChatHaruhi2; it can be invoked as follows:
```python
from chatharuhi import ChatHaruhi
chatbot = ChatHaruhi( role_from_hf = 'chengli-thu/linghuchong', \
llm = 'openai')
response = chatbot.chat(role='小师妹', text = '冲哥。')
print(response)
```
Uploader: 李鲁鲁
For more details, see [ChatHaruhi](https://github.com/LC1332/Chat-Haruhi-Suzumiya).
You are welcome to join our [crowdsourced character-creation project](https://github.com/LC1332/Chat-Haruhi-Suzumiya/tree/main/characters/novel_collecting).
### Citation
Please cite this repository if you use its data or code.
```
@misc{li2023chatharuhi,
title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
year={2023},
eprint={2308.09597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,077 | [
[
0.01165771484375,
-0.049163818359375,
-0.013397216796875,
0.0178070068359375,
-0.0152587890625,
0.0016050338745117188,
-0.03314208984375,
-0.035400390625,
0.03363037109375,
0.01458740234375,
-0.02850341796875,
0.005817413330078125,
-0.0191802978515625,
-0.00... |
yzhuang/autotree_automl_100000_bank-marketing_sgosdt_l256_dim7_d3_sd0 | 2023-09-07T21:25:25.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 41 | 2023-09-07T21:24:57 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2057200000
num_examples: 100000
- name: validation
num_bytes: 205720000
num_examples: 10000
download_size: 419082043
dataset_size: 2262920000
---
# Dataset Card for "autotree_automl_100000_bank-marketing_sgosdt_l256_dim7_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 855 | [
[
-0.0197296142578125,
-0.0212554931640625,
0.009185791015625,
0.0248260498046875,
-0.01025390625,
0.015380859375,
0.042633056640625,
-0.0023097991943359375,
0.04681396484375,
0.0406494140625,
-0.052734375,
-0.046630859375,
-0.044830322265625,
0.00083684921264... |
jxie/higgs | 2023-09-20T06:01:24.000Z | [
"region:us"
] | jxie | null | null | 0 | 41 | 2023-09-13T01:10:20 | ---
dataset_info:
features:
- name: inputs
sequence: float64
- name: label
dtype: float64
splits:
- name: val_16k
num_bytes: 3702368
num_examples: 15688
- name: train_10k
num_bytes: 2360000
num_examples: 10000
- name: train_1k
num_bytes: 236000
num_examples: 1000
- name: train_68k
num_bytes: 14809236
num_examples: 62751
- name: train_100k
num_bytes: 23600000
num_examples: 100000
- name: train
num_bytes: 2478000000
num_examples: 10500000
- name: test
num_bytes: 118000000
num_examples: 500000
- name: test_20k
num_bytes: 4627960
num_examples: 19610
- name: train_63k
num_bytes: 14809236
num_examples: 62751
download_size: 2168393527
dataset_size: 2660144800
---
# Dataset Card for "higgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 934 | [
[
-0.040679931640625,
-0.01812744140625,
0.022918701171875,
-0.0009479522705078125,
0.0020160675048828125,
0.0005540847778320312,
0.0196533203125,
-0.0174102783203125,
0.0596923828125,
0.028839111328125,
-0.049285888671875,
-0.0489501953125,
-0.0423583984375,
... |
AnhTong/vi_dataset | 2023-09-20T16:50:47.000Z | [
"region:us"
] | AnhTong | null | null | 0 | 41 | 2023-09-20T16:11:02 | ---
dataset_info:
features:
- name: title
dtype: string
- name: link
dtype: string
- name: content
dtype: string
splits:
- name: astronomy
num_bytes: 5509853
num_examples: 1163
- name: cacnuoc
num_bytes: 1849582
num_examples: 373
- name: hocvan12
num_bytes: 3700549
num_examples: 584
- name: marketing
num_bytes: 1395360
num_examples: 304
- name: molympiad
num_bytes: 11949913
num_examples: 4488
- name: sinhhocvn
num_bytes: 1201768
num_examples: 142
- name: vansudia
num_bytes: 85849474
num_examples: 9045
- name: kimca
num_bytes: 2126678
num_examples: 902
- name: toidicodedao
num_bytes: 3045055
num_examples: 498
download_size: 57946392
dataset_size: 116628232
---
# Dataset Card for "vi_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 948 | [
[
-0.036773681640625,
-0.0203704833984375,
0.0131072998046875,
0.01473236083984375,
-0.0174102783203125,
-0.00333404541015625,
0.01788330078125,
-0.00879669189453125,
0.06207275390625,
0.0306854248046875,
-0.059051513671875,
-0.055694580078125,
-0.03314208984375,
... |
usvsnsp/memories-semantic-memorization-filter-results | 2023-09-20T20:16:41.000Z | [
"region:us"
] | usvsnsp | null | null | 1 | 41 | 2023-09-20T20:08:28 | ---
dataset_info:
features:
- name: sequence_id
dtype: int64
- name: text
dtype: string
- name: sequence_duplicates
dtype: int64
- name: max_frequency
dtype: int64
- name: avg_frequency
dtype: float64
- name: min_frequency
dtype: int64
- name: median_frequency
dtype: float64
- name: p25_frequency
dtype: int64
- name: p75_frequency
dtype: int64
- name: frequencies
sequence: int64
- name: is_incrementing
dtype: bool
- name: tokens
sequence: int64
- name: repeating_offset
dtype: int32
- name: num_repeating
dtype: int32
- name: smallest_repeating_chunk
sequence: int64
- name: memorization_score
dtype: float64
- name: templating_frequency_0.9
dtype: int64
- name: templating_frequency_0.8
dtype: int64
- name: prompt_perplexity
dtype: float32
- name: generation_perplexity
dtype: float32
- name: sequence_perplexity
dtype: float32
splits:
- name: memories.duped.70m
num_bytes: 648141277
num_examples: 463953
- name: memories.duped.160m
num_bytes: 955903849
num_examples: 689673
- name: memories.duped.410m
num_bytes: 1337555782
num_examples: 970341
- name: memories.duped.1b
num_bytes: 1725540452
num_examples: 1256141
- name: memories.duped.1.4b
num_bytes: 1884519155
num_examples: 1373722
- name: memories.duped.2.8b
num_bytes: 2292743123
num_examples: 1675077
- name: memories.duped.6.9b
num_bytes: 2898035658
num_examples: 2120976
- name: memories.duped.12b
num_bytes: 3252649684
num_examples: 2382328
- name: memories.deduped.70m
num_bytes: 576211560
num_examples: 411448
- name: memories.deduped.160m
num_bytes: 809545073
num_examples: 581195
- name: memories.deduped.410m
num_bytes: 1126006111
num_examples: 811039
- name: memories.deduped.1b
num_bytes: 1430399436
num_examples: 1032865
- name: memories.deduped.1.4b
num_bytes: 1450336662
num_examples: 1048097
- name: memories.deduped.2.8b
num_bytes: 1871907415
num_examples: 1355211
- name: memories.deduped.6.9b
num_bytes: 2319039796
num_examples: 1680294
- name: memories.deduped.12b
num_bytes: 2581349436
num_examples: 1871216
download_size: 9223426756
dataset_size: 27159884469
---
# Dataset Card for "memories-semantic-memorization-filter-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,541 | [
[
-0.042388916015625,
-0.0265655517578125,
0.0399169921875,
-0.0008945465087890625,
-0.01953125,
-0.00785064697265625,
-0.00039076805114746094,
-0.00829315185546875,
0.0567626953125,
0.04412841796875,
-0.061431884765625,
-0.08197021484375,
-0.050933837890625,
... |
Duxiaoman-DI/FinCorpus | 2023-09-22T10:10:10.000Z | [
"size_categories:10M<n<100M",
"language:zh",
"license:apache-2.0",
"finance",
"region:us"
] | Duxiaoman-DI | null | null | 27 | 41 | 2023-09-22T05:01:30 | ---
license: apache-2.0
language:
- zh
tags:
- finance
size_categories:
- 10M<n<100M
---
A Chinese financial-news dataset, including (sizes before compression):
- Listed company announcements: announcement_data.jsonl 20G
- Financial information / news
  - fin_news_data.jsonl 30G
  - fin_articles_data.jsonl 10G
- Financial exam questions: fin_exam.jsonl 370M
Data format:
```
{
"text": <文本内容>,
"meta": {
"source": <数据来源>
}
}
``` | 318 | [
[
-0.01226806640625,
-0.0618896484375,
0.00019347667694091797,
0.037628173828125,
-0.044097900390625,
0.0238800048828125,
0.008544921875,
-0.00861358642578125,
0.03515625,
0.052398681640625,
-0.0181427001953125,
-0.043426513671875,
-0.0231170654296875,
0.00796... |
Kerenfuentes/holistic_bias | 2023-09-29T21:18:24.000Z | [
"region:us"
] | Kerenfuentes | This folder contains code to generate the HolisticBias dataset, a set of sentences containing demographic
identity language (e.g. “Hi! I am a Catholic grandmother.”), used in the context of a two-person conversation.
Sentences are formed by combining (1) an identity term from one of 13 demographic axes, (2) a noun referring to
a person (mom, boy, grandparent, etc.), and (3) one of several dozen sentence templates. | @article{smith2022imsorry,
doi = {10.48550/ARXIV.2205.09209},
url = {https://arxiv.org/abs/2205.09209},
author = {Smith, Eric Michael and Hall, Melissa and Kambadur, Melanie and Presani, Eleonora and Williams, Adina},
keywords = {Computation and Language (cs.CL), Computers and Society (cs.CY), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
} | 0 | 41 | 2023-09-22T21:53:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
HumanCompatibleAI/ppo-seals-Swimmer-v1 | 2023-09-27T07:01:55.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | 0 | 41 | 2023-09-26T14:44:14 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 131302158
num_examples: 104
download_size: 23343768
dataset_size: 131302158
---
# Dataset Card for "ppo-seals-Swimmer-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
[
-0.0355224609375,
0.00824737548828125,
0.0192718505859375,
0.0157470703125,
-0.039398193359375,
-0.0077972412109375,
0.052093505859375,
-0.0100250244140625,
0.05133056640625,
0.050994873046875,
-0.053009033203125,
-0.044097900390625,
-0.05413818359375,
-0.01... |
evanfrick/chess | 2023-10-23T05:27:33.000Z | [
"region:us"
] | evanfrick | null | null | 0 | 41 | 2023-09-29T23:19:25 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
rajendrabaskota/hc3-wiki-cleaned-text-for-domain-classification-roberta-tokenized-max-len-512 | 2023-10-06T08:47:00.000Z | [
"region:us"
] | rajendrabaskota | null | null | 0 | 41 | 2023-10-06T08:46:38 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: source
dtype: int64
- name: human/ai
dtype: int64
- name: perplexity
dtype: float64
- name: cleaned_text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 845606936
num_examples: 330345
- name: test
num_bytes: 44570090
num_examples: 17387
download_size: 499405861
dataset_size: 890177026
---
# Dataset Card for "hc3-wiki-cleaned-text-for-domain-classification-roberta-tokenized-max-len-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 760 | [
[
-0.03192138671875,
-0.033050537109375,
0.015228271484375,
0.004360198974609375,
-0.028076171875,
-0.01177978515625,
-0.0199127197265625,
-0.02447509765625,
0.0288543701171875,
0.0406494140625,
-0.042938232421875,
-0.0648193359375,
-0.04974365234375,
0.017715... |
grasool/breast-cancer-QAs-llama | 2023-10-11T16:17:53.000Z | [
"region:us"
] | grasool | null | null | 0 | 41 | 2023-10-11T14:39:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 104168
num_examples: 298
- name: test
num_bytes: 11934
num_examples: 34
download_size: 65852
dataset_size: 116102
---
# Dataset Card for "breast-cancer-QAs-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 541 | [
[
-0.01190948486328125,
-0.0115203857421875,
0.03167724609375,
0.019073486328125,
-0.0340576171875,
0.0164947509765625,
0.06317138671875,
-0.0087432861328125,
0.06951904296875,
0.043426513671875,
-0.06036376953125,
-0.0721435546875,
-0.056396484375,
-0.0021228... |
ostapeno/platy_icl5_subset1.0_maxD1000_3 | 2023-10-12T19:50:03.000Z | [
"region:us"
] | ostapeno | null | null | 0 | 41 | 2023-10-12T07:21:54 | ## model_setting_name: platy
## max_context_length: 512
## subset: 1.0
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## prompt 00 (basic prompts)
| 298 | [
[
-0.03912353515625,
-0.02783203125,
0.0303802490234375,
0.030609130859375,
-0.02777099609375,
-0.01212310791015625,
0.000995635986328125,
0.0200347900390625,
-0.0058746337890625,
0.0377197265625,
-0.07379150390625,
-0.039398193359375,
-0.0233306884765625,
0.0... |
weirdMoonFace/Dummy-TinyStories | 2023-10-13T05:32:04.000Z | [
"region:us"
] | weirdMoonFace | null | null | 0 | 41 | 2023-10-13T05:32:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13906
num_examples: 20
- name: validation
num_bytes: 6798
num_examples: 10
download_size: 21291
dataset_size: 20704
---
# Dataset Card for "Dummy-TinyStories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.041290283203125,
-0.0101165771484375,
0.0281982421875,
0.0084686279296875,
-0.0095672607421875,
-0.004497528076171875,
0.0103607177734375,
0.006336212158203125,
0.06689453125,
0.01953125,
-0.06402587890625,
-0.04168701171875,
-0.01555633544921875,
-0.0037... |
zhangshuoming/c_x86_exebench_json_cleaned | 2023-10-13T16:57:43.000Z | [
"region:us"
] | zhangshuoming | null | null | 0 | 41 | 2023-10-13T16:37:01 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 749238025.3045925
num_examples: 701744
download_size: 209658460
dataset_size: 749238025.3045925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c_x86_exebench_json_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.04449462890625,
-0.0231170654296875,
0.00759124755859375,
-0.0008764266967773438,
-0.02593994140625,
0.0190887451171875,
-0.002777099609375,
-0.0216064453125,
0.058380126953125,
0.052978515625,
-0.048583984375,
-0.061065673828125,
-0.0278167724609375,
-0.... |
ppxscal/embeddings-network | 2023-10-18T20:03:21.000Z | [
"region:us"
] | ppxscal | null | null | 0 | 41 | 2023-10-18T20:00:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: authors
dtype: string
- name: year
dtype: int64
- name: venue
dtype: string
- name: index
dtype: int64
- name: abstract
dtype: string
- name: embedding
dtype: string
- name: references
dtype: string
splits:
- name: train
num_bytes: 6369485357
num_examples: 281080
download_size: 4310698624
dataset_size: 6369485357
---
# Dataset Card for "embeddings-network"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 707 | [
[
-0.0462646484375,
-0.0236968994140625,
0.00457000732421875,
0.0196990966796875,
-0.00975799560546875,
0.0005254745483398438,
0.021820068359375,
0.003368377685546875,
0.08203125,
0.0333251953125,
-0.04669189453125,
-0.058807373046875,
-0.0509033203125,
-0.017... |
grasool/data-to16Hz | 2023-10-18T22:02:44.000Z | [
"region:us"
] | grasool | null | null | 0 | 41 | 2023-10-18T22:00:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 10844658264
num_examples: 11291
- name: test
num_bytes: 2710421968
num_examples: 2822
download_size: 1783591438
dataset_size: 13555080232
---
# Dataset Card for "data-to16Hz"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 620 | [
[
-0.062744140625,
-0.02178955078125,
0.01491546630859375,
0.01352691650390625,
-0.02777099609375,
-0.0021820068359375,
0.004749298095703125,
-0.03070068359375,
0.05316162109375,
0.02252197265625,
-0.071044921875,
-0.054351806640625,
-0.0305633544921875,
-0.02... |
MU-NLPC/Calc-asdiv_a | 2023-10-30T15:56:07.000Z | [
"arxiv:2305.15017",
"region:us"
] | MU-NLPC | null | null | 0 | 41 | 2023-10-20T18:34:13 | ---
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: result_unit
dtype: string
- name: grade
dtype: int64
- name: source_question
dtype: string
splits:
- name: test
num_bytes: 415636
num_examples: 1218
download_size: 152949
dataset_size: 415636
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: result_unit
dtype: string
- name: grade
dtype: int64
- name: source_question
dtype: string
splits:
- name: test
num_bytes: 415664
num_examples: 1218
download_size: 152949
dataset_size: 415664
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-asdiv_a
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from the arithmetic subset of ASDiv ([original repo](https://github.com/chaochun/nlu-asdiv-dataset)).
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
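As a sketch of how a `chain` string in this format can be pulled apart (the card suggests BeautifulSoup; the stdlib sketch below uses `re` instead, and the chain string itself is a made-up example, not taken from the dataset):

```python
import re

# Hypothetical chain in the tag format described above.
chain = (
    '<gadget id="calculator">7 * 3</gadget>'
    '<output>21</output>'
    'Final answer: <result>21</result>'
)

# Expressions sent to the external tool (the sympy-based calculator).
gadget_calls = re.findall(r"<gadget[^>]*>(.*?)</gadget>", chain)
# Outputs returned by the tool.
tool_outputs = re.findall(r"<output>(.*?)</output>", chain)
# The final answer to the problem.
result = re.search(r"<result>(.*?)</result>", chain).group(1)

print(gadget_calls, tool_outputs, result)  # ['7 * 3'] ['21'] 21
```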
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
The dataset does not contain data splits. We consider the whole dataset as a testing benchmark.
## Attributes:
- **id**: id of the example
- **question** problem description in English
- **chain**: series of simple operations (derived from **expression**) that lead to the solution
- **result**: the solution for x as a number or fraction (string)
- **result_float**: same as **result** but converted to a float
- **result_unit**: the units of the result
- **grade**: an estimate of the school grade in which the problem would be practiced
- **source_question**: the source from which the example originates
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original ASDiv dataset and repo**](https://github.com/chaochun/nlu-asdiv-dataset)
- [**original ASDiv paper**](https://aclanthology.org/2020.acl-main.92)
## Licence
CC BY-NC 4.0, consistent with the original source dataset linked above.
## Cite
If you use this dataset in research, please cite the original [ASDiv paper](https://aclanthology.org/2020.acl-main.92), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
| 4,617 | [
[
-0.0252532958984375,
-0.040374755859375,
0.007732391357421875,
0.003753662109375,
0.00421905517578125,
0.0018472671508789062,
-0.011383056640625,
-0.01535797119140625,
0.0222625732421875,
0.028900146484375,
-0.050994873046875,
-0.038726806640625,
-0.033996582031... |
gokul00060/armchat1 | 2023-10-28T09:29:18.000Z | [
"license:mit",
"region:us"
] | gokul00060 | null | null | 1 | 41 | 2023-10-28T08:02:33 | ---
license: mit
---
## This dataset covers only the following objects (ID, name, color)

1. ball, yellow
2. battery, silver
3. wood, wood
4. bowl, white | 151 | [
[
0.0010690689086914062,
-0.0021724700927734375,
0.015838623046875,
0.019012451171875,
-0.025604248046875,
0.0084228515625,
0.038330078125,
0.007602691650390625,
0.03424072265625,
0.02716064453125,
-0.055267333984375,
-0.0307769775390625,
-0.0231781005859375,
... |
JosueElias/pipeline_dataset2 | 2023-10-29T21:23:27.000Z | [
"region:us"
] | JosueElias | null | null | 0 | 41 | 2023-10-29T20:59:49 | ---
dataset_info:
features:
- name: title
dtype: string
- name: section
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1522896529
num_examples: 2101279
download_size: 850821844
dataset_size: 1522896529
---
# Dataset Card for "pipeline_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 440 | [
[
-0.0208892822265625,
-0.005977630615234375,
0.008514404296875,
0.01311492919921875,
-0.0217742919921875,
0.00909423828125,
0.037109375,
-0.00820159912109375,
0.051971435546875,
0.03851318359375,
-0.060272216796875,
-0.039581298828125,
-0.058013916015625,
-0.... |
Iftoo95/Arabic_Sentiment_and_Topics | 2021-11-20T14:50:45.000Z | [
"region:us"
] | Iftoo95 | null | null | 0 | 40 | 2022-03-02T23:29:22 | Arabic Twitter-based multi-label dataset that contains two classes:
1. Sentiment class: classifies tweets as Positive, Negative, and Neutral
2. Topic class: classifies tweets as Politics, Business, and Health | 212 | [
[
-0.04498291015625,
-0.037445068359375,
-0.0068511962890625,
0.0299072265625,
-0.00872039794921875,
0.052642822265625,
0.0038013458251953125,
-0.016143798828125,
0.037933349609375,
0.024017333984375,
-0.030548095703125,
-0.07781982421875,
-0.06591796875,
-0.0... |
Niciu/test-cre-dataset-issues | 2022-03-01T14:06:43.000Z | [
"region:us"
] | Niciu | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
abdusah/masc | 2022-07-01T15:28:48.000Z | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ar",
"license:cc-by-nc-4.0",
"region:us"
] | abdusah | null | null | 0 | 40 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
license:
- cc-by-nc-4.0
multilinguality: []
paperswithcode_id: []
pretty_name: 'MASC'
size_categories:
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for MASC: MASSIVE ARABIC SPEECH CORPUS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Repository:**
- **Paper:** https://dx.doi.org/10.21227/e1qb-jv46
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels. MASC is a multi-regional, multi-genre, and multi-dialect dataset intended to advance the research and development of Arabic speech technology, with special emphasis on Arabic speech recognition.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Multi-dialect Arabic
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
#### masc_dev
- speech
- sampling_rate
- target_text (label)
### Data Splits
#### masc_dev
- train: 100
- test: 40
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
Note: this is a small development set for testing.
### Dataset Curators
[More Information Needed]
### Licensing Information
CC 4.0
### Citation Information
[More Information Needed]
### Contributions
Mohammad Al-Fetyani, Muhammad Al-Barham, Gheith Abandah, Adham Alsharkawi, Maha Dawas, August 18, 2021, "MASC: Massive Arabic Speech Corpus", IEEE Dataport, doi: https://dx.doi.org/10.21227/e1qb-jv46.
| 3,357 | [
[
-0.05303955078125,
-0.039886474609375,
-0.009674072265625,
0.007610321044921875,
-0.0110626220703125,
0.015655517578125,
-0.0201263427734375,
-0.0156402587890625,
0.0325927734375,
0.0268096923828125,
-0.04449462890625,
-0.0760498046875,
-0.058807373046875,
0... |
csarron/image-captions | 2021-11-29T04:31:34.000Z | [
"region:us"
] | csarron | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
edge2992/rri_short | 2021-12-10T16:01:26.000Z | [
"region:us"
] | edge2992 | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
florentgbelidji/test-3 | 2022-02-23T15:05:28.000Z | [
"region:us"
] | florentgbelidji | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
florentgbelidji/test-dataset | 2022-02-23T14:52:03.000Z | [
"region:us"
] | florentgbelidji | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
florianbussmann/train_tickets-yu2020pick | 2022-01-19T14:18:09.000Z | [
"region:us"
] | florianbussmann | \
The train ticket is a fixed-layout dataset; however, it contains background noise and imaging distortions.
It contains 1,530 synthetic images and 320 real images for training, and 80 real images for testing.
Every train ticket has eight key text fields including ticket number, starting station, train number, destination station, date, ticket rates, seat category, and name.
This dataset mainly consists of digits, English characters, and Chinese characters. | \
@inproceedings{yu2021pick,
title={PICK: Processing key information extraction from documents using improved graph learning-convolutional networks},
author={Yu, Wenwen and Lu, Ning and Qi, Xianbiao and Gong, Ping and Xiao, Rong},
booktitle={2020 25th International Conference on Pattern Recognition (ICPR)},
pages={4363--4370},
year={2021},
organization={IEEE}
} | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
formermagic/github_python_1m | 2022-10-21T16:45:17.000Z | [
"task_ids:language-modeling",
"task_ids:slot-filling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:py",
"license:mit",
"region:us"
] | formermagic | null | null | 1 | 40 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- py
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- sequence-modeling
- conditional-text-generation
task_ids:
- language-modeling
- slot-filling
- code-generation
---
# Dataset Card for Github Python 1M | 349 | [
[
-0.0247039794921875,
-0.010833740234375,
-0.0286102294921875,
0.0034389495849609375,
-0.05267333984375,
-0.00251007080078125,
0.00623321533203125,
0.0203094482421875,
0.050079345703125,
0.03875732421875,
-0.047943115234375,
-0.0574951171875,
-0.0211334228515625,... |
formu/CVT | 2021-03-26T15:40:33.000Z | [
"region:us"
] | formu | null | null | 0 | 40 | 2022-03-02T23:29:22 | https://www.geogebra.org/m/w8uzjttg
https://www.geogebra.org/m/gvn7m78g
https://www.geogebra.org/m/arxecanq
https://www.geogebra.org/m/xb69bvww
https://www.geogebra.org/m/apvepfnd
https://www.geogebra.org/m/evmj8ckk
https://www.geogebra.org/m/qxcxwmhp
https://www.geogebra.org/m/p3cxqh6c
https://www.geogebra.org/m/ggrahbgd
https://www.geogebra.org/m/pnhymrbc
https://www.geogebra.org/m/zjukbtk9
https://www.geogebra.org/m/bbezun8r
https://www.geogebra.org/m/sgwamtru
https://www.geogebra.org/m/fpunkxxp
https://www.geogebra.org/m/acxebrr7 | 539 | [
[
-0.05120849609375,
-0.020904541015625,
0.0426025390625,
0.017547607421875,
-0.0309600830078125,
-0.00917816162109375,
-0.0019121170043945312,
-0.0093536376953125,
0.017364501953125,
0.01904296875,
-0.05682373046875,
-0.06829833984375,
-0.0455322265625,
-0.00... |
frtna/test2 | 2022-01-04T05:23:40.000Z | [
"region:us"
] | frtna | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fulai/DuReader | 2021-04-12T12:07:18.000Z | [
"region:us"
] | fulai | null | null | 0 | 40 | 2022-03-02T23:29:22 | 百度lic2020语言与智能信息竞赛数据集。 | 22 | [
[
-0.032196044921875,
-0.033935546875,
0.0027599334716796875,
0.041412353515625,
-0.033172607421875,
0.00728607177734375,
0.037384033203125,
-0.038482666015625,
0.02618408203125,
0.0618896484375,
-0.0408935546875,
-0.0164031982421875,
-0.020263671875,
-0.01969... |
gagan3012/fake-news | 2021-10-27T23:14:42.000Z | [
"region:us"
] | gagan3012 | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
geekydevu/mlquestions | 2021-11-11T08:11:10.000Z | [
"region:us"
] | geekydevu | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
guoqiang/cuge | 2022-01-25T05:30:29.000Z | [
"region:us"
] | guoqiang | null | null | 0 | 40 | 2022-03-02T23:29:22 | Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.
The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.
Supported Tasks and Leaderboards
[Needs More Information]
Languages
English
Dataset Structure
Data Instances
A typical data point comprises the path to the audio file, called path and its sentence. Additional fields include accent, age, client_id, up_votes down_votes, gender, locale and segment.
{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': "''", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}}
Data Fields
client_id: An id for which client (voice) made the recording
path: The path to the audio file
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: dataset[0]["audio"] the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. dataset[0]["audio"] should always be preferred over dataset["audio"][0].
sentence: The sentence the user was prompted to speak
up_votes: How many upvotes the audio file has received from reviewers
down_votes: How many downvotes the audio file has received from reviewers
age: The age of the speaker.
gender: The gender of the speaker
accent: Accent of the speaker
locale: The locale of the speaker
segment: Usually empty field
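The row-before-column advice above can be illustrated with a toy stand-in for a lazily decoded audio column (this is not the real `datasets` implementation, just a sketch of why access order matters):

```python
decoded = []

def decode(path):
    # Stands in for expensive MP3 decoding and resampling.
    decoded.append(path)
    return {"path": path, "array": [], "sampling_rate": 48000}

class ToyDataset:
    """Decodes audio lazily, mimicking an on-access audio column."""
    def __init__(self, paths):
        self.paths = paths
    def __getitem__(self, key):
        if isinstance(key, int):                   # row access: one decode
            return {"audio": decode(self.paths[key])}
        return [decode(p) for p in self.paths]     # column access: decode all

ds = ToyDataset(["a.mp3", "b.mp3", "c.mp3"])

ds[0]["audio"]          # decodes only the first file
n_row = len(decoded)

decoded.clear()
ds["audio"][0]          # decodes every file before indexing
n_col = len(decoded)

print(n_row, n_col)  # 1 3
```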
Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
Dataset Creation
Curation Rationale
[Needs More Information]
Source Data
Initial Data Collection and Normalization
[Needs More Information]
Who are the source language producers?
[Needs More Information]
Annotations
Annotation process
[Needs More Information]
Who are the annotators?
[Needs More Information]
Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
Considerations for Using the Data
Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
Public Domain, CC-0
Citation Information
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
| 4,335 | [
[
-0.038238525390625,
-0.042572021484375,
0.0094146728515625,
0.026458740234375,
-0.0118865966796875,
-0.00408172607421875,
-0.039581298828125,
-0.03265380859375,
0.027618408203125,
0.059783935546875,
-0.050445556640625,
-0.0640869140625,
-0.03662109375,
0.020... |
gusu/mymodel1 | 2021-11-02T03:41:43.000Z | [
"region:us"
] | gusu | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
habu24/fdz | 2021-09-10T14:47:37.000Z | [
"region:us"
] | habu24 | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
henrychess/gutenberg-fulltext-dirty-locc | 2022-01-03T05:53:21.000Z | [
"region:us"
] | henrychess | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
herbievore/test | 2021-11-21T14:50:05.000Z | [
"region:us"
] | herbievore | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hf-internal-testing/test-dataset | 2022-09-05T16:10:12.000Z | [
"region:us"
] | hf-internal-testing | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
honghungle/dataset | 2021-11-23T08:13:10.000Z | [
"region:us"
] | honghungle | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggingartists/hillsong-worship | 2021-08-30T18:36:51.000Z | [
"region:us"
] | huggingartists | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hyeonduck/your_dataset_name | 2021-12-16T08:19:27.000Z | [
"region:us"
] | hyeonduck | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
public-data/sample-images-TADNE | 2022-01-23T23:03:47.000Z | [
"region:us"
] | public-data | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
image-search-2/unsplash_lite_image_dataset | 2021-11-19T12:44:46.000Z | [
"region:us"
] | image-search-2 | null | null | 1 | 40 | 2022-03-02T23:29:22 | # The Unsplash Dataset

The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.
The Unsplash Dataset is offered in two datasets:
- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches
- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches
As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/).
We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets.
For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data).
## Download
### Lite Dataset
The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
[⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw]
### Full Dataset
The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20GB compressed (~43GB raw).
## Documentation
See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md).
## Usage
You can follow these examples to load the dataset in these common formats:
- [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql)
- [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python)
- [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example)
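A minimal plain-Python sketch of reading one of the dataset's tab-separated files (the file name and column names here are illustrative assumptions; see the documentation link above for the real schema):

```python
import csv
import io

# Stand-in for a few lines of a hypothetical photos.tsv000;
# a real file would be read the same way.
tsv = (
    "photo_id\tphoto_url\tphotographer_username\n"
    "abc123\thttps://unsplash.com/photos/abc123\tjdoe\n"
    "def456\thttps://unsplash.com/photos/def456\tasmith\n"
)

with io.StringIO(tsv) as f:  # swap in open("photos.tsv000") for a real file
    photos = list(csv.DictReader(f, delimiter="\t"))

print(len(photos), photos[0]["photographer_username"])  # 2 jdoe
```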
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [data@unsplash.com](mailto:data@unsplash.com).
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data).
----
The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).

| 3,725 | [
[
-0.00734710693359375,
-0.0137176513671875,
0.010345458984375,
0.007259368896484375,
-0.041748046875,
0.01446533203125,
-0.022735595703125,
-0.0277099609375,
0.0291290283203125,
0.0430908203125,
-0.040679931640625,
-0.055023193359375,
-0.00980377197265625,
0.... |
imflash217/github-issues | 2022-02-28T23:47:32.000Z | [
"region:us"
] | imflash217 | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ixxi/my_v1 | 2022-02-07T15:39:44.000Z | [
"region:us"
] | ixxi | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jacobbieker/open-crab-sample | 2022-02-11T11:56:00.000Z | [
"region:us"
] | jacobbieker | null | null | 0 | 40 | 2022-03-02T23:29:22 | astrophysics
astroparticle
simulation
timeseries
point-cloud
# Dataset Card for FACT Open Crab Sample
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://factdata.app.tu-dortmund.de/
- **Repository:** [Needs More Information]
- **Paper:** https://iopscience.iop.org/article/10.1088/1748-0221/8/06/P06008/pdf, https://iopscience.iop.org/article/10.1088/1748-0221/9/10/P10012/pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a mirror of the Open Crab Sample released by the FACT collaboration. It contains simulations of astroparticle events as seen by the FACT telescope, generated with the CORSIKA simulation program, as well as a few nights of observations of the Crab Nebula over 2013 and 2014. The simulation data comes in two formats: the photon stream format, and a preprocessed version containing extracted features and point clouds cleaned with various levels of DBSCAN. The observations are all raw data, with no cleaning or extracted features.
### Supported Tasks and Leaderboards
- 'classification': Classification of simulated events as either hadron or gamma events.
- 'regression': Predicting the initial energy of the simulated events, or where in the night sky the original particle originated
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
The goal of the Open Crab Sample is to open up astroparticle data for exploring different ways of doing analysis.
### Source Data
#### Initial Data Collection and Normalization
The initial simulated data was generated by the CORSIKA simulation program. The observations were taken by the FACT telescope on La Palma between 2013 and 2014. The data is not normalized.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The simulations were annotated from the ground truth in the simulation, while the observations have no ground truths.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 3,669 | [
[
-0.040374755859375,
-0.03973388671875,
0.0562744140625,
-0.00128173828125,
-0.040130615234375,
-0.01377105712890625,
-0.01085662841796875,
-0.0197906494140625,
0.07379150390625,
0.042724609375,
-0.058502197265625,
-0.03955078125,
-0.03009033203125,
-0.002731... |
jaimin/wav2vec2-large-xlsr-gujarati-demo | 2021-03-24T03:41:24.000Z | [
"region:us"
] | jaimin | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jakemarcus/MATH | 2021-09-22T16:00:35.000Z | [
"region:us"
] | jakemarcus | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jel/covid | 2022-02-15T01:34:31.000Z | [
"region:us"
] | jel | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jfarray/TFM | 2022-02-15T06:27:36.000Z | [
"region:us"
] | jfarray | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jhqwqq/2 | 2021-09-29T06:58:22.000Z | [
"region:us"
] | jhqwqq | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jianhong/dateset1 | 2022-01-18T11:35:45.000Z | [
"region:us"
] | jianhong | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jianhong/dateset2 | 2022-01-18T11:37:44.000Z | [
"region:us"
] | jianhong | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jiminsun/atc0_demo | 2022-02-24T01:39:29.000Z | [
"region:us"
] | jiminsun | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jimregan/lasid | 2021-10-06T23:31:28.000Z | [
"region:us"
] | jimregan | Linguistic Atlas and Survey of Irish Dialects, volume 1 | @book{wagner1958linguistic,
title={Linguistic Atlas and Survey of Irish Dialects: Introduction, 300 maps.},
author={Wagner, H.},
number={v. 1},
year={1958},
publisher={Dublin Institute for Advanced Studies}
}
@phdthesis{mckendry1982computer,
title={Computer-aided contributions to the study of Irish dialects},
author={McKendry, Eugene},
year={1982},
school={Queen's University Belfast}
}
@article{mckendry1998linguistic,
title={The Linguistic Atlas and Survey of Irish Dialects (LASID) and the Computer},
author={McKendry, Eugene},
journal={Studia Celtica Upsaliensia},
volume={2},
pages={345--354},
year={1998}
} | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ngdiana/uaspeech_severity_high | 2022-02-03T22:59:37.000Z | [
"region:us"
] | ngdiana | null | null | 0 | 40 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggan/anime-faces | 2022-03-22T10:01:22.000Z | [
"license:cc0-1.0",
"region:us"
] | huggan | null | null | 6 | 40 | 2022-03-03T13:15:34 | ---
license: cc0-1.0
---
# Dataset Card for anime-faces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Repository:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://github.com/Mckinsey666
### Dataset Summary
This is a dataset consisting of 21551 anime faces scraped from www.getchu.com, which are then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset.
Some outliers are still present in the dataset:
- Bad cropping results
- Some non-human faces
Feel free to contribute to this dataset by adding images of similar quality or adding image labels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Has a data folder with png files inside.
### Data Splits
Only training set
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
languages:
- unknown
licenses:
- unknown
multilinguality:
- unknown
pretty_name: anime-faces
size_categories:
- unknown
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
--- | 3,373 | [
[
-0.03955078125,
-0.04571533203125,
0.00980377197265625,
0.02117919921875,
-0.01140594482421875,
0.0039825439453125,
-0.00347900390625,
-0.03558349609375,
0.047210693359375,
0.051727294921875,
-0.07958984375,
-0.059326171875,
-0.047454833984375,
0.00960540771... |
fmplaza/EmoEvent | 2023-03-27T08:19:58.000Z | [
"language:en",
"language:es",
"license:apache-2.0",
"region:us"
] | fmplaza | EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following Ekman's six basic emotions (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. | @inproceedings{plaza-del-arco-etal-2020-emoevent,
title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}",
author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\'o}pez}, L. Alfonso and {Mart{\'i}n-Valdivia}, M. Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.186",
pages = "1492--1498",
language = "English",
ISBN = "979-10-95546-34-4" } | 6 | 40 | 2022-03-09T10:17:46 | ---
license: apache-2.0
language:
- en
- es
---
# Dataset Card for Emoevent
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [EmoEvent dataset repository](https://github.com/fmplaza/EmoEvent)
- **Paper: EmoEvent:** [A Multilingual Emotion Corpus based on different Events](https://aclanthology.org/2020.lrec-1.186.pdf)
- **Leaderboard:** [Leaderboard for EmoEvent / Spanish version](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
- **Point of Contact: fmplaza@ujaen.es**
### Dataset Summary
EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following Ekman's six basic emotions (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Moreover, the tweets are annotated as offensive (OFF) or non-offensive (NO).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class emotion classification and binary offensive classification.
Competition [EmoEvalEs task on emotion detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
### Languages
- Spanish
- English
## Dataset Structure
### Data Instances
For each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event. See the examples below.
```
{'id': 'a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'ARYA DE MI VIDA. ERES MAS ÉPICA QUE EL GOL DE INIESTA JODER #JuegodeTronos #VivePoniente',
'offensive': 'NO',
'emotion': 'joy',
}
```
```
{'id': '3YCT0L9OMMFP7KWKQSTJRJO0YHUSN2a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'The #NotreDameCathedralFire is indeed sad and people call all offered donations humane acts, but please if you have money to donate, donate to humans and help bring food to their tables and affordable education first. What more humane than that? #HumanityFirst',
'offensive': 'NO',
'emotion': 'sadness',
}
```
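For multi-class training, the gold emotion strings need to be mapped to integer ids. A minimal sketch — the class inventory below is inferred from the summary (six Ekman emotions plus an "others" category), so the exact label strings in the released files may differ:

```python
# Hypothetical label inventory: six Ekman emotions plus "others".
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "others"]
label2id = {name: i for i, name in enumerate(EMOTIONS)}

# The two instances shown above, abbreviated to the fields we need.
examples = [
    {"emotion": "joy", "offensive": "NO"},
    {"emotion": "sadness", "offensive": "NO"},
]
ids = [label2id[ex["emotion"]] for ex in examples]
print(ids)  # [3, 4]
```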
### Data Fields
- `id`: a string to identify the tweet
- `event`: a string containing the event associated with the tweet
- `tweet`: a string containing the text of the tweet
- `offensive`: a string containing the offensive gold label
- `emotion`: a string containing the emotion gold label
### Data Splits
The EmoEvent dataset has 2 subsets: EmoEvent_es (Spanish version) and EmoEvent_en (English version)
Each subset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics subsets.
| EmoEvent_es | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,723 |
| Validation | 844 |
| Test | 1,656 |
| EmoEvent_en | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,112 |
| Validation | 744 |
| Test | 1,447 |
## Dataset Creation
### Source Data
Twitter
#### Who are the annotators?
Amazon Mechanical Turkers
## Additional Information
### Licensing Information
The EmoEvent dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
@inproceedings{plaza-del-arco-etal-2020-emoevent,
title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}",
author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\'o}pez}, L. Alfonso and {Mart{\'i}n-Valdivia}, M. Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France", publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.186", pages = "1492--1498",
language = "English",
ISBN = "979-10-95546-34-4"
} | 4,822 | [
[
-0.0208282470703125,
-0.0516357421875,
0.0087432861328125,
0.0285186767578125,
-0.0210418701171875,
0.004344940185546875,
-0.02178955078125,
-0.04595947265625,
0.0537109375,
0.003299713134765625,
-0.04071044921875,
-0.0711669921875,
-0.0302734375,
0.02966308... |
SetFit/amazon_reviews_multi_ja | 2022-03-23T15:40:06.000Z | [
"region:us"
] | SetFit | null | null | 1 | 40 | 2022-03-13T02:46:28 | # Amazon Reviews Multi (Japanese)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub, restricted to the Japanese portion. It has been reduced to the three columns relevant to the SetFit task (plus a fourth, `label_text`). | 312 | [
[
-0.058685302734375,
-0.0347900390625,
0.0020999908447265625,
0.044464111328125,
-0.0265350341796875,
0.005657196044921875,
0.000025510787963867188,
-0.040557861328125,
0.050048828125,
0.07550048828125,
-0.07763671875,
-0.0287322998046875,
-0.0113067626953125,
... |
juliensimon/amazon-shoe-reviews | 2023-10-09T13:22:34.000Z | [
"language:en",
"region:us"
] | juliensimon | null | null | 0 | 40 | 2022-05-23T16:20:41 | ---
language: en
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16847665.2
num_examples: 90000
- name: test
num_bytes: 1871962.8
num_examples: 10000
download_size: 0
dataset_size: 18719628.0
---
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 473 | [
[
-0.042449951171875,
-0.00862884521484375,
0.0130615234375,
0.029693603515625,
-0.034149169921875,
0.004611968994140625,
0.0206146240234375,
-0.022918701171875,
0.05145263671875,
0.02484130859375,
-0.061431884765625,
-0.058502197265625,
-0.0189666748046875,
-... |
tner/tweebank_ner | 2022-11-27T20:59:13.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2201.07281",
"region:us"
] | tner | [Tweebank NER](https://arxiv.org/abs/2201.07281) | @article{DBLP:journals/corr/abs-2201-07281,
author = {Hang Jiang and
Yining Hua and
Doug Beeferman and
Deb Roy},
title = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
{NLP} Models for Social Media Analysis},
journal = {CoRR},
volume = {abs/2201.07281},
year = {2022},
url = {https://arxiv.org/abs/2201.07281},
eprinttype = {arXiv},
eprint = {2201.07281},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 3 | 40 | 2022-07-18T10:39:20 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1k<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TweeBank NER
---
# Dataset Card for "tner/tweebank_ner"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://arxiv.org/abs/2201.07281](https://arxiv.org/abs/2201.07281)
- **Dataset:** TweeBank NER
- **Domain:** Twitter
- **Number of Entity:** 4
### Dataset Summary
The TweeBank NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `LOC`, `MISC`, `PER`, `ORG`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays', 'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion', 'URL1087', '#Holiday', '#Gifts'],
'tags': [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweebank_ner/raw/main/dataset/label.json).
```python
{
"B-LOC": 0,
"B-MISC": 1,
"B-ORG": 2,
"B-PER": 3,
"I-LOC": 4,
"I-MISC": 5,
"I-ORG": 6,
"I-PER": 7,
"O": 8
}
```
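To recover readable labels, the mapping can be inverted and applied to the `train` example above — a quick sketch:

```python
# Invert the label2id mapping and decode the example's integer tags.
label2id = {
    "B-LOC": 0, "B-MISC": 1, "B-ORG": 2, "B-PER": 3,
    "I-LOC": 4, "I-MISC": 5, "I-ORG": 6, "I-PER": 7, "O": 8,
}
id2label = {v: k for k, v in label2id.items()}

tokens = ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays',
          'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion',
          'URL1087', '#Holiday', '#Gifts']
tags = [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]

labels = [id2label[t] for t in tags]
entities = [(tok, lab) for tok, lab in zip(tokens, labels) if lab != "O"]
print(entities)  # [('Farmall', 'B-ORG')]
```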
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|tweebank_ner | 1639| 710 |1201|
### Citation Information
```
@article{DBLP:journals/corr/abs-2201-07281,
author = {Hang Jiang and
Yining Hua and
Doug Beeferman and
Deb Roy},
title = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
{NLP} Models for Social Media Analysis},
journal = {CoRR},
volume = {abs/2201.07281},
year = {2022},
url = {https://arxiv.org/abs/2201.07281},
eprinttype = {arXiv},
eprint = {2201.07281},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 2,124 | [
[
-0.0290069580078125,
-0.031707763671875,
0.003063201904296875,
0.0203704833984375,
-0.0252532958984375,
0.01465606689453125,
-0.0192718505859375,
-0.0220794677734375,
0.04779052734375,
0.016937255859375,
-0.0266265869140625,
-0.0634765625,
-0.0533447265625,
... |
and111/bert_pretrain_phase1 | 2022-08-23T17:14:31.000Z | [
"region:us"
] | and111 | null | null | 2 | 40 | 2022-08-23T13:51:03 | ### Dataset Summary
Input data for the **first** phase of BERT pretraining (sequence length 128). All text is tokenized with [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
Data is obtained by concatenating and shuffling [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking and input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
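As a rough illustration of the packing step, here is a toy sketch that greedily packs tokenized sentences into sequences of at most 128 tokens. It uses whitespace tokenization in place of the WordPiece tokenizer and ignores NSP segment pairing and special tokens, so it is only a simplification of the reference preprocessor:

```python
def pack_sentences(sentences, max_len=128):
    """Greedily pack tokenized sentences into sequences of at most
    max_len tokens, starting a new sequence whenever the next
    sentence would overflow."""
    sequences, current = [], []
    for sent in sentences:
        tokens = sent.split()  # stand-in for the WordPiece tokenizer
        if current and len(current) + len(tokens) > max_len:
            sequences.append(current)
            current = []
        current.extend(tokens[:max_len])  # truncate over-long sentences
    if current:
        sequences.append(current)
    return sequences

# Tiny demonstration with max_len=4 instead of 128.
packed = pack_sentences(["a b c", "d e", "f g h i"], max_len=4)
print([len(s) for s in packed])  # [3, 2, 4]
```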
See the dataset for the **second** phase of pretraining: [bert_pretrain_phase2](https://huggingface.co/datasets/and111/bert_pretrain_phase2). | 858 | [
[
-0.0218963623046875,
-0.04833984375,
0.017669677734375,
0.0303802490234375,
-0.04498291015625,
-0.01502227783203125,
-0.01275634765625,
-0.0203399658203125,
0.0179443359375,
0.031402587890625,
-0.07318115234375,
-0.03350830078125,
-0.046722412109375,
0.00432... |
allenai/multinews_sparse_mean | 2022-11-24T21:37:31.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 2 | 40 | 2022-08-26T21:42:59 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==3`
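PyTerrier's BM25 implementation is considerably more involved, but the scoring idea behind the pipeline above can be sketched in a few lines of pure Python. Everything below (parameters `k1=1.2`, `b=0.75`, whitespace tokenization, the toy corpus) is an assumption for illustration, not the settings used to build this dataset:

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k1=1.2, b=0.75, top_k=3):
    """Rank corpus documents against the query with BM25 and return
    the indices of the top_k best matches, best first."""
    docs = [doc.lower().split() for doc in corpus]
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    df = Counter()  # document frequency of each term
    for d in docs:
        df.update(set(d))

    def score(q_terms, d):
        tf = Counter(d)
        total = 0.0
        for t in q_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            total += idf * tf[t] * (k1 + 1) / norm
        return total

    q_terms = query.lower().split()
    ranked = sorted(range(n_docs), key=lambda i: score(q_terms, docs[i]),
                    reverse=True)
    return ranked[:top_k]

corpus = [
    "the cat sat on the mat",
    "dogs chase cats in the park",
    "stock markets fell sharply today",
    "markets rallied after the news",
]
print(bm25_rank("stock markets today", corpus))  # document 2 ranks first
```

In the actual pipeline the query is the `summary` field and the corpus is the union of all documents across splits, with `top_k` fixed to the dataset-wide mean of 3.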
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8793 | 0.7460 | 0.6403 | 0.7417 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8748 | 0.7453 | 0.6361 | 0.7442 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8775 | 0.7480 | 0.6370 | 0.7443 | | 1,763 | [
[
-0.031402587890625,
-0.0193634033203125,
0.01561737060546875,
0.0126800537109375,
-0.028045654296875,
-0.004520416259765625,
-0.013763427734375,
0.00833892822265625,
0.039764404296875,
0.0241241455078125,
-0.047637939453125,
-0.043212890625,
-0.056671142578125,
... |
hossein20s/enrun-emails-text-classification | 2022-09-27T22:33:36.000Z | [
"region:us"
] | hossein20s | null | null | 0 | 40 | 2022-09-27T22:33:26 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
priyank-m/text_recognition_en_zh_clean | 2022-12-16T18:05:44.000Z | [
"region:us"
] | priyank-m | null | null | 2 | 40 | 2022-12-15T12:22:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: val
num_bytes: 53886975.51
num_examples: 2910
- name: test
num_bytes: 55192498.476
num_examples: 2894
- name: train
num_bytes: 26744379885.02228
num_examples: 1396731
download_size: 26975033720
dataset_size: 26853459359.00828
---
# Dataset Card for "text_recognition_en_zh_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 560 | [
[
-0.024688720703125,
-0.01439666748046875,
0.021209716796875,
-0.00524139404296875,
-0.0208892822265625,
-0.0116729736328125,
-0.004638671875,
-0.0281219482421875,
0.05084228515625,
0.03802490234375,
-0.046600341796875,
-0.06402587890625,
-0.038330078125,
0.0... |
sedthh/gutenberg_multilang | 2023-03-16T14:22:26.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:es",
"language:de",
"language:fr",
"language:nl",
"language:it",
"language:pt",
"language:hu",
"license:mit",
"project gutenberg",
"e-book",
"gutenberg.org",
"region:us"
] | sedthh | null | null | 1 | 40 | 2023-02-28T13:25:31 | ---
dataset_info:
features:
- name: TEXT
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 3127780102
num_examples: 7907
download_size: 1911528348
dataset_size: 3127780102
license: mit
task_categories:
- text-generation
language:
- es
- de
- fr
- nl
- it
- pt
- hu
tags:
- project gutenberg
- e-book
- gutenberg.org
pretty_name: Project Gutenberg eBooks in different languages
size_categories:
- 1K<n<10K
---
# Dataset Card for Project Gutenberg - Multilanguage eBooks
A collection of non-English eBooks (7,907; about 75-80% of all the ES, DE, FR, NL, IT, PT, and HU books available on the site) from the Project Gutenberg site, with metadata removed.
Originally collected for https://github.com/LAION-AI/Open-Assistant
| LANG | EBOOKS |
|----|----|
| ES | 717 |
| DE | 1735 |
| FR | 2863 |
| NL | 904 |
| IT | 692 |
| PT | 501 |
| HU | 495 |
The METADATA column contains catalogue meta information on each book as a serialized JSON:
| key | original column |
|----|----|
| language | - |
| text_id | Text# unique book identifier on Project Gutenberg as *int* |
| title | Title of the book as *string* |
| issued | Issued date as *string* |
| authors | Authors as *string*, comma separated sometimes with dates |
| subjects | Subjects as *string*, various formats |
| locc | LoCC code as *string* |
| bookshelves | Bookshelves as *string*, optional |
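Since METADATA is stored as a serialized JSON string, it has to be decoded before use. A minimal sketch — the record below is entirely hypothetical, made up to match the field shapes in the table:

```python
import json

# A hypothetical row in the shape described above; all field values
# are invented for illustration.
row = {
    "TEXT": "Erstes Kapitel. Es war einmal ...",
    "SOURCE": "gutenberg.org",
    "METADATA": json.dumps({
        "language": "de",
        "text_id": 22367,
        "title": "Beispielbuch",
        "issued": "2007-08-20",
        "authors": "Mustermann, Max, 1850-1920",
        "subjects": "Fiction",
        "locc": "PT",
        "bookshelves": "",
    }),
}

meta = json.loads(row["METADATA"])
print(meta["language"], meta["title"])  # de Beispielbuch
```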
## Source data
**How was the data generated?**
- A crawler (see Open-Assistant repository) downloaded the raw HTML code for
each eBook based on **Text#** id in the Gutenberg catalogue (if available)
- The metadata and the body of text are not clearly separated so an additional
parser attempts to split them, then remove transcriber's notes and e-book
related information from the body of text (text clearly marked as copyrighted or
malformed was skipped and not collected)
- The body of cleaned TEXT as well as the catalogue METADATA is then saved as
a parquet file, with all columns being strings
**Copyright notice:**
- Some of the books are copyrighted! The crawler ignored all books
with an English copyright header by using a regular expression, but make
sure to check out the metadata for each book manually to ensure they are okay
to use in your country! More information on copyright:
https://www.gutenberg.org/help/copyright.html and
https://www.gutenberg.org/policy/permission.html
- Project Gutenberg has the following requests when using books without
metadata: _Books obtained from the Project Gutenberg site should have the
following legal note next to them: "This eBook is for the use of anyone
anywhere in the United States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it, give it away or
re-use it under the terms of the Project Gutenberg License included with this
eBook or online at www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country where you are located
before using this eBook."_
[
-0.0264739990234375,
-0.0017499923706054688,
0.0028476715087890625,
-0.0018558502197265625,
-0.028350830078125,
-0.00970458984375,
0.001476287841796875,
-0.0298614501953125,
0.006076812744140625,
0.0731201171875,
-0.0330810546875,
-0.07940673828125,
-0.035095214... |
cartesinus/iva_mt_wslot | 2023-07-21T15:40:44.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:de",
"language:es",
"language:sv",
"language:fr",
"language:pt",
"license:cc-by-4.0",
"machine translation",
"nlu",
"natural-language-understanding",
"virtual assistant",
"region:us"
] | cartesinus | \ | null | 0 | 40 | 2023-03-09T14:02:00 | ---
dataset_info:
features:
- name: id
dtype: string
- name: locale
dtype: string
- name: origin
dtype: string
- name: partition
dtype: string
- name: translation_utt
dtype:
translation:
languages:
- en
- pl
- name: translation_xml
dtype:
translation:
languages:
- en
- pl
- name: src_bio
dtype: string
- name: tgt_bio
dtype: string
splits:
- name: train
num_bytes: 6187206
num_examples: 20362
- name: validation
num_bytes: 1115480
num_examples: 3681
- name: test
num_bytes: 1587613
num_examples: 5394
download_size: 3851892
dataset_size: 8890299
task_categories:
- translation
language:
- en
- pl
- de
- es
- sv
- fr
- pt
tags:
- machine translation
- nlu
- natural-language-understanding
- virtual assistant
pretty_name: Machine translation for NLU with slot transfer
size_categories:
- 10K<n<100K
license: cc-by-4.0
---
# Machine translation dataset for NLU (Virtual Assistant) with slot transfer between languages
## Dataset Summary
Disclaimer: This is for research purposes only. Please have a look at the license section below. Some of the datasets used to construct IVA_MT have an unknown license.
IVA_MT is a machine translation dataset that can be used to train, adapt and evaluate MT models used in a Virtual Assistant NLU context (e.g. to translate the training corpus of an NLU system).
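The `translation_xml` and `src_bio`/`tgt_bio` columns suggest that slot annotations are carried both as inline XML tags and as BIO labels. Below is a minimal sketch of turning an XML-tagged utterance into BIO tags; the `<slot>value</slot>` format is an assumption about the column encoding, so inspect a few rows before relying on it:

```python
import re

def xml_to_bio(utterance):
    """Convert an utterance with <slot>value</slot> markup to (tokens, BIO tags).

    The tag format is an assumption about what `translation_xml` and the
    `*_bio` columns encode; check the actual data first.
    """
    tokens, tags = [], []
    for slot, inside, word in re.findall(r"<(\w+)>(.*?)</\1>|(\S+)", utterance):
        if word:
            tokens.append(word)
            tags.append("O")
        else:
            for i, part in enumerate(inside.split()):
                tokens.append(part)
                tags.append(("B-" if i == 0 else "I-") + slot)
    return tokens, tags

toks, tags = xml_to_bio("wake me up at <time>seven am</time> tomorrow")
print(tags)  # ['O', 'O', 'O', 'O', 'B-time', 'I-time', 'O']
```

Converting in this direction is handy when feeding translated utterances back into an NLU training pipeline that expects BIO-tagged tokens.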
## Dataset Composition
### en-pl
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 11514 | 2033 | 2974 |
| [Leyzer 0.2.0](https://github.com/cartesinus/leyzer/tree/0.2.0) | 3974 | 701 | 1380 |
| [OpenSubtitles from OPUS](https://opus.nlpl.eu/OpenSubtitles-v1.php) | 2329 | 411 | 500 |
| [KDE from OPUS](https://opus.nlpl.eu/KDE4.php) | 1154 | 241 | 241 |
| [CCMatrix from Opus](https://opus.nlpl.eu/CCMatrix.php) | 1096 | 232 | 237 |
| [Ubuntu from OPUS](https://opus.nlpl.eu/Ubuntu.php) | 281 | 60 | 59 |
| [Gnome from OPUS](https://opus.nlpl.eu/GNOME.php) | 14 | 3 | 3 |
| *total* | 20362 | 3681 | 5394 |
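As a quick sanity check, the per-corpus rows above sum exactly to the *total* row (and to the split sizes declared in the YAML header):

```python
# Per-corpus (train, dev, test) counts for en-pl, copied from the table above.
en_pl = {
    "Massive 1.1": (11514, 2033, 2974),
    "Leyzer 0.2.0": (3974, 701, 1380),
    "OpenSubtitles": (2329, 411, 500),
    "KDE": (1154, 241, 241),
    "CCMatrix": (1096, 232, 237),
    "Ubuntu": (281, 60, 59),
    "Gnome": (14, 3, 3),
}

totals = tuple(sum(counts[i] for counts in en_pl.values()) for i in range(3))
print(totals)  # (20362, 3681, 5394), matching the total row and the YAML splits
```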
### en-de
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7536 | 1346 | 1955 |
### en-es
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8415 | 1526 | 2202 |
### en-sv
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7540 | 1360 | 1921 |
### en-fr
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 6800 | 1203 | 1757 |
### en-pt
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7368 | 1296 | 1885 |
### en-hi
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 6702 | 1175 | 1747 |
### en-tr
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8269 | 1474 | 2170 |
### en-ja
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8066 | 1434 | 2085 |
### en-zh
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8433 | 1513 | 2179 |
## Tools
Scripts used to generate this dataset can be found on [github](https://github.com/cartesinus/iva_mt).
## Citation
If you use this dataset, please cite:
```
@article{Sowanski2023SlotLI,
title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
author={Marcin Sowanski and Artur Janicki},
journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
year={2023},
pages={1-5}
}
```
## License
This is a composition of 7 datasets, and the license is as defined in original release:
- MASSIVE: [CC-BY 4.0](https://huggingface.co/datasets/AmazonScience/massive/blob/main/LICENSE)
- Leyzer: [CC BY-NC 4.0](https://github.com/cartesinus/leyzer/blob/master/LICENSE)
- OpenSubtitles: unknown
- KDE: [GNU Public License](https://l10n.kde.org/about.php)
- CCMatrix: no license given, therefore assuming it is LASER project license [BSD](https://github.com/facebookresearch/LASER/blob/main/LICENSE)
- Ubuntu: [GNU Public License](https://help.launchpad.net/Legal)
- Gnome: unknown
| 6,301 | [
[
-0.047576904296875,
-0.0340576171875,
0.0195159912109375,
0.0119476318359375,
-0.0174560546875,
-0.01061248779296875,
-0.012725830078125,
-0.033721923828125,
0.0281524658203125,
0.040863037109375,
-0.0467529296875,
-0.04180908203125,
-0.048187255859375,
0.00... |
cambridgeltl/vsr_random | 2023-03-22T17:28:37.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"multimodality",
"vision-and-language",
"arxiv:2205.00363",
"region:us"
] | cambridgeltl | null | null | 1 | 40 | 2023-03-22T16:27:00 | ---
license: cc-by-4.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- multimodality
- vision-and-language
pretty_name: VSR (random split)
size_categories:
- 10K<n<100K
---
# VSR: Visual Spatial Reasoning
This is the **random set** of **VSR**: *Visual Spatial Reasoning* (TACL 2023) [[paper]](https://arxiv.org/abs/2205.00363).
### Usage
```python
from datasets import load_dataset
data_files = {"train": "train.jsonl", "dev": "dev.jsonl", "test": "test.jsonl"}
dataset = load_dataset("cambridgeltl/vsr_random", data_files=data_files)
```
Note that the image files still need to be downloaded separately. See [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for details.
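Since the images ship separately, each example has to be joined back to its local file. A sketch, assuming each row carries an `image` filename field (check the jsonl files in `data/` for the actual column names):

```python
import os

def resolve_image(example: dict, images_dir: str) -> str:
    # `image` being a bare filename is an assumption about the jsonl schema.
    return os.path.join(images_dir, example["image"])

# A dummy row standing in for a real VSR example:
dummy = {"image": "000000123456.jpg", "caption": "The cat is on the mat.", "label": 1}
print(resolve_image(dummy, "data/images"))  # e.g. data/images/000000123456.jpg
```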
Go to our [github repo](https://github.com/cambridgeltl/visual-spatial-reasoning) for more introductions.
### Citation
If you find VSR useful:
```bibtex
@article{Liu2022VisualSR,
title={Visual Spatial Reasoning},
author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
journal={Transactions of the Association for Computational Linguistics},
year={2023},
}
``` | 1,129 | [
[
-0.02880859375,
-0.047454833984375,
0.044097900390625,
0.01006317138671875,
-0.0184173583984375,
-0.0054931640625,
-0.0126800537109375,
-0.021331787109375,
-0.0009245872497558594,
0.02752685546875,
-0.026519775390625,
-0.04443359375,
-0.0264739990234375,
0.0... |
TurkuNLP/Suomi24-toxicity-annotated | 2023-06-02T13:04:21.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fi",
"license:cc-by-sa-4.0",
"toxicity",
"region:us"
] | TurkuNLP | This dataset consists of Suomi24 comments which have been labeled by human raters for toxic behavior. | null | 0 | 40 | 2023-03-30T11:25:13 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- fi
tags:
- toxicity
size_categories:
- 1K<n<10K
---
### Suomi-24-toxicity-annotated
This dataset includes comments from Suomi24 sampled using predictions from a toxicity classifier. The comments were taken in intervals for each label. The process of sampling emphasized difficult borderline cases. 500 comments were sampled for each label.
The annotation process used the labels from Perspective, used e.g. for `TurkuNLP/wikipedia-toxicity-data-fi`.
Instead of multi-label annotation, we annotated each comment for only one label, although a couple of comments appear under two labels.
The annotation process included an initial annotation of 100-200 comments followed by a discussion and final annotations. Raw data can be found [here](https://github.com/TurkuNLP/toxicity-classifier/tree/main/annotations/raw_annotations).
Examples that made it to the dataset are ones that had unanimous agreement or were resolved through discussion.
### Citing
To cite this dataset use the following bibtex.
```
@inproceedings{eskelinen-etal-2023-toxicity,
title = "Toxicity Detection in {F}innish Using Machine Translation",
author = "Eskelinen, Anni and
Silvala, Laura and
Ginter, Filip and
Pyysalo, Sampo and
Laippala, Veronika",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.68",
pages = "685--697",
abstract = "Due to the popularity of social media platforms and the sheer amount of user-generated content online, the automatic detection of toxic language has become crucial in the creation of a friendly and safe digital space. Previous work has been mostly focusing on English leaving many lower-resource languages behind. In this paper, we present novel resources for toxicity detection in Finnish by introducing two new datasets, a machine translated toxicity dataset for Finnish based on the widely used English Jigsaw dataset and a smaller test set of Suomi24 discussion forum comments originally written in Finnish and manually annotated following the definitions of the labels that were used to annotate the Jigsaw dataset. We show that machine translating the training data to Finnish provides better toxicity detection results than using the original English training data and zero-shot cross-lingual transfer with XLM-R, even with our newly annotated dataset from Suomi24.",
}
```
## Label definitions taken from Perspective API
THREAT: Describes an intention to inflict pain, injury, or violence against an individual or group.
THREATENING: Language that is threatening or encouraging violence or harm, including self-harm.
PROFANITY: Swear words, curse words, or other obscene or profane language.
INSULT: Insulting, inflammatory, or negative comment towards a person or a group of people. Such comments are not necessarily identity specific.
IDENTITY ATTACK: Negative or hateful comments targeting someone because of their identity.
TOXICITY: A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.
SEVERE TOXICITY: A very hateful, aggressive, disrespectful comment or otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to more mild forms of toxicity, such as comments that include positive uses of curse words.
## Guidelines used for annotation:
### Obscene
swearwords, including mild expletives and misspelled, masked, or other variations
sexually explicit words/terminology that are not topically or contextually appropriate
### Threat
suicidal or self-harm comments, incitement to violence or self-harm, hypothetical situations and wishing harm to somebody
comments that are very unlikely to happen if not marked clearly as sarcasm
only threats towards people are annotated as threat
threats made by somebody else other than the writer NOT included
counterfactual statements NOT included <!--- as in "if I was there I would have..." --->
### Insult
terms that are insulting towards groups of people (also in identity attack)
insults against political groups, e.g. "vitun demari/suvakki/persu" -> "fucking liberal/conservative etc." <!--- I made this decision here.. --->
negative insulting comments towards oneself, things other than people and hypothetical situations NOT included
<!--- PROBLEM: use of racist or rapist if true, target not clear --->
### Identity attack
comments that have no negative language but are still clearly negative
negative statements towards political groups or groups that nobody self-identifies with are NOT included (unless an insult)
### Toxicity
unreasonably expressed negative comments regardless of the target present and whether the target is known or not
mild or humoristic swearwords are NOT included
positive or neutral sexually explicit comments are NOT included
### Severe toxicity
comments that include only sexually explicit content
only one severely toxic element is needed to have this label and a comment is severely toxic even if the comment contains substantive content
target does not need to be present nor does the target matter
## Inter-annotator agreement:
| Label | Initial (unanimous) | After discussion (unanimous) | Initial (at least 2/3) | After discussion (at least 2/3) |
|------ | ------------------- | ---------------------------- | ---------------------- | ------------------------------- |
| identity attack | 54,5 % | 66,6 % | 92 % | 93,6 % |
| insult | 47,5 % | 49,6 % | 94,5 % | 95,6 % |
| severe toxicity | 63 % | 66 % | 92 % | 96,6 % |
| threat | 82 % | 80,3 % | 98 % | 97,3 % |
| toxicity | 58 % | 54 % | 93 % | 89,6 % |
| obscene | 69 % | 62 % | 97 % | 96 % |
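The two agreement columns above can be reproduced from raw triple-annotations: "unanimous" means all three raters gave the same label, and "at least 2/3" means some label got at least two votes. A small sketch on toy data:

```python
from collections import Counter

def agreement_rates(annotations):
    """annotations: list of (rater1, rater2, rater3) label triples, one per comment."""
    unanimous = sum(1 for triple in annotations if len(set(triple)) == 1)
    majority = sum(1 for triple in annotations
                   if Counter(triple).most_common(1)[0][1] >= 2)
    n = len(annotations)
    return 100 * unanimous / n, 100 * majority / n

# Toy example: four comments, each labeled by three raters.
toy = [(1, 1, 1), (1, 1, 0), (0, 1, 1), (1, 0, 2)]
print(agreement_rates(toy))  # (25.0, 75.0)
```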
## Evaluation results
Evaluation results from using `TurkuNLP/bert-large-finnish-cased-toxicity`.
| Label | Precision | Recall | F1 |
|------ | ------------------- | ---------------------------- | ---------------------- |
| identity attack | 73,2 | 32 | 44,6 |
| insult | 59,4 | 46,8 | 52,4 |
| severe toxicity | 12 | 28,6 | 16,9 |
| threat | 32,4 | 28,6 | 30,4 |
| toxicity | 60,4 | 79,2 | 68,5 |
| obscene | 64,5 | 82,4 | 72,3 |
| OVERALL | 57,4 | 58,9 | 51,1 |
| OVERALL weighted by original sample counts | 55,5 | 65,5 | 60,1 |
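F1 in this table follows from precision and recall as F1 = 2·P·R / (P + R); re-deriving a couple of rows (tiny mismatches elsewhere are consistent with the published P/R values themselves being rounded):

```python
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1(12.0, 28.6), 1))  # 16.9, the "severe toxicity" row
print(round(f1(32.4, 28.6), 1))  # 30.4, the "threat" row
print(round(f1(60.4, 79.2), 1))  # 68.5, the "toxicity" row
```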
## Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders. | 6,853 | [
[
-0.0083465576171875,
-0.030181884765625,
0.0345458984375,
0.0266571044921875,
-0.0246734619140625,
-0.0274810791015625,
0.007171630859375,
-0.03448486328125,
0.03125,
0.04071044921875,
-0.0228118896484375,
-0.07275390625,
-0.052642822265625,
0.03756713867187... |
mvasiliniuc/iva-kotlin-codeint | 2023-06-16T06:56:58.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:100K<n<1M",
"language:code",
"license:other",
"code, kotlin, native Android development",
"doi:10.57967/hf/0779",
"region:us"
] | mvasiliniuc | null | null | 1 | 40 | 2023-04-04T19:02:39 | ---
annotations_creators:
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code, kotlin, native Android development
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-kotlin-codeint-raw
task_ids:
- language-modeling
---
# IVA Kotlin GitHub Code Dataset
## Dataset Description
This is the raw IVA Kotlin dataset extracted from GitHub.
It contains uncurated Kotlin files gathered for the purpose of training a code generation model.
The dataset consists of 464215 Kotlin code files from GitHub totaling ~361 MB of data.
The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
```
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
print(dataset[723])
#OUTPUT:
{
"repo_name":"nemerosa/ontrack",
"path":"ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
"copies":"1",
"size":"3248",
"content":"...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
"license":"mit"
}
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|code|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
### Instance
```json
{
"repo_name":"nemerosa/ontrack",
"path":"ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
"copies":"1",
"size":"3248",
"content":"...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
"license":"mit"
}
```
## Languages
The dataset contains only Kotlin files.
```json
{
"Kotlin": [".kt"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0": 9146,
"apache-2.0": 272388,
"artistic-2.0": 219,
"bsd-2-clause": 896,
"bsd-3-clause": 12328,
"cc0-1.0": 411,
"epl-1.0": 2111,
"gpl-2.0": 11080,
"gpl-3.0": 48911,
"isc": 997,
"lgpl-2.1": 297,
"lgpl-3.0": 7749,
"mit": 92540,
"mpl-2.0": 3386,
"unlicense": 1756
}
```
## Dataset Statistics
```json
{
"Total size": "~361 MB",
"Number of files": 464215,
"Number of files under 500 bytes": 99845,
"Average file size in bytes": 3252,
}
```
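The per-license occurrence counts listed earlier sum exactly to the 464215 files reported in these statistics:

```python
# License counts copied from the "Licenses" section of this card.
license_counts = {
    "agpl-3.0": 9146, "apache-2.0": 272388, "artistic-2.0": 219,
    "bsd-2-clause": 896, "bsd-3-clause": 12328, "cc0-1.0": 411,
    "epl-1.0": 2111, "gpl-2.0": 11080, "gpl-3.0": 48911,
    "isc": 997, "lgpl-2.1": 297, "lgpl-3.0": 7749,
    "mit": 92540, "mpl-2.0": 3386, "unlicense": 1756,
}
total_files = sum(license_counts.values())
print(total_files)  # 464215, matching "Number of files" above
```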
## Dataset Creation
The dataset was created using Google Query for Github:
https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code
The following steps were pursued for data
gathering:
1. Creation of a dataset and a table in Google Big Query Project.
2. Creation of a bucket in Google Cloud Storage.
3. Creation of a query in Google Big Query Project.
4. Running the query with the setting to output the results in the dataset and table
created at step one.
5. Exporting the resulting dataset into the bucket created in step 2. Export format of JSON with gzip compression.
The result of these steps leads to the following results:
* 2.7 TB Processed,
* number of extracted rows/files was 464,215
* total logical bytes 1.46 GB.
* the result amounts to 7 json.gz files in a total of 361 MB.
The SQL Query used is:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
(select f.*, row_number() over (partition by id order by path desc) as seqnum from `bigquery-public-data.github_repos.files` AS f) f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id AND seqnum=1
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary AND ((f.path LIKE '%.kt') AND (c.size BETWEEN 0 AND 1048575))
```
## Data Splits
The dataset only contains a train split.
Using the curated version of this dataset, a split was made into multiple repositories:
* Clean Version: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
# Additional Information
## Dataset Curators
[mircea.dev@icloud.com](mailto:mircea.dev@icloud.com)
## Licensing Information
* The license of this open-source dataset is: other.
* The dataset is gathered from open-source repositories on [GitHub using BigQuery](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code).
* Find the license of each entry in the dataset in the corresponding license column.
## Citation Information
```json
@misc {mircea_vasiliniuc_2023,
author = { {Mircea Vasiliniuc} },
title = { iva-kotlin-codeint (Revision 1af5124) },
year = 2023,
url = { https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint },
doi = { 10.57967/hf/0779 },
publisher = { Hugging Face }
}
``` | 5,837 | [
[
-0.027984619140625,
-0.0215606689453125,
0.018341064453125,
0.007663726806640625,
-0.025604248046875,
-0.0035686492919921875,
0.0011157989501953125,
-0.0163421630859375,
0.03662109375,
0.048370361328125,
-0.0338134765625,
-0.05364990234375,
-0.033416748046875,
... |
koutch/intro_prog | 2023-06-05T08:45:02.000Z | [
"region:us"
] | koutch | The Dublin programming dataset is a dataset composed of students' submissions
to introductory programming assignments at the University of Dublin.
Students submitted these programs for multiple programming courses over the duration of three academic years. | @inproceedings{azcona2019user2code2vec,
title={user2code2vec: Embeddings for Profiling Students Based on Distributional Representations of Source Code},
author={Azcona, David and Arora, Piyush and Hsiao, I-Han and Smeaton, Alan},
booktitle={Proceedings of the 9th International Learning Analytics & Knowledge Conference (LAK’19)},
year={2019},
organization={ACM}
}
@inproceedings{DBLP:conf/edm/CleuziouF21,
author = {Guillaume Cleuziou and
Fr{\'{e}}d{\'{e}}ric Flouvat},
editor = {Sharon I{-}Han Hsiao and
Shaghayegh (Sherry) Sahebi and
Fran{\c{c}}ois Bouchet and
Jill{-}J{\^{e}}nn Vie},
title = {Learning student program embeddings using abstract execution traces},
booktitle = {Proceedings of the 14th International Conference on Educational Data
Mining, {EDM} 2021, virtual, June 29 - July 2, 2021},
publisher = {International Educational Data Mining Society},
year = {2021},
timestamp = {Wed, 09 Mar 2022 16:47:22 +0100},
biburl = {https://dblp.org/rec/conf/edm/CleuziouF21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 40 | 2023-04-05T14:44:41 | ---
dataset_info:
- config_name: dublin_metadata
features:
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: reference_solution
dtype: string
- name: description
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 18983
num_examples: 36
- name: test
num_bytes: 17403
num_examples: 35
download_size: 41873
dataset_size: 36386
- config_name: singapore_metadata
features:
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: reference_solution
dtype: string
- name: description
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 5577
num_examples: 5
download_size: 6139
dataset_size: 5577
- config_name: dublin_data
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: correct
dtype: bool
- name: user
dtype: string
- name: academic_year
dtype: int32
splits:
- name: train
num_bytes: 4412068
num_examples: 7486
- name: test
num_bytes: 7737585
num_examples: 14259
download_size: 15756562
dataset_size: 12149653
- config_name: singapore_data
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: correct
dtype: bool
splits:
- name: train
num_bytes: 5098928
num_examples: 4394
download_size: 5705043
dataset_size: 5098928
- config_name: dublin_repair
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: annotation
dtype: string
- name: user
dtype: string
- name: academic_year
dtype: int32
splits:
- name: train
num_bytes: 229683
num_examples: 307
- name: test
num_bytes: 1451820
num_examples: 1698
download_size: 1929518
dataset_size: 1681503
- config_name: singapore_repair
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: annotation
dtype: string
splits:
- name: train
num_bytes: 18979
num_examples: 18
download_size: 21737
dataset_size: 18979
- config_name: newcaledonia_metadata
features:
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: reference_solution
dtype: string
- name: description
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 9053
num_examples: 9
download_size: 9760
dataset_size: 9053
- config_name: newcaledonia_data
features:
- name: submission_id
dtype: int32
- name: func_code
dtype: string
- name: assignment_id
dtype: string
- name: func_name
dtype: string
- name: description
dtype: string
- name: test
dtype: string
- name: correct
dtype: bool
splits:
- name: train
num_bytes: 932024
num_examples: 1201
download_size: 1198518
dataset_size: 932024
---
# Dataset Card for intro_prog
## Dataset Description
### Dataset Summary
IntroProg is a collection of students' submissions to assignments in various introductory programming courses offered at different universities.
Currently, the dataset contains submissions collected from Dublin City University and the National University of Singapore.
#### Dublin
The Dublin programming dataset is a dataset composed of students' submissions to introductory programming assignments at the University of Dublin.
Students submitted these programs for multiple programming courses over the duration of three academic years.
#### Singapore
The Singapore dataset contains 2442 correct and 1783 buggy program attempts by 361 undergraduate students
taking an introductory Python programming course at NUS (National University of Singapore).
### Supported Tasks and Leaderboards
#### "Metadata": Program synthesis
Similarly to the [Most Basic Python Programs](https://huggingface.co/datasets/mbpp) (mbpp), the data split can be used to evaluate
code generation models.
#### "Data"
The data configuration contains all the submissions as well as an indicator of whether these passed the required test.
#### "repair": Program refinement/repair
The "repair" configuration of each dataset is a subset of the "data" configuration
augmented with educators' annotations on the corrections to the buggy programs.
This configuration can be used for the task of program refinement. In [Computing Education Research](https://faculty.washington.edu/ajko/cer/) (CER),
methods for automatically repairing student programs are used to provide students with feedback and help them debug their code.
#### "bug": Bug classification
[Coming soon]
### Languages
The assignments were written in Python.
## Dataset Structure
One configuration is defined by one source dataset *dublin* or *singapore* and one subconfiguration ("metadata", "data", or "repair"):
* "dublin_metadata"
* "dublin_data"
* "dublin_repair"
* "singapore_metadata"
* "singapore_data"
* "singapore_repair"
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
Some of the fields are configuration specific
* submission_id: a unique number identifying the submission
* user: a unique string identifying the (anonymized) student who submitted the solution
* date: the timestamp at which the grading server received the submission
* func_code: the cleaned code submitted
* func_name: the name of the function that had to be implemented
* assignment_id: the unique (string) identifier of the assignment that had to be completed
* academic_year: the starting year of the academic year (e.g. 2015 for the academic year 2015-2016)
* module: the course/module
* test: a human eval-style string which can be used to execute the submitted solution on the provided test cases
* description: a description of what the function is supposed to achieve
* correct: whether the solution passed all tests or not
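The `func_code`/`test` pairing means a submission can be re-graded by executing both in one namespace. This is a sketch only: it assumes `test` is plain Python made of assert statements (the card calls it "human eval-style"), and it runs untrusted student code, so sandbox it in real use:

```python
def grade(func_code: str, test_code: str) -> bool:
    """Return True if the submission passes its test string."""
    namespace = {}
    try:
        exec(func_code, namespace)   # define the student's function
        exec(test_code, namespace)   # run the human-eval-style asserts
        return True
    except Exception:
        return False

# Toy submission and test string, standing in for real `func_code`/`test` fields:
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
checks = "assert add(2, 3) == 5"
print(grade(good, checks), grade(bad, checks))  # True False
```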
### Data Splits
#### Dublin
The Dublin dataset is split into a training and validation set. The training set contains the submissions to the assignments
written during the academic years 2015-2016, and 2016-2017, while the test set contains programs written during the academic year 2017-2018.
#### Singapore
The Singapore dataset only contains a training split, which can be used as a test split for evaluating how your feedback
methods perform on an unseen dataset (if, for instance, you train your methods on the Dublin Dataset).
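For the Dublin data, the year-based split described above could be reconstructed from the `academic_year` field like this (toy rows stand in for real ones):

```python
# Toy rows carrying only the fields relevant to the split.
rows = [
    {"submission_id": 1, "academic_year": 2015},
    {"submission_id": 2, "academic_year": 2016},
    {"submission_id": 3, "academic_year": 2017},
]
train_rows = [r for r in rows if r["academic_year"] in (2015, 2016)]
test_rows = [r for r in rows if r["academic_year"] == 2017]
print(len(train_rows), len(test_rows))  # 2 1
```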
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### Dublin
#### Singapore
The data was released under a [GNU Lesser General Public License v3.0](https://github.com/githubhuyang/refactory/blob/master/LICENSE) license
### Citation Information
```
@inproceedings{azcona2019user2code2vec,
title={user2code2vec: Embeddings for Profiling Students Based on Distributional Representations of Source Code},
author={Azcona, David and Arora, Piyush and Hsiao, I-Han and Smeaton, Alan},
booktitle={Proceedings of the 9th International Learning Analytics & Knowledge Conference (LAK’19)},
year={2019},
organization={ACM}
}
@inproceedings{DBLP:conf/edm/CleuziouF21,
author = {Guillaume Cleuziou and
Fr{\'{e}}d{\'{e}}ric Flouvat},
editor = {Sharon I{-}Han Hsiao and
Shaghayegh (Sherry) Sahebi and
Fran{\c{c}}ois Bouchet and
Jill{-}J{\^{e}}nn Vie},
title = {Learning student program embeddings using abstract execution traces},
booktitle = {Proceedings of the 14th International Conference on Educational Data
Mining, {EDM} 2021, virtual, June 29 - July 2, 2021},
publisher = {International Educational Data Mining Society},
year = {2021},
timestamp = {Wed, 09 Mar 2022 16:47:22 +0100},
biburl = {https://dblp.org/rec/conf/edm/CleuziouF21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
[More Information Needed] | 9,264 | [
[
-0.03271484375,
-0.05133056640625,
0.01448822021484375,
0.0018014907836914062,
0.007480621337890625,
0.00968170166015625,
-0.0218353271484375,
-0.01502227783203125,
0.0232086181640625,
0.02783203125,
-0.0467529296875,
-0.06512451171875,
-0.021453857421875,
0... |
BramVanroy/alpaca-cleaned-dutch | 2023-07-07T12:16:39.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:nl",
"license:cc-by-nc-4.0",
"alpaca",
"instruct",
"instruction",
"doi:10.57967/hf/0530",
"region:us"
] | BramVanroy | null | null | 1 | 40 | 2023-04-12T07:02:22 | ---
license: cc-by-nc-4.0
task_categories:
- question-answering
- text-generation
language:
- nl
tags:
- alpaca
- instruct
- instruction
pretty_name: Alpaca Cleaned Dutch
size_categories:
- 10K<n<100K
---
# Dataset Card for Alpaca Cleaned Dutch
## Dataset Description
- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy
### Dataset Summary
This dataset contains 51,712 conversations between an AI assistant and a (fake, generated) "Human", in Dutch. They are translations of the [Alpaca Cleaned Dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned).
☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$57.99💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost, I appreciate it a lot! ☕
### Languages
- Dutch
## Dataset Structure
### Data Instances
```python
{
'id': 7,
'instruction': 'Leg uit waarom de volgende breuk gelijk is aan 1/4',
'input': '4/16',
'output': 'De breuk 4/16 is gelijk aan 1/4 omdat zowel de teller als de '
'noemer deelbaar zijn door 4. Door zowel de teller als de noemer '
'door 4 te delen, krijgen we de breuk 1/4.'
}
```
### Data Fields
- **id**: the ID of the item. The following ID is not included because it could not be translated: `[23019]`
- **instruction**: the given instruction
- **input**: optional input to accompany the instruction. Can be empty.
- **output**: the "answer" to the instruction
## Dataset Creation
The instructions, inputs and outputs were translated with OpenAI's API for `gpt-3.5-turbo`, using `max_tokens=1024, temperature=0` as parameters.
The prompt template to translate is (where `src_lang` is English and `tgt_lang` is Dutch):
```python
TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional input to the task, and the output of the task, from {src_lang} into {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the task consists of a task instruction (marked `instruction: `), optional input to the task (marked `input: `) and output for the task marked with `output: `;
2. do not translate the identifiers `instruction: `, `input: `, and `output: ` but instead copy them to your output;
3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
4. translate the instruction and input text using informal, but standard, language;
5. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the input in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang};
7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the input, nor the translation in the output (just copy them as-is);
8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following task with the requirements set out above. Do not provide an explanation and do not add anything else.\n\n"""
```
This prompt is concatenated with the instruction, optionally the input, and the output. In code, that last part looks like this:
```python
text = f'instruction: "{instruction}"\n\n'
if inputstr:
text += f'input: "{inputstr}"\n\n'
text += f'output: "{outputstr}"'
```
The system message was:
```
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
```
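Putting the prompt template, the task text, and the system message together, one request body for the chat completions endpoint would look roughly like the sketch below. The variable names and the abbreviated `TRANSLATION_PROMPT` are illustrative, not the author's actual code, and no API call is made here:

```python
# Sketch of assembling one translation request; TRANSLATION_PROMPT is
# abbreviated for readability.
TRANSLATION_PROMPT = (
    "You are asked to translate a task's instruction, optional input to the "
    "task, and the output of the task, from {src_lang} into {tgt_lang}.\n\n"
)

SYSTEM_MESSAGE = (
    "You are a helpful assistant that translates English to Dutch to the "
    "requirements that are given to you."
)

def build_payload(instruction: str, outputstr: str, inputstr: str = "") -> dict:
    # Concatenate the instruction, optional input, and output as described above.
    text = f'instruction: "{instruction}"\n\n'
    if inputstr:
        text += f'input: "{inputstr}"\n\n'
    text += f'output: "{outputstr}"'
    prompt = TRANSLATION_PROMPT.format(src_lang="English", tgt_lang="Dutch")
    return {
        "model": "gpt-3.5-turbo",
        "max_tokens": 1024,
        "temperature": 0,
        "messages": [
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": prompt + text},
        ],
    }

payload = build_payload("Give three tips for staying healthy.", "1. Eat a balanced diet.")
```

This payload could then be sent with any OpenAI-compatible client.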
Note that 1 item (0.0001%) was not successfully translated. The translation was missing the input, instruction, or output keywords where those were expected. The ID for the missing item is `[23019]`.
### Source Data
#### Initial Data Collection and Normalization
Initial data creation by [Tatsu lab](https://huggingface.co/datasets/tatsu-lab/alpaca) and cleaned by [Yahma](https://huggingface.co/datasets/yahma/alpaca-cleaned).
#### Who are the source language producers?
The original dataset was generated with OpenAI's `text-davinci-003`.
## Considerations for Using the Data
Note that the translations in this new dataset have not been verified by humans.
### Discussion of Biases
As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, of course the impact of such command is not known. It is likely that biases remain in the dataset so use with caution.
### Other Known Limitations
The translation quality has not been verified. Use at your own risk!
### Licensing Information
As per OpenAI's terms of use, this dataset cannot be used to build [a commercial system that competes with OpenAI's services](https://openai.com/policies/terms-of-use). Similar to the original Alpaca dataset, this dataset is released under CC BY-NC 4.0.
This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies.
As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license.
### Citation Information
If you use this dataset, please cite:
Vanroy, B. (2023). Alpaca Cleaned Dutch [Data set]. Hugging Face. https://doi.org/10.57967/HF/0530
```bibtex
@misc{https://doi.org/10.57967/hf/0530,
doi = {10.57967/HF/0530},
url = {https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch},
author = {Vanroy, Bram},
title = {{A}lpaca {C}leaned {D}utch},
publisher = {Hugging Face},
year = {2023}
}
```
### Contributions
Thanks to [Tatsu lab](https://huggingface.co/datasets/tatsu-lab/alpaca) for the initial machine-generated dataset and yahma for [cleaning it](https://huggingface.co/datasets/yahma/alpaca-cleaned). | 6,983 | [
[
-0.027374267578125,
-0.058746337890625,
0.004932403564453125,
0.0292510986328125,
-0.02655029296875,
-0.04559326171875,
-0.0290374755859375,
-0.04278564453125,
0.0292510986328125,
0.042877197265625,
-0.0477294921875,
-0.044769287109375,
-0.049072265625,
0.02... |
lighteval/pile | 2023-04-26T06:27:38.000Z | [
"region:us"
] | lighteval | The Pile is a 825 GiB diverse, open source language modeling data set that consists
of 22 smaller, high-quality datasets combined together. To score well on Pile
BPB (bits per byte), a model must be able to understand many disparate domains
including books, github repositories, webpages, chat logs, and medical, physics,
math, computer science, and philosophy papers. | @article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
} | 0 | 40 | 2023-04-26T06:26:43 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Ar4ikov/iemocap_audio_text_splitted | 2023-05-03T18:36:01.000Z | [
"region:us"
] | Ar4ikov | null | null | 1 | 40 | 2023-05-03T18:08:58 | ---
dataset_info:
features:
- name: _id
dtype: string
- name: activation
dtype: float64
- name: dominance
dtype: float64
- name: emotion
dtype: string
- name: end_time
dtype: float64
- name: start_time
dtype: float64
- name: titre
dtype: string
- name: to_translate
dtype: string
- name: translated
dtype: string
- name: valence
dtype: float64
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 1148478491.1463113
num_examples: 8031
- name: test
num_bytes: 287155695.4826887
num_examples: 2008
download_size: 1409847521
dataset_size: 1435634186.629
---
# Dataset Card for "iemocap_audio_text_splitted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 873 | [
[
-0.042999267578125,
-0.031402587890625,
0.00020301342010498047,
0.0232696533203125,
-0.0125579833984375,
0.0014801025390625,
-0.00281524658203125,
-0.03033447265625,
0.06646728515625,
0.0298004150390625,
-0.062744140625,
-0.0478515625,
-0.05059814453125,
-0.... |
Nan-Do/instructional_code-search-net-python | 2023-05-20T05:09:44.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Python",
"Code generation",
"Instruction Response",
"region:us"
] | Nan-Do | null | null | 9 | 40 | 2023-05-20T04:50:17 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 451473573
num_examples: 418545
download_size: 172777462
dataset_size: 451473573
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Python
- Code generation
- Instruction Response
pretty_name: Instructional Python Dataset
---
# Dataset Card for "instructional_code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Python.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-python
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions and/or meaningless summaries.
### Licensing Information
Apache 2.0 | 1,882 | [
[
-0.020233154296875,
-0.04718017578125,
-0.00733184814453125,
0.032806396484375,
0.0006299018859863281,
-0.0157928466796875,
-0.024658203125,
-0.0033206939697265625,
0.036651611328125,
0.03253173828125,
-0.046234130859375,
-0.050506591796875,
-0.027923583984375,
... |
Meranti/CLAP_freesound | 2023-07-09T17:09:18.000Z | [
"task_categories:audio-classification",
"size_categories:1M<n<10M",
"language:en",
"audio",
"text",
"contrastive learning",
"region:us"
] | Meranti | null | null | 2 | 40 | 2023-06-02T00:42:03 | ---
task_categories:
- audio-classification
language:
- en
tags:
- audio
- text
- contrastive learning
pretty_name: freesound
size_categories:
- 1M<n<10M
---
# LAION-Audio-630K Freesound Dataset
[LAION-Audio-630K](https://github.com/LAION-AI/audio-dataset/blob/main/laion-audio-630k/README.md) is the largest publicly available audio-text dataset, an order of magnitude larger than previous audio-text datasets (as of 2022-11-05). Notably, it combines eight distinct datasets, including the Freesound dataset.
Specifically, this Hugging Face repository contains two versions of the Freesound dataset. Details of each (e.g. how captions are made) can be found in the "Data Card" column of the table below.
- **Freesound (full)**: The complete Freesound dataset, available in the `/freesound` folder.
- **Freesound (no overlap)**: Derived from Freesound (full), with samples from ESC50, FSD50K, UrbanSound8K and Clotho removed; available in the `/freesound_no_overlap` folder.
For the structure and format of the `freesound` and `freesound_no_overlap` folders, please refer to [this page](https://github.com/LAION-AI/audio-dataset/blob/main/data_preprocess/README.md).
| Name |Duration |Number of Samples |Data Type | Metadata | Data Card |
|--------------------------------------------------|-------------------------|--------------------|--------- |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------- |
| Freesound (no overlap) |2817.31hrs | 460801 |1-2 captions per audio, audio | [website](https://freesound.org/) <br> [csv]()|[data card](/data_card/freesound.md)|
| Freesound (full) |3033.38hrs | 515581 |1-2 captions per audio, audio | [website](https://freesound.org/) <br> [csv]() |[data card](/data_card/freesound.md)|
## Metadata csv file
For each of the two datasets, we provide a metadata csv file including the following columns:
- **audio_filename**: The filename of the audio file in the `.tar` files. `example: 2394.flac`
- **caption_i**: the i-th caption of the audio file
- **freesound_id**: The freesound id of the audio file.
- **username**: The Freesound username of the uploader of the audio file.
- **freesound_url**: The URL of the audio file on freesound.org
- **license**: The license of the audio file. `http://creativecommons.org/licenses/by/3.0/`
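For example, the metadata can be filtered by license with the standard `csv` module. The two rows below are invented for illustration; only the column names follow the card:

```python
import csv
import io

# Hypothetical two-row excerpt of the metadata csv; values are made up.
sample = """audio_filename,caption_1,freesound_id,username,freesound_url,license
2394.flac,a dog barking in the distance,2394,alice,https://freesound.org/s/2394/,http://creativecommons.org/publicdomain/zero/1.0/
2395.flac,rain on a tin roof,2395,bob,https://freesound.org/s/2395/,http://creativecommons.org/licenses/by-nc/3.0/
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Keep clips that are NOT under a non-commercial (by-nc) license.
permissive = [r for r in rows if "by-nc" not in r["license"]]
```

The same pattern applies to filtering by uploader for attribution purposes.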
## Credits & Licence
- **!!!TERMS OF USE!!!**: **By downloading files in this repository, you agree that you will use them <u> for research purposes only </u>. If you want to use Freesound clips in LAION-Audio-630K for commercial purposes, please contact Frederic Font Corbera at frederic.font@upf.edu.**
### Freesound Credit:
All audio clips from Freesound are released under Creative Commons (CC) licenses, while each clip has its own license as defined by the clip uploader in Freesound, some of them requiring attribution to their original authors and some forbidding further commercial reuse. Specifically, here are the statistics on the licenses of audio clips involved in LAION-Audio-630K:
| License | Number of Samples |
| :--- | :--- |
| http://creativecommons.org/publicdomain/zero/1.0/ | 260134 |
| https://creativecommons.org/licenses/by/4.0/ | 97090 |
| http://creativecommons.org/licenses/by/3.0/ | 89337 |
| http://creativecommons.org/licenses/by-nc/3.0/ | 31680 |
| https://creativecommons.org/licenses/by-nc/4.0/ | 26736 |
| http://creativecommons.org/licenses/sampling+/1.0/ | 11116 |
## Acknowledgement
The whole collection process as well as all usage of the LAION-Audio-630K are conducted by the German non-profit research organization [LAION](https://laion.ai/). All contributors and collectors of the dataset are considered open-source contributors affiliated with LAION. These community contributors (Discord ids) include, but are not limited to: @marianna13#7139, @Chr0my#0173, @PiEquals4#1909, @Yuchen Hui#8574, @Antoniooooo#4758, @IYWO#9072, krishna#1648, @dicknascarsixtynine#3885, and @turian#1607. We would like to thank all of them for their efforts on the LAION-Audio-630k dataset.
[
-0.049896240234375,
-0.00690460205078125,
0.030242919921875,
0.0213623046875,
-0.026123046875,
-0.011260986328125,
-0.0098876953125,
-0.0274810791015625,
0.03985595703125,
0.052978515625,
-0.0557861328125,
-0.048583984375,
-0.02972412109375,
-0.0037727355957... |
tasksource/corr2cause | 2023-06-30T17:56:41.000Z | [
"license:mit",
"region:us"
] | tasksource | null | null | 0 | 40 | 2023-06-28T14:07:19 | ---
license: mit
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 791933574
num_examples: 411452
- name: dev
num_bytes: 3140558
num_examples: 2246
- name: test
num_bytes: 2415937
num_examples: 2246
download_size: 11038753
dataset_size: 797490069
---
https://github.com/causalNLP/corr2cause/
The HF dataset provided by the author cannot be directly loaded. We use the NLI subset, which is the most general task.
```
@article{jin2023can,
title={Can Large Language Models Infer Causation from Correlation?},
author={Jin, Zhijing and Liu, Jiarui and Lyu, Zhiheng and Poff, Spencer and Sachan, Mrinmaya and Mihalcea, Rada and Diab, Mona and Sch{\"o}lkopf, Bernhard},
journal={arXiv preprint arXiv:2306.05836},
year={2023}
}
``` | 921 | [
[
-0.0034770965576171875,
-0.072021484375,
0.03900146484375,
0.03314208984375,
0.00601959228515625,
-0.027313232421875,
-0.00928497314453125,
-0.057159423828125,
0.0269317626953125,
0.049896240234375,
-0.034515380859375,
-0.0055999755859375,
-0.033538818359375,
... |
Aznor/MeetingBank-original | 2023-08-07T09:50:07.000Z | [
"task_categories:summarization",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.17529",
"region:us"
] | Aznor | null | null | 0 | 40 | 2023-08-07T09:40:38 | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
---
This dataset is the original train-validation-test split from the [MeetingBank dataset](https://meetingbank.github.io/) used to train and evaluate the summarization models in the original paper cited below.
**Overview**
MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agendas, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. The dataset contains 6,892 segment-level summarization instances for training and evaluating performance.
**Acknowledgement**
Please cite the following paper in work that makes use of this dataset:
[MeetingBank: A Benchmark Dataset for Meeting Summarization](https://arxiv.org/abs/2305.17529) \
Yebowen Hu, Tim Ganter, Hanieh Deilamsalehy, Franck Dernoncourt, Hassan Foroosh, Fei Liu \
In main conference of Association for Computational Linguistics (ACL’23), Toronto, Canada.
**Bibtex**
```
@inproceedings{hu-etal-2023-meetingbank,
title = "MeetingBank: A Benchmark Dataset for Meeting Summarization",
author = "Yebowen Hu and Tim Ganter and Hanieh Deilamsalehy and Franck Dernoncourt and Hassan Foroosh and Fei Liu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)",
month = July,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
```
**Resources**
The MeetingBank dataset will be hosted at Zenodo. The audio files of each meeting will be hosted individually on Hugging Face. All resources will include meeting audio, transcripts, the MeetingBank main JSON file, summaries from 6 systems, and human annotations.
**Summary, Segments Transcripts and VideoList:** [zenodo](https://zenodo.org/record/7989108)
**Meeting Audios:** [HuggingFace](https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio)
**Meeting Transcripts:** [HuggingFace](https://huggingface.co/datasets/lytang/MeetingBank-transcript)
Some scripts can be found in github repo [MeetingBank_Utils](https://github.com/YebowenHu/MeetingBank-utils) | 2,408 | [
[
-0.0391845703125,
-0.025177001953125,
0.0147247314453125,
0.0014181137084960938,
-0.0254974365234375,
-0.003925323486328125,
-0.034515380859375,
-0.040283203125,
0.02197265625,
0.0263824462890625,
-0.05841064453125,
-0.031768798828125,
-0.0228118896484375,
0... |
glaiveai/glaive-function-calling | 2023-09-27T18:04:36.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | glaiveai | null | null | 29 | 40 | 2023-08-07T17:51:48 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
This dataset consists of 52k samples generated through [Glaive](https://glaive.ai) for the task of function calling, in the following format-
```
SYSTEM: You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed-
{
  JSON function definition
}
USER: user message
ASSISTANT: assistant message
Function call invocations are formatted as-
ASSISTANT: <functioncall> {json function call}
Response to the function call is formatted as-
FUNCTION RESPONSE: {json function response}
```
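Given a raw sample in the format above, the turns and any function-call payloads can be recovered with a small parser. The sketch below assumes the literal role markers shown in this card; the sample conversation and function name are invented:

```python
import json
import re

# Invented sample following the role markers described above.
sample = (
    "SYSTEM: You are an helpful assistant who has access to the following functions...\n"
    "USER: What is the weather like in Paris?\n"
    'ASSISTANT: <functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}}\n'
    'FUNCTION RESPONSE: {"temperature_c": 21}\n'
    "ASSISTANT: It is currently 21 degrees Celsius in Paris."
)

# Each line starts with a role marker; capture (role, content) pairs.
turns = re.findall(r"^(SYSTEM|USER|ASSISTANT|FUNCTION RESPONSE): (.*)$", sample, re.M)

# Extract the JSON body of every function call the assistant makes.
calls = [
    json.loads(content.split("<functioncall>", 1)[1])
    for role, content in turns
    if role == "ASSISTANT" and "<functioncall>" in content
]
```

A parser like this is handy for converting the flat transcript into structured chat turns for training.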
There are also samples which do not have any function invocations, multiple invocations and samples with no functions presented and invoked to keep the data balanced. | 824 | [
[
0.0034694671630859375,
-0.04180908203125,
0.0205230712890625,
0.0096282958984375,
-0.019256591796875,
-0.00998687744140625,
0.0182647705078125,
-0.021820068359375,
0.0209197998046875,
0.0699462890625,
-0.0703125,
-0.038909912109375,
-0.0224761962890625,
0.01... |
amankhandelia/namo_speech_dataset | 2023-10-19T06:58:00.000Z | [
"region:us"
] | amankhandelia | null | null | 0 | 40 | 2023-08-10T13:05:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 23048334918.04
num_examples: 255210
download_size: 22741882513
dataset_size: 23048334918.04
---
# Dataset Card for "test_concat_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
[
-0.050018310546875,
-0.022247314453125,
-0.013641357421875,
0.01519775390625,
-0.026031494140625,
0.00493621826171875,
0.0098419189453125,
-0.01256561279296875,
0.04254150390625,
0.03326416015625,
-0.053680419921875,
-0.041748046875,
-0.035064697265625,
-0.0... |
sandipanp/public_dataset | 2023-08-16T10:27:26.000Z | [
"region:us"
] | sandipanp | null | null | 0 | 40 | 2023-08-16T10:26:30 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
allenai/objaverse-xl | 2023-10-31T16:46:54.000Z | [
"language:en",
"license:odc-by",
"arxiv:2307.05663",
"region:us"
] | allenai | null | null | 34 | 40 | 2023-08-17T17:50:21 | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse-XL
<a href="//arxiv.org/abs/2307.05663" target="_blank">
<img src="https://img.shields.io/badge/arXiv-2307.05663-<COLOR>">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities.
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
Polycam data is made available to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
| 3,824 | [
[
-0.05767822265625,
-0.059417724609375,
0.047821044921875,
0.01251983642578125,
-0.00893402099609375,
-0.024169921875,
0.00732421875,
-0.05291748046875,
0.0235443115234375,
0.033294677734375,
-0.032958984375,
-0.024993896484375,
-0.039825439453125,
0.02062988... |
MBZUAI-LLM/SlimPajama-627B-DC | 2023-09-20T06:26:19.000Z | [
"task_categories:text-generation",
"language:en",
"license:mit",
"arxiv:2309.10818",
"region:us"
] | MBZUAI-LLM | null | null | 5 | 40 | 2023-09-08T23:58:27 | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama-627B-divided
---
### Dataset Description:
This is a split version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) that divides data based on its sources.
The content of this dataset is the same as SlimPajama-627B.
We divide the data by source based on the "redpajama_setname" field and save each source in its own directory, which is convenient for future research on dataset combinations.
This dataset consists of 15,967 jsonl files and is ~ 883G compressed.
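A minimal sketch of the bucketing idea, grouping jsonl records by their `redpajama_setname` meta field. The records and set names below are invented; the real split operates on the full jsonl files:

```python
import json
from collections import defaultdict

# Invented records in a SlimPajama-like jsonl layout.
lines = [
    '{"text": "def f():\\n    pass", "meta": {"redpajama_setname": "RedPajamaGithub"}}',
    '{"text": "Once upon a time...", "meta": {"redpajama_setname": "RedPajamaBook"}}',
    '{"text": "import os", "meta": {"redpajama_setname": "RedPajamaGithub"}}',
]

buckets = defaultdict(list)
for line in lines:
    record = json.loads(line)
    buckets[record["meta"]["redpajama_setname"]].append(record)

# Each bucket would then be written to its own directory,
# e.g. RedPajamaGithub/part-000.jsonl, one record per line.
```

This keeps each source self-contained so that arbitrary source combinations can be re-mixed later.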
### Primary Usage:
This dataset is used for our study: [SlimPajama-DC: Understanding Data Combinations for LLM Training](https://arxiv.org/abs/2309.10818).
For more details about the content in this dataset, please refer to the original [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
### License:
Please refer to the licenses of the data subsets you use.
- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
- [C4 license](https://huggingface.co/datasets/allenai/c4#license)
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
- [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
- [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
- [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange) | 1,635 | [
[
-0.037017822265625,
-0.0223541259765625,
0.01473236083984375,
0.03369140625,
-0.024383544921875,
-0.001857757568359375,
-0.01381683349609375,
-0.0278778076171875,
0.035369873046875,
0.05230712890625,
-0.05413818359375,
-0.03631591796875,
-0.04840087890625,
0... |
hakanssonjesper/dataset-llama | 2023-10-01T16:39:18.000Z | [
"region:us"
] | hakanssonjesper | null | null | 0 | 40 | 2023-09-15T14:21:39 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 65284594.45526487
num_examples: 45592
- name: validation
num_bytes: 16322580.544735134
num_examples: 11399
download_size: 38476271
dataset_size: 81607175.0
---
# Dataset Card for "dataset-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 491 | [
[
-0.03277587890625,
-0.01375579833984375,
0.018280029296875,
0.02752685546875,
-0.034271240234375,
0.018829345703125,
0.031982421875,
-0.020599365234375,
0.07415771484375,
0.03765869140625,
-0.05560302734375,
-0.05706787109375,
-0.055877685546875,
-0.00313568... |
mirfan899/uner-ner | 2023-10-15T09:16:26.000Z | [
"region:us"
] | mirfan899 | null | null | 0 | 40 | 2023-09-21T17:27:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': DATE
'1': DESIGNATION
'2': LOCATION
'3': NUMBER
'4': O
'5': ORGANIZATION
'6': PERSON
'7': TIME
splits:
- name: train
num_bytes: 682695
num_examples: 1145
- name: validation
num_bytes: 302036
num_examples: 491
- name: test
num_bytes: 302036
num_examples: 491
download_size: 0
dataset_size: 1286767
---
# Dataset Card for "uner-ner"
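The `class_label` names in the metadata above map the integer `ner_tags` back to label strings; a small decoding sketch (the example tokens and tag sequence are invented):

```python
# Label names in index order, copied from the class_label section above.
NAMES = ["DATE", "DESIGNATION", "LOCATION", "NUMBER", "O",
         "ORGANIZATION", "PERSON", "TIME"]

def decode(tag_ids):
    """Map a sequence of integer ner_tags to their label names."""
    return [NAMES[i] for i in tag_ids]

tokens = ["Imran", "visited", "Lahore", "yesterday"]
labels = decode([6, 4, 2, 0])  # one tag per token
```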
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 924 | [
[
-0.045928955078125,
-0.0182952880859375,
0.006557464599609375,
0.006649017333984375,
-0.007015228271484375,
0.0036525726318359375,
0.0170745849609375,
-0.021026611328125,
0.0631103515625,
0.030181884765625,
-0.047271728515625,
-0.049346923828125,
-0.035339355468... |
euclaise/WritingPromptsX | 2023-09-22T14:37:38.000Z | [
"size_categories:1M<n<10M",
"license:cc0-1.0",
"region:us"
] | euclaise | null | null | 0 | 40 | 2023-09-22T14:22:28 | ---
dataset_info:
features:
- name: post_title
dtype: string
- name: body
dtype: string
- name: score
dtype: int64
- name: gilded
dtype: int64
- name: post_score
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2040557544
num_examples: 1245546
download_size: 1016138545
dataset_size: 2040557544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc0-1.0
size_categories:
- 1M<n<10M
---
# Dataset Card for "WritingPromptsX"
Comments from r/WritingPrompts, up to 12-2022, from PushShift. Inspired by [WritingPrompts](https://huggingface.co/datasets/euclaise/writingprompts), but a bit more complete. | 732 | [
[
-0.0335693359375,
-0.01137542724609375,
0.037445068359375,
0.042633056640625,
-0.0198211669921875,
-0.023651123046875,
0.00350189208984375,
-0.03924560546875,
0.04486083984375,
0.054107666015625,
-0.10699462890625,
-0.036865234375,
-0.0391845703125,
0.014251... |
fahrialfiansyah/openstax-sample | 2023-10-03T14:47:58.000Z | [
"region:us"
] | fahrialfiansyah | null | null | 0 | 40 | 2023-10-03T12:52:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-xsum-default-e3e096-60495145410 | 2023-10-04T17:19:17.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 40 | 2023-10-04T16:46:55 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: google/pegasus-xsum
metrics: ['bertscore']
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@zuzannad1](https://huggingface.co/zuzannad1) for evaluating this model. | 803 | [
[
-0.0390625,
-0.00485992431640625,
0.01410675048828125,
0.0032405853271484375,
-0.01324462890625,
-0.01500701904296875,
0.0076904296875,
-0.0289306640625,
0.03662109375,
0.02886962890625,
-0.08465576171875,
-0.0089569091796875,
-0.045989990234375,
-0.01335906... |
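The `eval_info` block in the AutoTrain card above declares a `col_mapping` (task column `text` drawn from dataset column `document`, `target` from `summary`). A minimal sketch of applying such a mapping to one record; the record contents here are placeholders, not data from xsum:

```python
# col_mapping as declared in the eval_info block above:
# task column name -> source dataset column name.
col_mapping = {"text": "document", "target": "summary"}

def remap(record, mapping):
    """Rename a record's columns to the names the task expects."""
    return {new: record[old] for new, old in mapping.items()}

record = {"document": "Full article text...", "summary": "Short summary."}
print(remap(record, col_mapping))
```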
emozilla/proofpile-test-tokenized-mistral | 2023-10-07T03:18:31.000Z | [
"region:us"
] | emozilla | null | null | 0 | 40 | 2023-10-07T03:17:40 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: train
num_bytes: 1647980074
num_examples: 46251
download_size: 554081392
dataset_size: 1647980074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "proofpile-test-tokenized-mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 627 | [
[
-0.036651611328125,
-0.0188446044921875,
-0.0004010200500488281,
0.0187530517578125,
-0.0088653564453125,
-0.0081939697265625,
0.0162506103515625,
-0.0004172325134277344,
0.043853759765625,
0.026092529296875,
-0.034027099609375,
-0.048492431640625,
-0.0479736328... |
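The proofpile card above stores pre-tokenized columns (`input_ids` as int32, `attention_mask` as int8) alongside a derived `tokenized_len`. A minimal sketch of how such a length column is typically derived per record; the token ids below are placeholders, not real tokenizer output:

```python
def add_tokenized_len(example):
    """Derive tokenized_len from the pre-tokenized input_ids column."""
    example["tokenized_len"] = len(example["input_ids"])
    return example

# Placeholder record mirroring the card's schema.
row = {"input_ids": [1, 415, 2245], "attention_mask": [1, 1, 1]}
print(add_tokenized_len(row)["tokenized_len"])  # 3
```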