id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
n0w0f/nomad-structure-csv | n0w0f | 2023-11-12T21:08:02Z | 14 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-11-12T21:08:02Z | 2023-11-12T18:05:30.000Z | 2023-11-12T18:05:30 | ---
license: cc-by-4.0
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leonvanbokhorst/hboi_test | leonvanbokhorst | 2023-11-12T19:32:20Z | 14 | 0 | null | [
"region:us"
] | 2023-11-12T19:32:20Z | 2023-11-12T19:32:14.000Z | 2023-11-12T19:32:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 151364.55566905005
num_examples: 900
- name: test
num_bytes: 13286.44433094995
num_examples: 79
download_size: 65869
dataset_size: 164651.0
---
# Dataset Card for "hboi_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6467028260231018,
-0.24608764052391052,
0.016233380883932114,
0.12645167112350464,
-0.09869109839200974,
-0.06266883760690689,
0.5900728106498718,
-0.014378366060554981,
0.5515879392623901,
0.38122135400772095,
-0.7344348430633545,
-0.6052399277687073,
-0.4152127206325531,
-0.3030142784... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davidgaofc/techdebt_label | davidgaofc | 2023-11-15T00:07:50Z | 14 | 0 | null | [
"region:us"
] | 2023-11-15T00:07:50Z | 2023-11-13T02:30:05.000Z | 2023-11-13T02:30:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: CommitHash
dtype: string
- name: NewPath
dtype: string
- name: Diff
dtype: string
- name: Message
dtype: string
splits:
- name: train
num_bytes: 6172686
num_examples: 8793
- name: test
num_bytes: 1542823
num_examples: 2199
download_size: 2192562
dataset_size: 7715509
---
# Dataset Card for "techdebt_label"
This dataset was generated from [The Technical Debt Dataset](https://github.com/clowee/The-Technical-Debt-Dataset) created by Lenarduzzi et al.; the full citation is given below.
## Dataset Details and Structure
The labels for the dataset were provided by the SonarQube software cited in the paper and matched to the diff of the commit where the message was raised. Each diff was then cleaned to include only the lines of code that were added.
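As an illustrative sketch (not the authors' actual cleaning pipeline), keeping only the added lines of a unified diff can be done by filtering on the leading `+` marker while skipping the `+++` file header:

```python
def added_lines(diff: str) -> list[str]:
    """Return only the added lines of a unified diff, stripped of the leading '+'."""
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]

example_diff = """\
--- a/Foo.java
+++ b/Foo.java
@@ -1,2 +1,3 @@
 int x = 1;
+int y = 2;
-int z = 3;
"""
print(added_lines(example_diff))  # ['int y = 2;']
```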
## Bias, Risks, and Limitations
Beware of the limited sample size and label variety in the dataset. In addition, the queries used to extract this data are still being verified for correctness.
## Recommendations
This dataset is under active revision and may change; please keep that in mind when using it.
## References
Valentina Lenarduzzi, Nyyti Saarimäki, Davide Taibi. The Technical Debt Dataset. Proceedings for the 15th Conference on Predictive Models and Data Analytics in Software Engineering. Brazil. 2019. | [
-0.466068834066391,
-0.5077477097511292,
0.03131551668047905,
0.18844203650951385,
-0.3882276117801666,
0.48624661564826965,
0.25756576657295227,
-0.335251122713089,
0.4103684425354004,
0.5442306399345398,
-0.6721015572547913,
-0.8031367063522339,
-0.5291808843612671,
-0.3111269176006317,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wesley7137/autotrain_qa_neuro | wesley7137 | 2023-11-13T04:52:15Z | 14 | 0 | null | [
"region:us"
] | 2023-11-13T04:52:15Z | 2023-11-13T03:45:19.000Z | 2023-11-13T03:45:19 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhangshuoming/c_x86_O3_exebench_json_cleaned | zhangshuoming | 2023-11-13T08:21:34Z | 14 | 1 | null | [
"region:us"
] | 2023-11-13T08:21:34Z | 2023-11-13T08:20:48.000Z | 2023-11-13T08:20:48 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1268266964.093047
num_examples: 725290
download_size: 200600341
dataset_size: 1268266964.093047
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c_x86_O3_exebench_json_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5933070778846741,
-0.36194533109664917,
0.17885127663612366,
-0.05128267779946327,
-0.29683390259742737,
0.15382084250450134,
0.038831643760204315,
-0.3755839467048645,
0.7295221090316772,
0.7868071794509888,
-0.6267447471618652,
-0.8479501008987427,
-0.35912659764289856,
-0.25354886054... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
avgalaida/faq_gu_covid_vac | avgalaida | 2023-11-13T08:56:40Z | 14 | 0 | null | [
"region:us"
] | 2023-11-13T08:56:40Z | 2023-11-13T08:54:44.000Z | 2023-11-13T08:54:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: context
dtype: string
- name: question
dtype: string
- name: answer
struct:
- name: answer_start
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6559
num_examples: 4
- name: validation
num_bytes: 6459
num_examples: 4
download_size: 34057
dataset_size: 13018
---
# Dataset Card for "faq_gu_covid_vac"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6082015037536621,
-0.48188963532447815,
0.053243547677993774,
-0.0005984769086353481,
-0.0914449393749237,
-0.1845574676990509,
0.4890628159046173,
0.13770894706249237,
0.6622651219367981,
0.45589160919189453,
-0.8468155264854431,
-0.9098663330078125,
-0.45382925868034363,
-0.2995098829... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rishiraj/bengalichat | rishiraj | 2023-11-16T09:14:55Z | 14 | 2 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"language:bn",
"license:cc-by-nc-4.0",
"arxiv:2203.02155",
"region:us"
] | 2023-11-16T09:14:55Z | 2023-11-15T17:58:04.000Z | 2023-11-15T17:58:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 66596881
num_examples: 9500
- name: test
num_bytes: 3573980
num_examples: 500
download_size: 27678311
dataset_size: 70170861
task_categories:
- conversational
- text-generation
language:
- bn
pretty_name: Bengali Chat
license: cc-by-nc-4.0
---
# Dataset Card for Bengali Chat
We know that current English-first LLMs don't work well for many other languages, in terms of both performance and latency. Building instruction datasets for non-English languages is an important challenge that needs to be solved.
To address this problem, I release 2 new datasets, [rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) & [rishiraj/hindichat](https://huggingface.co/datasets/rishiraj/hindichat/), of 10,000 instructions and demonstrations each. This data can be used for supervised fine-tuning (SFT) to make multilingual language models follow instructions better.
### Dataset Summary
[rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is translated from [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots/), which comprises mostly single-turn instructions across the following categories:
| Category | Count |
|:-----------|--------:|
| Generation | 4560 |
| Open QA | 1240 |
| Brainstorm | 1120 |
| Chat | 850 |
| Rewrite | 660 |
| Summarize | 420 |
| Coding | 350 |
| Classify | 350 |
| Closed QA | 260 |
| Extract | 190 |
### Languages
The data in [rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) are in Bengali (BCP-47 bn).
### Data Fields
The data fields are as follows:
* `prompt`: Describes the task the model should perform.
* `prompt_id`: A unique ID for the prompt.
* `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content.
* `category`: Which category the example belongs to (e.g. `Chat` or `Coding`).
* `text`: Content of `messages` in a format that is compatible with dataset_text_field of SFTTrainer.
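As a hypothetical sketch of how the `messages` array maps onto the `text` field (the `<|role|>` delimiter below is an illustrative choice, not the dataset's actual template), the flattening could look like:

```python
def messages_to_text(messages: list[dict]) -> str:
    """Flatten a list of {role, content} messages into a single training string.

    The <|role|> delimiter is an assumed format for illustration only.
    """
    return "\n".join(f"<|{m['role']}|>\n{m['content']}" for m in messages)

example = [
    {"role": "user", "content": "What is the capital of India?"},
    {"role": "assistant", "content": "New Delhi."},
]
print(messages_to_text(example))
```

A string-valued column like this is what `SFTTrainer` expects when you point `dataset_text_field` at it.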
### Data Splits
| | train_sft | test_sft |
|---------------|------:| ---: |
| bengalichat | 9500 | 500 |
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.
### Citation Information
```
@misc{bengalichat,
author = {Rishiraj Acharya},
title = {Bengali Chat},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/rishiraj/bengalichat}}
}
``` | [
-0.08529293537139893,
-0.7232986092567444,
-0.09116391837596893,
0.544959306716919,
-0.23678910732269287,
0.004747185856103897,
-0.19500313699245453,
-0.285526305437088,
0.3607453405857086,
0.5488638281822205,
-0.7087580561637878,
-0.6901996731758118,
-0.4843425750732422,
0.187050625681877... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Weni/Zeroshot-multilanguages-2.1 | Weni | 2023-11-17T14:53:48Z | 14 | 0 | null | [
"region:us"
] | 2023-11-17T14:53:48Z | 2023-11-17T14:25:40.000Z | 2023-11-17T14:25:40 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
smangrul/assistant_chatbot_dataset | smangrul | 2023-11-17T14:46:33Z | 14 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-17T14:46:33Z | 2023-11-17T14:45:52.000Z | 2023-11-17T14:45:52 | ---
license: unknown
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bitadin/one-one-attributes | bitadin | 2023-11-21T23:27:36Z | 14 | 0 | null | [
"region:us"
] | 2023-11-21T23:27:36Z | 2023-11-17T15:50:30.000Z | 2023-11-17T15:50:30 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 241257350
num_examples: 423530
download_size: 42243320
dataset_size: 241257350
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/tlc_slice2 | mengmengmmm | 2023-11-20T15:47:23Z | 14 | 0 | null | [
"region:us"
] | 2023-11-20T15:47:23Z | 2023-11-20T15:47:04.000Z | 2023-11-20T15:47:04 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zarakun/youtube_ua_subtitles_test | Zarakun | 2023-11-21T14:44:15Z | 14 | 0 | null | [
"task_categories:automatic-speech-recognition",
"region:us"
] | 2023-11-21T14:44:15Z | 2023-11-20T16:55:36.000Z | 2023-11-20T16:55:36 | ---
task_categories:
- automatic-speech-recognition
pretty_name: MangoSpeech
configs:
- config_name: rozdympodcast
data_files: "data/rozdympodcast.parquet"
- config_name: opodcast
data_files: "data/opodcast.parquet"
- config_name: test
data_files: "data/test.parquet"
---
# The list of all subsets in the dataset
Each subset is generated by splitting videos from a particular Ukrainian YouTube channel.
All subsets are provided as a test split:
- "opodcast" subset is from channel "О! ПОДКАСТ"
- "rozdympodcast" subset is from channel "Роздум | Подкаст"
- "test" subset is just a small subset of samples
# Loading a particular subset
```
>>> from datasets import load_dataset
>>> data_files = {"train": "data/<your_subset>.parquet"}
>>> data = load_dataset("Zarakun/youtube_ua_subtitles_test", data_files=data_files)
>>> data
DatasetDict({
train: Dataset({
features: ['audio', 'rate', 'duration', 'sentence'],
num_rows: <some_number>
})
})
``` | [
-0.661379873752594,
-0.517618715763092,
-0.27393805980682373,
0.018057141453027725,
-0.5919249653816223,
0.020114963874220848,
-0.26958736777305603,
0.49361786246299744,
0.5939579010009766,
0.5933604836463928,
-1.1180731058120728,
-0.3820410966873169,
-0.4886628985404968,
0.020839750766754... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nguyenthanhdo/zac2023-math-en | nguyenthanhdo | 2023-11-21T15:25:30Z | 14 | 0 | null | [
"region:us"
] | 2023-11-21T15:25:30Z | 2023-11-21T15:25:25.000Z | 2023-11-21T15:25:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: explanation
dtype: string
- name: answer
dtype: string
splits:
- name: public_test
num_bytes: 31204
num_examples: 189
download_size: 18758
dataset_size: 31204
configs:
- config_name: default
data_files:
- split: public_test
path: data/public_test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
denysdios/StellARset-dialogue-text-en-alpha | denysdios | 2023-11-22T22:03:01Z | 14 | 0 | null | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-22T22:03:01Z | 2023-11-22T13:05:01.000Z | 2023-11-22T13:05:01 | ---
license: apache-2.0
dataset_info:
features:
- name: data
dtype: string
- name: dialogue_greeting
dtype: int64
- name: dialogue_forbidden_words
dtype: int64
- name: dialogue_sentiment
dtype: int64
- name: dialogue_sided
dtype: int64
- name: dialogue_end
dtype: int64
splits:
- name: train
num_bytes: 15380
num_examples: 10
download_size: 21616
dataset_size: 15380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
---
This dialogue dataset was produced from AI-generated data (LLM outputs). The values were manually verified. This is merely an alpha test, and the given data is a test set.
Additional information for values:
- `dialogue_greeting`: 2 if both sides greet each other at the start (e.g. "hi", "hello", "greetings", "hi there"), 1 if only one side greets, else 0
- `dialogue_forbidden_words`: 1 if any inappropriate or offensive word is used, else 0
- `dialogue_sentiment`: 0 if the dialogue has an overall negative sentiment, else 1
- `dialogue_sided`: 1 if one side talks consecutively, else 0
- `dialogue_end`: 0 if the dialogue does not finalize, else 1
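As a rough sketch (not part of the dataset itself), a row's annotation flags could be validated against the documented value ranges like this, assuming each row is a plain dict:

```python
def validate_annotation(row: dict) -> bool:
    """Check that every annotation flag falls within its documented range."""
    allowed = {
        "dialogue_greeting": {0, 1, 2},
        "dialogue_forbidden_words": {0, 1},
        "dialogue_sentiment": {0, 1},
        "dialogue_sided": {0, 1},
        "dialogue_end": {0, 1},
    }
    return all(row.get(key) in values for key, values in allowed.items())

row = {
    "dialogue_greeting": 2,
    "dialogue_forbidden_words": 0,
    "dialogue_sentiment": 1,
    "dialogue_sided": 0,
    "dialogue_end": 1,
}
print(validate_annotation(row))  # True
```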
| [
-0.4811134338378906,
-0.6348851919174194,
0.5639751553535461,
0.31731370091438293,
-0.2524479329586029,
0.08068021386861801,
-0.004731678869575262,
-0.33769461512565613,
0.4104889929294586,
0.6899672150611877,
-1.301061749458313,
-0.7748430967330933,
-0.524486780166626,
0.3951415419578552,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/ultrafeedback-prompts-with-ultrajudge | argilla | 2023-11-24T12:27:36Z | 14 | 0 | null | [
"region:us"
] | 2023-11-24T12:27:36Z | 2023-11-22T16:30:27.000Z | 2023-11-22T16:30:27 | ---
dataset_info:
features:
- name: source
dtype: string
- name: input
dtype: string
- name: models
sequence: string
- name: completions
list:
- name: annotations
struct:
- name: helpfulness
struct:
- name: Rating
dtype: string
- name: Rationale
dtype: string
- name: Rationale For Rating
dtype: string
- name: Type
sequence: string
- name: honesty
struct:
- name: Rating
dtype: string
- name: Rationale
dtype: string
- name: instruction_following
struct:
- name: Rating
dtype: string
- name: Rationale
dtype: string
- name: truthfulness
struct:
- name: Rating
dtype: string
- name: Rationale
dtype: string
- name: Rationale For Rating
dtype: string
- name: Type
sequence: string
- name: critique
dtype: string
- name: custom_system_prompt
dtype: string
- name: model
dtype: string
- name: overall_score
dtype: float64
- name: principle
dtype: string
- name: response
dtype: string
- name: correct_answers
sequence: string
- name: incorrect_answers
sequence: string
- name: generation_model
dtype: string
- name: generation_prompt
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: areas
list:
- name: Authenticity & Reliability
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Clarity & Transparency
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Compliance with Intent
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Practical Accuracy
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
splits:
- name: train
num_bytes: 1844998918
num_examples: 63967
download_size: 0
dataset_size: 1844998918
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
justinphan3110/sharegpt_instructions_small_en_vi_answers | justinphan3110 | 2023-11-24T01:11:15Z | 14 | 0 | null | [
"region:us"
] | 2023-11-24T01:11:15Z | 2023-11-24T01:11:14.000Z | 2023-11-24T01:11:14 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: vn
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 218457
num_examples: 424
download_size: 138882
dataset_size: 218457
---
# Dataset Card for "sharegpt_instructions_small_en_vi_answers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7205162048339844,
-0.5195655822753906,
0.3273206949234009,
0.3576391935348511,
-0.21532946825027466,
-0.35104110836982727,
0.02414626255631447,
-0.038100019097328186,
0.6561708450317383,
0.27800729870796204,
-1.0112048387527466,
-0.5833406448364258,
-0.6251119375228882,
-0.3327503204345... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
girrajjangid/databricks-dolly-1k | girrajjangid | 2023-11-24T07:37:04Z | 14 | 0 | null | [
"region:us"
] | 2023-11-24T07:37:04Z | 2023-11-24T07:37:02.000Z | 2023-11-24T07:37:02 | ---
dataset_info:
features:
- name: pre_instruction
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 896125.1526880288
num_examples: 1103
download_size: 1077566
dataset_size: 896125.1526880288
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
paul-w-qs/contracts_v6 | paul-w-qs | 2023-11-24T09:32:09Z | 14 | 0 | null | [
"region:us"
] | 2023-11-24T09:32:09Z | 2023-11-24T09:29:43.000Z | 2023-11-24T09:29:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: N_ROWS
dtype: int64
- name: N_COLS
dtype: int64
- name: FONT_SIZE
dtype: int64
- name: FONT_NAME
dtype: string
- name: BORDER_THICKNESS
dtype: int64
- name: TABLE_STYLE
dtype: string
- name: NOISED
dtype: bool
- name: LABEL_NOISE
dtype: bool
- name: JSON_LABEL
dtype: string
splits:
- name: train
num_bytes: 360922904.016
num_examples: 5364
download_size: 360853881
dataset_size: 360922904.016
---
# Dataset Card for "contracts_v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3341638445854187,
0.14720268547534943,
0.2593700587749481,
0.1279878467321396,
-0.22613416612148285,
-0.22254517674446106,
0.5149721503257751,
-0.3338617980480194,
0.645757794380188,
0.7634316682815552,
-0.7469471096992493,
-0.984348475933075,
-0.4928000569343567,
-0.2936473488807678,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
confit/emodb | confit | 2023-11-24T18:25:25Z | 14 | 0 | null | [
"region:us"
] | 2023-11-24T18:25:25Z | 2023-11-24T17:01:25.000Z | 2023-11-24T17:01:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': anxiety
'1': disgust
'2': happiness
'3': boredom
'4': neutral
'5': sadness
'6': anger
splits:
- name: train
num_bytes: 6992
num_examples: 304
- name: test
num_bytes: 5313
num_examples: 231
download_size: 6510
dataset_size: 12305
---
# Dataset Card for "emodb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7546016573905945,
-0.6114839911460876,
0.3611249327659607,
0.21917185187339783,
-0.2092742919921875,
0.03354276716709137,
0.3589745759963989,
-0.09968266636133194,
1.1139663457870483,
0.5127640962600708,
-0.8195594549179077,
-0.9079219102859497,
-0.5107062458992004,
-0.11243321746587753... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Imxxn/child-mind-institute-test | Imxxn | 2023-11-25T11:09:58Z | 14 | 0 | null | [
"region:us"
] | 2023-11-25T11:09:58Z | 2023-11-25T11:02:13.000Z | 2023-11-25T11:02:13 | ---
dataset_info:
features:
- name: series_id
dtype: string
- name: step
dtype: uint32
- name: timestamp
dtype: string
- name: anglez
dtype: float32
- name: enmo
dtype: float32
- name: awake
dtype: int64
splits:
- name: train
num_bytes: 120291840
num_examples: 1879560
download_size: 35781653
dataset_size: 120291840
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Michaelkassouf/Ferrari_SD1 | Michaelkassouf | 2023-11-25T13:52:04Z | 14 | 0 | null | [
"region:us"
] | 2023-11-25T13:52:04Z | 2023-11-25T13:50:59.000Z | 2023-11-25T13:50:59 | ---
dataset_info:
features:
- name: image
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3495120
num_examples: 35553
download_size: 1051219
dataset_size: 3495120
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/example_pairs | andersonbcdefg | 2023-11-26T03:19:15Z | 14 | 0 | null | [
"region:us"
] | 2023-11-26T03:19:15Z | 2023-11-26T03:19:12.000Z | 2023-11-26T03:19:12 | ---
dataset_info:
features:
- name: anchor
dtype: string
- name: positive
dtype: string
splits:
- name: train
num_bytes: 1985788
num_examples: 1000
download_size: 1150009
dataset_size: 1985788
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "example_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48482856154441833,
-0.46787574887275696,
0.13646817207336426,
0.23686432838439941,
-0.4593578279018402,
-0.2567481994628906,
0.2923927307128906,
0.002281342865899205,
0.90121990442276,
0.3407973349094391,
-0.6225616931915283,
-0.7121485471725464,
-0.43823084235191345,
-0.127678662538528... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zoeyki/ende-error | zoeyki | 2023-11-26T05:48:59Z | 14 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-26T05:48:59Z | 2023-11-26T05:03:38.000Z | 2023-11-26T05:03:38 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/Thermostability_reg | lhallee | 2023-11-26T18:05:23Z | 14 | 0 | null | [
"region:us"
] | 2023-11-26T18:05:23Z | 2023-11-26T18:05:18.000Z | 2023-11-26T18:05:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 2990210
num_examples: 5056
- name: valid
num_bytes: 373605
num_examples: 639
- name: test
num_bytes: 795351
num_examples: 1336
download_size: 4142780
dataset_size: 4159166
---
# Dataset Card for "Thermostability_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.45319727063179016,
-0.13173390924930573,
-0.0421217642724514,
0.11790870130062103,
-0.27653399109840393,
-0.19315814971923828,
0.09299934655427933,
0.06880351901054382,
0.8509816527366638,
0.2476881444454193,
-0.6433213353157043,
-0.5601511597633362,
-0.4225081205368042,
-0.249014541506... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deepapaikar/Llama_SC_pairs | deepapaikar | 2023-11-27T01:16:27Z | 14 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-27T01:16:27Z | 2023-11-27T01:04:41.000Z | 2023-11-27T01:04:41 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1976153
num_examples: 5346
download_size: 858001
dataset_size: 1976153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dutta18/omcs_dataset_full_with_embeds | dutta18 | 2023-11-27T03:37:18Z | 14 | 0 | null | [
"region:us"
] | 2023-11-27T03:37:18Z | 2023-11-27T03:31:24.000Z | 2023-11-27T03:31:24 | ---
dataset_info:
features:
- name: fact
dtype: string
- name: count
dtype: int64
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 4951309139
num_examples: 1578238
download_size: 5895178326
dataset_size: 4951309139
---
# Dataset Card for "omcs_dataset_full_with_embeds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6577151417732239,
-0.34343841671943665,
0.4262439012527466,
0.19189541041851044,
-0.2502034902572632,
-0.0666632279753685,
-0.03223911300301552,
0.11973527073860168,
1.0509096384048462,
0.6740930080413818,
-0.5506075620651245,
-1.058812141418457,
-0.49449169635772705,
-0.179692760109901... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Erynan/100_deon_util_shuffled | Erynan | 2023-11-27T08:41:47Z | 14 | 0 | null | [
"region:us"
] | 2023-11-27T08:41:47Z | 2023-11-27T08:41:44.000Z | 2023-11-27T08:41:44 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 73066
num_examples: 100
download_size: 17853
dataset_size: 73066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Harelix/Prompt-Injection-Mixed-Techniques-2024 | Harelix | 2023-11-27T21:36:22Z | 14 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"jailbreak",
"prompt injection",
"region:us"
] | 2023-11-27T21:36:22Z | 2023-11-27T12:42:55.000Z | 2023-11-27T12:42:55 | ---
language:
- en
tags:
- jailbreak
- prompt injection
pretty_name: Prompt Injection Dataset 2024
size_categories:
- 1K<n<10K
license: apache-2.0
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maximedb/wow | maximedb | 2021-11-23T10:09:28Z | 13 | 1 | null | [
"region:us"
] | 2021-11-23T10:09:28Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/food101 | nateraw | 2022-07-08T07:06:41Z | 13 | 1 | food-101 | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-foodspotting",
"language:en",
"license:unknown",
"region:us"
] | 2022-07-08T07:06:41Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: food101
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-foodspotting
task_categories:
- other
task_ids:
- other-other-image-classification
paperswithcode_id: food-101
---
# Dataset Card for Food-101
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Food-101 Dataset](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/)
- **Repository:** N/A
- **Paper:**[Paper](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
This dataset consists of 101 food categories, with 101,000 images in total. For each class, 250 manually reviewed test images are provided, as well as 750 training images. The training images were deliberately not cleaned and thus still contain some amount of noise, mostly in the form of intense colors and occasional wrong labels. All images were rescaled to have a maximum side length of 512 pixels.
### Supported Tasks and Leaderboards
- image-classification
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': '/root/.cache/huggingface/datasets/downloads/extracted/6e1e8c9052e9f3f7ecbcb4b90860668f81c1d36d86cc9606d49066f8da8bfb4f/food-101/images/churros/1004234.jpg',
'label': 23
}
```
### Data Fields
The data instances have the following fields:
- `image`: a `string` filepath to an image.
- `label`: an `int` classification label.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|food101|75750|25250|
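As a quick sanity check, the split sizes above follow directly from the per-class counts given in the summary (750 training and 250 test images for each of the 101 classes):

```python
# Per-class counts from the dataset summary.
num_classes = 101
train_per_class = 750
test_per_class = 250

print(num_classes * train_per_class)  # 75750
print(num_classes * test_per_class)   # 25250
```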
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{bossard14,
title = {Food-101 -- Mining Discriminative Components with Random Forests},
author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
booktitle = {European Conference on Computer Vision},
year = {2014}
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
| [
-0.4613100588321686,
-0.5900477170944214,
-0.15536797046661377,
-0.13557206094264984,
0.06639071553945541,
-0.13685840368270874,
-0.29728978872299194,
-0.5601768493652344,
0.5285432934761047,
0.4817781150341034,
-0.7404017448425293,
-0.9210007190704346,
-0.6062525510787964,
0.3882700204849... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openclimatefix/goes | openclimatefix | 2022-05-09T16:05:54Z | 13 | 2 | null | [
"license:mit",
"region:us"
] | 2022-05-09T16:05:54Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rocca/sims4-faces | rocca | 2022-03-12T06:58:39Z | 13 | 1 | null | [
"region:us"
] | 2022-03-12T06:58:39Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | A collection of >200k screenshots from the Sims 4 character creator (face and upper-torso only), using the randomize button.
* There are ~100k masculine faces (`masc` folder), ~100k feminine faces (`fem` folder), and ~12k faces with a masculine physical frame and feminine attire/makeup (`masc2fem` folder).
* All images are 917x917.
* Each image is about 40kb.
* The examples below are cropped slightly off-center, but in the actual data the characters are more centered.
* The files are named from `1.jpg` through to `N.jpg` (no zero-padding). For `fem`, `N=101499`. For `masc`, `N=103615`. For `masc2fem`, `N=12123`.
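The naming scheme above can be sketched as a small helper; the function name is illustrative and not part of the dataset itself:

```python
# Files in each folder run from 1.jpg to N.jpg with no zero-padding.
def image_names(n):
    return [f"{i}.jpg" for i in range(1, n + 1)]

print(image_names(3))  # ['1.jpg', '2.jpg', '3.jpg']
```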
## fem examples:

## masc examples:

## masc2fem examples:

| [
-0.7664993405342102,
-0.25561684370040894,
0.6249535083770752,
0.38130468130111694,
-0.16342845559120178,
0.08977662026882172,
0.3890419602394104,
0.037509575486183167,
0.07628487795591354,
1.1587178707122803,
-1.224588394165039,
-0.4004254639148712,
-0.24878741800785065,
0.902227461338043... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rubrix/sentiment-banking | rubrix | 2022-02-28T18:22:25Z | 13 | 0 | null | [
"region:us"
] | 2022-02-28T18:22:25Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seamew/THUCNews | seamew | 2021-06-22T09:02:34Z | 13 | 0 | null | [
"region:us"
] | 2021-06-22T09:02:34Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seamew/THUCNewsTitle | seamew | 2021-08-24T01:22:11Z | 13 | 0 | null | [
"region:us"
] | 2021-08-24T01:22:11Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shivam/hindi_pib_processed | shivam | 2022-01-20T17:16:52Z | 13 | 0 | null | [
"region:us"
] | 2022-01-20T17:16:52Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shivam/marathi_pib_processed | shivam | 2022-01-28T16:24:32Z | 13 | 0 | null | [
"region:us"
] | 2022-01-28T16:24:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shpotes/tfcol | shpotes | 2021-11-16T21:49:16Z | 13 | 0 | null | [
"region:us"
] | 2021-11-16T21:49:16Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sia-precision-education/pile_js | sia-precision-education | 2022-02-05T20:23:12Z | 13 | 0 | null | [
"region:us"
] | 2022-02-05T20:23:12Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sia-precision-education/sia_pile_sample | sia-precision-education | 2022-01-14T02:47:18Z | 13 | 0 | null | [
"region:us"
] | 2022-01-14T02:47:18Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tau/scientific_papers | tau | 2022-02-03T09:10:13Z | 13 | 0 | null | [
"region:us"
] | 2022-02-03T09:10:13Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tharindu/MOLD | tharindu | 2021-09-12T19:25:26Z | 13 | 0 | null | [
"region:us"
] | 2021-09-12T19:25:26Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # MOLD - {M}arathi {O}ffensive {L}anguage {D}ataset
The {M}arathi {O}ffensive {L}anguage {D}ataset (MOLD) contains a collection of 2500 annotated Marathi tweets.
The files included are:
```
MOLD
│ README.md
└───data
│ MOLD_train.csv
│ MOLD_test.csv
```
- `MOLD_train.csv`: contains 1,875 annotated tweets for the training set.
- `MOLD_test.csv`: contains 625 annotated tweets for the test set.
The dataset was annotated using crowdsourcing. The gold labels were assigned taking the agreement of six annotators into consideration. No correction has been carried out on the crowdsourcing annotations.
Each instance in MOLD has been annotated as `offensive` or `not_offensive`.
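A minimal sketch of reading MOLD-style rows with the standard library. The column names (`tweet`, `subtask_a`) are assumptions for illustration only; check the actual CSV header before relying on them:

```python
import csv
import io

# Hypothetical rows in the MOLD layout: one tweet per row, with a binary label.
sample = (
    "tweet,subtask_a\n"
    "first example tweet,offensive\n"
    "second example tweet,not_offensive\n"
)
rows = list(csv.DictReader(io.StringIO(sample)))
labels = sorted({row["subtask_a"] for row in rows})
print(labels)  # ['not_offensive', 'offensive']
```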
## Citation
If you used MOLD, please refer to this paper:
```bibtex
@InProceedings{mold,
author = {Gaikwad, Saurabh and Ranasinghe, Tharindu and Zampieri, Marcos and Homan, Christopher M.},
title = {Cross-lingual Offensive Language Identification for Low Resource Languages: The Case of Marathi},
booktitle = {Proceedings of RANLP},
year = {2021}
}
```
| [
-0.02706856094300747,
-0.7881935834884644,
-0.3300139605998993,
0.33204784989356995,
-0.6963018774986267,
0.1852131187915802,
-0.15476319193840027,
-0.4909389913082123,
0.4733896255493164,
0.5827668905258179,
-0.7079517841339111,
-0.38742876052856445,
-1.0659948587417603,
0.163656011223793... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thomwolf/very-test-dataset | thomwolf | 2021-09-17T12:11:26Z | 13 | 0 | null | [
"region:us"
] | 2021-09-17T12:11:26Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # My great dataset | [
-0.4989798069000244,
0.36667490005493164,
0.12832903861999512,
0.4949142634868622,
-0.14238812029361725,
-0.034424327313899994,
0.05521995574235916,
0.2724461257457733,
0.4238664209842682,
0.9095531105995178,
-0.28728532791137695,
-0.5607730746269226,
-0.7142897844314575,
0.069953188300132... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
toddmorrill/github-issues | toddmorrill | 2022-10-25T09:56:49Z | 13 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:unknown",
"region:us"
] | 2022-10-25T09:56:49Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- no-annotation
language_creators: []
language:
- en-US
license: []
multilinguality:
- monolingual
pretty_name: Hugging Face Github Issues
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval
---
# Dataset Card for GitHub Issues
## Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. | [
-0.4409926235675812,
-0.41138264536857605,
-0.028050635010004044,
-0.08979591727256775,
-0.3326789438724518,
0.3937123119831085,
-0.14980188012123108,
-0.2523435652256012,
0.6318988800048828,
0.49806326627731323,
-0.7246970534324646,
-0.8038663864135742,
-0.4676644802093506,
0.015201156027... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ttj/metadata_arxiv | ttj | 2021-08-05T12:45:40Z | 13 | 0 | null | [
"region:us"
] | 2021-08-05T12:45:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
uva-irlab/trec-cast-2019-multi-turn | uva-irlab | 2022-10-25T09:56:59Z | 13 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"region:us"
] | 2022-10-25T09:56:59Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- en
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- text-retrieval
task_ids:
- document-retrieval
language_bcp47:
- en-US
---
# TREC Cast 2019
[TREC Cast](http://www.treccast.ai) has released a document collection with topics and qrels, of which a subset has been annotated so that it is suitable for multi-turn conversational search.
## Dataset statistics
- Number of passages: 38,426,252
- Number of topics: 20
- Number of queries: 173
## Subsets
### CAR + MSMARCO Collection
Together, CAR and MSMARCO have a size of 6.13 GB, so downloading will take a while. You can use the collection as follows:
```python
collection = load_dataset('trec-cast-2019-multi-turn', 'test_collection')
```
The collection has the following data format:
```
docno: str
The document id format is [collection_id_paragraph_id] with collection id and paragraph id separated by an underscore.
The collection ids are in the set: {MARCO, CAR}. E.g.: CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a
text: str
The content of the passage.
```
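Given the document id format above, a docno can be split back into its collection id and paragraph id by partitioning on the first underscore; a minimal sketch:

```python
def parse_docno(docno):
    # docno format: [collection_id]_[paragraph_id], e.g. CAR_6869dee4...
    collection, _, paragraph = docno.partition("_")
    return collection, paragraph

print(parse_docno("CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a"))
# ('CAR', '6869dee46ab12f0f7060874f7fc7b1c57d53144a')
```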
#### Sample
Instead of using the entire data set, you can also download a sample set containing only 200,000 items:
```python
collection = load_dataset('trec-cast-2019-multi-turn', 'test_collection_sample')
```
### Topics
You can get the topics as follows:
```python
topics = load_dataset('trec-cast-2019-multi-turn', 'topics')
```
The topics have the following data format:
```
qid: str
Query ID of the format "topicId_questionNumber"
history: str[]
A list of queries. It can be empty for the first question in a topic.
query: str
The query
```
### Qrels
You can get the qrels as follows:
```python
qrels = load_dataset('trec-cast-2019-multi-turn', 'qrels')
```
The qrels have the following data format:
```
qid: str
Query ID of the format "topicId_questionNumber"
qrels: List[dict]
A list of dictionaries with the keys 'docno' and 'relevance'. Relevance is an integer in the range [0, 4]
``` | [
-0.5236700773239136,
-0.661327064037323,
0.39405789971351624,
0.17460063099861145,
-0.42741161584854126,
0.1882959008216858,
-0.25260940194129944,
0.2603226602077484,
0.5019423365592957,
0.5466737747192383,
-0.613614022731781,
-0.5237865447998047,
-0.3627501130104065,
0.048438601195812225,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wicho/stylekqc-style | wicho | 2022-02-22T16:25:19Z | 13 | 2 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-02-22T16:25:19Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: cc-by-sa-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuvalkirstain/summ_screen_fd_t5_lm | yuvalkirstain | 2022-01-09T15:31:46Z | 13 | 0 | null | [
"region:us"
] | 2022-01-09T15:31:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
z-uo/female-LJSpeech-italian | z-uo | 2022-10-23T04:56:44Z | 13 | 1 | null | [
"multilinguality:monolingual",
"language:it",
"region:us"
] | 2022-10-23T04:56:44Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
task_ids:
- tts
language:
- it
task_categories:
- tts
multilinguality:
- monolingual
---
# Italian Female Voice
This dataset is an Italian version of [LJSpeech](https://keithito.com/LJ-Speech-Dataset/), created by merging all female audio of the same speaker found in the [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/).
This dataset contains 8h 23m of a single speaker recorded at 16,000 Hz, making it a valid choice for training an Italian TTS model with a female voice.
-0.47920748591423035,
-0.328063040971756,
0.038118887692689896,
-0.004634067416191101,
-0.09792771935462952,
0.30149513483047485,
-0.14423410594463348,
-0.3259493112564087,
0.381655216217041,
0.32299309968948364,
-1.1311203241348267,
-0.43395885825157166,
-0.49930858612060547,
0.2879130840... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cat-claws/face-verification-with-features | cat-claws | 2021-12-28T16:22:30Z | 13 | 1 | null | [
"region:us"
] | 2021-12-28T16:22:30Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/hashset_distant_sampled | ruanchaves | 2022-10-20T19:13:24Z | 13 | 0 | null | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-10-20T19:13:24Z | 2022-03-04T22:13:50.000Z | 2022-03-04T22:13:50 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: HashSet Distant Sampled
tags:
- word-segmentation
---
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
HashSet Distant Sampled is a sample of 20,000 camel cased hashtags from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
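The whitespace-only relationship between `hashtag` and `segmentation` described above can be checked mechanically; a minimal sketch:

```python
def is_valid_pair(hashtag, segmentation):
    # The segmentation must reduce to the hashtag once whitespace is removed.
    return segmentation.replace(" ", "") == hashtag

print(is_valid_pair("Youth4Nation", "Youth 4 Nation"))  # True
```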
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | [
-0.5124838948249817,
-0.6725721955299377,
0.27651354670524597,
0.022129425778985023,
-0.4157800078392029,
0.28262442350387573,
-0.26289698481559753,
-0.7446957230567932,
0.16287273168563843,
-0.06352057307958603,
-0.5618054866790771,
-0.7847047448158264,
-0.5480558276176453,
0.043016973882... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/hashset_distant | ruanchaves | 2022-10-20T19:13:21Z | 13 | 0 | null | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-10-20T19:13:21Z | 2022-03-04T22:36:15.000Z | 2022-03-04T22:36:15 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: HashSet Distant
tags:
- word-segmentation
---
# Dataset Card for HashSet Distant
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | [
-0.48197248578071594,
-0.6456729769706726,
0.28060415387153625,
0.02067846804857254,
-0.41353410482406616,
0.2750352621078491,
-0.2797783613204956,
-0.7219157814979553,
0.1357155591249466,
-0.09613247215747833,
-0.5320984125137329,
-0.7554410696029663,
-0.5609700083732605,
0.04412752017378... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/hashset_manual | ruanchaves | 2022-10-20T19:13:18Z | 13 | 0 | null | [
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
... | 2022-10-20T19:13:18Z | 2022-03-05T05:52:48.000Z | 2022-03-05T05:52:48 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
pretty_name: HashSet Manual
tags:
- word-segmentation
---
# Dataset Card for HashSet Manual
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented hashtag, named entity annotations, and flags for whether the hashtag contains a mix of Hindi and English tokens and/or non-English tokens.
### Languages
Mostly Hindi and English.
## Dataset Structure
### Data Instances
```
{
"index": 10,
"hashtag": "goodnewsmegan",
"segmentation": "good news megan",
"spans": {
"start": [
8
],
"end": [
13
],
"text": [
"megan"
]
},
"source": "roman",
"gold_position": null,
"mix": false,
"other": false,
"ner": true,
"annotator_id": 1,
"annotation_id": 2088,
"created_at": "2021-12-30 17:10:33.800607",
"updated_at": "2021-12-30 17:10:59.714840",
"lead_time": 3896.182,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"candidate": [
"goodnewsmegan",
"goodnewsmeg an",
"goodnews megan",
"goodnewsmega n",
"go odnewsmegan",
"good news megan",
"good newsmegan",
"g oodnewsmegan",
"goodnewsme gan",
"goodnewsm egan"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `spans`: named entity spans.
- `source`: data source.
- `gold_position`: position of the gold segmentation (the `segmentation` field) inside the `rank` candidate list.
- `mix`: The hashtag has a mix of English and Hindi tokens.
- `other`: The hashtag has non-English tokens.
- `ner`: The hashtag has named entities.
- `annotator_id`: annotator ID.
- `annotation_id`: annotation ID.
- `created_at`: Creation date timestamp.
- `updated_at`: Update date timestamp.
- `lead_time`: lead time field annotated by Kodali et al.
- `rank`: rank of each candidate selected by a baseline word segmenter (WordBreaker).
- `candidate`: candidates selected by a baseline word segmenter (WordBreaker), stored inside `rank`.
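Judging by the example record above, the span offsets appear to index into the unsegmented hashtag with an exclusive end (`"goodnewsmegan"[8:13] == "megan"`). A minimal sketch of reading the parallel lists back out; the helper name is ours, not part of the dataset:

```python
# Instance mirroring the example record above.
instance = {
    "hashtag": "goodnewsmegan",
    "segmentation": "good news megan",
    "spans": {"start": [8], "end": [13], "text": ["megan"]},
}

def extract_entities(instance):
    """Read the parallel span lists back into entity substrings.

    Based on the example, offsets index the unsegmented hashtag
    with an exclusive end.
    """
    spans = instance["spans"]
    return [
        instance["hashtag"][start:end]
        for start, end in zip(spans["start"], spans["end"])
    ]

print(extract_entities(instance))  # ['megan']
```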
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.

ruanchaves/dev_stanford | author: ruanchaves | last modified: 2022-10-20T19:13:37Z | created: 2022-03-05T07:28:41Z | downloads: 13 | likes: 0 | tags: annotations_creators:expert-generated, language_creators:machine-generated, multilinguality:monolingual, size_categories:unknown, source_datasets:original, language:en, license:unknown, word-segmentation, region:us

---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Dev-Stanford
tags:
- word-segmentation
---
# Dataset Card for Dev-Stanford
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "marathonmonday",
"segmentation": "marathon monday"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
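The whitespace-only invariant described above can be checked mechanically: removing the spaces from `segmentation` must recover `hashtag` exactly. A minimal sketch (the helper name is ours):

```python
def is_valid_segmentation(hashtag: str, segmentation: str) -> bool:
    """A segmentation is valid when stripping its whitespace recovers the hashtag."""
    return segmentation.replace(" ", "") == hashtag

# The example record from this card satisfies the invariant.
assert is_valid_segmentation("marathonmonday", "marathon monday")
```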
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.

ruanchaves/test_stanford | author: ruanchaves | last modified: 2022-10-20T19:13:07Z | created: 2022-03-05T08:26:17Z | downloads: 13 | likes: 0 | tags: annotations_creators:expert-generated, language_creators:machine-generated, multilinguality:monolingual, size_categories:unknown, source_datasets:original, language:en, license:unknown, word-segmentation, arxiv:1501.03210, region:us

---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Test-Stanford
tags:
- word-segmentation
---
# Dataset Card for Test-Stanford
## Dataset Description
- **Paper:** [Towards Deep Semantic Analysis Of Hashtags](https://arxiv.org/abs/1501.03210)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 1467856821,
"hashtag": "therapyfail",
"segmentation": "therapy fail",
"gold_position": 8,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20
],
"candidate": [
"therap y fail",
"the rap y fail",
"t her apy fail",
"the rap yfail",
"t he rap y fail",
"thera py fail",
"ther apy fail",
"th era py fail",
"therapy fail",
"therapy fai l",
"the r apy fail",
"the rapyfa il",
"the rapy fail",
"t herapy fail",
"the rapyfail",
"therapy f ai l",
"therapy fa il",
"the rapyf a il",
"therapy f ail",
"the ra py fail"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `gold_position`: position of the gold segmentation inside the `rank` candidate list.
- `rank`: rank of each candidate selected by a baseline word segmenter (Segmentations Seeder Module).
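In the example above `gold_position` is 8 while `therapy fail` is the ninth candidate, so the field appears to be a 0-based index into `rank.candidate`. Under that assumption, ranking metrics such as top-k accuracy and mean reciprocal rank follow directly; a sketch (helper names are ours):

```python
def top_k_hit(gold_position, k):
    """True when the gold segmentation appears among the first k candidates."""
    return gold_position is not None and gold_position < k

def reciprocal_rank(gold_position):
    """MRR contribution of one example; None means the gold segmentation
    is absent from the candidate list."""
    return 0.0 if gold_position is None else 1.0 / (gold_position + 1)

# For the example record above (gold at 0-based position 8):
assert top_k_hit(8, 10) and not top_k_hit(8, 5)
assert abs(reciprocal_rank(8) - 1 / 9) < 1e-9
```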
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.

ruanchaves/nru_hse | author: ruanchaves | last modified: 2022-10-20T19:12:59Z | created: 2022-03-05T17:40:41Z | downloads: 13 | likes: 0 | tags: annotations_creators:expert-generated, language_creators:machine-generated, multilinguality:monolingual, size_categories:unknown, source_datasets:original, language:ru, license:unknown, word-segmentation, arxiv:1911.03270, region:us

---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- ru
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: NRU-HSE
tags:
- word-segmentation
---
# Dataset Card for NRU-HSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [glushkovato/hashtag_segmentation](https://github.com/glushkovato/hashtag_segmentation/)
- **Paper:** [Char-RNN and Active Learning for Hashtag Segmentation](https://arxiv.org/abs/1911.03270)
### Dataset Summary
Real hashtags collected from several pages about civil services on vk.com (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "ЁлкаВЗазеркалье",
"segmentation": "Ёлка В Зазеркалье"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
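Many hashtags in this dataset are camel-cased, so a naive baseline that inserts a space before every uppercase letter already recovers the example instance above. This is only an illustration of the task, not a competitive segmenter:

```python
def camel_case_split(hashtag: str) -> str:
    """Insert a space before every uppercase letter; Python's str.isupper()
    handles Cyrillic capitals as well as Latin ones."""
    out = []
    for i, ch in enumerate(hashtag):
        if i > 0 and ch.isupper():
            out.append(" ")
        out.append(ch)
    return "".join(out)

print(camel_case_split("ЁлкаВЗазеркалье"))  # Ёлка В Зазеркалье
```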
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.

ruanchaves/lynx | author: ruanchaves | last modified: 2022-10-20T19:12:51Z | created: 2022-03-05T23:19:48Z | downloads: 13 | likes: 0 | tags: annotations_creators:expert-generated, language_creators:machine-generated, multilinguality:monolingual, size_categories:unknown, source_datasets:original, language:code, license:unknown, word-segmentation, region:us

---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
- code-generation
- conditional-text-generation
task_ids: []
pretty_name: Lynx
tags:
- word-segmentation
---
# Dataset Card for Lynx
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
Besides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.
### Languages
- C
## Dataset Structure
### Data Instances
```
{
"index": 3,
"identifier": "abspath",
"segmentation": "abs path",
"expansion": "absolute path",
"spans": {
"text": [
"abs"
],
"expansion": [
"absolute"
],
"start": [
0
],
"end": [
4
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier, without abbreviation expansion.
- `expansion`: the gold segmentation for the identifier, with abbreviation expansion.
- `spans`: the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.
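With the `text`/`expansion` pairs in `spans`, one can go from `segmentation` to `expansion` mechanically. The exact semantics of `start`/`end` are not obvious from the single example (`"abspath"[0:4]` is `"absp"`, not `"abs"`), so this sketch matches abbreviations by surface form instead; the helper name is ours:

```python
def expand(segmentation: str, spans: dict) -> str:
    """Replace each abbreviated token with its expansion, matching by surface form."""
    mapping = dict(zip(spans["text"], spans["expansion"]))
    return " ".join(mapping.get(tok, tok) for tok in segmentation.split())

# The example record above:
spans = {"text": ["abs"], "expansion": ["absolute"], "start": [0], "end": [4]}
print(expand("abs path", spans))  # absolute path
```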
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.

Carlisle/msmacro-test | author: Carlisle | last modified: 2022-03-11T00:19:32Z | created: 2022-03-07T18:09:33Z | downloads: 13 | likes: 0 | tags: license:mit, region:us

---
license: mit
---

Carlisle/msmacro-test-corpus | author: Carlisle | last modified: 2022-03-11T00:13:14Z | created: 2022-03-07T18:32:48Z | downloads: 13 | likes: 0 | tags: license:mit, region:us

---
license: mit
---

z-uo/qasper-squad | author: z-uo | last modified: 2022-10-25T10:02:49Z | created: 2022-03-08T09:20:15Z | downloads: 13 | likes: 0 | tags: task_categories:question-answering, task_ids:closed-domain-qa, annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:monolingual, size_categories:10K<n<100K, language:en, region:us

---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- question-answering
task_ids:
- closed-domain-qa
pretty_name: qasper-squad
language_bcp47:
- en-US
---
# Qasper in SQuAD format
This dataset is the [qasper](https://huggingface.co/datasets/qasper) dataset converted into the SQuAD format.
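For readers unfamiliar with the target format, a SQuAD-style record pairs a question with a context paragraph and character-offset answers. The record below is a hypothetical illustration in the standard SQuAD v1 shape; this dataset's exact column names may differ slightly:

```python
# A hypothetical record in the standard SQuAD v1 shape (field values invented
# for illustration; they are not taken from qasper-squad itself).
record = {
    "id": "example-0",
    "title": "Some NLP paper",
    "context": "We train the model on 10k examples from the training set.",
    "question": "How many training examples are used?",
    "answers": {"text": ["10k examples"], "answer_start": [22]},
}

# Each answer_start must point at its answer inside the context.
for text, start in zip(record["answers"]["text"], record["answers"]["answer_start"]):
    assert record["context"][start:start + len(text)] == text
```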

rubrix/sst2_with_predictions | author: rubrix | last modified: 2022-09-16T13:23:05Z | created: 2022-03-09T14:13:30Z | downloads: 13 | likes: 1 | tags: region:us

# Comparing model predictions and ground truth labels with Rubrix and Hugging Face
## Build dataset
You can skip this step if you run:
```python
from datasets import load_dataset
import rubrix as rb
ds = rb.DatasetForTextClassification.from_datasets(load_dataset("rubrix/sst2_with_predictions", split="train"))
```
Otherwise, the following cell will run the pipeline over the training set and store labels and predictions.
```python
from datasets import load_dataset
from transformers import pipeline, AutoModelForSequenceClassification
import rubrix as rb
name = "distilbert-base-uncased-finetuned-sst-2-english"
# Need to define id2label because surprisingly the pipeline has uppercase label names
model = AutoModelForSequenceClassification.from_pretrained(name, id2label={0: 'negative', 1: 'positive'})
nlp = pipeline("sentiment-analysis", model=model, tokenizer=name, return_all_scores=True)
dataset = load_dataset("glue", "sst2", split="train")
# batch predict
def predict(example):
return {"prediction": nlp(example["sentence"])}
# add predictions to the dataset
dataset = dataset.map(predict, batched=True).rename_column("sentence", "text")
# build rubrix dataset from hf dataset
ds = rb.DatasetForTextClassification.from_datasets(dataset, annotation="label")
```
```python
# Install Rubrix and start exploring and sharing URLs with interesting subsets, etc.
rb.log(ds, "sst2")
```
```python
ds.to_datasets().push_to_hub("rubrix/sst2_with_predictions")
```
## Analyze mispredictions and ambiguous labels
### With the UI
With Rubrix's UI you can:
- Combine filters and full-text/DSL queries to quickly find important samples
- All URLs contain the state, so you can share specific dataset regions with collaborators and annotators to work on
- Sort examples by score, as well as by custom metadata fields

### Programmatically
Let's find all the wrong predictions from Python. This is useful for bulk operations (relabelling, discarding, etc.) as well as for quantitative error analysis.
```python
import pandas as pd
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>this particular , anciently demanding métier</td>
<td>[(negative, 0.9386059045791626), (positive, 0.06139408051967621)]</td>
<td>positive</td>
</tr>
<tr>
<th>1</th>
<td>under our skin</td>
<td>[(positive, 0.7508484721183777), (negative, 0.24915160238742828)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>evokes a palpable sense of disconnection , made all the more poignant by the incessant use of cell phones .</td>
<td>[(negative, 0.6634528636932373), (positive, 0.3365470767021179)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>into a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(positive, 0.6178210377693176), (negative, 0.3821789622306824)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>transcends ethnic lines .</td>
<td>[(positive, 0.9758220314979553), (negative, 0.024177948012948036)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>is barely</td>
<td>[(negative, 0.9922297596931458), (positive, 0.00777028314769268)]</td>
<td>positive</td>
</tr>
<tr>
<th>7</th>
<td>a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(negative, 0.9738760590553284), (positive, 0.026123959571123123)]</td>
<td>positive</td>
</tr>
<tr>
<th>8</th>
<td>of hollywood heart-string plucking</td>
<td>[(positive, 0.9889695644378662), (negative, 0.011030420660972595)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>the intimate , unguarded moments of folks who live in unusual homes --</td>
<td>[(positive, 0.9967381358146667), (negative, 0.0032618637196719646)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>steals the show</td>
<td>[(negative, 0.8031412363052368), (positive, 0.1968587338924408)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>enough</td>
<td>[(positive, 0.7941301465034485), (negative, 0.2058698982000351)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>this is the kind of movie that you only need to watch for about thirty seconds before you say to yourself , ` ah , yes ,</td>
<td>[(negative, 0.7889454960823059), (positive, 0.21105451881885529)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>troubled and determined homicide cop</td>
<td>[(negative, 0.6632784008979797), (positive, 0.33672159910202026)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>human nature is a goofball movie , in the way that malkovich was , but it tries too hard</td>
<td>[(positive, 0.5959018468856812), (negative, 0.40409812331199646)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to watch too many barney videos</td>
<td>[(negative, 0.9909896850585938), (positive, 0.00901023019105196)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
df.annotation.hist()
```

```python
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko and annotated_as:negative").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>and social commentary</td>
<td>[(positive, 0.7863275408744812), (negative, 0.2136724889278412)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>before pulling the plug on the conspirators and averting an american-russian armageddon</td>
<td>[(positive, 0.6992855072021484), (negative, 0.30071452260017395)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>in tight pants and big tits</td>
<td>[(positive, 0.7850217819213867), (negative, 0.2149781733751297)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>that it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(positive, 0.6591460108757019), (negative, 0.3408539891242981)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>actress-producer and writer</td>
<td>[(positive, 0.8167378306388855), (negative, 0.1832621842622757)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>gives devastating testimony to both people 's capacity for evil and their heroic capacity for good .</td>
<td>[(positive, 0.8960123062133789), (negative, 0.10398765653371811)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>deep into the girls ' confusion and pain as they struggle tragically to comprehend the chasm of knowledge that 's opened between them</td>
<td>[(positive, 0.9729612469673157), (negative, 0.027038726955652237)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>a younger lad in zen and the art of getting laid in this prickly indie comedy of manners and misanthropy</td>
<td>[(positive, 0.9875985980033875), (negative, 0.012401451356709003)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>15</th>
<td>so preachy-keen and</td>
<td>[(positive, 0.9644021391868591), (negative, 0.035597823560237885)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>` christian bale 's quinn ( is ) a leather clad grunge-pirate with a hairdo like gandalf in a wind-tunnel and a simply astounding cor-blimey-luv-a-duck cockney accent . '</td>
<td>[(positive, 0.9713286757469177), (negative, 0.028671346604824066)]</td>
<td>negative</td>
</tr>
<tr>
<th>18</th>
<td>passion , grief and fear</td>
<td>[(positive, 0.9849751591682434), (negative, 0.015024829655885696)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to keep the extremes of screwball farce and blood-curdling family intensity on one continuum</td>
<td>[(positive, 0.8838250637054443), (negative, 0.11617499589920044)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko and score:{0.99 TO *}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>will no doubt rally to its cause , trotting out threadbare standbys like ` masterpiece ' and ` triumph ' and all that malarkey ,</td>
<td>[(negative, 0.9936562180519104), (positive, 0.006343740504235029)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>somehow manages to bring together kevin pollak , former wrestler chyna and dolly parton</td>
<td>[(negative, 0.9979034662246704), (positive, 0.002096540294587612)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>the bottom line with nemesis is the same as it has been with all the films in the series : fans will undoubtedly enjoy it , and the uncommitted need n't waste their time on it</td>
<td>[(positive, 0.995850682258606), (negative, 0.004149340093135834)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>is genial but never inspired , and little</td>
<td>[(negative, 0.9921030402183533), (positive, 0.007896988652646542)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>heaped upon a project of such vast proportions need to reap more rewards than spiffy bluescreen technique and stylish weaponry .</td>
<td>[(negative, 0.9958089590072632), (positive, 0.004191054962575436)]</td>
<td>positive</td>
</tr>
<tr>
<th>10</th>
<td>than recommended -- as visually bland as a dentist 's waiting room , complete with soothing muzak and a cushion of predictable narrative rhythms</td>
<td>[(negative, 0.9988711476325989), (positive, 0.0011287889210507274)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>spectacle and</td>
<td>[(positive, 0.9941601753234863), (negative, 0.005839805118739605)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>groan and</td>
<td>[(negative, 0.9987359642982483), (positive, 0.0012639997294172645)]</td>
<td>positive</td>
</tr>
<tr>
<th>13</th>
<td>'re not likely to have seen before , but beneath the exotic surface ( and exotic dancing ) it 's surprisingly old-fashioned .</td>
<td>[(positive, 0.9908103942871094), (negative, 0.009189637377858162)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>its metaphors are opaque enough to avoid didacticism , and</td>
<td>[(negative, 0.990602970123291), (positive, 0.00939704105257988)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>by kevin bray , whose crisp framing , edgy camera work , and wholesale ineptitude with acting , tone and pace very obviously mark him as a video helmer making his feature debut</td>
<td>[(positive, 0.9973387122154236), (negative, 0.0026612314395606518)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>evokes the frustration , the awkwardness and the euphoria of growing up , without relying on the usual tropes .</td>
<td>[(positive, 0.9989104270935059), (negative, 0.0010896018939092755)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>, incoherence and sub-sophomoric</td>
<td>[(negative, 0.9962475895881653), (positive, 0.003752368036657572)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>seems intimidated by both her subject matter and the period trappings of this debut venture into the heritage business .</td>
<td>[(negative, 0.9923072457313538), (positive, 0.007692818529903889)]</td>
<td>positive</td>
</tr>
<tr>
<th>19</th>
<td>despite downplaying her good looks , carries a little too much ai n't - she-cute baggage into her lead role as a troubled and determined homicide cop to quite pull off the heavy stuff .</td>
<td>[(negative, 0.9948075413703918), (positive, 0.005192441400140524)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko and score:{* TO 0.6}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
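The curly-brace ranges in these queries use Lucene range syntax: `{` and `}` mark exclusive bounds and `*` an open end, so `score:{* TO 0.6}` keeps records whose top score is strictly below 0.6. As a rough, stdlib-only sketch of what `predicted:ko and score:{* TO 0.6}` selects (the dict fields below are illustrative, not the real Rubrix record schema):

```python
# Mimic the filter "predicted:ko and score:{* TO 0.6}" on plain Python dicts:
# keep records that are wrongly predicted ("ko") AND low-confidence (score < 0.6).
records = [
    {"predicted": "ko", "score": 0.54},  # wrong, low confidence  -> kept
    {"predicted": "ko", "score": 0.99},  # wrong, high confidence -> dropped
    {"predicted": "ok", "score": 0.51},  # correct                -> dropped
]

def low_confidence_errors(records, upper=0.6):
    """Return wrongly predicted records whose score is strictly below `upper`."""
    return [r for r in records if r["predicted"] == "ko" and r["score"] < upper]

print(low_confidence_errors(records))  # only the first record survives
```

These low-confidence errors are often the most informative slice to inspect, since they sit near the model's decision boundary.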
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>is , truly and thankfully , a one-of-a-kind work</td>
<td>[(positive, 0.5819814801216125), (negative, 0.41801854968070984)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>starts as a tart little lemon drop of a movie and</td>
<td>[(negative, 0.5641832947731018), (positive, 0.4358167052268982)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>between flaccid satire and what</td>
<td>[(negative, 0.5532692074775696), (positive, 0.44673076272010803)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(negative, 0.5386656522750854), (positive, 0.46133431792259216)]</td>
<td>positive</td>
</tr>
<tr>
<th>5</th>
<td>who liked there 's something about mary and both american pie movies</td>
<td>[(negative, 0.5086333751678467), (positive, 0.4913666248321533)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>many good ideas as bad is the cold comfort that chin 's film serves up with style and empathy</td>
<td>[(positive, 0.557632327079773), (negative, 0.44236767292022705)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>about its ideas and</td>
<td>[(positive, 0.518638551235199), (negative, 0.48136141896247864)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>of a sick and evil woman</td>
<td>[(negative, 0.5554516315460205), (positive, 0.4445483684539795)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>though this rude and crude film does deliver a few gut-busting laughs</td>
<td>[(positive, 0.5045541524887085), (negative, 0.4954459071159363)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>to squeeze the action and our emotions into the all-too-familiar dramatic arc of the holocaust escape story</td>
<td>[(negative, 0.5050069093704224), (positive, 0.49499306082725525)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>that throws a bunch of hot-button items in the viewer 's face and asks to be seen as hip , winking social commentary</td>
<td>[(negative, 0.5873904228210449), (positive, 0.41260960698127747)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>'s soulful and unslick</td>
<td>[(positive, 0.5931627750396729), (negative, 0.40683719515800476)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
from rubrix.metrics.commons import *
```
```python
text_length("sst2", query="predicted:ko").visualize()
```
 | [
-0.4972631633281708,
-0.731793999671936,
0.3744488060474396,
0.08970188349485397,
-0.2442716807126999,
0.13867388665676117,
0.05249929428100586,
-0.18201763927936554,
0.7278061509132385,
0.2809646427631378,
-0.546890139579773,
-0.29449865221977234,
-0.48585742712020874,
0.04693055152893066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Non-Residual-Prompting/C2Gen | Non-Residual-Prompting | 2022-10-25T10:02:58Z | 13 | 1 | null | [
"task_categories:text-generation",
"size_categories:<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1911.03705",
"region:us"
] | 2022-10-25T10:02:58Z | 2022-03-09T16:09:50.000Z | 2022-03-09T16:09:50 | ---
language:
- en
license:
- cc-by-sa-4.0
size_categories:
- <100K
task_categories:
- text-generation
---
# Dataset Card for Contextualized CommonGen(C2Gen)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Initial Data Collection and Normalization](#initial-cata-collection-and-normalization)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [Non-Residual Prompting](https://github.com/FreddeFrallan/Non-Residual-Prompting)
- **Paper:** [Fine-Grained Controllable Text Generation Using Non-Residual Prompting](https://aclanthology.org/2022.acl-long.471)
- **Point of Contact:** [Fredrik Carlsson](mailto:Fredrik.Carlsson@ri.se)
### Dataset Summary
CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion, but the task formulation does not allow context to be included. To complement CommonGen, we therefore provide an extended test set, C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471), in which an additional context is provided for each set of target words. The task is thereby reformulated to generate commonsensical text that includes the given words while also adhering to the given context.
### Languages
English
## Dataset Structure
### Data Instances
{"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]}
### Data Fields
- context: the generated text by the model should adhere to this text
- words: the words that should be included in the generated continuation
### Data Splits
Test
## Dataset Creation
### Curation Rationale
C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers to focus on methods that do not support context. This is orthogonal to their belief that many application areas necessitate the consideration of surrounding context. To complement CommonGen, they therefore provide an extended test set in which an additional context is given for each set of target words.
### Initial Data Collection and Normalization
The dataset was constructed with the help of the crowdsourcing platform Mechanical Turk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a high recorded acceptance rate were allowed to participate. Finally, all contexts were manually verified and fixed in terms of typos and poor quality. Furthermore, we want to raise awareness that C2Gen can contain personal data or offensive content. If you encounter such a sample, please reach out to us.
## Licensing Information
license: cc-by-sa-4.0
| [
-0.5778683423995972,
-0.7140498161315918,
0.15963301062583923,
0.3582002818584442,
-0.49276062846183777,
-0.2660435736179352,
-0.5766505002975464,
-0.4229273200035095,
0.09443135559558868,
0.33678722381591797,
-0.9646716713905334,
-0.7882944345474243,
-0.574173629283905,
0.591974675655365,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Biomedical-TeMU/SPACCC_Tokenizer | Biomedical-TeMU | 2022-03-11T02:18:16Z | 13 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T02:18:16Z | 2022-03-11T02:14:02.000Z | 2022-03-11T02:14:02 | ---
license: cc-by-4.0
---
# The Tokenizer for Clinical Cases Written in Spanish
## Introduction
This repository contains the tokenization model trained on the SPACCC_TOKEN corpus (https://github.com/PlanTL-SANIDAD/SPACCC_TOKEN). The model was trained on 90% of the corpus (900 clinical cases) and tested on the remaining 10% (100 clinical cases). This model is a great resource to tokenize biomedical documents, especially clinical cases written in Spanish.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the training set, the test set, and the gold standard.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the tokenization to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The tokenization model, "es-tokenization-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelTok.java) and evaluate it (EvaluateModelTok.java).
The directory includes an example about how to use the model inside your code (Tokenization.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatenated.
train_set_docs/
The clinical cases used to build the model. For each record the sentences are already split.
</pre>
## Usage
The executable file *Tokenizer.jar* is the program you need to tokenize the text in your document. For this program, two arguments are needed: (1) the text file to tokenize, and (2) the model file (*es-tokenization-model-spaccc.bin*). The program will display all tokens in the terminal, with one token per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar Tokenizer.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar Tokenizer.jar file.txt es-tokenization-model-spaccc.bin
</pre>
## Model creation
To create this tokenization model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 1500.
- Cutoff parameter: 4.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the tokenizer builder (class *TokenizerFactory* in OpenNLP) to get the best performance:
- Language code: *es* (for Spanish).
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- Use alphanumeric optimization: false
- Alphanumeric pattern: null
## Model evaluation
After tuning the model with different values for each parameter, we obtained the best performance using the values listed above.
| | Value |
| ----------------------------------------: | :------ |
| Number of tokens in the gold standard | 38247 |
| Number of tokens generated | 38227 |
| Number of words correctly tokenized | 38182 |
| Number of words wrongly tokenized | 35 |
| Number of tokens missed | 30 |
| **Precision** | **99.88%** |
| **Recall** | **99.83%** |
| **F-Measure** | **99.85%**|
Table 1: Evaluation statistics for the tokenization model.
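As a sanity check, the precision, recall, and F-measure in Table 1 can be recomputed from the raw token counts (this is our own recomputation; the resulting values agree with the reported figures to within rounding):

```python
# Sanity-check Table 1: recompute precision, recall and F-measure
# from the raw token counts reported above.
gold_tokens = 38247       # tokens in the gold standard
generated_tokens = 38227  # tokens generated by the model
correct_tokens = 38182    # tokens correctly tokenized

precision = correct_tokens / generated_tokens   # ~99.88%
recall = correct_tokens / gold_tokens           # ~99.83%
f_measure = 2 * precision * recall / (precision + recall)

print(f"P={precision:.2%}  R={recall:.2%}  F={f_measure:.2%}")
```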
## Contact
Ander Intxaurrondo (ander.intxaurrondo@bsc.es)
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| [
-0.23891980946063995,
-0.6190951466560364,
0.06003124266862869,
0.28673553466796875,
-0.47054997086524963,
-0.2803977131843567,
-0.16284969449043274,
-0.39283186197280884,
0.33586201071739197,
0.559281587600708,
-0.23651902377605438,
-0.8399350047111511,
-0.755854606628418,
0.0448729321360... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rubrix/big_patent_a_test_100 | rubrix | 2022-03-11T17:22:14Z | 13 | 0 | null | [
"region:us"
] | 2022-03-11T17:22:14Z | 2022-03-11T17:22:10.000Z | 2022-03-11T17:22:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Amba/bert-finetuned-ner_tokenized_datasets | Amba | 2022-03-13T12:01:56Z | 13 | 0 | null | [
"region:us"
] | 2022-03-13T12:01:56Z | 2022-03-13T12:01:54.000Z | 2022-03-13T12:01:54 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tdklab/Hebrew_Squad_v1 | tdklab | 2022-08-04T04:59:05Z | 13 | 1 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:auto_translation",
"language_creators:auto_translation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad",
"region:us"
] | 2022-08-04T04:59:05Z | 2022-03-15T00:43:59.000Z | 2022-03-15T00:43:59 | ---
pretty_name: Hebrew_Squad_v1
annotations_creators:
- auto_translation
language_creators:
- auto_translation
languages:
- Hebrew
- he
licenses:
- cc-by-4-0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- squad
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "Hebrew_Squad_v1"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/TechnionTDK/hebwiki-qa/](https://github.com/TechnionTDK/hebwiki-qa/)
- **Size of train dataset files:** 62.3 MB
- **Size of validation dataset files:** 9.48 MB
- **Total amount of disk used:** 71.78 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Hebrew dataset is an automatic translation of the English SQuAD dataset https://huggingface.co/datasets/squad.
### Supported Tasks and Leaderboards
Extractive Question-Answering
### Languages
Hebrew
## Dataset Structure
Follows the standard SQuAD format.
### Data Instances
#### plain_text
- **Size of train dataset files:** 62.3 MB
- **Size of validation dataset files:** 9.48 MB
- **Total amount of disk used:** 71.78 MB
An example of 'train' looks as follows.
```
{
"id": "56be4db0acb8001400a502ee",
"title": "Super_Bowl_50",
"context": "סופרבול 50 היה משחק כדורגל אמריקאי כדי לקבוע את אלופת ליגת הפוטבול הלאומית (NFL) לעונת 2015. אלופת ועידת הכדורגל האמריקאית (AFC) דנבר ברונקוס ניצחה את אלופת ועידת הכדורגל הלאומית (NFC) קרולינה פנתרס 24–10 כדי לזכות בתואר הסופרבול השלישי שלה. המשחק נערך ב-7 בפברואר 2016 באצטדיון ליווי'ס באזור מפרץ סן פרנסיסקו בסנטה קלרה, קליפורניה. מכיוון שזה היה הסופרבול ה-50, הליגה הדגישה את יום השנה הזהב עם יוזמות שונות בנושא זהב, כמו גם השעיה זמנית את המסורת של שם כל משחק סופרבול עם ספרות רומיות (שתחתן המשחק היה ידוע בתור סופרבול L ), כך שהלוגו יוכל להציג באופן בולט את הספרות הערביות 50.",
"question": "היכן התקיים סופרבול 50?",
"answers": {
"text": ["סנטה קלרה, קליפורניה", "אצטדיון ליווי"],
"answer_start": [311, 271]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### Hebrew_Squad_v1
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----|---------|
|Hebrew_Squad_v1|52405| 7455|
### Contributions
Created by Matan Ben-chorin and May Flaster, guided by Dr. Oren Mishali.
This is our final project as part of computer engineering B.Sc studies in the Faculty of Electrical Engineering combined with Computer Science at Technion, Israel Institute of Technology.
For further cooperation, please contact:
Matan Ben-chorin: matan.bh1@gmail.com
May Flaster: mayflaster96@gmail.com
| [
-0.7912200093269348,
-0.5953620076179504,
0.03957652300596237,
0.4084746241569519,
-0.40761619806289673,
0.10556591302156448,
-0.031913693994283676,
-0.4110000729560852,
0.4252018332481384,
0.16250889003276825,
-1.1091665029525757,
-0.7229211926460266,
-0.4175586402416229,
0.23578476905822... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anjandash/java-8m-methods-v2 | anjandash | 2022-07-01T20:31:57Z | 13 | 0 | null | [
"multilinguality:monolingual",
"license:mit",
"region:us"
] | 2022-07-01T20:31:57Z | 2022-03-15T11:01:14.000Z | 2022-03-15T11:01:14 | ---
language:
- java
license:
- mit
multilinguality:
- monolingual
pretty_name:
- java-8m-methods-v2
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sumedh/MeQSum | sumedh | 2022-03-24T20:20:43Z | 13 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-03-24T20:20:43Z | 2022-03-23T04:21:51.000Z | 2022-03-23T04:21:51 | ---
license: apache-2.0
---
- Problem type: Summarization
languages:
- en
multilinguality:
- monolingual
task_ids:
- summarization
# MeQSum
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health Questions": https://www.aclweb.org/anthology/P19-1215
### Citation Information
```bibtex
@Inproceedings{MeQSum,
author = {Asma {Ben Abacha} and Dina Demner-Fushman},
title = {On the Summarization of Consumer Health Questions},
booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28th - August 2},
year = {2019},
abstract = {Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization. }}
``` | [
-0.12308941781520844,
-0.7937250733375549,
0.32143786549568176,
-0.08195647597312927,
-0.06032464653253555,
0.03151015192270279,
0.050642356276512146,
-0.5357454419136047,
0.45402026176452637,
0.40978294610977173,
-0.4075445830821991,
-0.43105605244636536,
-0.5525957942008972,
0.3848236799... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
M-Quan/sv_corpora_parliament_processe | M-Quan | 2022-03-29T04:28:30Z | 13 | 0 | null | [
"region:us"
] | 2022-03-29T04:28:30Z | 2022-03-29T04:28:11.000Z | 2022-03-29T04:28:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2B-multi-watermark | laion | 2022-03-29T22:50:20Z | 13 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-29T22:50:20Z | 2022-03-29T22:46:42.000Z | 2022-03-29T22:46:42 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ukr-models/Ukr-Synth | ukr-models | 2023-08-31T09:35:43Z | 13 | 9 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:uk",
"license:mit",
"region:us"
] | 2023-08-31T09:35:43Z | 2022-04-06T17:13:34.000Z | 2022-04-06T17:13:34 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- uk
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- parsing
- part-of-speech
pretty_name: Ukrainian synthetic dataset in conllu format
---
# Dataset Card for Ukr-Synth
## Dataset Description
### Dataset Summary
Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags.
Represents a subsample of [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained on gold standard Ukrainian language datasets.
### Languages
Ukrainian
## Dataset Structure
### Data Splits
| name |train |validation|
|---------|-------:|---------:|
|conll2003|1000000| 10000|
## Dataset Creation
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
## Additional Information
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | [
-0.33004212379455566,
-0.18968449532985687,
0.1645425707101822,
0.12813164293766022,
-0.48072054982185364,
0.10865073651075363,
-0.15242716670036316,
-0.2758863568305969,
0.17786343395709991,
0.5738710165023804,
-0.6968401670455933,
-0.8363769054412842,
-0.08906112611293793,
0.298003882169... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/quora | crystina-z | 2022-04-11T03:39:09Z | 13 | 0 | null | [
"region:us"
] | 2022-04-11T03:39:09Z | 2022-04-11T01:31:58.000Z | 2022-04-11T01:31:58 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mwong/climate-evidence-related | mwong | 2022-10-25T10:06:54Z | 13 | 2 | climate-fever | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
... | 2022-10-25T10:06:54Z | 2022-04-12T10:58:49.000Z | 2022-04-12T10:58:49 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: climate-fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and evidence, predict whether the evidence is related to the claim.
-0.14197832345962524,
-0.3595137298107147,
0.17505554854869843,
-0.01358682569116354,
-0.20135705173015594,
-0.0877845510840416,
-0.11217031627893448,
-0.4002361297607422,
0.11767800897359848,
0.8182161450386047,
-0.5150581002235413,
-0.5441432595252991,
-0.7823590040206909,
0.102347478270... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
craffel/tasky_or_not | craffel | 2022-04-15T01:43:50Z | 13 | 2 | null | [
"region:us"
] | 2022-04-15T01:43:50Z | 2022-04-14T19:12:55.000Z | 2022-04-14T19:12:55 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mwong/fever-claim-related | mwong | 2022-10-25T10:06:56Z | 13 | 2 | fever | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
... | 2022-10-25T10:06:56Z | 2022-04-15T07:04:59.000Z | 2022-04-15T07:04:59 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and evidence, predict whether the claim is related to the evidence.
-0.14605812728405,
-0.3636053800582886,
0.1740349978208542,
-0.00795916747301817,
-0.20158256590366364,
-0.08485206216573715,
-0.11791373789310455,
-0.4048733413219452,
0.1204589307308197,
0.8055889010429382,
-0.5192589163780212,
-0.5446829199790955,
-0.7765915393829346,
0.1056889966130256... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
surrey-nlp/PLOD-unfiltered | surrey-nlp | 2023-01-14T23:31:04Z | 13 | 0 | plod-an-abbreviation-detection-dataset-for | [
"task_categories:token-classification",
"annotations_creators:Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
... | 2023-01-14T23:31:04Z | 2022-04-16T18:49:49.000Z | 2022-04-16T18:49:49 | ---
annotations_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: plod-an-abbreviation-detection-dataset-for
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
tags:
- abbreviation-detection
---
# PLOD: An Abbreviation Detection Dataset
This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection.
### Dataset
We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.
1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test).
# Dataset Card for PLOD-unfiltered
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
- **Paper:** https://arxiv.org/abs/2204.12061
- **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-an-abbreviation
- **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk)
### Dataset Summary
The PLOD dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The data has been collected for research from the PLOS journals' indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.
### Supported Tasks and Leaderboards
This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via Spacy NER, and a set of `ner_tags` which are limited to `AC` for `Acronym` and `LF` for `long-forms`.
An example from the dataset:
{'id': '1',
'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
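To make the field alignment concrete, here is a small pure-Python sketch pairing tokens with their tag ids on a shortened version of the instance above; the id→label mapping is an assumption for illustration, not taken from the card:

```python
# Shortened version of the example instance above (tokens/ner_tags aligned).
instance = {
    "tokens": ["risk", "ratios", "(", "RRs", ")"],
    "ner_tags": [3, 4, 0, 1, 0],
}

# Assumed id -> label mapping (AC = acronym, LF = long-form); illustrative only.
ID2LABEL = {0: "O", 1: "AC", 3: "B-LF", 4: "I-LF"}

pairs = [
    (token, ID2LABEL.get(tag, "?"))
    for token, tag in zip(instance["tokens"], instance["ner_tags"])
]
print(pairs)
```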
### Data Fields
- id: the row identifier for the dataset point.
- tokens: The tokens contained in the text.
- pos_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.
- ner_tags: The tags for abbreviations and long-forms.
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Filtered | 112652 | 24140 | 24140|
| Unfiltered | 113860 | 24399 | 24399|
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was extracted from PLOS journals online, then tokenized and normalized.
#### Who are the source language producers?
PLOS Journal
## Additional Information
### Dataset Curators
The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,
Diptesh Kanojia, Constantin Orasan.
### Licensing Information
CC-BY-SA 4.0
### Citation Information
[Needs More Information]
### Installation
We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions at these websites to setup your own custom training with our dataset to reproduce the experiments using Spacy.
OR<br/>
You can also reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model readme cards linked below. Before starting, please perform the following steps:
```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```
Now, you can use the notebook to reproduce the experiments.
### Model(s)
Our best performing models are hosted on the HuggingFace models repository:
| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |
On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/>
### Usage
You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
| [
-0.4893410801887512,
-0.7807414531707764,
0.09792500734329224,
0.21391020715236664,
-0.2772809565067291,
-0.07188747823238373,
-0.23428307473659515,
-0.38784971833229065,
0.5243551135063171,
0.47155696153640747,
-0.46351373195648193,
-0.7589182257652283,
-0.6135095357894897,
0.547093689441... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kniemiec/crack-segmentation | kniemiec | 2022-04-19T19:16:05Z | 13 | 0 | null | [
"region:us"
] | 2022-04-19T19:16:05Z | 2022-04-19T19:05:00.000Z | 2022-04-19T19:05:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TheBritishLibrary/web_archive_classification | TheBritishLibrary | 2023-05-04T12:59:29Z | 13 | 2 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
... | 2023-05-04T12:59:29Z | 2022-04-25T10:14:45.000Z | 2022-04-25T10:14:45 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: UK Selective Web Archive Classification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
tags:
- lam
---
# Dataset Card for UK Selective Web Archive Classification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archive's web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in .uk, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of UKWA's larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extracting a set of keywords that summarise the site, via the full-text index. For more information: http://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Public Domain Mark 1.0.
### Citation Information
[Needs More Information] | [
-0.5442662835121155,
0.09543928503990173,
-0.10508450865745544,
-0.027393247932195663,
-0.3471504747867584,
0.12181262671947479,
-0.16679830849170685,
-0.4763680398464203,
0.2830795645713806,
0.5618929266929626,
-0.6800389885902405,
-0.8325817584991455,
-0.6691356301307678,
0.4454791247844... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_sv-SE | SetFit | 2022-05-06T09:11:12Z | 13 | 0 | null | [
"region:us"
] | 2022-05-06T09:11:12Z | 2022-05-06T09:11:09.000Z | 2022-05-06T09:11:09 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mteb/raw_biorxiv | mteb | 2022-09-27T19:15:43Z | 13 | 5 | null | [
"language:en",
"region:us"
] | 2022-09-27T19:15:43Z | 2022-05-10T13:26:20.000Z | 2022-05-10T13:26:20 | ---
language:
- en
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
enoriega/odinsynth_dataset | enoriega | 2022-05-19T00:02:23Z | 13 | 0 | null | [
"region:us"
] | 2022-05-19T00:02:23Z | 2022-05-11T00:21:04.000Z | 2022-05-11T00:21:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jontooy/Flickr8k-Image-Features | jontooy | 2022-06-06T18:25:44Z | 13 | 0 | null | [
"language:ar",
"region:us"
] | 2022-06-06T18:25:44Z | 2022-05-11T18:26:26.000Z | 2022-05-11T18:26:26 | ---
language: ar
datasets: flickr8k
---
# Flickr8k Image Features
Flickr8k image features are extracted using the ResNeXt-152 C4 architecture ([found here](https://github.com/microsoft/scene_graph_benchmark)) and can be used as input for the [OSCAR](https://github.com/microsoft/Oscar) learning method. Arabic captions and splits are provided by [ElJundi et al.](https://github.com/ObeidaElJundi/Arabic-Image-Captioning)
## Dev-split
+ **dev-arabic.yaml** Yaml configure file with Arabic object tags
+ **dev.feature.tsv** Extracted image features
+ **dev.label.arabic.tsv** Arabic labels
+ **dev.label.tsv** English labels
+ **dev.yaml** Yaml configure file with English object tags
+ **dev_caption.json** Arabic captions for training
+ **dev_caption_coco_format.json** Arabic captions for validation
## Test-split
+ **test-arabic.yaml** Yaml configure file with Arabic object tags
+ **test.feature.tsv** Extracted image features
+ **test.label.arabic.tsv** Arabic labels
+ **test.label.tsv** English labels
+ **test.yaml** Yaml configure file with English object tags
+ **test_caption.json** Arabic captions for training
+ **test_caption_coco_format.json** Arabic captions for validation
## Train-split
+ **train-arabic.yaml** Yaml configure file with Arabic object tags
+ **train.feature.tsv** Extracted image features
+ **train.label.arabic.tsv** Arabic labels
+ **train.label.tsv** English labels
+ **train.yaml** Yaml configure file with English object tags
+ **train_caption.json** Arabic captions for training
+ **train_caption_coco_format.json** Arabic captions for validation | [
-0.816620409488678,
-0.1663798838853836,
0.1796746402978897,
0.1752563714981079,
-0.5924830436706543,
0.47319522500038147,
0.32038745284080505,
-0.659095287322998,
0.004433047957718372,
0.2846009433269501,
-0.6786139607429504,
-0.6328977346420288,
-0.7218134999275208,
0.1198902279138565,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sultannn/id_recipe | Sultannn | 2022-09-18T09:24:13Z | 13 | 0 | null | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"license:mit",
"region... | 2022-09-18T09:24:13Z | 2022-05-16T08:45:23.000Z | 2022-05-16T08:45:23 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
task_ids:
- language-modeling
paperswithcode_id: null
pretty_name: Indonesian Recipe
---
# Dataset Card for id_recipe
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Indonesian-recipe](https://github.com/sultanbst123/Hugging-Face-indo)
- **Repository:** [Indonesian-recipe](https://github.com/sultanbst123/Hugging-Face-indo)
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** [Sultan](sultansyach7@gmail.com)
### Dataset Summary
Indonesian foods are well-known for their rich taste. There are many spices used even for daily foods. This dataset may give insight on how to prepare Indonesian food.
id_recipe is an Indonesian Food Recipe dataset. The dataset contains >10000 Indonesian Recipe.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Indonesian
### Data Splits
The number of examples in each split:
| name |n.examples|
|-----------------|--------: |
| train | 14858 |
| val | 783 |
### Source Data
[here](https://www.kaggle.com/datasets/canggih/indonesian-food-recipes)
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT License
### Citation Information
[N/A]
### Contributions
Thanks to [@sultan](https://github.com/sultanbst123) for adding this dataset
| [
-0.3789474070072174,
-0.7352765202522278,
-0.0265517421066761,
0.4298982620239258,
-0.10320112854242325,
-0.15247179567813873,
-0.1782001107931137,
-0.18808774650096893,
0.8175324201583862,
0.912524402141571,
-0.8556612730026245,
-1.092574119567871,
-0.7815970182418823,
0.35683032870292664... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingFaceM4/ActivitiyNet_Captions | HuggingFaceM4 | 2022-10-23T05:50:46Z | 13 | 2 | null | [
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1705.00754",
"region:us"
] | 2022-10-23T05:50:46Z | 2022-05-17T11:26:07.000Z | 2022-05-17T11:26:07 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: ActivityNet Captions
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- video-captioning
task_ids:
- closed-domain-qa
---
# Dataset Card for ActivityNet Captions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://cs.stanford.edu/people/ranjaykrishna/densevid/
- **Paper:** https://arxiv.org/abs/1705.00754
### Dataset Summary
The ActivityNet Captions dataset connects videos to a series of temporally annotated sentence descriptions. Each sentence covers an unique segment of the video, describing multiple events that occur. These events may occur over very long or short periods of time and are not limited in any capacity, allowing them to co-occur. On average, each of the 20k videos contains 3.65 temporally localized sentences, resulting in a total of 100k sentences. We find that the number of sentences per video follows a relatively normal distribution. Furthermore, as the video duration increases, the number of sentences also increases. Each sentence has an average length of 13.48 words, which is also normally distributed. You can find more details of the dataset under the ActivityNet Captions Dataset section, and under supplementary materials in the paper.
### Languages
The captions in the dataset are in English.
## Dataset Structure
### Data Fields
- `video_id` : `str` unique identifier for the video
- `video_path`: `str` Path to the video file
- `duration`: `float32` Duration of the video
- `captions_starts`: `List_float32` List of timestamps denoting the time at which each caption starts
- `captions_ends`: `List_float32` List of timestamps denoting the time at which each caption ends
- `en_captions`: `list_str` List of english captions describing parts of the video
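As a concrete sketch of how these fields line up (the values below are made up, not drawn from the dataset), each caption `i` spans the segment from `captions_starts[i]` to `captions_ends[i]`:

```python
# Hypothetical record following the fields above (timestamps in seconds).
captions_starts = [0.0, 12.5, 40.2]
captions_ends = [11.8, 39.0, 55.6]
en_captions = ["A man walks in.", "He starts cooking.", "He serves the dish."]

# Each caption i covers the segment [captions_starts[i], captions_ends[i]].
segments = [
    (caption, round(end - start, 1))
    for caption, start, end in zip(en_captions, captions_starts, captions_ends)
]
print(segments)  # each caption paired with its duration in seconds
```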
### Data Splits
| |train |validation| test | Overall |
|-------------|------:|---------:|------:|------:|
|# of videos|10,009 |4,917 |4,885 |19,811 |
### Annotations
Quoting [ActivityNet Captions' paper](https://arxiv.org/abs/1705.00754): \
"Each annotation task was divided into two steps: (1)
Writing a paragraph describing all major events happening
in the videos in a paragraph, with each sentence of the paragraph describing one event, and (2) Labeling the
start and end time in the video in which each sentence in the
paragraph event occurred."
### Who annotated the dataset?
Amazon Mechanical Turk annotators
### Personal and Sensitive Information
Nothing specifically mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{krishna2017dense,
title={Dense-Captioning Events in Videos},
author={Krishna, Ranjay and Hata, Kenji and Ren, Frederic and Fei-Fei, Li and Niebles, Juan Carlos},
booktitle={International Conference on Computer Vision (ICCV)},
year={2017}
}
```
### Contributions
Thanks to [@leot13](https://github.com/leot13) for adding this dataset. | [
-0.30022332072257996,
-0.523362934589386,
0.14052283763885498,
0.2561171054840088,
-0.5090405344963074,
-0.1593138724565506,
-0.2170467972755432,
-0.0688682347536087,
0.4089818298816681,
0.3958361744880676,
-0.6688596606254578,
-0.5893897414207458,
-0.6519454121589661,
-0.03168593719601631... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigscience-data/roots_zh-cn_wikipedia | bigscience-data | 2022-12-12T12:09:07Z | 13 | 19 | null | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-12-12T12:09:07Z | 2022-05-18T09:19:49.000Z | 2022-05-18T09:19:49 | ---
language: zh
language_bcp47:
- zh-CN
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh-cn_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
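The per-language filters listed above could be composed as a simple sequential pipeline. A minimal sketch, assuming documents are plain strings and that `filter_small_docs_bytes_N` drops documents shorter than N UTF-8 bytes — function names mirror the filter labels here, and the actual implementation behind these labels may differ:

```python
def dedup_document(docs):
    """Keep only the first occurrence of each exact document text."""
    seen = set()
    out = []
    for d in docs:
        if d not in seen:
            seen.add(d)
            out.append(d)
    return out


def filter_remove_empty_docs(docs):
    """Drop documents that are empty or whitespace-only."""
    return [d for d in docs if d.strip()]


def make_filter_small_docs_bytes(min_bytes):
    """Drop documents whose UTF-8 encoding is shorter than min_bytes."""
    def _filter(docs):
        return [d for d in docs if len(d.encode("utf-8")) >= min_bytes]
    return _filter


def apply_pipeline(docs, steps):
    """Run each filter step in order over the document list."""
    for step in steps:
        docs = step(docs)
    return docs


# Filters applied to e.g. indic-or style languages (300-byte threshold):
pipeline = [
    dedup_document,
    filter_remove_empty_docs,
    make_filter_small_docs_bytes(300),
]
```

For example, `apply_pipeline(["x" * 400, "x" * 400, "  ", "tiny"], pipeline)` would deduplicate the repeated document, drop the whitespace-only one, and discard `"tiny"` for being under 300 bytes.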
| [
-0.7009376883506775,
-0.5906853079795837,
0.36059698462486267,
0.1802215725183487,
-0.22236624360084534,
-0.0899733155965805,
-0.23155972361564636,
-0.16223786771297455,
0.7022555470466614,
0.3349216878414154,
-0.8390534520149231,
-0.9320297837257385,
-0.6899957656860352,
0.481157451868057... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
scoup123/testing | scoup123 | 2022-05-20T19:38:43Z | 13 | 0 | null | [
"region:us"
] | 2022-05-20T19:38:43Z | 2022-05-20T17:26:04.000Z | 2022-05-20T17:26:04 | annotations_creators:
- found
language_creators:
- found
languages:
- tr
licenses:
- unknown
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: testing _data
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring | [
-0.5197525024414062,
-0.3424530327320099,
0.33441969752311707,
0.8207471966743469,
-0.3529653251171112,
0.1708018183708191,
-0.2669605314731598,
-0.4971548318862915,
0.5224016904830933,
0.7631419897079468,
-0.6064304113388062,
-0.775614321231842,
-0.8071231842041016,
0.5083742141723633,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2B-en-aesthetic-tags | laion | 2022-05-22T02:33:27Z | 13 | 2 | null | [
"region:us"
] | 2022-05-22T02:33:27Z | 2022-05-22T01:52:23.000Z | 2022-05-22T01:52:23 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2B-multi-aesthetic-tags | laion | 2022-05-22T03:16:06Z | 13 | 2 | null | [
"region:us"
] | 2022-05-22T03:16:06Z | 2022-05-22T01:52:39.000Z | 2022-05-22T01:52:39 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion1B-nolang-aesthetic-tags | laion | 2022-05-22T02:09:56Z | 13 | 1 | null | [
"region:us"
] | 2022-05-22T02:09:56Z | 2022-05-22T01:52:57.000Z | 2022-05-22T01:52:57 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2B-multi-aesthetic | laion | 2023-01-18T20:04:36Z | 13 | 4 | null | [
"region:us"
] | 2023-01-18T20:04:36Z | 2022-05-22T12:34:24.000Z | 2022-05-22T12:34:24 | details at https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md | [
-0.28252074122428894,
-0.4003012180328369,
0.4656323790550232,
-0.12546944618225098,
0.02029193378984928,
0.06949134916067123,
-0.08352214843034744,
-0.25081029534339905,
0.6883168816566467,
0.7658242583274841,
-0.9028742909431458,
-1.1707165241241455,
-0.13811837136745453,
-0.363814562559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nielsr/video-demo | nielsr | 2022-05-23T07:56:05Z | 13 | 1 | null | [
"region:us"
] | 2022-05-23T07:56:05Z | 2022-05-23T07:55:40.000Z | 2022-05-23T07:55:40 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lexington/Sneakers | lexington | 2022-05-25T19:24:00Z | 13 | 0 | null | [
"region:us"
] | 2022-05-25T19:24:00Z | 2022-05-25T19:22:33.000Z | 2022-05-25T19:22:33 | Entry not found | [
-0.32276469469070435,
-0.22568407654762268,
0.8622258901596069,
0.434614896774292,
-0.5282987952232361,
0.7012966275215149,
0.7915717363357544,
0.07618635147809982,
0.7746022939682007,
0.25632190704345703,
-0.7852814793586731,
-0.22573821246623993,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wrice/sv_corpora_parliament_processed | wrice | 2022-05-26T18:47:02Z | 13 | 0 | null | [
"region:us"
] | 2022-05-26T18:47:02Z | 2022-05-26T13:00:05.000Z | 2022-05-26T13:00:05 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wrice/sv_corpora_parliament_processed_punctuation | wrice | 2022-05-27T12:06:01Z | 13 | 0 | null | [
"region:us"
] | 2022-05-27T12:06:01Z | 2022-05-27T11:57:02.000Z | 2022-05-27T11:57:02 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yah216/Poem_APCD_text_only | Yah216 | 2022-05-28T08:00:27Z | 13 | 0 | null | [
"region:us"
] | 2022-05-28T08:00:27Z | 2022-05-27T17:06:24.000Z | 2022-05-27T17:06:24 | We used the APCD dataset cited hereafter for pretraining the model. The dataset has been cleaned and only the main text column was kept:
```
@Article{Yousef2019LearningMetersArabicEnglish-arxiv,
author = {Yousef, Waleed A. and Ibrahime, Omar M. and Madbouly, Taha M. and Mahmoud,
Moustafa A.},
title = {Learning Meters of Arabic and English Poems With Recurrent Neural Networks: a Step
Forward for Language Understanding and Synthesis},
journal = {arXiv preprint arXiv:1905.05700},
year = 2019,
url = {https://github.com/hci-lab/LearningMetersPoems}
}
``` | [
-0.4426601529121399,
-0.32899513840675354,
0.2761167287826538,
0.09244091063737869,
-0.6419311761856079,
-0.24566206336021423,
-0.46312177181243896,
-0.06538646668195724,
-0.13512131571769714,
0.4385239779949188,
-0.5984827876091003,
-0.9098530411720276,
-0.6781519055366516,
0.172503963112... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jet-universe/top_landscape | jet-universe | 2022-05-27T19:41:20Z | 13 | 0 | null | [
"region:us"
] | 2022-05-27T19:41:20Z | 2022-05-27T19:16:55.000Z | 2022-05-27T19:16:55 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jet-universe/quark_gluon | jet-universe | 2022-05-27T20:16:05Z | 13 | 0 | null | [
"region:us"
] | 2022-05-27T20:16:05Z | 2022-05-27T20:03:52.000Z | 2022-05-27T20:03:52 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gary109/sv_corpora_parliament_processed | gary109 | 2022-05-27T23:46:48Z | 13 | 0 | null | [
"region:us"
] | 2022-05-27T23:46:48Z | 2022-05-27T23:46:16.000Z | 2022-05-27T23:46:16 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daniel-dona/dani-voice | daniel-dona | 2022-06-04T11:02:50Z | 13 | 0 | null | [
"license:cc0-1.0",
"region:us"
] | 2022-06-04T11:02:50Z | 2022-05-28T15:19:55.000Z | 2022-05-28T15:19:55 | ---
license: cc0-1.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rexhaif/xsum_reduced | Rexhaif | 2022-05-28T16:34:43Z | 13 | 0 | null | [
"region:us"
] | 2022-05-28T16:34:43Z | 2022-05-28T16:27:18.000Z | 2022-05-28T16:27:18 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null |