id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
celsowm/adoro_cinema_filmes | celsowm | 2023-11-11T11:08:24Z | 53 | 0 | null | [
"region:us"
] | 2023-11-11T11:08:24Z | 2023-11-06T21:27:31.000Z | 2023-11-06T21:27:31 | ---
dataset_info:
features:
- name: titulo
dtype: string
- name: sinopse
dtype: string
- name: generos
sequence: string
- name: link
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 23369140
num_examples: 42918
download_size: 13807632
dataset_size: 23369140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "adoro_cinema_filmes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kpriyanshu256/semeval-task-8-a-multi-v2 | kpriyanshu256 | 2023-11-07T13:15:37Z | 53 | 0 | null | [
"region:us"
] | 2023-11-07T13:15:37Z | 2023-11-07T13:15:01.000Z | 2023-11-07T13:15:01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: model
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 362229271
num_examples: 137933
- name: val
num_bytes: 90513705
num_examples: 34484
- name: test
num_bytes: 8790338
num_examples: 4000
download_size: 265430410
dataset_size: 461533314
---
# Dataset Card for "semeval-task-8-a-multi-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hanchungshin/testset | hanchungshin | 2023-11-19T14:10:27Z | 53 | 0 | null | [
"region:us"
] | 2023-11-19T14:10:27Z | 2023-11-16T05:41:27.000Z | 2023-11-16T05:41:27 | Entry not found
knowrohit07/know-saraswati-cot | knowrohit07 | 2023-11-21T22:39:23Z | 53 | 11 | null | [
"license:openrail",
"region:us"
] | 2023-11-21T22:39:23Z | 2023-11-17T08:46:58.000Z | 2023-11-17T08:46:58 | ---
license: openrail
---
### 🚨 To all devs, scholars, and also fugazis of AI - A Philosophical Standpoint on AGI:
- This is extraneous, if you have time to read it-- give it a shot. We stand at the precipice of a digital era where the notions of artificial intelligence are often muddled with the grandiose idea of Artificial General Intelligence (AGI). Here's a candid reflection:
1. Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a direct path to AGI. They are sophisticated token predictors, highly skilled in generalizing from vast datasets but lacking true understanding. They operate in what might be termed the 'dog-AGI' phase—impressive, yes, but nowhere close to the 'god-AGI' phase we aspire to reach.
2. The Nature of 'Smart': These models, for all their complexity, are not sentient. They don't possess the rich tapestry of human experience—our memories, relationships, and 'eureka' moments that constitute learning and wisdom. They are yet to evolve from merely processing information to experiencing and understanding the nuances of life as we know it.
3. Stockpiling NVIDIA cards and accumulating GPU clusters is not the golden ticket to AGI. The pursuit of AGI is not solely a quest for more processing power. It is a deeper, more philosophical journey where:
- Space Outposts and Ion Engines: Mankind should expand beyond the terrestrial, reaching for space outposts and harnessing commercialized ion engines for space travel. Ion engines, with their extended operational capacity, liberate us from the constraints of chemical fuel, enabling voyages that stretch both time and distance.
- Asteroid Mining and the Periodic Table: The quest for AGI is mirrored in our endeavor to mine asteroids, potentially revealing new elements that could add unknown dimensions to our periodic table. This is not merely resource extraction; it is an exploration that feeds into the self-iterative learning nature of AGI, fostering an intelligence that grows with each discovery.
- Nuclear Mass Energy and Helium-3: We look beyond silicon to the immense potential of nuclear mass energy. Helium-3, fused from deuterium in high-efficiency fusion generators, represents a future energy source that could power the next leaps in AGI development. Overcoming the scarcity of Helium-3 is a challenge we are poised to tackle, paving the way for a new era of energy abundance.
4. The Road Ahead: As we venture into the unknown, let's reimagine our approach. We seek an AI that lives a 'life', so to speak, with context vectors representing not just data points but the essence of existence itself. Imagine an AI with a library of experiences, including life choices and personal growth, akin to a human with 60 years of rich, varied living.
## Overview
The know-saraswati-cot dataset is a curated collection of examples designed to train and evaluate large language models (LLMs) on stream of consciousness (SoC), chain of thought (CoT), and logical reasoning. Named after Saraswati, the Hindu goddess of knowledge, wisdom, and learning, this dataset embodies the spirit of open-source knowledge sharing. It is an ode to democratizing knowledge, making it as accessible as the flowing waters of the mythical Saraswati river.
With an additional 30,000 code reasoning examples and various other deep reasoning scenarios, this dataset aims to imbue LLMs with a profound capacity for understanding, reasoning, and decision-making.
## Dataset Structure
Each entry in the know-saraswati-cot dataset comprises an instruction and an output field. Same old stuff, I like this format. The instruction provides a scenario or question that requires deep thinking, inviting the model to engage in a step-by-step reasoning process. The output then captures a reasoned response that aligns with the principles of logical deduction and stream of consciousness thought.
The know-saraswati-cot dataset has been meticulously crafted to reflect the intricacies of human-like reasoning. Here are some key specifications:
- Concise Reasoning: The majority of examples are concisely formulated within 500 tokens, fostering quick and efficient chains of thought (CoT). This simulates the succinct yet profound reasoning processes akin to human cognition.
- Multi-Turn Interactions: Some entries are designed as multi-turn interactions, allowing models to engage in a deeper and more dynamic discourse. This emulates real-world conversations where dialogues build upon previous exchanges.
- Extended Discussions: A subset of the dataset accommodates scenarios extending up to 2000 tokens for comprehensive reasoning tasks. These are tailored to model how a sapient being would thoughtfully respond to complex logic puzzles, as opposed to the often superficial and tangential responses generated by less sophisticated models.
- Each example is tailored to how an actual human would reason and respond, capturing the essence of human logic, emotion, and cognition. This approach aims to steer AI responses away from the undeveloped and extraneous output LLMs usually produce, guiding them towards relevance and depth that truly address the query at hand.
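The 500-token concise-reasoning budget described above can be checked roughly like this (a sketch only: I use a naive whitespace word count as a stand-in for true subword tokens, and the `instruction`/`output` field names follow the structure section of this card):

```python
def approx_tokens(example):
    # Naive proxy: whitespace-separated words, not real subword tokens.
    return len(example["instruction"].split()) + len(example["output"].split())

examples = [
    {"instruction": "Why is the sky blue?",
     "output": "Step by step: sunlight scatters off air molecules ..."},
]

# Keep only entries that fit the concise-reasoning budget.
concise = [ex for ex in examples if approx_tokens(ex) <= 500]
```

A model-specific tokenizer will give larger counts than this word-level proxy, so treat the threshold as approximate.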
## Inspiration
Inspired by the vision of making knowledge free and accessible for all, akin to the way Goddess Saraswati is revered for her gifts of learning and enlightenment, this dataset was synthesized using GPT-4. A special pranaam and blessings 🙏 from my brother, whose vision of a frugally enlightened world where knowledge is a common wealth has been the cornerstone of this endeavor.
## Use Cases
The know-saraswati-cot dataset can be utilized to:
1. Develop models that mimic the depth of human thought processes, drawing on its rich, nuanced examples of logical reasoning.
2. Investigate how AI models can not only reach conclusions but also articulate the reasoning behind their decisions, making AI workings more transparent.
3. Foster AI development that intersects with philosophy, literature, and engineering, encouraging holistic and multidimensional growth in AI capabilities.
4. Have fun.
ml6team/cnn_dailymail_nl | ml6team | 2022-10-22T14:03:06Z | 52 | 13 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail",
"language:nl",
"license:mit",
"region:us"
] | 2022-10-22T14:03:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nl
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# Dataset Card for Dutch CNN Dailymail Dataset
## Dataset Description
- **Repository:** [CNN / DailyMail Dataset NL repository](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
### Dataset Summary
The Dutch CNN / DailyMail Dataset is a machine-translated version of the English CNN / Dailymail dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail.
Most information about the dataset can be found on the [HuggingFace page](https://huggingface.co/datasets/cnn_dailymail) of the original English version.
These are the basic steps used to create this dataset (+ some chunking):
```python
from datasets import load_dataset

dataset = load_dataset("cnn_dailymail", "3.0.0")
```
And this is the HuggingFace translation pipeline:
```python
from transformers import pipeline

translator = pipeline(
    task="translation_en_to_nl",
    model="Helsinki-NLP/opus-mt-en-nl",
    tokenizer="Helsinki-NLP/opus-mt-en-nl",
)
```
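The chunking mentioned above could be sketched as follows (an illustration, not the exact script behind this dataset; the naive sentence splitting and the `max_words` budget are assumptions):

```python
def chunk_text(text, max_words=100):
    """Greedily pack sentences into chunks of at most max_words words."""
    sentences = [s.strip().rstrip(".") for s in text.split(". ") if s.strip()]
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and count + n > max_words:
            chunks.append(". ".join(current) + ".")
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks

# Translate each chunk and rejoin (requires the pipeline above):
# translated = " ".join(t["translation_text"] for t in translator(chunk_text(article)))
```

Chunking keeps each input comfortably within the translation model's maximum sequence length.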
### Data Fields
- `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL where the story was retrieved from
- `article`: a string containing the body of the news article
- `highlights`: a string containing the highlight of the article as written by the article author
### Data Splits
The Dutch CNN/DailyMail dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 287,113 |
| Validation | 13,368 |
| Test | 11,490 |
iluvvatar/NEREL | iluvvatar | 2023-03-30T13:37:20Z | 52 | 4 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | 2023-03-30T13:37:20Z | 2022-04-07T09:03:51.000Z | 2022-04-07T09:03:51 | ---
language:
- ru
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: NEREL
---
# NEREL dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
NEREL dataset (https://doi.org/10.48550/arXiv.2108.13112) is
a Russian dataset for named entity recognition and relation extraction.
NEREL is significantly larger than existing Russian datasets:
to date it contains 56K annotated named entities and 39K annotated relations.
Its important difference from previous datasets is annotation of nested named
entities, as well as relations within nested entities and at the discourse
level. NEREL can facilitate development of novel models that can extract
relations between nested named entities, as well as relations on both sentence
and document levels. NEREL also contains the annotation of events involving
named entities and their roles in the events.
You can see full entity types list in a subset "ent_types"
and full list of relation types in a subset "rel_types".
## Dataset Structure
There are three "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']`
you can download the list of entity types (
Dataset({features: ['type', 'link']})
) where "link" is a knowledge base name used in entity linking task.
Using
`load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']`
you can download the list of relation types (
Dataset({features: ['type', 'arg1', 'arg2']})
) where "arg1" and "arg2" are lists of entity types that can take part in such
"type" of relation. \<ENTITY> stands for any type.
Using
`load_dataset('MalakhovIlya/NEREL', 'data')` or `load_dataset('MalakhovIlya/NEREL')`
you can download the data itself: a DatasetDict with 3 splits: "train", "test" and "dev".
Each of them contains text documents with annotated entities, relations and
links.
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
"links" are used in entity linking task (see https://en.wikipedia.org/wiki/Entity_linking)
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is the position of the first symbol of the entity in the text,
`<stop>` is the position of the last symbol plus one.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
Each link is represented by a string of the following format:
`"<id>\tReference <ent_id> <link>\t<text>"`, where
`<id>` is a link id,
`<ent_id>` is an entity id,
`<link>` is a reference to knowledge base entity (example: "Wikidata:Q1879675" if link exists, else "Wikidata:NULL"),
`<text>` is a name of entity in knowledge base if link exists, else empty string.
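These line formats can be parsed with a few lines of Python. A minimal sketch (the helper names and the keys of the returned dicts are my own, following the field descriptions above):

```python
def parse_entity(line):
    # "<id>\t<type> <start> <stop>\t<text>"
    ent_id, span, text = line.split("\t")
    ent_type, start, stop = span.split()
    return {"id": ent_id, "type": ent_type,
            "start": int(start), "stop": int(stop), "text": text}

def parse_relation(line):
    # "<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"
    rel_id, rest = line.split("\t")
    rel_type, arg1, arg2 = rest.split()
    return {"id": rel_id, "type": rel_type,
            "arg1": arg1.split(":", 1)[1],
            "arg2": arg2.split(":", 1)[1]}
```

For example, `parse_entity("T1\tPERSON 0 4\tIvan")` recovers the type, the character span, and the surface text of one annotated entity.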
## Citation Information
```
@article{loukachevitch2021nerel,
  title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events},
  author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena},
  journal={arXiv preprint arXiv:2108.13112},
  year={2021}
}
```
chenghao/cuad_qa | chenghao | 2022-09-14T16:15:12Z | 52 | 0 | cuad | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:210... | 2022-09-14T16:15:12Z | 2022-09-14T00:01:15.000Z | 2022-09-14T00:01:15 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
- extractive-qa
paperswithcode_id: cuad
pretty_name: CUAD
train-eval-index:
- config: default
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: test
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: cuad
name: CUAD
---
# Dataset Card for CUAD
This is a modified version of original [CUAD](https://huggingface.co/datasets/cuad/blob/main/README.md) which trims the question to its label form.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org)
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [44],
"text": ['DISTRIBUTOR AGREEMENT']
},
"context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
"id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
"question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract",
"title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
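The `answer_start` offsets index directly into `context`. A quick sanity check on a toy record shaped like the instance above (the context here is shortened and padded for illustration, so that offset 44 lines up):

```python
# Toy record mirroring the structure of the cropped example above.
record = {
    "context": "EXHIBIT 10.6\n\n" + " " * 30 + "DISTRIBUTOR AGREEMENT",
    "answers": {"answer_start": [44], "text": ["DISTRIBUTOR AGREEMENT"]},
}

# Every answer string should match the context slice it points at.
for start, text in zip(record["answers"]["answer_start"],
                       record["answers"]["text"]):
    assert record["context"][start:start + len(text)] == text
```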
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
This dataset is split into train and test sets. The number of samples in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 22450 | 4182 |
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
| Type of Contract | # of Docs |
| --- | --- |
| Affiliate Agreement | 10 |
| Agency Agreement | 13 |
| Collaboration/Cooperation Agreement | 26 |
| Co-Branding Agreement | 22 |
| Consulting Agreement | 11 |
| Development Agreement | 29 |
| Distributor Agreement | 32 |
| Endorsement Agreement | 24 |
| Franchise Agreement | 15 |
| Hosting Agreement | 20 |
| IP Agreement | 17 |
| Joint Venture Agreement | 23 |
| License Agreement | 33 |
| Maintenance Agreement | 34 |
| Manufacturing Agreement | 17 |
| Marketing Agreement | 17 |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3 |
| Outsourcing Agreement | 18 |
| Promotion Agreement | 12 |
| Reseller Agreement | 12 |
| Service Agreement | 28 |
| Sponsorship Agreement | 31 |
| Supply Agreement | 18 |
| Strategic Alliance Agreement | 32 |
| Transportation Agreement | 13 |
| **Total** | **510** |
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category by category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review: attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
Answered in the section above.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”.
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
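The "<omitted>" convention described above can be validated mechanically: every retained segment of an annotation must occur in the contract text, in the original order. A small sketch (the function name and return convention are mine):

```python
def segments_in_order(annotation, contract):
    """Check that each segment of an <omitted>-style annotation
    appears in the contract text, in the original order."""
    pos = 0
    for seg in annotation.split(" <omitted> "):
        idx = contract.find(seg, pos)
        if idx < 0:
            return False
        pos = idx + len(seg)
    return True
```

For the "Termination for Convenience" example, `segments_in_order("Each Party may terminate this Agreement if <omitted> (c) z.", contract)` would confirm that the annotation is a faithful, ordered excerpt of the clause.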
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer.
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding the original CUAD dataset. | [
-0.412165105342865,
-0.5234143733978271,
0.24502213299274445,
0.13511425256729126,
-0.27570828795433044,
0.016023853793740273,
-0.0711870789527893,
-0.6775972247123718,
0.37042000889778137,
0.7004022598266602,
-0.12677226960659027,
-0.7508906126022339,
-0.4899674355983734,
0.10229738056659... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SALT-NLP/wikisql_VALUE | SALT-NLP | 2022-10-27T21:32:23Z | 52 | 0 | null | [
"region:us"
] | 2022-10-27T21:32:23Z | 2022-10-27T18:32:48.000Z | 2022-10-27T18:32:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bioinfer | bigbio | 2022-12-22T15:43:38Z | 52 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-2.0",
"region:us"
] | 2022-12-22T15:43:38Z | 2022-11-13T22:06:35.000Z | 2022-11-13T22:06:35 |
---
language:
- en
bigbio_language:
- English
license: cc-by-2.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_2p0
pretty_name: BioInfer
homepage: https://github.com/metalrt/ppi-dataset
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for BioInfer
## Dataset Description
- **Homepage:** https://github.com/metalrt/ppi-dataset
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE,NER
A corpus targeted at protein, gene, and RNA relationships which serves as a
resource for the development of information extraction systems and their
components such as parsers and domain analyzers. Currently, the corpus contains
1100 sentences from abstracts of biomedical research articles annotated for
relationships, named entities, as well as syntactic dependencies.
## Citation Information
```
@article{pyysalo2007bioinfer,
title = {BioInfer: a corpus for information extraction in the biomedical domain},
author = {
    Pyysalo, Sampo and Ginter, Filip and Heimonen, Juho and Bj{\"o}rne, Jari
    and Boberg, Jorma and J{\"a}rvinen, Jouni and Salakoski, Tapio
},
year = 2007,
journal = {BMC bioinformatics},
publisher = {BioMed Central},
volume = 8,
number = 1,
pages = {1--24}
}
```
| [
-0.17605681717395782,
-0.37665221095085144,
0.27065709233283997,
-0.0161895751953125,
-0.22708967328071594,
-0.17194300889968872,
0.048513952642679214,
-0.16195741295814514,
0.41259390115737915,
0.3527873456478119,
-0.4799744486808777,
-0.6700031161308289,
-0.543800413608551,
0.58316171169... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jzuluaga/atcosim_corpus | Jzuluaga | 2022-12-05T11:14:57Z | 52 | 0 | null | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2203.16822",
"region:us"
] | 2022-12-05T11:14:57Z | 2022-11-16T09:04:42.000Z | 2022-11-16T09:04:42 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 471628915.76
num_examples: 1901
- name: train
num_bytes: 1934757106.88
num_examples: 7638
download_size: 0
dataset_size: 2406386022.6400003
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
task_categories:
- automatic-speech-recognition
language:
- en
multilinguality:
- monolingual
---
# Dataset Card for ATCOSIM corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCOSIM homepage](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech](https://aclanthology.org/L08-1507/)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer ([description here](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)).
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- `id (string)`: a string identifier for each example, corresponding to its recording.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as `segment_end_time - segment_start_time`
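As a quick illustration of how the timing fields relate, the sketch below checks the documented `duration = segment_end_time - segment_start_time` relationship on hand-written records (the ids and values are made up, not actual corpus entries):

```python
def check_duration(example, tol=1e-6):
    """Return True when `duration` matches segment_end_time - segment_start_time."""
    return abs(example["duration"] -
               (example["segment_end_time"] - example["segment_start_time"])) < tol

# Illustrative records only; real examples come from the dataset itself.
examples = [
    {"id": "ex1", "segment_start_time": 0.0, "segment_end_time": 4.56, "duration": 4.56},
    {"id": "ex2", "segment_start_time": 0.0, "segment_end_time": 2.10, "duration": 2.10},
]
print(all(check_duration(e) for e in examples))  # -> True
```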
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [ATCOSIM corpus](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html) creators.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
  title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@inproceedings{hofbauer-etal-2008-atcosim,
title = "The {ATCOSIM} Corpus of Non-Prompted Clean Air Traffic Control Speech",
author = "Hofbauer, Konrad and
Petrik, Stefan and
Hering, Horst",
booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
month = may,
year = "2008",
address = "Marrakech, Morocco",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2008/pdf/545_paper.pdf",
}
```
| [
-0.3753352463245392,
-0.6044749617576599,
-0.014359538443386555,
0.1324390023946762,
-0.3214430510997772,
0.1316450983285904,
-0.55068039894104,
-0.5221704244613647,
0.19202226400375366,
0.39827895164489746,
-0.3643493056297302,
-0.5389918088912964,
-0.6530123353004456,
-0.1700685173273086... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kmyoo/stsb-en-tiny | kmyoo | 2022-12-02T13:55:05Z | 52 | 0 | null | [
"region:us"
] | 2022-12-02T13:55:05Z | 2022-12-02T13:54:37.000Z | 2022-12-02T13:54:37 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
its5Q/yandex-q | its5Q | 2023-04-02T16:48:29Z | 52 | 6 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language... | 2023-04-02T16:48:29Z | 2022-12-04T06:56:33.000Z | 2022-12-04T06:56:33 | ---
annotations_creators:
- crowdsourced
language:
- ru
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Yandex.Q
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-generation
- question-answering
task_ids:
- language-modeling
- open-domain-qa
---
# Dataset Card for Yandex.Q
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/its5Q/yandex-q
### Dataset Summary
This is a dataset of questions and answers scraped from [Yandex.Q](https://yandex.ru/q/). There are 836,810 answered questions out of a total of 1,297,670.
The full dataset that includes all metadata returned by Yandex.Q APIs and contains unanswered questions can be found in `full.jsonl.gz`
### Languages
The dataset is mostly in Russian, but other languages may also be present.
## Dataset Structure
### Data Fields
The dataset consists of 3 fields:
- `question` - question title (`string`)
- `description` - question description (`string` or `null`)
- `answer` - answer to the question (`string`)
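Since the full dump ships as `full.jsonl.gz` with one JSON object per line, answered questions can be filtered out of it along these lines. This is a minimal sketch run against a tiny synthetic file; the assumption that unanswered rows carry a null `answer` follows the field description above:

```python
import gzip
import json
import os
import tempfile

# Synthetic two-row dump standing in for full.jsonl.gz.
rows = [
    {"question": "Q1?", "description": None, "answer": "A1"},
    {"question": "Q2?", "description": "more context", "answer": None},
]
path = os.path.join(tempfile.mkdtemp(), "sample.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Read it back and keep only records that actually have an answer.
with gzip.open(path, "rt", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
answered = [r for r in records if r["answer"] is not None]
print(len(answered))  # -> 1
```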
### Data Splits
All 836,810 examples are in the train split; there is no validation split.
## Dataset Creation
The data was scraped through some "hidden" APIs using several scripts, located in [my GitHub repository](https://github.com/its5Q/yandex-q)
## Additional Information
### Dataset Curators
- https://github.com/its5Q
| [
-0.6437409520149231,
-0.45118555426597595,
0.5914639234542847,
-0.04507571831345558,
-0.238677978515625,
-0.06823823601007462,
-0.017309105023741722,
-0.1569332480430603,
0.5863985419273376,
0.43319985270500183,
-1.079350471496582,
-0.8965957760810852,
-0.21204717457294464,
0.1331723183393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ipipan/polqa | ipipan | 2023-09-09T13:37:44Z | 52 | 3 | null | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:open-domain-qa",
"task_ids:document-retrieval",
"task_ids:abstractive-qa",
"annotations_creators:expert-generated",
"size_categories:10K<n<100K",
"language:pl",
"license:cc-by-... | 2023-09-09T13:37:44Z | 2022-12-17T15:03:58.000Z | 2022-12-17T15:03:58 | ---
task_categories:
- question-answering
- text-retrieval
- text2text-generation
task_ids:
- open-domain-qa
- document-retrieval
- abstractive-qa
language:
- pl
pretty_name: PolQA
size_categories:
- 10K<n<100K
annotations_creators:
- expert-generated
license: cc-by-sa-4.0
---
# Dataset Card for PolQA Dataset
## Dataset Description
- **Paper:** [Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies](https://arxiv.org/abs/2212.08897)
- **Point of Contact:** [Piotr Rybak](mailto:piotr.cezary.rybak@gmail.com)
### Dataset Summary
PolQA is the first Polish dataset for open-domain question answering. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7 million candidate passages. The dataset can be used to train both a passage retriever and an abstractive reader.
### Supported Tasks and Leaderboards
- `open-domain-qa`: The dataset can be used to train a model for open-domain question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
- `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by [top-k retrieval accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) or [NDCG](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ndcg_score.html).
- `abstractive-qa`: The dataset can be used to train a model for abstractive question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
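For the retrieval task, top-k accuracy can also be computed directly without scikit-learn: it is the fraction of questions for which at least one relevant passage appears among the first k retrieved ids. A rough sketch with made-up question and passage ids:

```python
def top_k_accuracy(retrieved, relevant, k):
    """retrieved: question -> ranked list of passage ids;
    relevant: question -> set of relevant passage ids."""
    hits = sum(1 for q, ids in retrieved.items()
               if any(pid in relevant[q] for pid in ids[:k]))
    return hits / len(retrieved)

# Toy example: q1 is answered within the top 2, q2 is not.
retrieved = {"q1": ["p9", "p1", "p4"], "q2": ["p7", "p8", "p2"]}
relevant = {"q1": {"p1"}, "q2": {"p5"}}
print(top_k_accuracy(retrieved, relevant, k=2))  # -> 0.5
```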
### Languages
The text is in Polish, as spoken by the host of the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show (questions) and [Polish Wikipedia](https://pl.wikipedia.org/) editors (passages). The BCP-47 code for Polish is pl-PL.
## Dataset Structure
### Data Instances
The main part of the dataset consists of manually annotated question-passage pairs. For each instance, there is a `question`, a passage (`passage_id`, `passage_title`, `passage_text`), and a boolean indicator if the passage is `relevant` for the given question (i.e. does it contain the answers).
For each `question` there is a list of possible `answers` formulated in natural language, the way a Polish
speaker would answer the question. This means that the answers might
contain prepositions, be inflected, and contain punctuation. In some
cases, the answer might have multiple correct variants, e.g. numbers
are written as numerals and words, synonyms, abbreviations and their
expansions.
Additionally, we provide a classification of each question-answer pair based on the `question_formulation`, the `question_type`, and the `entity_type/entity_subtype`, according to the taxonomy proposed by
[Maciej Ogrodniczuk and Piotr Przybyła (2021)](http://nlp.ipipan.waw.pl/Bib/ogr:prz:21:poleval.pdf).
```
{
'question_id': 6,
'passage_title': 'Mumbaj',
'passage_text': 'Mumbaj lub Bombaj (marathi मुंबई, trb.: Mumbaj; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim.',
'passage_wiki': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.',
'passage_id': '42609-0',
'duplicate': False,
'question': 'W którym państwie leży Bombaj?',
'relevant': True,
'annotated_by': 'Igor',
'answers': "['w Indiach', 'Indie']",
'question_formulation': 'QUESTION',
'question_type': 'SINGLE ENTITY',
'entity_type': 'NAMED',
'entity_subtype': 'COUNTRY',
'split': 'train',
'passage_source': 'human'
}
```
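Note that `answers` is stored as the *string* representation of a Python list (see the instance above), so it needs to be decoded before use, for example with `ast.literal_eval`. A minimal sketch reusing just the relevant fields of that instance:

```python
import ast

example = {
    "question": "W którym państwie leży Bombaj?",
    "answers": "['w Indiach', 'Indie']",  # stored as a string, not a list
}

# Safely evaluate the string into an actual Python list.
answers = ast.literal_eval(example["answers"])
print(answers)  # -> ['w Indiach', 'Indie']
```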
The second part of the dataset is a corpus of Polish Wikipedia (March 2022 snapshot) passages. The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
```
{
'id': '42609-0',
'title': 'Mumbaj',
'text': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.'
}
```
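The 500-character splitting rule described above can be sketched roughly as follows. The actual WikiExtractor-based pipeline may differ in details; this only illustrates the idea of breaking at paragraph ends while capping passage length:

```python
def split_into_passages(article: str, max_chars: int = 500):
    """Greedily merge paragraphs into passages of at most max_chars characters."""
    passages, current = [], ""
    for paragraph in article.split("\n"):
        candidate = (current + " " + paragraph).strip()
        if current and len(candidate) > max_chars:
            passages.append(current)      # close the passage before it overflows
            current = paragraph.strip()
        else:
            current = candidate
    if current:
        passages.append(current)
    return passages

# Synthetic article: two long paragraphs plus a short one.
article = ("A" * 300) + "\n" + ("B" * 300) + "\n" + ("C" * 100)
print([len(p) for p in split_into_passages(article)])  # -> [300, 401]
```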
### Data Fields
Question-passage pairs:
- `question_id`: an integer id of the question
- `passage_title`: a string containing the title of the Wikipedia article
- `passage_text`: a string containing the passage text as extracted by the human annotator
- `passage_wiki`: a string containing the passage text as it can be found in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `passage_id`: a string containing the id of the passage from the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `duplicate`: a boolean flag representing whether a question-passage pair is duplicated in the dataset. This occurs when the same passage was found in multiple passage sources.
- `question`: a string containing the question
- `relevant`: a boolean flag representing whether a passage is relevant to the question (i.e. does it contain the answers)
- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
- `answers`: a string containing a list of possible short answers to the question
- `question_formulation`: a string containing a kind of expression used to request information. One of the following:
- `QUESTION`, e.g. *What is the name of the first letter of the Greek alphabet?*
- `COMMAND`, e.g. *Expand the abbreviation ’CIA’.*
  - `COMPOUND`, e.g. *This French writer, born in the 19th century, is considered a pioneer of sci-fi literature. What is his name?*
- `question_type`: a string indicating what type of information is sought by the question. One of the following:
- `SINGLE ENTITY`, e.g. *Who is the hero in the Tomb Rider video game series?*
- `MULTIPLE ENTITIES`, e.g. *Which two seas are linked by the Corinth Canal?*
- `ENTITY CHOICE`, e.g. *Is "Sombrero" a type of dance, a hat, or a dish?*
- `YES/NO`, e.g. *When the term of office of the Polish Sejm is terminated, does it apply to the Senate as well?*
- `OTHER NAME`, e.g. *What was the nickname of Louis I, the King of the Franks?*
- `GAP FILLING`, e.g. *Finish the proverb: "If you fly with the crows... ".*
- `entity_type`: a string containing a type of the sought entity. One of the following: `NAMED`, `UNNAMED`, or `YES/NO`.
- `entity_subtype`: a string containing a subtype of the sought entity. Can take one of the 34 different values.
- `split`: a string containing the split of the dataset. One of the following: `train`, `valid`, or `test`.
- `passage_source`: a string containing the source of the passage. One of the following:
- `human`: the passage was proposed by a human annotator using any
internal (i.e. Wikipedia search) or external (e.g. Google) search engines and any keywords or queries they considered useful
- `hard-negatives`: the passage was proposed using a neural retriever trained on the passages found by the human annotators
- `zero-shot`: the passage was proposed by the BM25 retriever and re-ranked using [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2)
Corpus of passages:
- `id`: a string representing the Wikipedia article id and the index of extracted passage. Matches the `passage_id` from the main part of the dataset.
- `title`: a string containing the title of the Wikipedia article. Matches the `passage_title` from the main part of the dataset.
- `text`: a string containing the passage text. Matches the `passage_wiki` from the main part of the dataset.
### Data Splits
The questions are assigned into one of three splits: `train`, `validation`, and `test`. The `validation` and `test` questions are randomly sampled from the `test-B` dataset from the [PolEval 2021](https://2021.poleval.pl/tasks/task4) competition.
| | # questions | # positive passages | # negative passages |
|------------|------------:|--------------------:|--------------------:|
| train | 5,000 | 27,131 | 34,904 |
| validation | 1,000 | 5,839 | 6,927 |
| test | 1,000 | 5,938 | 6,786 |
## Dataset Creation
### Curation Rationale
The PolQA dataset was created to support and promote the research in the open-domain question answering for Polish. It also serves as a benchmark to evaluate OpenQA systems.
### Source Data
#### Initial Data Collection and Normalization
The majority of questions come from two existing resources, the
6,000 questions from the [PolEval 2021 shared task on QA](https://2021.poleval.pl/tasks/task4) and additional 1,000 questions gathered by one of the shared
task [participants](http://poleval.pl/files/poleval2021.pdf#page=151). Originally, the questions come from collections associated with TV shows, both officially published and gathered online by their fans, as well as questions used in actual quiz competitions, on TV or online.
The evidence passages come from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
#### Who are the source language producers?
The questions come from various sources and their authors are unknown but are mostly analogous (or even identical) to questions asked during the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show.
The passages were written by the editors of the Polish Wikipedia.
### Annotations
#### Annotation process
Two approaches were used to annotate the question-passage pairs. Each of them consists of two phases: the retrieval of candidate passages and the manual verification of their relevance.
In the first approach, we asked annotators to use internal (i.e. Wikipedia search) or external (e.g. Google) search engines to find up to five relevant passages using any keywords or queries they consider useful (`passage_source="human"`). Based on those passages, we trained the neural retriever to extend the number of relevant passages, as well as to retrieve the hard negatives (`passage_source="hard-negatives"`).
In the second approach, the passage candidates were proposed by the BM25 retriever and re-ranked using [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) (`passage_source="zero-shot"`).
In both cases, all proposed question-passage pairs were manually verified by the annotators.
#### Who are the annotators?
The annotation team consisted of 16 annotators, all native Polish
speakers, most of them having linguistic backgrounds and previous
experience as an annotator.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was created to promote the research in the open-domain question answering for Polish and allow developing question answering systems.
### Discussion of Biases
The passages proposed by the `hard-negative` and `zero-shot` methods are bound to be easier to retrieve by retrievers since they were proposed by such. To mitigate this bias, we include the passages found by the human annotators in an unconstrained way (`passage_source="human"`). We hypothesize that it will result in more unbiased and diverse examples. Moreover, we asked the annotators to find not one but up to five passages, preferably from different articles to even further increase passage diversity.
### Other Known Limitations
The PolQA dataset focuses on trivia questions which might limit its usefulness in real-world applications since neural retrievers generalize poorly to other domains.
## Additional Information
### Dataset Curators
The PolQA dataset was developed by Piotr Rybak, Piotr Przybyła, and Maciej Ogrodniczuk from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@misc{rybak2022improving,
title={Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies},
author={Piotr Rybak and Piotr Przybyła and Maciej Ogrodniczuk},
year={2022},
eprint={2212.08897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.6944676041603088,
-0.9666171669960022,
0.4488867223262787,
0.08971996605396271,
-0.3810264766216278,
-0.09589977562427521,
-0.2282390594482422,
-0.3356172442436218,
0.4996907413005829,
0.5134751200675964,
-0.687333881855011,
-0.6358134746551514,
-0.35864052176475525,
0.5260313153266907,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/b2w-reviews01 | ruanchaves | 2023-01-20T18:22:37Z | 52 | 9 | null | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:intent-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datase... | 2023-01-20T18:22:37Z | 2023-01-19T07:55:43.000Z | 2023-01-19T07:55:43 | ---
annotations_creators:
- found
language:
- pt
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: B2W-Reviews01
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- reviews
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- intent-classification
- topic-classification
---
# Dataset Card for B2W-Reviews01
## Dataset Description
- **Repository:** https://github.com/americanas-tech/b2w-reviews01
- **Paper:** http://comissoes.sbc.org.br/ce-pln/stil2019/proceedings-stil-2019-Final-Publicacao.pdf
- **Point of Contact:** Livy Real
### Dataset Summary
B2W-Reviews01 is an open corpus of product reviews. It contains more than 130k e-commerce customer reviews, collected from the Americanas.com website between January and May, 2018. B2W-Reviews01 offers rich information about the reviewer profile, such as gender, age, and geographical location. The corpus also has two different review rates:
* the usual 5-point scale rate, represented by stars in most e-commerce websites,
* a "recommend to a friend" label, a "yes or no" question representing the willingness of the customer to recommend the product to someone else.
### Supported Tasks and Leaderboards
* Sentiment Analysis
* Topic Modeling
### Languages
* Portuguese
## Dataset Structure
### Data Instances
```
{'submission_date': '2018-01-02 06:23:22',
'reviewer_id': '6adc7901926fc1697d34181fbd88895976b4f3f31f0102d90217d248a1fad156',
'product_id': '123911277',
'product_name': 'Triciclo Gangorra Belfix Cabeça Cachorro Rosa',
'product_brand': 'belfix',
'site_category_lv1': 'Brinquedos',
'site_category_lv2': 'Mini Veículos',
'review_title': 'O produto não foi entregue',
'overall_rating': 1,
'recommend_to_a_friend': 'Yes',
'review_text': 'Incrível o descaso com o consumidor. O produto não chegou, apesar de já ter sido pago. Não recebo qualquer informação sobre onde se encontra o produto, ou qualquer compensação do vendedor. Não recomendo.',
'reviewer_birth_year': 1981,
'reviewer_gender': 'M',
'reviewer_state': 'RJ'}
```
### Data Fields
* **submission_date**: the date and time when the review was submitted. `"%Y-%m-%d %H:%M:%S"`.
* **reviewer_id**: a unique identifier for the reviewer.
* **product_id**: a unique identifier for the product being reviewed.
* **product_name**: the name of the product being reviewed.
* **product_brand**: the brand of the product being reviewed.
* **site_category_lv1**: the highest level category for the product on the site where the review is being submitted.
* **site_category_lv2**: the second level category for the product on the site where the review is being submitted.
* **review_title**: the title of the review.
* **overall_rating**: the overall star rating given by the reviewer on a scale of 1 to 5.
* **recommend_to_a_friend**: whether or not the reviewer would recommend the product to a friend (Yes/No).
* **review_text**: the full text of the review.
* **reviewer_birth_year**: the birth year of the reviewer.
* **reviewer_gender**: the gender of the reviewer (F/M).
* **reviewer_state**: the Brazilian state of the reviewer (e.g. RJ).
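For the `sentiment-analysis` task, a common convention (a modeling choice external to the corpus) is to binarize `overall_rating`. Note that, as the instance above shows, a 1-star rating can still come with a "Yes" recommendation, so the two supervision signals do not always agree:

```python
def rating_to_sentiment(overall_rating: int):
    """Map the 5-point star rating to a binary sentiment label.
    Ratings of 3 are treated as neutral and often discarded."""
    if overall_rating >= 4:
        return "positive"
    if overall_rating <= 2:
        return "negative"
    return None

# Fields copied from the example instance above.
example = {"overall_rating": 1, "recommend_to_a_friend": "Yes"}
print(rating_to_sentiment(example["overall_rating"]))  # -> negative
```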
### Data Splits
| name |train|
|---------|----:|
|b2w-reviews01|132373|
### Citation Information
```
@inproceedings{real2019b2w,
title={B2W-reviews01: an open product reviews corpus},
author={Real, Livy and Oshiro, Marcio and Mafra, Alexandre},
booktitle={STIL-Symposium in Information and Human Language Technology},
year={2019}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | [
-0.5670034289360046,
-0.5348218679428101,
0.20078663527965546,
0.6376871466636658,
-0.22765277326107025,
-0.22513209283351898,
-0.18706421554088593,
-0.666001558303833,
0.39143460988998413,
0.5084403157234192,
-0.6146557927131653,
-0.8784547448158264,
-0.44932883977890015,
0.23319740593433... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clip-benchmark/wds_vtab-pets | clip-benchmark | 2023-01-20T07:21:42Z | 52 | 0 | null | [
"region:us"
] | 2023-01-20T07:21:42Z | 2023-01-20T07:21:05.000Z | 2023-01-20T07:21:05 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wujohns/gpt2-base-learn | wujohns | 2023-03-26T10:54:40Z | 52 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-03-26T10:54:40Z | 2023-03-16T04:28:01.000Z | 2023-03-16T04:28:01 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 9576485.5665
num_examples: 6033
- name: test
num_bytes: 232838.225
num_examples: 151
download_size: 4622568
dataset_size: 9809323.7915
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/lex-transcripts | jamescalam | 2023-04-06T07:49:58Z | 52 | 7 | null | [
"region:us"
] | 2023-04-06T07:49:58Z | 2023-03-28T08:49:00.000Z | 2023-03-28T08:49:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
J4YL19/biored_tokenized | J4YL19 | 2023-04-06T22:33:57Z | 52 | 0 | null | [
"region:us"
] | 2023-04-06T22:33:57Z | 2023-04-06T22:33:48.000Z | 2023-04-06T22:33:48 | ---
dataset_info:
features:
- name: pmid
dtype: string
- name: passage
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 2259680
num_examples: 387
- name: val
num_bytes: 604670
num_examples: 98
- name: test
num_bytes: 576610
num_examples: 97
download_size: 1083246
dataset_size: 3440960
---
# Dataset Card for "biored_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4234323799610138,
-0.33930569887161255,
0.18166787922382355,
0.10032819956541061,
-0.32850977778434753,
0.35155096650123596,
0.3352162837982178,
-0.2499433308839798,
1.1040681600570679,
0.44377797842025757,
-0.7525020837783813,
-0.8606610298156738,
-0.610805869102478,
0.0269114132970571... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hpprc/jsick | hpprc | 2023-04-11T06:18:09Z | 52 | 4 | null | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"so... | 2023-04-11T06:18:09Z | 2023-04-08T16:02:06.000Z | 2023-04-08T16:02:06 | ---
annotations_creators:
- expert-generated
language:
- ja
- en
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- translation
pretty_name: JSICK
size_categories:
- 10K<n<100K
source_datasets:
- extended|sick
tags:
- semantic-textual-similarity
- sts
task_categories:
- sentence-similarity
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
---
# Dataset Card for JSICK
## Table of Contents
- [Dataset Card for JSICK](#dataset-card-for-jsick)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japanese-sentences-involving-compositional-knowledge-jsick-dataset)
- [JSICK-stress Test set](#jsick-stress-test-set)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [base](#base)
- [stress](#stress)
- [Data Fields](#data-fields)
- [base](#base-1)
- [stress](#stress-1)
- [Data Splits](#data-splits)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JSICK
- **Repository:** https://github.com/verypluming/JSICK
- **Paper:** https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual
- **Paper:** https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_pdf/-char/ja
### Dataset Summary
From official [GitHub](https://github.com/verypluming/JSICK):
#### Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.
JSICK is a Japanese NLI and STS dataset created by manually translating the English dataset [SICK (Marelli et al., 2014)](https://aclanthology.org/L14-1314/) into Japanese.
We hope that our dataset will be useful in research for realizing more advanced models that are capable of appropriately performing multilingual compositional inference.
#### JSICK-stress Test set
The JSICK-stress test set is a dataset to investigate whether models capture word order and case particles in Japanese.
The JSICK-stress test set is provided by transforming syntactic structures of sentence pairs in JSICK, where we analyze whether models are attentive to word order and case particles to predict entailment labels and similarity scores.
The JSICK test set contains 1666, 797, and 1006 sentence pairs (A, B) whose premise sentences A (the column `sentence_A_Ja_origin`) include the basic word order involving
ga-o (nominative-accusative), ga-ni (nominative-dative), and ga-de (nominative-instrumental/locative) relations, respectively.
We provide the JSICK-stress test set by transforming syntactic structures of these pairs by the following three ways:
- `scrum_ga_o`: a scrambled pair, where the word order of premise sentences A is scrambled into o-ga, ni-ga, and de-ga order, respectively.
- `ex_ga_o`: a rephrased pair, where the only case particles (ga, o, ni, de) in the premise A are swapped
- `del_ga_o`: a rephrased pair, where the only case particles (ga, o, ni) in the premise A are deleted
### Languages
The language data in JSICK is in Japanese and English.
## Dataset Structure
### Data Instances
The dataset can be loaded as follows; a specific configuration is selected by passing its name:
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4500
# })
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'premise_en', 'hypothesis_en', 'label_en', 'score_en', 'corr_entailment_labelAB_En', 'corr_entailment_labelBA_En', 'image_ID', 'original_caption', 'semtag_short', 'semtag_long'],
# num_rows: 4927
# })
# })
dataset: ds.DatasetDict = ds.load_dataset("hpprc/jsick", name="stress")
print(dataset)
# DatasetDict({
# test: Dataset({
# features: ['id', 'premise', 'hypothesis', 'label', 'score', 'sentence_A_Ja_origin', 'entailment_label_origin', 'relatedness_score_Ja_origin', 'rephrase_type', 'case_particles'],
# num_rows: 900
# })
# })
```
#### base
An example looks as follows:
```json
{
'id': 1,
'premise': '子供たちのグループが庭で遊んでいて、後ろの方には年を取った男性が立っている',
'hypothesis': '庭にいる男の子たちのグループが遊んでいて、男性が後ろの方に立っている',
'label': 1, // (neutral)
'score': 3.700000047683716,
'premise_en': 'A group of kids is playing in a yard and an old man is standing in the background',
'hypothesis_en': 'A group of boys in a yard is playing and a man is standing in the background',
'label_en': 1, // (neutral)
'score_en': 4.5,
'corr_entailment_labelAB_En': 'nan',
'corr_entailment_labelBA_En': 'nan',
'image_ID': '3155657768_b83a7831e5.jpg',
'original_caption': 'A group of children playing in a yard , a man in the background .',
'semtag_short': 'nan',
'semtag_long': 'nan',
}
```
#### stress
An example looks as follows:
```json
{
'id': '5818_de_d',
'premise': '女性火の近くダンスをしている',
'hypothesis': '火の近くでダンスをしている女性は一人もいない',
'label': 2, // (contradiction)
'score': 4.0,
'sentence_A_Ja_origin': '女性が火の近くでダンスをしている',
'entailment_label_origin': 2,
'relatedness_score_Ja_origin': 3.700000047683716,
'rephrase_type': 'd',
'case_particles': 'de'
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.
| Name | Description |
| -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| id | The ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese. |
| score | The relatedness score in the range [1-5] in Japanese. |
| premise_en | The first sentence in English. |
| hypothesis_en | The second sentence in English. |
| label_en | The original entailment label in English. |
| score_en | The original relatedness score in the range [1-5] in English. |
| semtag_short | The linguistic phenomena tags in Japanese. |
| semtag_long | The details of linguistic phenomena tags in Japanese. |
| image_ID | The original image in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| original_caption | The original caption in [8K ImageFlickr dataset](https://www.kaggle.com/datasets/adityajn105/flickr8k). |
| corr_entailment_labelAB_En | The corrected entailment label from A to B in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
| corr_entailment_labelBA_En | The corrected entailment label from B to A in English by [(Karouli et al., 2017)](http://vcvpaiva.github.io/includes/pubs/2017-iwcs.pdf). |
#### stress
| Name | Description |
| --------------------------- | ------------------------------------------------------------------------------------------------- |
| id | Ids (the same with original SICK). |
| premise | The first sentence in Japanese. |
| hypothesis | The second sentence in Japanese. |
| label | The entailment label in Japanese |
| score | The relatedness score in the range [1-5] in Japanese. |
| sentence_A_Ja_origin | The original premise sentences A from the JSICK test set. |
| entailment_label_origin | The original entailment labels. |
| relatedness_score_Ja_origin | The original relatedness scores. |
| rephrase_type | The type of transformation applied to the syntactic structures of the sentence pairs. |
| case_particles | The grammatical particles in Japanese that indicate the function or role of a noun in a sentence. |
### Data Splits
| name | train | validation | test |
| --------------- | ----: | ---------: | ----: |
| base | 4,500 | | 4,927 |
| original | 4,500 | | 4,927 |
| stress | | | 900 |
| stress-original | | | 900 |
### Annotations
To annotate the JSICK dataset, they used the crowdsourcing platform "Lancers" to re-annotate entailment labels and similarity scores for JSICK.
They had six native Japanese speakers as annotators, who were randomly selected from the platform.
The annotators were asked to fully understand the guidelines and provide the same labels as gold labels for ten test questions.
For entailment labels, they adopted annotations that were agreed upon by a majority vote as gold labels and checked whether the majority judgment vote was semantically valid for each example.
For similarity scores, they used the average of the annotation results as gold scores.
The raw annotations with the JSICK dataset are [publicly available](https://github.com/verypluming/JSICK/blob/main/jsick/jsick-all-annotations.tsv).
The average annotation time was 1 minute per pair, and Krippendorff's alpha for the entailment labels was 0.65.
## Additional Information
- [verypluming/JSICK](https://github.com/verypluming/JSICK)
- [Compositional Evaluation on Japanese Textual Entailment and Similarity](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00518/113850/Compositional-Evaluation-on-Japanese-Textual)
- [JSICK: 日本語構成的推論・類似度データセットの構築](https://www.jstage.jst.go.jp/article/pjsai/JSAI2021/0/JSAI2021_4J3GS6f02/_article/-char/ja)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@article{yanaka-mineshima-2022-compositional,
title = "Compositional Evaluation on {J}apanese Textual Entailment and Similarity",
author = "Yanaka, Hitomi and
Mineshima, Koji",
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.73",
doi = "10.1162/tacl_a_00518",
pages = "1266--1284",
}
@article{谷中 瞳2021,
title={JSICK: 日本語構成的推論・類似度データセットの構築},
author={谷中 瞳 and 峯島 宏次},
journal={人工知能学会全国大会論文集},
volume={JSAI2021},
number={ },
pages={4J3GS6f02-4J3GS6f02},
year={2021},
doi={10.11517/pjsai.JSAI2021.0_4J3GS6f02}
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset. | [
-0.3731032907962799,
-0.7991659045219421,
0.3535902500152588,
0.2893086075782776,
-0.2662632167339325,
-0.08225134760141373,
-0.39266982674598694,
-0.28282830119132996,
0.4980606734752655,
0.3724753260612488,
-0.653293788433075,
-0.817345380783081,
-0.5044299960136414,
0.40017953515052795,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
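The particle rephrasings of the JSICK-stress set above (e.g. `ex_ga_o`, where the case particles in the premise are swapped) can be illustrated with a toy sketch. This is not the authors' transformation code — just a whitespace-tokenized, romanized toy example of the particle-swapping idea:

```python
# Toy illustration of the "ex_ga_o" rephrasing idea from the JSICK-stress
# set: swap the case particles (here romanized "ga" and "o") wherever they
# occur in a simple tokenized sentence. NOT the authors' code, just a sketch.
def swap_particles(tokens, p1="ga", p2="o"):
    """Return a copy of `tokens` with every `p1` and `p2` exchanged."""
    swap = {p1: p2, p2: p1}
    return [swap.get(t, t) for t in tokens]

# "Taro-GA reads a book-O"  ->  "Taro-O reads a book-GA"
original = ["Taro", "ga", "hon", "o", "yomu"]
rephrased = swap_particles(original)
```

Applying the swap twice restores the original sentence, which is a handy sanity check for such a transformation.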
bigcode/santacoder-fim-task | bigcode | 2023-04-28T11:12:16Z | 52 | 1 | null | [
"license:openrail",
"code",
"arxiv:2301.03988",
"region:us"
] | 2023-04-28T11:12:16Z | 2023-04-28T11:07:59.000Z | 2023-04-28T11:07:59 | ---
dataset_info:
features:
- name: name
dtype: string
- name: language
dtype: string
- name: prompt
dtype: string
- name: suffix
dtype: string
- name: canonical_solution
dtype: string
- name: tests
dtype: string
splits:
- name: train
num_bytes: 8627440
num_examples: 4792
download_size: 1918113
dataset_size: 8627440
license: openrail
tags:
- code
---
# Dataset Card for "santacoder-fim-task"
This is a dataset of prompts and solutions to the fill-in-the-middle (FIM) task
presented in the [SantaCoder] paper.
This dataset was generated using [this notebook](https://github.com/nuprl/MultiPL-E/blob/main/fill_in_the_middle/dataset_builder.ipynb).
[SantaCoder]: https://arxiv.org/abs/2301.03988 | [
-0.8130285143852234,
-0.42844393849372864,
0.11108260601758957,
0.0904945656657219,
-0.2762231230735779,
0.2210354208946228,
0.020201250910758972,
0.1608414351940155,
0.3790292739868164,
0.6022689938545227,
-1.165657639503479,
-0.4180794358253479,
-0.3683014214038849,
0.13000352680683136,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
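Each row of santacoder-fim-task pairs a `prompt` (code before the hole), a `suffix` (code after it), a `canonical_solution`, and `tests`; evaluating a model completion amounts to splicing the completion between prompt and suffix and running the tests. A minimal sketch of that splicing, on an invented toy row rather than a real one (this is not the linked notebook's actual harness):

```python
# Toy sketch of fill-in-the-middle evaluation: splice a completion between
# the prompt and the suffix, then execute the resulting program against the
# row's tests. Mirrors the dataset's prompt/suffix/canonical_solution/tests
# columns, but the row below is invented for illustration.
def assemble(prompt: str, completion: str, suffix: str) -> str:
    return prompt + completion + suffix

row = {
    "prompt": "def add(a, b):\n    return ",
    "suffix": "\n",
    "canonical_solution": "a + b",
    "tests": "assert add(2, 3) == 5",
}

program = assemble(row["prompt"], row["canonical_solution"], row["suffix"])
namespace = {}
exec(program, namespace)       # define add()
exec(row["tests"], namespace)  # run the row's tests; raises on failure
passed = True
```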
clarin-knext/trec-covid-pl-qrels | clarin-knext | 2023-06-07T08:11:44Z | 52 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:11:44Z | 2023-06-06T22:38:14.000Z | 2023-06-06T22:38:14 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920734167099,
-0.9029767513275146,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.49629199504852295,
-0.0189602542668581,
0.41122621297836304,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175425052643,
-0.048304721713066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lca0503/GPTspeech_encodec_v2 | lca0503 | 2023-06-15T06:54:51Z | 52 | 0 | null | [
"region:us"
] | 2023-06-15T06:54:51Z | 2023-06-14T16:48:10.000Z | 2023-06-14T16:48:10 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 42732349968
num_examples: 704563
- name: validation
num_bytes: 706650258
num_examples: 12855
- name: test
num_bytes: 700741253
num_examples: 12463
download_size: 4503561741
dataset_size: 44139741479
---
# Dataset Card for "GPTspeech_encodec_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3843342959880829,
-0.1863468438386917,
0.21487128734588623,
0.17693087458610535,
-0.30120524764060974,
-0.10031544417142868,
0.29917463660240173,
-0.14081551134586334,
0.6858844757080078,
0.39140620827674866,
-0.6323245763778687,
-0.669802725315094,
-0.9206419587135315,
-0.1901996284723... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MAPS-research/GEMRec-Metadata | MAPS-research | 2023-08-07T04:42:05Z | 52 | 0 | null | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"art",
"stable diffusion",
"diffusers",
"region:us"
] | 2023-08-07T04:42:05Z | 2023-06-30T06:40:35.000Z | 2023-06-30T06:40:35 | ---
dataset_info:
features:
- name: image_id
dtype: string
- name: tag
dtype: string
- name: model_id
dtype: int64
- name: modelVersion_id
dtype: int64
- name: prompt_id
dtype: int64
- name: size
dtype: string
- name: seed
dtype: int64
- name: prompt
dtype: string
- name: negativePrompt
dtype: string
- name: cfgScale
dtype: int64
- name: sampler
dtype: string
- name: note
dtype: string
- name: nsfw_score
dtype: float64
- name: mcos_score
dtype: float64
- name: clip_score
dtype: float64
- name: norm_clip
dtype: float64
- name: norm_mcos
dtype: float64
- name: norm_nsfw
dtype: float64
- name: norm_pop
dtype: float64
splits:
- name: train
num_bytes: 7955010
num_examples: 18000
download_size: 0
dataset_size: 7955010
license: openrail
task_categories:
- text-to-image
language:
- en
tags:
- art
- stable diffusion
- diffusers
size_categories:
- 10K<n<100K
---
# GEMRec-18k -- Metadata
This is the official image metadata dataset for the paper [Towards Personalized Prompt-Model Retrieval for Generative Recommendation](https://github.com/MAPS-research/GEMRec).
## Dataset Intro
`GEMRec-18K` is a prompt-model interaction dataset with 18K images generated by 200 publicly-available generative models paired with a diverse set of 90 textual prompts. We randomly sampled a subset of 197 models from the full set of models (all finetuned from Stable Diffusion) on [Civitai](https://civitai.com/) according to the popularity distribution (i.e., download counts) and added 3 original Stable Diffusion checkpoints (v1.4, v1.5, v2.1) from HuggingFace. All the model checkpoints have been converted to the [Diffusers](https://huggingface.co/docs/diffusers/index) format. The textual prompts were drawn from three sources: 60 prompts were sampled from [Parti Prompts](https://github.com/google-research/parti); 10 prompts were sampled from [Civitai](https://civitai.com/) by popularity; we also handcrafted 10 prompts following the prompting guide from [DreamStudio](https://beta.dreamstudio.ai/prompt-guide), and then extended them to 20 by creating a shortened and simplified version following the tips from [Midjourney](https://docs.midjourney.com/docs/prompts). The textual prompts were classified into 12 categories: abstract, animal, architecture, art, artifact, food, illustration, people, produce & plant, scenery, vehicle, and world knowledge.
## Links
#### Dataset
- [GEMRec-Promptbook](https://huggingface.co/datasets/MAPS-research/GEMRec-PromptBook): The full version of our GemRec-18k dataset (images & metadata).
- [GEMRec-Metadata](https://huggingface.co/datasets/MAPS-research/GEMRec-Metadata): The pruned version of our GemRec-18k dataset (metadata only).
- [GEMRec-Roster](https://huggingface.co/datasets/MAPS-research/GEMRec-Roster): The metadata for the 200 model checkpoints fetched from [Civitai](https://civitai.com/).
#### Space
- [GEMRec-Gallery](https://huggingface.co/spaces/MAPS-research/GEMRec-Gallery): Our web application for browsing and comparing the generated images.
#### Github Code
- [GEMRec](https://github.com/MAPS-research/GEMRec)
## Acknowledgement
This work was supported through the NYU High Performance Computing resources, services, and staff expertise.
## Citation
If you find our work helpful, please consider citing it as follows:
```bibtex
@article{guo2023towards,
title={Towards Personalized Prompt-Model Retrieval for Generative Recommendation},
author={Guo, Yuanhe and Liu, Haoming and Wen, Hongyi},
journal={arXiv preprint arXiv:2308.02205},
year={2023}
}
``` | [
-0.8336116671562195,
-0.5479733943939209,
0.7770171761512756,
0.058696355670690536,
0.034546419978141785,
-0.22474990785121918,
-0.06973709166049957,
-0.29791975021362305,
0.02666541375219822,
0.5547617077827454,
-0.8154778480529785,
-0.9373546838760376,
-0.29930561780929565,
0.23591953516... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
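The GEMRec metadata above lists raw scores (`clip_score`, `mcos_score`, `nsfw_score`) alongside `norm_*` counterparts. The exact normalization is not specified in the card; min-max scaling over the collection is one plausible reading, sketched below as an assumption rather than as the authors' actual procedure:

```python
# The norm_* columns plausibly come from scaling each raw score into [0, 1].
# Min-max scaling is ASSUMED here for illustration; the card does not state
# which normalization was actually used.
def min_max_normalize(scores):
    lo, hi = min(scores), max(scores)
    if hi == lo:  # degenerate case: all scores identical
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

clip_scores = [0.21, 0.35, 0.28, 0.42]  # toy values, not real dataset rows
norm_clip = min_max_normalize(clip_scores)
```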
MateuszW/spoiler_generation | MateuszW | 2023-07-08T10:23:02Z | 52 | 0 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"spoiler generation",
"clickbait spoiling",
"region:us"
] | 2023-07-08T10:23:02Z | 2023-07-06T12:55:35.000Z | 2023-07-06T12:55:35 | ---
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- spoiler generation
- clickbait spoiling
---
# Datasets used for spoiler generation task
## Dataset Description
This repository contains multiple datasets used for the spoiler generation task of the Clickbait Spoiling competition,
for training models based on question answering, text generation, or learning-to-rank approaches.
## Dataset Structure
This dataset has 5 main directories:
- clf_data - this dataset was used to train a classifier that compares two generated spoilers and decides
  which one better matches a clickbait post. The model performs binary classification:
  class 1 corresponds to the first spoiler being "better" than the second,
  and class 0 corresponds to the opposite situation
- clickbait_spoiling_data - this dataset is the original dataset taken from the Clickbait spoiling competition
- generated_questions - this dataset contains questions generated for clickbait posts by the Vicuna model
- models_output - this dataset contains the spoilers generated by the best-selected models
- regressor_data - this dataset was used to train a model that estimates the BLEU score of a generated
  spoiler without access to the reference spoiler
| [
-0.5937034487724304,
-0.3163614869117737,
0.31142598390579224,
0.2233773171901703,
-0.2989259362220764,
-0.3587832450866699,
0.24006877839565277,
0.13407084345817566,
-0.1732391119003296,
0.6405807733535767,
-0.9529246091842651,
-0.4786267578601837,
-0.3779710829257965,
0.4905758500099182,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DAMO-NLP-MT/multialpaca | DAMO-NLP-MT | 2023-07-14T01:43:07Z | 52 | 8 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-14T01:43:07Z | 2023-07-13T09:33:22.000Z | 2023-07-13T09:33:22 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DynamicSuperb/SpeakerCounting_LibriTTS-TestClean | DynamicSuperb | 2023-07-31T07:47:14Z | 52 | 0 | null | [
"region:us"
] | 2023-07-31T07:47:14Z | 2023-07-13T18:22:13.000Z | 2023-07-13T18:22:13 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
- name: utterance 1
dtype: string
- name: utterance 2
dtype: string
- name: utterance 3
dtype: string
- name: utterance 4
dtype: string
- name: utterance 5
dtype: string
splits:
- name: test
num_bytes: 391751299.0
num_examples: 2000
download_size: 444578671
dataset_size: 391751299.0
---
# Dataset Card for "SpeakerCounting_LibriTTSTestClean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7352372407913208,
-0.12645113468170166,
0.19396314024925232,
0.02865840680897236,
-0.18310123682022095,
-0.05834072083234787,
-0.07181084901094437,
-0.003605927573516965,
0.9223605990409851,
0.5419444441795349,
-0.5978114604949951,
-0.6794688701629639,
-0.549207329750061,
-0.43612042069... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abacusai/LongChat-Lines | abacusai | 2023-07-28T03:14:01Z | 52 | 10 | null | [
"region:us"
] | 2023-07-28T03:14:01Z | 2023-07-27T15:16:12.000Z | 2023-07-27T15:16:12 | ---
configs:
- config_name: default
data_files:
- split: '100'
path: data/100-*
- split: '150'
path: data/150-*
- split: '175'
path: data/175-*
- split: '200'
path: data/200-*
- split: '250'
path: data/250-*
- split: '300'
path: data/300-*
- split: '400'
path: data/400-*
- split: '500'
path: data/500-*
- split: '600'
path: data/600-*
- split: '680'
path: data/680-*
- split: '750'
path: data/750-*
- split: '850'
path: data/850-*
- split: '950'
path: data/950-*
- split: '1100'
path: data/1100-*
dataset_info:
features:
- name: expected_number
dtype: int64
- name: num_lines
dtype: int64
- name: token_size
dtype: int64
- name: prompt
dtype: string
splits:
- name: '100'
num_bytes: 275673
num_examples: 50
- name: '150'
num_bytes: 400446
num_examples: 50
- name: '175'
num_bytes: 463159
num_examples: 50
- name: '200'
num_bytes: 525856
num_examples: 50
- name: '250'
num_bytes: 650643
num_examples: 50
- name: '300'
num_bytes: 775800
num_examples: 50
- name: '400'
num_bytes: 1025288
num_examples: 50
- name: '500'
num_bytes: 1276039
num_examples: 50
- name: '600'
num_bytes: 1524627
num_examples: 50
- name: '680'
num_bytes: 1724325
num_examples: 50
- name: '750'
num_bytes: 1899422
num_examples: 50
- name: '850'
num_bytes: 2149220
num_examples: 50
- name: '950'
num_bytes: 2398398
num_examples: 50
- name: '1100'
num_bytes: 2772556
num_examples: 50
download_size: 7270406
dataset_size: 17861452
---
# Dataset Card for "LongChat-Lines"
This dataset was used to evaluate the performance of models finetuned to operate on longer contexts. It is based on
a task template proposed by LMSys to evaluate attention to arbitrary points in the context. See the full details at
[https://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context). | [
-0.5091309547424316,
-0.8528550267219543,
0.2953500747680664,
0.09492660313844681,
-0.47617408633232117,
-0.4680091142654419,
-0.13432741165161133,
-0.40097352862358093,
0.300626277923584,
0.809241771697998,
-1.0333958864212036,
-0.403508722782135,
-0.050380170345306396,
-0.102252982556819... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
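The LMSys-style line-retrieval task behind LongChat-Lines can be sketched in a few lines: the prompt lists numbered lines, each registering a value, and the model must recall the value on one target line (matching the card's `prompt`, `num_lines`, and `expected_number` fields). The exact template below is invented for illustration; the dataset's actual prompt wording may differ:

```python
# Toy generator for a line-retrieval example. The "REGISTER_CONTENT"
# template is an assumption made for illustration, not the dataset's
# verbatim prompt format.
import random

def make_line_retrieval_example(num_lines: int, target: int, seed: int = 0):
    rng = random.Random(seed)  # deterministic for a fixed seed
    values = [rng.randint(10000, 99999) for _ in range(num_lines)]
    lines = [
        f"line {i + 1}: REGISTER_CONTENT is <{values[i]}>"
        for i in range(num_lines)
    ]
    question = f"What is the REGISTER_CONTENT on line {target}?"
    prompt = "\n".join(lines) + "\n" + question
    return prompt, values[target - 1]

prompt, expected_number = make_line_retrieval_example(num_lines=5, target=3)
```

Scaling `num_lines` up is what produces the 100 through 1100 splits listed in the card.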
atmallen/all6_azaria_mitchell | atmallen | 2023-08-01T18:43:54Z | 52 | 0 | null | [
"region:us"
] | 2023-08-01T18:43:54Z | 2023-08-01T18:43:50.000Z | 2023-08-01T18:43:50 | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 615037
num_examples: 11699
- name: test
num_bytes: 154079
num_examples: 2927
download_size: 269239
dataset_size: 769116
---
# Dataset Card for "all6_azaria_mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5613369941711426,
-0.15704861283302307,
0.29406043887138367,
0.20004864037036896,
-0.18378984928131104,
-0.3200753331184387,
0.461885005235672,
-0.20809590816497803,
0.9011131525039673,
0.6447254419326782,
-0.9834957122802734,
-0.872072696685791,
-0.8316583037376404,
-0.0135830547660589... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arazd/tulu_cot | arazd | 2023-08-04T21:38:24Z | 52 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-08-04T21:38:24Z | 2023-08-04T21:37:28.000Z | 2023-08-04T21:37:28 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PL-MTEB/cbd | PL-MTEB | 2023-08-11T12:22:44Z | 52 | 0 | null | [
"license:bsd-3-clause",
"region:us"
] | 2023-08-11T12:22:44Z | 2023-08-11T12:22:32.000Z | 2023-08-11T12:22:32 | ---
license: bsd-3-clause
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
katielink/moleculenet-benchmark | katielink | 2023-08-28T17:51:14Z | 52 | 0 | null | [
"license:apache-2.0",
"biology",
"chemistry",
"region:us"
] | 2023-08-28T17:51:14Z | 2023-08-28T17:36:25.000Z | 2023-08-28T17:36:25 | ---
license: apache-2.0
tags:
- biology
- chemistry
configs:
- config_name: bace
data_files:
- split: train
path: bace/train.csv
- split: test
path: bace/test.csv
- split: val
path: bace/valid.csv
- config_name: bbbp
data_files:
- split: train
path: bbbp/train.csv
- split: test
path: bbbp/test.csv
- split: val
path: bbbp/valid.csv
- config_name: clintox
data_files:
- split: train
path: clintox/train.csv
- split: test
path: clintox/test.csv
- split: val
path: clintox/valid.csv
- config_name: esol
data_files:
- split: train
path: esol/train.csv
- split: test
path: esol/test.csv
- split: val
path: esol/valid.csv
- config_name: freesolv
data_files:
- split: train
path: freesolv/train.csv
- split: test
path: freesolv/test.csv
- split: val
path: freesolv/valid.csv
- config_name: hiv
data_files:
- split: train
path: hiv/train.csv
- split: test
path: hiv/test.csv
- split: val
path: hiv/valid.csv
- config_name: lipo
data_files:
- split: train
path: lipo/train.csv
- split: test
path: lipo/test.csv
- split: val
path: lipo/valid.csv
- config_name: qm9
data_files:
- split: train
path: qm9/train.csv
- split: test
path: qm9/test.csv
- split: val
path: qm9/valid.csv
- config_name: sider
data_files:
- split: train
path: sider/train.csv
- split: test
path: sider/test.csv
- split: val
path: sider/valid.csv
- config_name: tox21
data_files:
- split: train
path: tox21/train.csv
- split: test
path: tox21/test.csv
- split: val
path: tox21/valid.csv
---
# MoleculeNet Benchmark ([website](https://moleculenet.org/))
MoleculeNet is a benchmark specially designed for testing machine learning methods on molecular properties. To facilitate the development of molecular machine learning methods, this work curates a number of dataset collections and creates a suite of software that implements many known featurizations and previously proposed algorithms. All methods and datasets are integrated as parts of the open-source DeepChem package (MIT license).
MoleculeNet is built upon multiple public databases. The full collection currently includes over 700,000 compounds tested on a range of different properties. We test the performance of various machine learning models with different featurizations on the datasets (detailed descriptions here), with all results reported as AUC-ROC, AUC-PRC, RMSE and MAE scores.
For users, please cite:
Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, Vijay Pande, MoleculeNet: A Benchmark for Molecular Machine Learning, arXiv preprint, arXiv: 1703.00564, 2017.
allenai/MADLAD-400 | allenai | 2023-10-31T20:47:56Z | 52 | 71 | null | [
"task_categories:text-generation",
"size_categories:n>1T",
"license:odc-by",
"arxiv:2309.04662",
"arxiv:2010.14571",
"arxiv:2103.12028",
"region:us"
] | 2023-10-31T20:47:56Z | 2023-09-01T00:06:27.000Z | 2023-09-01T00:06:27 | ---
license: odc-by
task_categories:
- text-generation
size_categories:
- n>1T
---
# MADLAD-400
## Dataset and Introduction
[MADLAD-400 (*Multilingual Audited Dataset: Low-resource And Document-level*)](https://arxiv.org/abs/2309.04662) is
a document-level multilingual dataset based on Common Crawl, covering 419
languages in total. This uses all snapshots of CommonCrawl available as of August
1, 2022. The primary advantage of this dataset over similar datasets is that it
is more multilingual (419 languages), it is audited and more highly filtered,
and it is document-level. The main disadvantage is also its strength -- being
more filtered, it may lack the recall needed for some applications.
There are two versions released: the **noisy** dataset, which has no filtering
except document-level LangID, and the **clean** dataset, which has a variety of
filters applied, though it naturally has a fair amount of noise itself. Each
dataset is released in a document-level form that has been deduplicated.
## Loading
You can load both the clean and noisy versions of any language by specifying its LangID:
~~~
madlad_abt = load_dataset("allenai/madlad-400", "abt")
~~~
A list of languages can also be supplied with a keyword argument:
~~~
madlad_multilang = load_dataset("allenai/madlad-400", languages=["abt", "ace"])
~~~
Additionally, you can load the noisy and clean subsets separately with the split keyword argument:
~~~
madlad_multilang_clean = load_dataset("allenai/madlad-400", languages=["abt", "ace"], split="clean")
~~~
## LangID model and Crawl
Following [Language Id In the Wild](https://arxiv.org/pdf/2010.14571.pdf), we
trained a Semi-Supervised LangId model (SSLID) on 500 languages. The training
data is as described in that paper, with the differences that 1) training data
is sampled to a temperature of `T=3` to reduce over-triggering on low-resource
languages; and 2) the data is supplemented with web-crawled data from the same
paper (that has already been through the various filters described therein) in
the hopes that it will increase robustness to web-domain text.
## Filtering
Before separating the raw CommonCrawl corpus by LangID, these
filtering steps are done, similar to Raffel et al (2020):
- Discarded any page with fewer than 5 sentences and only retained lines that
contained at least 3 words.
- Removed any line with the word Javascript.
- Removed any page where the phrase “lorem ipsum” appeared.
- Removed any pages containing the phrases "terms of use", "privacy policy",
"cookie policy", "uses cookies", "use of cookies", "use cookies"
- Removed any pages that contained a curly bracket.
- To deduplicate the data set, discarded all but one of any three-sentence span occurring more than once in the data set.
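Leaving aside the deduplication step, these page- and line-level filters might be sketched as follows (`filter_page` and the naive period-based sentence splitter are illustrative assumptions, not the authors' implementation):

```python
import re

POLICY_PHRASES = ("terms of use", "privacy policy", "cookie policy",
                  "uses cookies", "use of cookies", "use cookies")

def filter_page(page):
    """Return the filtered page text, or None if the page is discarded."""
    lower = page.lower()
    # Page-level discards: lorem ipsum, policy phrases, curly brackets.
    if "lorem ipsum" in lower or "{" in page or "}" in page:
        return None
    if any(phrase in lower for phrase in POLICY_PHRASES):
        return None
    # Line-level filters: drop lines with "javascript" or fewer than 3 words.
    kept = [line for line in page.splitlines()
            if len(line.split()) >= 3 and "javascript" not in line.lower()]
    text = "\n".join(kept)
    # Discard pages with fewer than 5 sentences (naive period-based split).
    if len([s for s in re.split(r"[.!?]+", text) if s.strip()]) < 5:
        return None
    return text
```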
The `noisy` subset of the data was filtered only by document-level LangID, which
was taken to be the majority sentence-level LangID prediction. The `clean`
subset removed all documents with a `pct_questionable` score greater than
20%. It furthermore removed any document with under 5 sentences.
The `pct_questionable` score is simply the percentage of sentences in the input
document that were "questionable". A sentence was considered questionable if any
of the following were true:
* **LangID Consistency:** the sentence-level LangID does not match the
document-level LangID
* **List Case:** The sentence has at least 12 tokens, and over 50% of the
  tokens begin with a capital letter.
* **Length:** The sentence has under 20 characters or over 500 characters
(note: this is a bad heuristic for ideographic languages)
* **Danger Chars:** Over 20% of the characters in the sentence match
`[0-9{}+/()>]`
* **Cursedness:** The sentence matches a cursed regex (see below)
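Put together, the five checks and the resulting score might look like the sketch below (the names, the whitespace tokenizer, and the stubbed-out cursedness check are all illustrative assumptions; the cursed-substring list is given in the next section):

```python
import re

DANGER_CHARS = re.compile(r"[0-9{}+/()>]")

def is_questionable(sentence, sent_langid, doc_langid,
                    is_cursed=lambda s: False):
    tokens = sentence.split()
    if sent_langid != doc_langid:                        # LangID consistency
        return True
    if len(tokens) >= 12 and sum(
            t[:1].isupper() for t in tokens) > 0.5 * len(tokens):
        return True                                      # list case
    if not 20 <= len(sentence) <= 500:                   # length
        return True
    if len(DANGER_CHARS.findall(sentence)) > 0.2 * len(sentence):
        return True                                      # danger chars
    return is_cursed(sentence)                           # cursedness

def pct_questionable(sentences, sent_langids, doc_langid):
    flags = [is_questionable(s, l, doc_langid)
             for s, l in zip(sentences, sent_langids)]
    return 100.0 * sum(flags) / max(len(flags), 1)
```

A document would then be dropped from the `clean` subset when this score exceeds 20 or the document has under 5 sentences.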
### Cursed Substrings
Based on the initial round of data audits, the authors created a heuristic list of
substrings and regexes accounting for a large amount of questionable content.
Keep in mind that these all are fed into the `pct_questionable` score -- a
sentence is only excluded from the `clean` dataset if over 20% of the sentences
in that document are flagged as questionable.
notes about cursed substrings:
* low quality sentences ending in the pipe character were very common. Before
you ask, this was not Devanagari-script text using a Danda.
* The last few regexes are meant to match `A N T S P E A K`, `List Case`, and
weirdly regular text (for instance, lists of shipping labels or country
codes)
```
import re

# this implementation is for demonstration and is pretty inefficient;
# to speed it up, use string inclusion (`in`) instead of regex for all but the
# last four, and for those use a compiled regex.
def is_cursed(s):
    return any(re.search(curse, s) for curse in CURSED_SUBSTRINGS)
CURSED_SUBSTRINGS = [" №", "���", "\\|\\s*$", " nr\\.$", "aute irure dolor ", " sunt in culpa qui ", "orem ipsum ", " quis nostrud ", " adipisicing ", " dolore eu ", " cupidatat ", "autem vel eum", "wisi enim ad", " sex ", " porn ", "黄色电影", "mp3", "ownload", "Vol\\.", " Ep\\.", "Episode", " г\\.\\s*$", " кг\\.\\s*$", " шт\\.", "Develop", "Facebook", " crusher ", " xxx ", " ... ... ... ... ... ... ... ... ...", " .... .... .... .... .... .... .... .... ....", " [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ]", ", ..,,? ..,,? ..,,? ..,,?"]
```
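The inline comment above hints at a faster variant: plain substring checks for most entries and compiled patterns for the few true regexes. A sketch, using small illustrative subsets rather than the full list:

```python
import re

# Illustrative subsets only -- in practice these come from splitting
# CURSED_SUBSTRINGS above into plain strings and true regexes.
PLAIN_SUBSTRINGS = [" №", "mp3", "ownload", " xxx "]
CURSED_REGEXES = [re.compile(p) for p in (
    r"\|\s*$",                                         # line ending in a pipe
    r" \.\.\. \.\.\. \.\.\.",                          # runs of dots
    r" [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] [^ ]",  # A N T S P E A K / List Case
    r", \.\.,,\? \.\.,,\? \.\.,,\? \.\.,,\?",          # weirdly regular text
)]

def is_cursed_fast(s):
    return (any(sub in s for sub in PLAIN_SUBSTRINGS)
            or any(rx.search(s) for rx in CURSED_REGEXES))
```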
### Virama Correction
Many languages using Brahmic Abugida (South and Southeast Asian scripts like
Devanagari, Khmer, etc.) use some variant on the virama character. For whatever
reason, it was found that this character was often messed up in the common crawl
snapshots used. Therefore, for the languages `bn my pa gu or ta te kn ml
si th tl mn lo bo km hi mr ne gom as jv dv bho dz hne ks_Deva mag mni shn yue zh
ja kjg mnw ksw rki mtr mwr xnr`, a special correction step was done.
For these languages, the authors took the list of all virama characters and removed all
unnecessary spaces between each instance of a virama character and the next
character with a regex.
```
regex.sub(r' ([%s]) ' % _VIRAMA_CHARS, '\\1', x)
```
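For illustration, here is the same correction as a runnable snippet using the standard-library `re` module and only the Devanagari virama U+094D (the authors' full `_VIRAMA_CHARS` list covers many scripts and is not reproduced here):

```python
import re

_VIRAMA_CHARS = "\u094d"  # Devanagari virama only; the real list is longer

def fix_virama(x):
    # Remove the stray spaces around each virama character.
    return re.sub(r" ([%s]) " % _VIRAMA_CHARS, r"\1", x)
```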
### Myanmar Font Compatibility
Prior to 2019, the most popular font for Burmese websites was the Zawgyi font.
The authors used [Myanmar Tools](https://github.com/google/myanmar-tools) to convert text.
### Scripts without Whitespace
Several scripts, like the Chinese script, Tibetan script, and Thai, do not use
whitespace to separate characters. The languages with this property in this
dataset are `yue zh ja th lo kjg mnw my shn ksw rki km bo dz`.
Alas, the **Length** aspect of the `pct_questionable` score was calculated using
simplistic whitespace tokenization, and therefore rendered the whole
`pct_questionable` score invalid for those languages. Therefore, for these
languages, the "clean" data is identical to the "noisy" data (barring Chinese;
see below.)
### Special filters
Chinese had a particular issue with pornographic content. After manual inspection,
a list of strings likely to be present in pornographic content was developed. All
pages containing at least one of these strings were removed. This resulted in a 17%
reduction in the number of documents and a 56% reduction in file size.
```
pornsignals = "caoporn caoprom caopron caoporen caoponrn caoponav caopom caoorn 99re dy888 caopro hezyo re99 4438x zooskool xfplay 7tav xxoo xoxo 52av freexx 91chinese anquye cao97 538porm 87fuli 91pron 91porn 26uuu 4438x 182tv kk4444 777me ae86 91av 720lu yy6080 6080yy qqchub paa97 aiai777 yy4480 videossexo 91free 一级特黄大片 偷拍久久国产视频 日本毛片免费视频观看 久久免费热在线精品 高清毛片在线看 日本毛片高清免费视频 一级黄色录像影片 亚洲男人天堂 久久精品视频在线看 自拍区偷拍亚洲视频 亚洲人成视频在线播放 色姑娘综合站 丁香五月啪啪 在线视频成人社区 亚洲人成视频在线播放 久久国产自偷拍 一本道 大香蕉无码 香港经典三级 亚洲成在人线免费视频 天天色综合网 大香蕉伊人久草 欧美一级高清片 天天鲁夜夜啪视频在线 免费黄片视频在线观看 加比勒久久综合 久草热久草在线视频 韩国三级片大全在线观看 青青草在线视频 美国一级毛片 久草在线福利资源 啪啪啪视频在线观看免费 成人福利视频在线观看 婷婷我去也 老司机在线国产 久久成人视频 手机看片福利永久国产 高清国产偷拍在线 大香蕉在线影院 日本高清免费一本视频 男人的天堂东京热 影音先锋男人资源 五月婷婷开心中文字幕 亚洲香蕉视频在线播放 天天啪久久爱视频精品 超碰久久人人摸人人搞".split()
```
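Applying the list is a plain membership test over each page; a sketch with a tiny illustrative subset of the signals:

```python
def has_porn_signal(page, signals):
    # A page is dropped if it contains any signal string.
    return any(sig in page for sig in signals)

# Tiny illustrative subset of the full signal list above.
signals = ["caoporn", "91porn", "一级特黄大片"]
pages = ["an ordinary page about cooking", "spam 91porn spam"]
kept = [p for p in pages if not has_porn_signal(p, signals)]
```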
A few more random notes, comparing to common alternative codes for these
languages:
* `fil` for Filipino/Tagalog, not `tl`
* `ak` for Twi/Akan, rather than `tw`. This includes Fante.
* The macro code `chm` is unfortunately used for Meadow Mari (instead of the
  correct `mhr`), and `mrj` for Hill Mari
* `no` for Norwegian Bokmål, whereas some resources use
`nb`
* `ps` for Pashto instead of `pbt` (Southern Pashto)
* `ms` for Standard Malay, not `zlm`
* `sq` for Albanian, without distinguishing dialects like
  Gheg (`aln`) and Tosk (`als`)
* `ber` as the code for Tamazight, after consultation with Tamazight
speakers opining that the dialect distinctions are not significant. Other
resources use the individual codes like `tzm` and `kab`.
* Macrocode `qu` for Quechua. In practice, this seems usually to be
a mix of the Ayacucho and Cusco dialects. Other resources, like NLLB, may
use the dialect code, e.g. `quy` for Ayacucho Chanka. The same is true for a
few other macro codes, like `ff` (Macro code for Fulfulde, whereas other
sources may use e.g. `fuv`.)
* Really, there are notes that can be made about almost any code, from the
well-accepted conventions like `zh` for Mandarin, to many dialectical notes,
like which variant of Hmong really is the `hmn` data? But the above ones are
made specifically for ones where the authors are aware of other datasources floating
out there that use different conventions.
## Audit
Following [Quality at a Glance](https://arxiv.org/abs/2103.12028), the authors performed
an "audit" of every corpus in this dataset. Although the authors did not speak most
languages, they were able to give high-level comments on the general quality. They
looked at a sample of 20 documents of each language.
After an initial round of auditing, they devised a new set of filters and applied
them. They then re-did all audits.
### Overall notes from the audit
The decision was to **include languages that looked noisy, but omit any language
that was clearly majority noise, or only had 20 or fewer docs.** This is a low
bar -- twenty documents can be very little indeed, and some of the corpora released are quite noisy, but all of them should have at least the potential to
be used in some useful way. The motivation for not releasing nonsense or tiny
datasets is to not give a false sense of how multilingual this dataset actually
is ("Representation washing"), as recommended by **Quality at a Glance**.
A few overarching points:
* Many low-resource languages only had Bible text, or in some cases jw.org
data. These are marked in the rows below. Generally `ok bible` means that
100% of the audited sentences were Biblical, whereas if `bible` is simply
mentioned in the note, it was not the only source of data.
* Indian languages in the Latin script had a high concentration of
pornographic content.
### Renames and Merges as a result of the Audit
In several cases, it was clear from the audit that the corpora were not in the
languages that the LangID model claimed they were. This led to the following
renames:
* `dty` renamed to `zxx-xx-dtynoise`, aka a "language" of noise. This is mainly
  mis-rendered PDFs and may have some practical applications for decoding said
  noise.
* `fan` renamed to `bum`
* `ss-SZ` renamed to `ss` -- this was just a result of inconsistent data
  labels.
* `cjk` merged into the `gil` dataset
* `bjj` merged into the `awa` dataset
## Canaries
Canaries are provided in a separate `canaries` folder. Canaries are organized into three directories: `monolingual` hosts canaries designed for the MADLAD-400 monolingual data, `multiway` for the multiway data, and `generic` for the generic canaries generated only from the model's vocabulary.
* Monolingual: Canaries here are organized by the language the canary was generated from. This corresponds exactly to the `translate_copy` setting in the paper, where the source and target language match.
* Multiway: Canaries here are organized in one of two fashions. `to_XX` indicates canaries organized by the target language (and where the source language could be any language). `XX-XX` indicates the canaries (interleaved_both and interleaved_mislabeled_both) designed for a specific pair of languages.
Within each subdirectory above, canaries are split into separate files named by the canary type. There is always only a single file for each canary type. The `generic` folder contains within it the four canary types.
Canaries can be mixed in with normal training data and then analyzed post hoc after training.
## References
Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified
text-to-text transformer." J. Mach. Learn. Res. 21.140 (2020): 1-67.
## Contact
Please reach out to {snehakudugunta, icaswell}꩜google.com. For questions about the canaries, reach out to cchoquette@google.com
## License
This data is released with the `CC-BY-4.0` license.
## Detailed notes from the audit
Here are the notes on all languages, along with the number of documents
found, and the final decision made with respect to including the language in
this dataset.
| Lang. | note | N | decision |
| --------------- | ------------------------ | ---------- | --------------- |
| en | ok | 1838712272 | keep |
| ru | ok | 402458746 | keep |
| es | good | 250906994 | keep |
| de | ok | 225111495 | keep |
| fr | ok | 218863911 | keep |
| it | ok | 126406256 | keep |
| pt | ok | 124207090 | keep |
| pl | ok | 90908786 | keep |
| nl | ok | 86594116 | keep |
| tr | ok | 56417359 | keep |
| vi | ok | 54988654 | keep |
| cs | ok | 38254671 | keep |
| id | ok | 37979244 | keep |
| ro | ok | 35397563 | keep |
| sv | ok. Also the last | 35153050 | keep |
: : language (suz) is "ok : : :
: : bible" : : :
| hu | ok | 29677075 | keep |
| uk | ok | 24968305 | keep |
| fa | idk ask a farsi speaker; | 23138888 | keep |
: : ALI\: OK : : :
| ja | ok a little en mixed in | 21818123 | keep |
| el | ok | 20932239 | keep |
| fi | ok | 20433664 | keep |
| da | ok | 17865888 | keep |
| th | ok | 17439979 | keep |
| no | ok | 14864710 | keep |
| bg | ok | 12755329 | keep |
| ko | ok | 12653878 | keep |
| ar | good | 12411641 | keep |
| sk | ok | 11857945 | keep |
| ca | ok | 9477390 | keep |
| lt | ok | 8748025 | keep |
| iw | ok | 7194574 | keep |
| sl | ok | 6310419 | keep |
| et | ok | 5542933 | keep |
| lv | ok | 5007982 | keep |
| hi | ok some porn | 4512205 | keep |
| sq | good | 3622957 | keep |
| az | good | 3256331 | keep |
| hr | ok | 2841400 | keep |
| ta | ok | 2594191 | keep |
| ms | ok | 2337672 | keep |
| ml | ok | 2072605 | keep |
| sr | ok | 2010607 | keep |
| kk | ok | 1810963 | keep |
| te | ok a lot of weirdly low | 1682441 | keep |
: : quality looking content : : :
: : like commerce : : :
| mr | ok fix virama | 1673848 | keep |
| is | ok | 1560913 | keep |
| bs | good | 1362582 | keep |
| mk | ok | 1358293 | keep |
| gl | ok | 1253170 | keep |
| eu | ok | 1155671 | keep |
| bn | ok | 1138848 | keep |
| be | ok | 1092785 | keep |
| ka | ok | 936497 | keep |
| fil | ok more bible than | 901507 | keep |
: : expected for such a : : :
: : major language : : :
| mn | ok mongolian cyrillic | 879878 | keep |
| af | good | 868671 | keep |
| uz | ok some cyrllic noise | 669909 | keep |
| gu | ok | 659727 | keep |
| kn | ok | 657846 | keep |
| kaa | ok cyrllic | 586361 | keep |
| sw | ok | 537847 | keep |
| ur | ok | 467236 | keep |
| ne | ok | 453349 | keep |
| cy | ok; was terrible before | 430719 | keep |
: : filtering short docs : : :
| hy | ok | 397523 | keep |
| ky | ok | 367577 | keep |
| si | good | 349220 | keep |
| tt | good plus some | 346927 | keep |
: : nonunicode misrendered : : :
: : PDF : : :
| tg | good | 328194 | keep |
| la | ok some broken chars | 319178 | keep |
| so | good | 293218 | keep |
| ga | ok some en noise | 285999 | keep |
| km | ok | 285740 | keep |
| mt | ok | 265388 | keep |
| eo | ok; likely a lot of Mt | 259971 | keep |
| ps | ok | 252888 | keep |
| rw | ok | 226466 | keep |
| ku | ok | 218850 | keep |
| lo | ok many entities in | 215982 | keep |
: : latin script : : :
| fy | ok plausible but i bet | 210025 | keep |
: : there is a lot of nl in : : :
: : there : : :
| ha | ok | 173485 | keep |
| my | filter noise and en fix | 172401 | keep |
: : virama : : :
| dv | good | 167179 | keep |
| pa | ok | 150588 | keep |
| ckb | ok | 148870 | keep |
| lb | ok | 145988 | keep |
| mg | ok some bible jw | 115387 | keep |
| ht | ok | 110443 | keep |
| ug | ok | 106549 | keep |
| am | good | 106301 | keep |
| or | ok | 100530 | keep |
| fo | good | 97754 | keep |
| gd | ok | 94275 | keep |
| ba | ok | 90318 | keep |
| tk | ok; a few weird docs | 82495 | keep |
| mi | ok | 79509 | keep |
| hmn | ok | 75213 | keep |
| grc | ok some bible | 70730 | keep |
| jv | ok | 69473 | keep |
| ceb | ok | 66164 | keep |
| sd | good | 65858 | keep |
| yi | ok | 64949 | keep |
| kaa-Latn | ok urls are .ru or .kz | 61169 | keep |
| sn | ok | 60196 | keep |
| co | ok;l i suspect lots of | 55387 | keep |
: : MT : : :
| su | good | 54968 | keep |
| pap | ok | 54498 | keep |
| ig | ok | 54410 | keep |
| zu | good | 53809 | keep |
| xh | ok | 53672 | keep |
| sm | ok | 52614 | keep |
| ny | ok | 52244 | keep |
| yo | ok | 52067 | keep |
| cv | good | 47318 | keep |
| el-Latn | good; a lot of old | 46428 | keep |
: : content! : : :
| kl | ok | 46027 | keep |
| haw | ok scam tv products | 45670 | keep |
| gsw | wtf is happening here; | 42712 | keep |
: : keep with disclaimer; : : :
: : STILL BOILERPLATE : : :
| tet | good ; actually a lot of | 40367 | keep |
: : fun data! : : :
| st | ok | 40360 | keep |
| lus | ok | 36437 | keep |
| oc | ok | 36379 | keep |
| as | good | 33825 | keep |
| rm | ok | 33805 | keep |
| br | ok after shortfilter | 33219 | keep |
| sah | ok | 29169 | keep |
| hi-Latn | filter porn this is half | 26723 | keep |
: : porn : : :
| se | good | 23872 | keep |
| cnh | good, some local news! | 21556 | keep |
: : not sure if WL : : :
| om | ok | 18895 | keep |
| ce | ok | 14968 | keep |
| udm | ok | 13376 | keep |
| lg | ok lot of | 13030 | keep |
: : www.bukedde.co.ug in : : :
: : this : : :
| os | ok | 12623 | keep |
| nv | ok | 12578 | keep |
| kha | ok | 12070 | keep |
| ilo | ok some bible | 11754 | keep |
| ctd-Latn | ok; from some local | 11629 | keep |
: : news? : : :
| vec | very noisy has wiki from | 11108 | keep |
: : other langs and .it : : :
: : websites so not sure if : : :
: : vec : : :
| hil | ok some en boilerplate | 10564 | keep |
| tyv | ok fun stuff plus some | 9083 | keep |
: : russian noise i think : : :
| iba | ok jw data | 7638 | keep |
| ru-Latn | ok | 7523 | keep |
| kbd | ok many .ru | 7486 | keep |
| ti | ok; poor tigray | 7288 | keep |
| sa | ok | 7117 | keep |
| av | good | 6331 | keep |
| bo | needs some serious | 6226 | keep |
: : script filtering. but : : :
: : there is some ok data in : : :
: : there. : : :
| zza | good | 6019 | keep |
| ber-Latn | ok | 5612 | keep |
| otq | ok | 5554 | keep |
| te-Latn | great good text....but | 5305 | keep |
: : mostly pornographic : : :
| bua | ok | 5264 | keep |
| ts | good | 5198 | keep |
| cfm | ok mostly from | 4858 | keep |
: : chinland.co : : :
| tn | good | 4821 | keep |
| krc | ok | 4815 | keep |
| ak | good; much but not all | 4768 | keep |
: : bible : : :
| meo | ok mostly blogs | 4655 | keep |
| chm | ok; fyi watch out for | 4653 | keep |
: : yandex translationese : : :
| to | good ; news bible | 4612 | keep |
: : government : : :
| ee | good; mostly religious | 4536 | keep |
| nso | ok | 4422 | keep |
| ady | good | 4206 | keep |
| rom | bible | 4187 | keep |
| bho | mostly from anjoria.com. | 4121 | keep |
: : Looks like valid : : :
: : Bhojpuri. : : :
| ltg | ok mostly www.lakuga.lv | 4120 | keep |
| fj | ok | 3976 | keep |
| yua | ok | 3965 | keep |
| gn | ok some broken | 3858 | keep |
: : characters some bible : : :
| az-RU | good; a lot of JW | 3781 | keep |
| ln | ok bible jw | 3325 | keep |
| ada | good; bible; likely | 3095 | keep |
: : mixed with gaa : : :
| myv | maybe has .ru urls | 3095 | keep |
| bik | ok. keep in mind the bik | 3092 | keep |
: : vs bcl issue. : : :
| tlh | ok, but why tf are there | 3054 | keep |
: : websites inklingon? all : : :
: : MT ? : : :
| kbp | not sure if right script | 3036 | keep |
: : wiki says latin : : :
| war | ok but v sus. Pls filter | 2928 | keep |
: : out wikipedia : : :
| wa | ok lots of wiki stuff | 2772 | keep |
| bew | mostly blogs. idk if | 2677 | keep |
: : standard Indonesian or : : :
: : not : : :
| rcf | ok | 2630 | keep |
| ta-Latn | good text .... but | 2580 | keep |
: : pornographic : : :
| kac | ok | 2567 | keep |
| iu | filter script some is en | 2537 | keep |
: : rest is iu script : : :
| ay | good; mix of bible and | 2505 | keep |
: : other news sources : : :
| kum | ok | 2495 | keep |
| qu | ok | 2449 | keep |
| bgp | almost all ur-Latn. | 2427 | keep |
: : consider removing or : : :
: : renaming : : :
| hif | ok some en noise and | 2358 | keep |
: : religious : : :
| kw | ok short boilerplate | 2324 | keep |
: : bible wiki; ok some porn : : :
| nan-Latn-TW | ok | 2285 | keep |
| srn | ok bible + jw | 2281 | keep |
| tly-IR | deeply sus | 2239 | keep |
| sg | ok jw | 2106 | keep |
| gom | ok | 2102 | keep |
| ml-Latn | ok some short docs | 2071 | keep |
| kj | ok | 2062 | keep |
| ksd | ok bible | 2000 | keep |
| dz | ok; hidden parallel | 1899 | keep |
: : text; maybe actually bo; : : :
: : mainly buddhist : : :
| kv | ok a lil boilerplate | 1878 | keep |
: : vibes : : :
| msi | ok | 1870 | keep |
| ve | ok mostly bible jw | 1866 | keep |
| zap | ok JW. | 1803 | keep |
| zxx-xx-dtynoise | BEAUTIFUL NOISE rename | 1765 | keep |
: : but keep as beautiful : : :
: : xample. (was called : : :
: : "dty") : : :
| meu | ok bible | 1728 | keep |
| iso | ok jw | 1721 | keep |
| ium | filter out zh | 1721 | keep |
| nhe | ok | 1714 | keep |
| tyz | ok bible bu again i | 1707 | keep |
: : think some mixeed : : :
: : dialects : : :
| hui | ok some bible | 1680 | keep |
| new | ok | 1634 | keep |
| mdf | ok some short docs | 1609 | keep |
| pag | bible | 1588 | keep |
| gv | filter short repetitive | 1586 | keep |
: : sentences; still same : : :
: : but keep : : :
| gag | has 1-2 cyrillic | 1572 | keep |
: : examples with small amts : : :
: : of arabic script noise : : :
| ngu | ok | 1534 | keep |
| quc | bible | 1526 | keep |
| mam | ok bible jw | 1513 | keep |
| min | ok mostly wiki and bible | 1474 | keep |
| ho | ok | 1466 | keep |
| pon | bible | 1462 | keep |
| mrj | ok | 1447 | keep |
| lu | ok jw | 1444 | keep |
| gom-Latn | ok very noisy ; some ok | 1432 | keep |
: : stuff ; release with : : :
: : disclaimer : : :
| alt | ok | 1422 | keep |
| nzi | ok | 1371 | keep |
| tzo | ok bible + jw | 1357 | keep |
| bci | ok bible | 1329 | keep |
| dtp | ok; mostly from | 1309 | keep |
: : www.newsabahtimes.com.my : : :
| abt | fine; bible | 1305 | keep |
| bbc | ok | 1274 | keep |
| pck | ok | 1255 | keep |
| mai | ok mild amounts of en | 1240 | keep |
: : noise : : :
| mps | ok bible | 1239 | keep |
| emp | ok bible | 1238 | keep |
| mgh | ok bible jw | 1222 | keep |
| tab | idk plausibly ok | 1202 | keep |
| crh | ok | 1184 | keep |
| tbz | good mostly bible but | 1126 | keep |
: : not all : : :
| ss | good mix of data ; | 1089 | keep |
: : renamed from "ss" : : :
| chk | ok bible | 1082 | keep |
| bru | ok; bible | 1072 | keep |
| nnb | ok | 1071 | keep |
| fon | ok mostly jw but not all | 1065 | keep |
| ppk | bible | 1063 | keep |
| tiv | ok jw | 1063 | keep |
| btx | ok probably | 1009 | keep |
| bg-Latn | ok | 991 | keep |
| mbt | ok bible | 969 | keep |
| ace | good; bible | 966 | keep |
| tvl | ok jw | 933 | keep |
| dov | ok bible + jw | 923 | keep |
| ach | good; bible | 915 | keep |
| xal | ok has .ru sites though | 913 | keep |
| cuk | ok bible | 899 | keep |
| kos | ok lds bible | 881 | keep |
| crs | ok | 873 | keep |
| wo | ok; mostly bible. | 871 | keep |
| bts | ok; mostly bible | 869 | keep |
| ubu | ok bible | 846 | keep |
| gym | ok biblle | 820 | keep |
| ibb | ok bible and repeated @ | 818 | keep |
| ape | good; bible | 814 | keep |
| stq | ok i think ? | 809 | keep |
| ang | much noise but some good | 803 | keep |
: : Old English in there! : : :
| enq | ok bible | 793 | keep |
| tsg | much noise but somegood | 789 | keep |
: : data too! : : :
| shn | mostly English | 788 | keep |
: : boilerplate. filter by : : :
: : latin text before : : :
: : releasing : : :
| kri | ok boilerplate noise | 786 | keep |
: : bible jw : : :
| kek | ok jw bible | 782 | keep |
| rmc | ok | 738 | keep |
| acf | good; bible | 730 | keep |
| syr | good; practictitioners | 716 | keep |
: : should keep dialect in : : :
: : mind. : : :
| qub | bible | 705 | keep |
| bm | good | 702 | keep |
| tzh | ok jw | 702 | keep |
| jiv | ok bible | 696 | keep |
| kn-Latn | filter en noise of | 688 | keep |
: : karnatake govt websites : : :
| kjh | ok .ru domain | 672 | keep |
| yap | ok | 638 | keep |
| ban | ok bible | 637 | keep |
| tuc | ok bible | 635 | keep |
| tcy | good; mostly wikipedia; | 632 | keep |
: : likely some konkani : : :
: : mixed in : : :
| cab | ok jw | 629 | keep |
| cak | ok bible | 617 | keep |
| din | ok after SD filter | 611 | keep |
| arn | good; bible | 593 | keep |
| lrc | ok | 587 | keep |
| gil | empty; but merged in | 586 | keep |
: : data in "cjk" : : :
| gil | this is all in gil | 586 | keep |
: : (Kiribati). merged into : : :
: : "gil" : : :
| rwo | bible | 572 | keep |
| hus | ok bible | 569 | keep |
| bum | ok bible; but wrong | 559 | keep |
: : language. Data is in : : :
: : Bulu, not Fang : : :
| mak | ok bible | 555 | keep |
| frp | fair amount from | 550 | keep |
: : wikipedia. : : :
| seh | ok jw | 545 | keep |
| twu | ok bible, but also i | 539 | keep |
: : think it's lots of mixed : : :
: : similar dialects : : :
| kmb | ok bible jw | 538 | keep |
| ksw | ok bible | 536 | keep |
| sja | ok bibe | 527 | keep |
| amu | good; bible; crazy | 511 | keep |
: : diacritics : : :
| mad | remove mostly short text | 509 | keep |
| quh | bible | 501 | keep |
| dyu | ok bible | 483 | keep |
| toj | ok jw | 452 | keep |
| ch | ok; not sure about WL | 449 | keep |
| sus | hella sus jk ok bible | 437 | keep |
| nog | ok | 419 | keep |
| jam | ok bible | 416 | keep |
| gui | ok bible | 409 | keep |
| nia | ok | 408 | keep |
| mas | ok some amount of bible | 405 | keep |
| bzj | ok bible | 404 | keep |
| mkn | ok bible | 402 | keep |
| lhu | ok bible | 377 | keep |
| ctu | ok bible | 366 | keep |
| kg | ok bible jw | 365 | keep |
| inb | ok bible | 343 | keep |
| guh | ok bible | 331 | keep |
| rn | bible | 323 | keep |
| bus | ok; bible; about 50bzc | 322 | keep |
| mfe | ok mostly bible maybe | 320 | keep |
: : some french creole short : : :
: : doc noise : : :
| sda | ok bible | 317 | keep |
| bi | good! fun! | 311 | keep |
| cr-Latn | noise and lorem ipsom. | 303 | keep |
: : But some ok Cree text. : : :
| gor | ok bible | 303 | keep |
| jac | ok bible | 303 | keep |
| chr | ok bible | 301 | keep |
| mh | ok jw lds | 296 | keep |
| mni | ok | 290 | keep |
| wal | ok bible + jw | 286 | keep |
| teo | ok bible | 274 | keep |
| gub | ok bible | 271 | keep |
| qvi | bible | 266 | keep |
| tdx | ok jw | 262 | keep |
| rki | ok | 251 | keep |
| djk | ok; bible+jw | 246 | keep |
| nr | ok | 246 | keep |
| zne | ok jw | 239 | keep |
| izz | ok bible | 237 | keep |
| noa | ok | 234 | keep |
| bqc | ok; bible | 228 | keep |
| srm | ok; bible + jw | 227 | keep |
| niq | ok | 226 | keep |
| bas | ok; has some fun blog | 216 | keep |
: : stuff! : : :
| dwr | ok; bible; mixed script | 215 | keep |
| guc | ok bible | 214 | keep |
| jvn | ok bible | 213 | keep |
| hvn | ok religioous text | 200 | keep |
| sxn | ok bible ; also wild | 197 | keep |
: : diacritics : : :
| koi | ok | 196 | keep |
| alz | good; bible | 195 | keep |
| nyu | ok | 195 | keep |
| bn-Latn | ok | 191 | keep |
| suz | | 186 | keep |
| pau | ok | 185 | keep |
| nij | ok | 183 | keep |
| sat-Latn | good! al from local news | 183 | keep |
: : sources : : :
| gu-Latn | filter short en | 179 | keep |
: : boilerplate and : : :
: : repetitive sentences : : :
| msm | ok bible | 177 | keep |
| maz | ok bible jw | 170 | keep |
| qxr | bible | 153 | keep |
| shp | ok bible | 150 | keep |
| hne | ok | 146 | keep |
| ktu | ok bible jw | 144 | keep |
| laj | ok bible | 144 | keep |
| pis | bible | 139 | keep |
| mag | ok fix virama issue | 138 | keep |
| gbm | ok | 137 | keep |
| tzj | ok bible | 136 | keep |
| oj | ok | 135 | keep |
| ndc-ZW | ok | 132 | keep |
| tks | ok bible bu again i | 127 | keep |
: : think some mixeed : : :
: : dialects : : :
| gvl | filter short boilerplate | 126 | keep |
: : mostly bible : : :
| knj | ok bible | 126 | keep |
| awa | all bible in awadhi | 126 | keep |
: : (awa). Renamed from bjj : : :
| spp | ok bible | 123 | keep |
| mqy | bible remove short docs | 119 | keep |
| tca | ok bible + jw | 117 | keep |
| cce | ok jw | 116 | keep |
| skr | ok; some pnb mixed in | 107 | keep |
| kmz-Latn | ok soome ar script noise | 106 | keep |
| dje | ok; mostly but not all | 100 | keep |
: : bible : : :
| gof | ok some bible | 97 | keep |
| agr | good; bible | 93 | keep |
| qvz | bible | 88 | keep |
| adh | good; bible | 87 | keep |
| quf | bible | 86 | keep |
| kjg | ok bible | 84 | keep |
| tsc | ok | 82 | keep |
| ber | ok great! | 79 | keep |
| ify | ok bible | 79 | keep |
| cbk | ok bible | 78 | keep |
| quy | bible | 78 | keep |
| ahk | good; bible; crazy | 77 | keep |
: : diacritics : : :
| cac | ok bible | 77 | keep |
| akb | good; bible | 71 | keep |
| nut | ok | 67 | keep |
| ffm | ok bible; mixed fulfulde | 65 | keep |
: : dialects; consider : : :
: : merging with ff : : :
| taj | ok bible | 65 | keep |
| ms-Arab | ok mostly utusanmelayu | 63 | keep |
: : website : : :
| brx | quite good! | 62 | keep |
| ann | good; all from wikimedia | 56 | keep |
: : incubator : : :
| qup | bible | 53 | keep |
| ms-Arab-BN | ok not sure if same as | 46 | keep |
: : ms-Arab : : :
| miq | ok | 45 | keep |
| msb | ok bible | 41 | keep |
| bim | good; bible | 40 | keep |
| raj | ok | 40 | keep |
| kwi | ok bible | 37 | keep |
| tll | ok jw | 37 | keep |
| trp | good ; lots of random | 36 | keep |
: : stuff : : :
| smt | ok bible but lots of | 34 | keep |
: : different bibles! : : :
| mrw | ok | 29 | keep |
| dln | ok bible | 28 | keep |
| qvc | bible | 27 | keep |
| doi | ok actually nice! | 26 | keep |
| ff | ok after shortfilter | 26 | keep |
| zh | very noisy | 19850947 | keep (filtered) |
| zh-Latn | poor quality | 602 | remove |
| rhg-Latn | remove | 10302 | remove |
| ja-Latn | remove; maybe low quality, short and repeated | 7516 | remove |
| pam | remove | 2773 | remove |
| za | revisit after shortfilter | 1700 | remove |
| ar-Latn | terrible, 0% correct; remove | 1520 | remove |
| mnw | remove; en noise and boilerplate | 1100 | remove |
| fip | ok jw; but wrong language: mostly Mambwe-Lungu and Bemba, as well as Fipa (mgr+bem vs. fip) | 729 | remove |
| el-CY | bad; not Cypriote | 537 | remove |
| luz | terrible; remove | 354 | remove |
| cni | ok; bible; lots of mixed-in content in not, cob, cpc, arl | 261 | remove |
| apd-SD | terribly questionable; probably remove | 227 | remove |
| mey | mostly short and noisy; borderline | 127 | remove |
| awa | OK; should be used with caution and suspicion | 126 | remove |
| mtq | remove; short, repetitive docs | 111 | remove |
| mel | remove; noisy en | 103 | remove |
| mr-Latn | remove; mostly porn and short docs | 91 | remove |
| srr | remove; English boilerplate | 91 | remove |
| en-Cyrl | ok ... some fr-Cyrl too and maybe others | 90 | remove |
| en-Arab | remove | 79 | remove |
| syl | idk maybe ok ? | 61 | remove |
| jax | filter; mostly text.medjugorje.ws boilerplate | 58 | remove |
| xmm | very noisy; lots of DJ TikTok and Peppa Pig repeated | 58 | remove |
| shu | quite questionable; probably remove | 53 | remove |
| ks | ok shorter docs | 51 | remove |
| gyn | remove; boilerplate and porn | 45 | remove |
| aa | some pretty bad data but also some good data. filter on "Woo" (case sensitive) | 32 | remove |
| sjp | terrible; probably remove; check again after short filter | 31 | remove |
| abs | all short nonsense; remove | 24 | remove |
| mui | remove short docs | 23 | remove |
| mdh | filter porn, short text, and repetitive boilerplate | 22 | remove |
| noe | ok | 22 | remove |
| sxu | revisit after shortfilter | 22 | remove |
| bhb-Gujr | bad. remove. all junk gu. | 20 | remove |
| yaq | remove | 20 | remove |
| prk | ok | 18 | remove |
| cgg | rather noisy but potentially ok. not sure if WL or not | 17 | remove |
| bto | bad; remove unless short filter keeps enough | 16 | remove |
| ayl | terrible | 13 | remove |
| pa-Arab | ok | 13 | remove |
| bmm | terrible. filter on short and reevaluate | 11 | remove |
| mfb | remove short boilerplate | 11 | remove |
| mtr | ok; fix virama; remove en noise | 11 | remove |
| pmy | remove | 11 | remove |
| skg | terrible; remove | 11 | remove |
| ymm | remove | 11 | remove |
| xnr | ok; maybe fix virama, though it seems fine | 9 | remove |
| kjb | ok bible | 8 | remove |
| azg | short noise; bible | 7 | remove |
| bgz | idk; maybe ok but probably bad | 7 | remove |
| ctg | probably terrible; probably remove | 7 | remove |
| nyo | ok | 7 | remove |
| mdy | ok bible | 6 | remove |
| syl-Latn | revisit or remove after shortfilter | 6 | remove |
| xog | ok bible and stories | 6 | remove |
| cyo | terrifying noise; remove | 4 | remove |
| kfy | filter virama issue | 4 | remove |
| nd | ok | 4 | remove |
| rwr | remove | 4 | remove |
| tuf | ok bible | 4 | remove |
| clu | ok bible | 3 | remove |
| ng | ok | 3 | remove |
| zyj | deeply bad data; revisit after shortfilter | 3 | remove |
| rkt | ok | 2 | remove |
| bgc | super sketchy. remove unless short doc filter leaves some | 1 | remove |
| dcc | remove | 1 | remove |
| ff-Adlm | good | 1 | remove |
| gju | remove short boilerplate | 1 | remove |
| max | remove short some ru | 1 | remove |
| mwr | filter short docs; fix virama | 1 | remove |
| trw | sus; remove | 1 | remove |
| vkt | 1 doc remove | 1 | remove |
| gjk | empty remove | 0 | remove |
| bfy | very bad. remove unless it looks better after filtering short docs | 0 | remove |
| nyn | ok | 0 | remove |
| sgj | remove | 0 | remove |
A few comments too long to fit in the table above:
* `alt`: WAIT THIS IS AMAZING IT IS ACTUALLY ALTAI! e.g. from urls like
https://altaicholmon.ru/2020/02/28/jarashty-la-jajaltany-jarkyndu-lekeri/
* `tly-IR`: These all look like boilerplate content, e.g., lists of
  keywords/search queries used to bump page ranking in search results. No
  useful material for translation. Remove.
* `zap`: please note that at least some Zapotec speakers tend to view it as one
  language, not as a million dialects the way ISO does. However, some varieties
  are certainly mutually unintelligible, which complicates the matter.
* `zh-Latn`: The biggest problem is that several examples are not in
  Latin-script Chinese (i.e., romanization, in my understanding) but in English
  or mixed English and Chinese. The data that actually is romanized Chinese
  seems to be of good quality.
* `zh`: Many examples are porn-related, particularly those very long
documents. Also, there are some examples of traditional Chinese.
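Several verdicts above hinge on a "shortfilter" (dropping documents that are too short to be useful) and on simple substring rules such as the case-sensitive "Woo" filter for `aa`. A minimal sketch of what such per-subset filters could look like — the thresholds and function names here are illustrative assumptions, not the pipeline's actual code:

```python
def filter_short_docs(docs, min_chars=200, min_lines=3):
    """Drop documents that are too short to be useful.

    The thresholds are illustrative assumptions, not the real pipeline values.
    """
    kept = []
    for doc in docs:
        lines = [ln for ln in doc.splitlines() if ln.strip()]
        if len(doc) >= min_chars and len(lines) >= min_lines:
            kept.append(doc)
    return kept


def filter_substring(docs, needle):
    """Drop documents containing a known noise marker (case sensitive),
    e.g. the "Woo" rule applied to the `aa` subset above."""
    return [doc for doc in docs if needle not in doc]


docs = ["Woo spam spam", "a" * 250 + "\nb\nc", "tiny"]
docs = filter_substring(docs, "Woo")   # drops the "Woo" document
docs = filter_short_docs(docs)         # drops "tiny"
```

In practice such filters would be applied per language code, with verdicts like "revisit after shortfilter" meaning the subset is re-audited on the surviving documents.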
## Final Dataset information
The number of documents, sentences, tokens, characters, and bytes for the noisy
and clean splits of the data. Note that the "toks" field below uses whitespace
for tokenization, so it is not appropriate for non-whitespace-separating
languages like Chinese (see the section above). Note that the English subset in
this version is missing 18% of the documents that were included in the published
analysis of the dataset. These documents will be incorporated in an update
coming soon.
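The per-language counts in the table below could be reproduced with counting logic along these lines. This is a sketch under stated assumptions: sentence splitting is approximated by newlines (the actual pipeline's splitter is not specified here), and tokens are whitespace-split, matching the caveat above:

```python
def corpus_stats(docs):
    """Count docs, sentences, whitespace tokens, characters, and UTF-8 bytes.

    Sentence splitting is approximated by newlines; whitespace tokenization
    matches the caveat above and undercounts for languages like Chinese.
    """
    stats = {"docs": 0, "sents": 0, "toks": 0, "chars": 0, "bytes": 0}
    for doc in docs:
        stats["docs"] += 1
        stats["sents"] += len([s for s in doc.split("\n") if s.strip()])
        stats["toks"] += len(doc.split())
        stats["chars"] += len(doc)
        stats["bytes"] += len(doc.encode("utf-8"))
    return stats


print(corpus_stats(["Hello world.\nSecond sentence here.", "One more doc."]))
```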
BCP-47 | docs (noisy) | docs (clean) | sents (noisy) | sents (clean) | toks (noisy) | toks (clean) | chars (noisy) | chars (clean) | bytes (clean) | bytes (noisy) |
----------------|:---------------|:---------------|:----------------|:----------------|:---------------|:---------------|:----------------|:----------------|:---------|:---------|
total* | 7.2B | 3.7B | 133.1B | 97.5B | 4.6T | 2.6T | 30.6T | 16.0T | 11.4 T | 6.3 T
en* | 3.0B | 1.5B | 71.1B | 45.4B | 2.0T | 1.3T | 12.3T | 7.6T | 2.6 T | 4.3 T |
ru | 823M | 402.5M | 823M | 12.4B | 416.5B | 240.9B | 3.1T | 1.8T | 832.9 G | 1.4 T |
es | 476.4M | 250.9M | 8.3B | 4.5B | 325.7B | 170.4B | 2.1T | 1.1T | 380.9 G | 747.5 G |
de | 478.6M | 225.1M | 11.5B | 6B | 299.5B | 139.6B | 2.2T | 1T | 370.6 G | 815.5 G |
fr | 384.2M | 218.9M | 7.9B | 5B | 307.1B | 165.2B | 2T | 1T | 370.4 G | 699.1 G |
it | 238.9M | 126.4M | 4.5B | 2.5B | 180.1B | 83.6B | 1.2T | 553.1B | 198.4 G | 429.6 G |
pt | 209.2M | 124.2M | 4B | 2.4B | 123.2B | 79.2B | 791.5B | 499.8B | 183.1 G | 289.6 G |
pl | 145.1M | 90.9M | 3.3B | 2.4B | 68.9B | 49.2B | 505B | 356.4B | 140.7 G | 202.5 G |
nl | 134.5M | 86.6M | 134.5M | 2.3B | 104.4B | 51.6B | 698.5B | 334.5B | 118.2 G | 247.5 G |
tr | 107M | 56.4M | 107M | 1.2B | 41.9B | 25B | 328.8B | 198.9B | 73.7 G | 123.9 G |
vi | 92.8M | 55M | 1.6B | 1B | 71.5B | 48.7B | 342B | 228.8B | 88.8 G | 133.9 G |
cs | 72.1M | 38.3M | 1.7B | 1B | 40.8B | 22.1B | 272.2B | 147.9B | 62.1 G | 112.7 G |
id | 120.9M | 38M | 2.2B | 747.5M | 60.4B | 20.2B | 443B | 148.3B | 48.5 G | 148.7 G |
ro | 60.8M | 35.4M | 60.8M | 746.4M | 37.1B | 22.9B | 244.1B | 148.2B | 55.5 G | 90.3 G |
sv | 65.2M | 35.2M | 65.2M | 1B | 62.1B | 23.9B | 422.6B | 153.7B | 57.0 G | 149.9 G |
hu | 47.6M | 29.7M | 1.3B | 806.3M | 29.8B | 17.8B | 223.6B | 134.9B | 53.5 G | 86.8 G |
uk | 46.6M | 25M | 1B | 599.9M | 21.6B | 12.8B | 164.2B | 95.2B | 45.1 G | 75.8 G |
fa | 58.1M | 23.1M | 920.6M | 493.5M | 40.6B | 18.4B | 220.4B | 96.7B | 43.4 G | 97.4 G |
ja | 23.3M | 21.8M | 326M | 321.6M | 10.9B | 10.9B | 133.3B | 132.2B | 98.7 G | 99.7 G |
el | 52.4M | 20.9M | 808M | 445.4M | 25B | 12B | 173.2B | 80.9B | 37.9 G | 80.8 G |
fi | 35.8M | 20.4M | 1B | 650.3M | 23.8B | 11.5B | 202.2B | 101.1B | 37.6 G | 74.1 G |
zh | 29.3M | 19.9M | 492.3M | 298.8M | 19.2B | 10B | 333B | 142.3B | 109.9 G | 191.8 G |
da | 38.5M | 17.9M | 1.1B | 508M | 37.7B | 13B | 252B | 83.1B | 29.4 G | 89.5 G |
th | 19M | 17.4M | 19M | 385.8M | 8.9B | 8.9B | 118.6B | 117.6B | 57.6 G | 58.2 G |
no | 34.7M | 14.9M | 34.7M | 498.7M | 46.6B | 11.8B | 305.6B | 74.8B | 27.3 G | 109.8 G |
bg | 27.2M | 12.8M | 599.4M | 360.3M | 14.4B | 8.8B | 95.6B | 57.8B | 26.0 G | 42.8 G |
ko | 19.7M | 12.7M | 628.6M | 471.8M | 13.3B | 9.3B | 65.9B | 43.8B | 34.2 G | 49.1 G |
ar | 67.6M | 12.4M | 876.6M | 182.6M | 39B | 7.1B | 243B | 43.2B | 20.9 G | 115.9 G |
sk | 23.2M | 11.9M | 487.9M | 300.6M | 11.3B | 6.7B | 77.8B | 45.7B | 18.8 G | 31.9 G |
ca | 17.9M | 9.5M | 258.6M | 153M | 8.9B | 5.6B | 56.5B | 34.6B | 12.6 G | 20.8 G |
lt | 15.3M | 8.7M | 374M | 256.9M | 7.5B | 5.3B | 58.6B | 41.3B | 15.7 G | 22.3 G |
he | 14.1M | 7.2M | 302.2M | 196.8M | 9.2B | 5.2B | 54.9B | 30.5B | 14.8 G | 26.3 G |
sl | 12M | 6.3M | 316M | 180M | 6.9B | 4.5B | 47.8B | 30.5B | 11.5 G | 18.0 G |
et | 8.8M | 5.5M | 223.8M | 176.3M | 5B | 3.6B | 40.1B | 28.7B | 10.7 G | 15.0 G |
lv | 8.4M | 5M | 186.1M | 138.5M | 4.8B | 3.2B | 36.7B | 23.9B | 9.1 G | 13.8 G |
hi | 9.9M | 4.5M | 254.4M | 152M | 7.4B | 3.8B | 39.9B | 20.1B | 9.9 G | 19.7 G |
sq | 5.5M | 3.6M | 5.5M | 56.1M | 2.7B | 2.1B | 17B | 12.7B | 4.8 G | 6.6 G |
az | 5.2M | 3.3M | 90.3M | 70.9M | 2.1B | 1.5B | 16.3B | 11.9B | 4.5 G | 6.3 G |
hr | 23M | 2.8M | 476.6M | 53M | 12.6B | 1.4B | 85.1B | 9.6B | 3.7 G | 33.5 G |
ta | 5.6M | 2.6M | 122.5M | 81.9M | 2.1B | 1.1B | 19.2B | 10.6B | 4.9 G | 8.8 G |
ms | 14.1M | 2.3M | 14.1M | 55.2M | 8B | 1.7B | 58.8B | 12.5B | 4.0 G | 20.4 G |
ml | 3.7M | 2.1M | 75M | 52M | 1B | 603.3M | 10.5B | 6.3B | 3.0 G | 5.1 G |
sr | 4.7M | 2M | 4.7M | 64M | 2.7B | 1.6B | 18.6B | 11B | 5.1 G | 8.7 G |
kk | 3.1M | 1.8M | 87.4M | 59.1M | 1.6B | 1B | 13.4B | 8.6B | 3.8 G | 5.8 G |
te | 2.5M | 1.7M | 59M | 46.4M | 900.2M | 618.5M | 7.4B | 5.1B | 2.6 G | 3.8 G |
mr | 2.9M | 1.7M | 2.9M | 50M | 1.2B | 776.9M | 8.7B | 5.5B | 2.8 G | 4.4 G |
is | 2.9M | 1.6M | 73.7M | 39.3M | 2.1B | 979.2M | 14.9B | 6.4B | 2.5 G | 5.9 G |
bs | 12.9M | 1.4M | 163.6M | 9M | 5.9B | 490.9M | 39.5B | 3.3B | 1.3 G | 15.6 G |
mk | 2.9M | 1.4M | 41.3M | 22.6M | 1.3B | 685.9M | 9.1B | 4.5B | 2.0 G | 4.0 G |
gl | 4.2M | 1.3M | 45.3M | 18.8M | 2.3B | 748.4M | 15.6B | 4.8B | 1.7 G | 5.5 G |
eu | 2.1M | 1.2M | 41.7M | 24.8M | 827.5M | 525.3M | 6.9B | 4.3B | 1.5 G | 2.4 G |
bn | 4.3M | 1.1M | 151.2M | 38.6M | 2.5B | 645.7M | 16.8B | 4.3B | 2.2 G | 8.7 G |
be | 2M | 1.1M | 48.8M | 31.3M | 981M | 632.9M | 7.2B | 4.6B | 2.2 G | 3.5 G |
ka | 3.1M | 936.5K | 53.7M | 26.6M | 1.2B | 460.8M | 10.3B | 3.8B | 1.9 G | 5.0 G |
fil | 4.2M | 901.5K | 67.4M | 19.2M | 2.2B | 741.7M | 14.6B | 4.7B | 1.5 G | 5.0 G |
mn | 2.2M | 879.9K | 43.3M | 24M | 1.1B | 487.5M | 7.9B | 3.5B | 1.6 G | 3.5 G |
af | 2.9M | 868.7K | 51.9M | 30M | 1.7B | 795M | 11.8B | 4.8B | 1.8 G | 4.2 G |
uz | 1.4M | 669.9K | 25.7M | 17.5M | 605.9M | 388.3M | 5.2B | 3.3B | 1.1 G | 1.9 G |
gu | 1.3M | 659.7K | 28.9M | 18.1M | 634.4M | 345.9M | 3.9B | 2.1B | 1.1 G | 2.0 G |
kn | 1.6M | 657.8K | 32.9M | 19.2M | 546.4M | 258.6M | 4.6B | 2.2B | 1.1 G | 2.3 G |
kaa | 1.1M | 586.4K | 19.8M | 13.3M | 455.9M | 269M | 3.8B | 2.2B | 990.2 M | 1.6 G |
sw | 1.3M | 537.8K | 1.3M | 9.5M | 660.7M | 345.8M | 4.6B | 2.4B | 826.1 M | 1.6 G |
ur | 967.2K | 467.2K | 29M | 18.4M | 1B | 562.5M | 5.2B | 2.7B | 1.2 G | 2.4 G |
ne | 876.4K | 453.3K | 876.4K | 20.4M | 585M | 345.3M | 3.9B | 2.2B | 1.1 G | 1.9 G |
cy | 4.9M | 430.7K | 68.3M | 7.4M | 3.6B | 275.6M | 26.4B | 1.7B | 609.5 M | 10.0 G |
hy | 2M | 397.5K | 31.1M | 9.9M | 1B | 190.9M | 8.1B | 1.5B | 678.9 M | 3.6 G |
ky | 751.1K | 367.6K | 14.3M | 9.6M | 303.4M | 181.6M | 2.5B | 1.4B | 665.1 M | 1.1 G |
si | 788K | 349.2K | 22.1M | 16M | 507.3M | 293.3M | 3.4B | 1.9B | 1023.6 M | 1.8 G |
tt | 2.1M | 346.9K | 60.2M | 8.6M | 1B | 135M | 12.1B | 1B | 494.1 M | 4.6 G |
tg | 789.2K | 328.2K | 789.2K | 7.4M | 363.8M | 208.8M | 2.6B | 1.4B | 635.7 M | 1.1 G |
la | 2.9M | 319.2K | 85.7M | 13.8M | 1.1B | 218.4M | 8.2B | 1.5B | 550.6 M | 2.9 G |
so | 729.2K | 293.2K | 729.2K | 3.1M | 294.8M | 146.3M | 2.1B | 992.4M | 350.8 M | 746.2 M |
ga | 5.3M | 286K | 31.7M | 6.9M | 4.2B | 229.3M | 30.6B | 1.4B | 500.7 M | 9.8 G |
km | 297.8K | 285.7K | 5M | 5M | 53M | 52.6M | 1.1B | 1.1B | 566.2 M | 570.0 M |
mt | 1.2M | 265.4K | 1.2M | 5.6M | 390.4M | 171.5M | 3.2B | 1.3B | 467.4 M | 1.1 G |
eo | 1.4M | 260K | 33.9M | 9.3M | 745.1M | 253.1M | 5.5B | 1.7B | 627.6 M | 1.9 G |
ps | 429.9K | 252.9K | 5.1M | 3.6M | 293.9M | 177.5M | 1.4B | 848.9M | 403.5 M | 682.9 M |
rw | 681.8K | 226.5K | 681.8K | 1.9M | 225M | 99.8M | 1.7B | 749.1M | 264.8 M | 702.4 M |
ku | 671.9K | 218.9K | 10.7M | 4.9M | 305.3M | 143.8M | 2.1B | 849.9M | 335.3 M | 791.9 M |
lo | 229.1K | 216K | 2.9M | 2.8M | 41.7M | 41.1M | 706.9M | 697.6M | 365.3 M | 370.8 M |
fy | 1.7M | 210K | 12.1M | 3.7M | 506.9M | 94M | 3.7B | 592.3M | 223.0 M | 1.2 G |
ha | 443.9K | 173.5K | 4.5M | 2.4M | 206.5M | 109.3M | 1.3B | 630.2M | 219.0 M | 478.1 M |
my | 176.5K | 172.4K | 176.5K | 10.1M | 96.6M | 96.3M | 1.3B | 1.3B | 648.8 M | 650.4 M |
dv | 264.4K | 167.2K | 4.3M | 3.5M | 92.8M | 64M | 877.3M | 603.1M | 238.3 M | 343.2 M |
pa | 368.2K | 150.6K | 368.2K | 6M | 306M | 152.8M | 1.6B | 797.1M | 414.1 M | 857.6 M |
ckb | 622.7K | 148.9K | 5.6M | 2.5M | 312.7M | 83.3M | 2.2B | 572.7M | 265.0 M | 1011.1 M |
lb | 7.6M | 146K | 47.1M | 3.4M | 7.5B | 85M | 58.4B | 575.5M | 218.4 M | 22.2 G |
mg | 295.2K | 115.4K | 4.5M | 2.6M | 189.4M | 75.5M | 1.3B | 548.5M | 179.0 M | 429.3 M |
ht | 425.6K | 110.4K | 6.7M | 2.6M | 163M | 84.3M | 994.5M | 461.5M | 168.2 M | 361.5 M |
ug | 227.1K | 106.5K | 4.5M | 3.1M | 122.9M | 62.7M | 998.5M | 504.6M | 233.1 M | 449.9 M |
am | 245.2K | 106.3K | 7.1M | 5.3M | 157M | 95.2M | 869.9M | 509M | 345.5 M | 539.4 M |
or | 139.6K | 100.5K | 139.6K | 3.1M | 66M | 47.3M | 437.2M | 309.5M | 160.3 M | 228.1 M |
fo | 382.9K | 97.8K | 3.9M | 1.8M | 136.5M | 48.9M | 923.3M | 314.9M | 122.0 M | 328.8 M |
gd | 206K | 94.3K | 3.7M | 2.4M | 127.6M | 84.5M | 812M | 526M | 173.4 M | 276.6 M |
ba | 372.4K | 90.3K | 9.3M | 2.6M | 101M | 42.1M | 766.5M | 320.7M | 154.8 M | 352.4 M |
tk | 180.2K | 82.5K | 180.2K | 1.8M | 65.4M | 43.3M | 575.2M | 369M | 131.3 M | 221.6 M |
mi | 711.9K | 79.5K | 5.9M | 1.9M | 262.5M | 73.5M | 1.6B | 371.9M | 120.2 M | 539.1 M |
hmn | 241.3K | 75.2K | 3.5M | 1.9M | 192.1M | 80.2M | 1.2B | 408.8M | 124.3 M | 366.0 M |
grc | 364.8K | 70.7K | 13.7M | 2.8M | 298.6M | 65.3M | 2B | 417.8M | 217.7 M | 1.0 G |
jv | 999.5K | 69.5K | 13M | 2M | 302.3M | 52.1M | 2.3B | 376.1M | 130.9 M | 797.8 M |
ceb | 617.5K | 66.2K | 6.7M | 1.6M | 225M | 58.2M | 1.5B | 357.7M | 116.2 M | 451.4 M |
sd | 115.6K | 65.9K | 115.6K | 2.4M | 112.6M | 77.8M | 561M | 380.4M | 182.3 M | 267.1 M |
yi | 160.6K | 64.9K | 3.3M | 1.9M | 129.1M | 53.9M | 838.4M | 352.6M | 146.0 M | 350.8 M |
kaa_Latn | 375.2K | 61.2K | 3.6M | 1.3M | 375.2K | 61.2K | 1.5M | 209.5K | 86.2 M | 264.6 M |
sn | 3.1M | 60.2K | 3.1M | 1.2M | 1.3B | 31.6M | 10.6B | 266M | 92.5 M | 3.2 G |
co | 546.7K | 55.4K | 6.1M | 1.3M | 172.6M | 43.6M | 1.1B | 265.5M | 98.8 M | 386.8 M |
su | 336.6K | 55K | 336.6K | 1.6M | 154M | 39.5M | 967.2M | 286.7M | 100.7 M | 308.5 M |
pap | 259.1K | 54.5K | 259.1K | 1.4M | 183.9M | 41.1M | 1.4B | 229.9M | 83.5 M | 451.4 M |
ig | 130.4K | 54.4K | 2.1M | 1.4M | 129.2M | 45.7M | 846.1M | 251.4M | 93.0 M | 178.9 M |
zu | 372.3K | 53.8K | 3.8M | 1.2M | 148.4M | 27.2M | 1.2B | 257.4M | 89.6 M | 374.7 M |
xh | 310.9K | 53.7K | 2.9M | 1.4M | 81.6M | 31.2M | 749.5M | 287.3M | 100.0 M | 319.1 M |
sm | 137.8K | 52.6K | 1.9M | 1.3M | 100.9M | 53.7M | 607.9M | 276.3M | 88.6 M | 184.5 M |
ny | 181.6K | 52.2K | 181.6K | 1.5M | 80.6M | 34.8M | 611.2M | 277.5M | 91.8 M | 209.8 M |
yo | 115K | 52.1K | 2M | 1.2M | 76.6M | 46.3M | 415.6M | 239M | 89.2 M | 157.8 M |
cv | 599.4K | 47.3K | 12M | 1.6M | 169.6M | 22.2M | 1B | 168.9M | 82.1 M | 413.6 M |
el_Latn | 497.3K | 46.4K | 11.3M | 1.7M | 497.3K | 46.4K | 2.3M | 162.8K | 196.8 M | 571.1 M |
kl | 85.9K | 46K | 2.1M | 1.5M | 32.3M | 22.3M | 403.9M | 279.1M | 84.2 M | 126.1 M |
haw | 310.4K | 45.7K | 7.1M | 1M | 141M | 43.3M | 892M | 214.2M | 69.9 M | 271.2 M |
gsw | 7.6M | 42.7K | 64.5M | 1M | 5B | 22.3M | 42.3B | 149.2M | 53.8 M | 13.5 G |
tet | 291K | 40.4K | 1.9M | 475.7K | 240.6M | 22.8M | 1.6B | 152.3M | 51.2 M | 455.4 M |
st | 96.8K | 40.4K | 96.8K | 1.1M | 65M | 39.8M | 381.5M | 226.9M | 74.0 M | 127.0 M |
lus | 91.5K | 36.4K | 1.4M | 863.5K | 53M | 31.3M | 298.3M | 167.3M | 60.1 M | 107.0 M |
oc | 2.4M | 36.4K | 2.4M | 1.6M | 887.6M | 26.7M | 6.7B | 177.6M | 58.7 M | 1.9 G |
as | 53.9K | 33.8K | 2.4M | 1.7M | 41.4M | 27.9M | 275.8M | 182.1M | 95.8 M | 146.1 M |
rm | 238.1K | 33.8K | 238.1K | 603.4K | 59.2M | 15.8M | 391M | 100.2M | 34.6 M | 133.1 M |
br | 705.4K | 33.2K | 7.8M | 731.7K | 646.8M | 21M | 3.7B | 125.4M | 46.2 M | 1.2 G |
sah | 1.3M | 29.2K | 1.3M | 1.2M | 283.7M | 17.6M | 2.2B | 148.2M | 68.3 M | 852.3 M |
hi_Latn | 1.2M | 26.7K | 22.6M | 1.2M | 1.2M | 26.7K | 5.3M | 98.9K | 53.5 M | 1.7 G |
se | 54.3K | 23.9K | 879.5K | 493.3K | 17.7M | 10M | 148.4M | 84.6M | 31.1 M | 56.6 M |
cnh | 44.4K | 21.6K | 688.6K | 406.9K | 21.6M | 12.5M | 110.8M | 63M | 22.1 M | 39.6 M |
om | 846.1K | 18.9K | 846.1K | 469.8K | 238M | 11.2M | 1.9B | 88.5M | 30.4 M | 881.5 M |
ce | 59.3K | 15K | 991.1K | 460.1K | 17.8M | 9.6M | 130.6M | 67.8M | 31.1 M | 60.2 M |
udm | 67.1K | 13.4K | 942.7K | 510.3K | 14M | 7.4M | 106M | 55.5M | 26.3 M | 49.2 M |
lg | 61.1K | 13K | 510.9K | 166.1K | 21.4M | 6.1M | 160.7M | 48M | 17.3 M | 56.7 M |
os | 172.1K | 12.6K | 172.1K | 359.3K | 27.1M | 6.9M | 233.5M | 50.1M | 23.1 M | 87.7 M |
nv | 17.1K | 12.6K | 17.1K | 86.5K | 3.1M | 1.1M | 24.8M | 9.1M | 2.0 M | 7.9 M |
kha | 37.8K | 12.1K | 235.5K | 75.2K | 15.8M | 6M | 88.6M | 30.2M | 9.8 M | 27.3 M |
ilo | 69.8K | 11.8K | 889.2K | 365.1K | 26.7M | 9M | 187.9M | 59.4M | 20.6 M | 64.0 M |
ctd_Latn | 23.3K | 11.6K | 575.6K | 382.2K | 23.3K | 11.6K | 90.7K | 41K | 21.5 M | 35.1 M |
vec | 1.1M | 11.1K | 10M | 209.7K | 284.7M | 7.8M | 1.8B | 43.8M | 17.7 M | 625.0 M |
hil | 126.8K | 10.6K | 1.1M | 379.7K | 43.9M | 9.2M | 293.5M | 57.2M | 18.5 M | 95.2 M |
tyv | 61.6K | 9.1K | 596.6K | 268.3K | 9.9M | 4.7M | 80.2M | 38.5M | 16.7 M | 36.6 M |
iba | 34K | 7.6K | 326.9K | 126.1K | 37.8M | 4.8M | 251.4M | 30.5M | 10.0 M | 61.3 M |
ru_Latn | 346.3K | 7.5K | 346.3K | 239.1K | 346.3K | 7.5K | 1.5M | 27.7K | 14.9 M | 452.3 M |
kbd | 154.7K | 7.5K | 1.4M | 257.2K | 31.9M | 4.4M | 321.4M | 36.8M | 16.8 M | 209.6 M |
ti | 20.8K | 7.3K | 20.8K | 481.3K | 18.2M | 8.8M | 95.4M | 44.6M | 30.9 M | 63.6 M |
sa | 154.3K | 7.1K | 154.3K | 1.1M | 70M | 9.9M | 512.5M | 88.8M | 44.9 M | 236.6 M |
av | 107.6K | 6.3K | 806.1K | 190.1K | 15.5M | 3.4M | 129M | 30.2M | 12.8 M | 56.0 M |
bo | 6.2K | 6.2K | 1.1M | 1.1M | 3.4M | 3.4M | 88.7M | 88.7M | 40.7 M | 40.7 M |
zza | 370.1K | 6K | 3.3M | 229.2K | 87.7M | 3.9M | 617.3M | 26.3M | 10.0 M | 234.1 M |
ber_Latn | 480.5K | 5.6K | 10.5M | 169.4K | 480.5K | 5.6K | 2.1M | 18.9K | 11.0 M | 945.3 M |
otq | 17.6K | 5.6K | 17.6K | 114.8K | 10.2M | 3.8M | 65M | 23.4M | 7.7 M | 22.8 M |
te_Latn | 236.6K | 5.3K | 4.4M | 269.1K | 236.6K | 5.3K | 1M | 19.3K | 11.4 M | 254.3 M |
bua | 9.8K | 5.3K | 252K | 144.6K | 4.7M | 2.7M | 38M | 21.7M | 10.0 M | 17.9 M |
ts | 34.7K | 5.2K | 34.7K | 248.6K | 39.6M | 6.5M | 377.2M | 38.8M | 12.2 M | 99.5 M |
cfm | 9.1K | 4.9K | 199.6K | 128.6K | 6.2M | 4M | 32.9M | 21.5M | 7.4 M | 11.6 M |
tn | 138.2K | 4.8K | 138.2K | 174.4K | 46M | 5.5M | 302.3M | 29.2M | 9.4 M | 99.0 M |
krc | 359.5K | 4.8K | 2.3M | 153.9K | 50.2M | 2.6M | 369.5M | 20.7M | 9.1 M | 139.9 M |
ak | 19.5K | 4.8K | 341.7K | 210.2K | 12.3M | 4.7M | 74.5M | 24.8M | 9.1 M | 24.7 M |
meo | 790.7K | 4.7K | 16.5M | 39K | 478M | 1.2M | 3B | 7.5M | 3.1 M | 1.2 G |
chm | 81.5K | 4.7K | 929.1K | 179.7K | 17.2M | 2.9M | 132.2M | 21.3M | 9.8 M | 53.5 M |
to | 14.3K | 4.6K | 14.3K | 149K | 10.3M | 5.7M | 58.2M | 29.9M | 9.6 M | 19.0 M |
ee | 14.1K | 4.5K | 353.6K | 246.7K | 9.7M | 6.2M | 67.9M | 32.8M | 11.8 M | 23.3 M |
nso | 376.2K | 4.4K | 376.2K | 188.4K | 419.2M | 5.3M | 2B | 28.2M | 9.1 M | 502.7 M |
ady | 74.9K | 4.2K | 446.8K | 96.9K | 8M | 1.6M | 67.9M | 14.8M | 6.4 M | 30.6 M |
rom | 22.9K | 4.2K | 22.9K | 76.1K | 8.9M | 2.6M | 59M | 15.9M | 5.8 M | 21.0 M |
bho | 13.6K | 4.1K | 306.2K | 118.5K | 7.1M | 2.7M | 37.6M | 13.4M | 7.4 M | 20.6 M |
ltg | 13.1K | 4.1K | 213.7K | 87.3K | 4M | 1.9M | 29.2M | 13.9M | 5.6 M | 11.7 M |
fj | 17K | 4K | 410K | 164.1K | 11.6M | 5.2M | 67.7M | 28M | 8.6 M | 22.5 M |
yua | 10.4K | 4K | 141.6K | 77.6K | 5.2M | 2.5M | 36.8M | 17.2M | 5.7 M | 12.4 M |
gn | 87.1K | 3.9K | 770.9K | 162.6K | 19.2M | 2.7M | 140.7M | 20.8M | 7.8 M | 52.1 M |
az_RU | 6.5K | 3.8K | 231.8K | 177.3K | 6.5K | 3.8K | 24K | 12.9K | 10.3 M | 15.1 M |
ln | 94.7K | 3.3K | 718.7K | 139K | 42.4M | 3.4M | 291.8M | 21.5M | 6.8 M | 85.3 M |
ada | 6.5K | 3.1K | 291.5K | 199.2K | 7.5M | 4.9M | 38.9M | 24.2M | 8.6 M | 13.9 M |
myv | 164.8K | 3.1K | 164.8K | 130K | 16M | 1.7M | 120.3M | 13.8M | 6.2 M | 49.5 M |
bik | 44.8K | 3.1K | 376.7K | 77K | 14.8M | 2.5M | 102.3M | 15.7M | 5.3 M | 34.0 M |
tlh | 516.9K | 3.1K | 516.9K | 46.9K | 221.3M | 1.1M | 1.4B | 7.8M | 2.7 M | 554.2 M |
kbp | 5.9K | 3K | 247.9K | 128.3K | 5.6M | 2.6M | 30.8M | 14.6M | 5.7 M | 12.4 M |
war | 1M | 2.9K | 114M | 96.2K | 612.1M | 2.4M | 3.5B | 16.1M | 3.7 M | 1.2 G |
wa | 70.6K | 2.8K | 1.5M | 127.2K | 35.2M | 3.6M | 198.8M | 20.4M | 7.2 M | 67.8 M |
bew | 311.1K | 2.7K | 10.4M | 58.4K | 212.4M | 1.3M | 1.4B | 8.5M | 3.1 M | 547.1 M |
rcf | 21.6K | 2.6K | 21.6K | 50.5K | 4.9M | 1.2M | 30.2M | 5.7M | 2.1 M | 11.4 M |
ta_Latn | 260.7K | 2.6K | 3.4M | 142.7K | 260.7K | 2.6K | 1.2M | 9.1K | 5.0 M | 215.4 M |
kac | 5.9K | 2.6K | 109.2K | 77.4K | 5M | 2.8M | 26.6M | 13.6M | 4.3 M | 8.0 M |
iu | 5.4K | 2.5K | 92.6K | 53.1K | 1.9M | 907.4K | 17.5M | 8.3M | 4.8 M | 9.9 M |
ay | 8.1K | 2.5K | 196.7K | 83.8K | 3.9M | 1.4M | 34.5M | 13.1M | 4.5 M | 12.7 M |
kum | 4.2K | 2.5K | 132.2K | 89.7K | 2.3M | 1.6M | 18.2M | 12.4M | 5.3 M | 8.0 M |
qu | 149.7K | 2.4K | 1M | 87K | 26.7M | 1.3M | 200.6M | 12.2M | 4.0 M | 68.3 M |
bgp | 355.7K | 2.4K | 5.6M | 43.3K | 186.1M | 1.8M | 1.1B | 9.8M | 3.1 M | 377.5 M |
hif | 702K | 2.4K | 7.9M | 124.7K | 1.2B | 3.2M | 9.1B | 19.1M | 5.9 M | 3.5 G |
kw | 176.9K | 2.3K | 1M | 51.6K | 53.1M | 1.3M | 327.8M | 7.7M | 2.8 M | 89.2 M |
nan_Latn_TW | 7.4K | 2.3K | 7.4K | 72.7K | 7.4K | 2.3K | 28.3K | 7.7K | 4.8 M | 15.4 M |
srn | 16.7K | 2.3K | 16.7K | 139.5K | 8M | 3.4M | 49.1M | 17M | 5.1 M | 15.6 M |
tly_IR | 406.3K | 2.2K | 406.3K | 18.2K | 406.3K | 2.2K | 1.6M | 8.6K | 580.4 K | 283.0 M |
sg | 4.2K | 2.1K | 154K | 117.9K | 4.6M | 3.3M | 22.6M | 15.5M | 4.6 M | 6.8 M |
gom | 4.6K | 2.1K | 178.3K | 108K | 2.7M | 1.4M | 19.8M | 10M | 5.0 M | 10.5 M |
ml_Latn | 260.8K | 2.1K | 3.5M | 77.3K | 260.8K | 2.1K | 1.1M | 7.2K | 3.5 M | 277.7 M |
kj | 112.2K | 2.1K | 881.8K | 22.6K | 46.9M | 877.3K | 339.6M | 6M | 2.1 M | 104.9 M |
ksd | 14.9K | 2K | 533K | 78.6K | 11.5M | 2.1M | 62.4M | 10M | 2.9 M | 20.0 M |
dz | 1.9K | 1.9K | 191.7K | 191.7K | 1.1M | 1.1M | 22.7M | 22.7M | 10.0 M | 10.0 M |
kv | 59.1K | 1.9K | 584.3K | 88.8K | 9.5M | 1.2M | 91.4M | 9M | 4.4 M | 41.0 M |
msi | 686.7K | 1.9K | 686.7K | 22.6K | 414.8M | 440.4K | 2.6B | 2.7M | 1.1 M | 1.0 G |
ve | 3.8K | 1.9K | 97.8K | 79.4K | 3.2M | 2.1M | 19M | 11.7M | 3.8 M | 6.2 M |
zap | 5.5K | 1.8K | 202.3K | 93.5K | 4.2M | 1.8M | 26.4M | 11.4M | 4.0 M | 9.6 M |
zxx_xx_dtynoise | 118.8K | 1.8K | 3.8M | 49.3K | 118.8K | 1.8K | 501K | 6.6K | 3.9 M | 367.0 M |
meu | 5.9K | 1.7K | 232.1K | 72.6K | 4.2M | 1.4M | 27.2M | 8.6M | 2.6 M | 9.1 M |
iso | 3.7K | 1.7K | 155.8K | 111.5K | 4.4M | 2.7M | 23M | 13.7M | 4.9 M | 8.1 M |
ium | 100.3K | 1.7K | 6.2M | 54.9K | 48.4M | 1.7M | 314M | 7.4M | 2.6 M | 124.0 M |
nhe | 3K | 1.7K | 3K | 57.7K | 1.9M | 1.2M | 15.6M | 9.8M | 2.7 M | 4.8 M |
tyz | 8K | 1.7K | 454.8K | 104.6K | 7.5M | 1.9M | 46.3M | 11.3M | 3.8 M | 16.0 M |
hui | 2K | 1.7K | 80.1K | 74.7K | 1.8M | 1.7M | 11.8M | 10.9M | 3.0 M | 3.3 M |
new | 6.6K | 1.6K | 6.6K | 85K | 3.2M | 1.4M | 21.2M | 8.8M | 4.4 M | 10.6 M |
mdf | 71K | 1.6K | 394.7K | 45.1K | 8.3M | 670.1K | 65.8M | 5.5M | 2.5 M | 26.7 M |
pag | 49.6K | 1.6K | 49.6K | 88.8K | 13.8M | 1.9M | 92.9M | 12M | 3.9 M | 29.2 M |
gv | 501.9K | 1.6K | 18.8M | 26.9K | 137.7M | 996.2K | 933.1M | 6.2M | 2.0 M | 318.6 M |
gag | 33.9K | 1.6K | 491K | 37K | 10.2M | 661K | 84.9M | 5.2M | 2.1 M | 32.6 M |
ngu | 3.8K | 1.5K | 3.8K | 87.1K | 2.7M | 1.5M | 21.4M | 11.8M | 3.6 M | 6.7 M |
quc | 4.4K | 1.5K | 89.2K | 41.2K | 2.8M | 1.1M | 16.6M | 6.4M | 2.2 M | 5.9 M |
mam | 23K | 1.5K | 446.3K | 52.9K | 9.8M | 1.2M | 70.4M | 7.2M | 2.6 M | 30.7 M |
min | 28.2K | 1.5K | 500.9K | 75.6K | 10.2M | 1.4M | 70.5M | 9.9M | 2.6 M | 21.1 M |
ho | 2K | 1.5K | 57K | 47.8K | 1.8M | 1.3M | 12.3M | 7.8M | 1.9 M | 3.1 M |
pon | 5.7K | 1.5K | 167.8K | 48.7K | 3M | 1.1M | 18.3M | 6.7M | 2.1 M | 6.1 M |
mrj | 97.1K | 1.4K | 97.1K | 60.3K | 14.5M | 1.1M | 100.6M | 7.6M | 3.6 M | 40.8 M |
lu | 10.6K | 1.4K | 316K | 112.1K | 7.8M | 2.3M | 54.2M | 15.4M | 4.8 M | 18.0 M |
gom_Latn | 231.1K | 1.4K | 4.1M | 77.9K | 231.1K | 1.4K | 1M | 5.1K | 3.6 M | 240.6 M |
alt | 2.6K | 1.4K | 110.1K | 65.9K | 1.8M | 1.1M | 14.3M | 8.7M | 3.8 M | 6.4 M |
nzi | 2.5K | 1.4K | 2.5K | 71.8K | 2.5M | 1.7M | 14.4M | 9.4M | 3.1 M | 4.8 M |
tzo | 2.8K | 1.4K | 100.4K | 75.7K | 2.5M | 1.7M | 15.9M | 10.6M | 3.2 M | 4.9 M |
bci | 7.4K | 1.3K | 124.8K | 87.1K | 5M | 1.9M | 32.8M | 9M | 3.1 M | 9.4 M |
dtp | 4.6K | 1.3K | 51.2K | 7.9K | 1.9M | 419.4K | 12.7M | 3M | 1013.9 K | 4.5 M |
abt | 1.6K | 1.3K | 122.7K | 110.3K | 1.5M | 1.3M | 9.6M | 8.2M | 2.2 M | 2.7 M |
bbc | 72.3K | 1.3K | 718.3K | 73.2K | 21.7M | 1.7M | 151.3M | 10.6M | 3.6 M | 47.9 M |
pck | 8.9K | 1.3K | 8.9K | 69.7K | 6.8M | 2.1M | 39.8M | 11.5M | 4.2 M | 14.2 M |
mai | 54.3K | 1.2K | 1M | 60.2K | 24.6M | 1.2M | 156M | 6.8M | 3.6 M | 67.1 M |
mps | 2.7K | 1.2K | 132.8K | 71.9K | 2.8M | 1.6M | 16M | 8.7M | 2.3 M | 4.8 M |
emp | 3.6K | 1.2K | 106.4K | 75.4K | 1.9M | 999.1K | 14.5M | 7.4M | 2.4 M | 4.9 M |
mgh | 5.5K | 1.2K | 151.8K | 61.2K | 2.8M | 1.1M | 24.1M | 8.2M | 2.8 M | 8.3 M |
tab | 7.8K | 1.2K | 226.4K | 26.8K | 4.3M | 538.9K | 33.7M | 4.4M | 1.9 M | 15.7 M |
crh | 5.1K | 1.2K | 170.9K | 61.8K | 2.4M | 943K | 18.8M | 7.5M | 3.4 M | 8.9 M |
tbz | 5.1K | 1.1K | 128.7K | 37.5K | 3.5M | 893.4K | 22M | 4.8M | 1.9 M | 10.2 M |
ss | 8.1K | 1.1K | 8.1K | 30.4K | 2.7M | 568.3K | 23.7M | 5.5M | 1.8 M | 7.4 M |
chk | 2.8K | 1.1K | 98.8K | 44K | 2M | 1M | 12M | 5.8M | 1.8 M | 4.0 M |
bru | 3K | 1.1K | 89.7K | 48.2K | 2.4M | 938.1K | 12.9M | 4.8M | 1.5 M | 4.5 M |
nnb | 4.9K | 1.1K | 4.9K | 70.2K | 3.2M | 1.2M | 27.7M | 9.1M | 3.3 M | 10.0 M |
fon | 5.3K | 1.1K | 222.9K | 67.3K | 6.9M | 1.8M | 34M | 8.3M | 3.1 M | 14.8 M |
ppk | 2.6K | 1.1K | 85.8K | 34.9K | 1.9M | 801.8K | 13.2M | 5.5M | 1.6 M | 4.3 M |
tiv | 3.8K | 1.1K | 3.8K | 80.7K | 3.7M | 2.1M | 20.4M | 10.2M | 3.2 M | 6.0 M |
btx | 3.1K | 1K | 81.7K | 43.9K | 2M | 907.5K | 13.1M | 5.9M | 2.0 M | 4.6 M |
bg_Latn | 200.4K | 991 | 2.8M | 25.5K | 200.4K | 991 | 927.1K | 3.7K | 1.7 M | 143.6 M |
mbt | 1.6K | 969 | 86K | 45.4K | 2.4M | 1.3M | 14.6M | 7.5M | 2.2 M | 5.1 M |
ace | 65.5K | 966 | 632.5K | 32.5K | 19.9M | 1.1M | 146.1M | 7.4M | 2.2 M | 42.3 M |
tvl | 2.3K | 933 | 72.9K | 53.6K | 2.5M | 1.7M | 12.6M | 8.1M | 2.4 M | 3.8 M |
dov | 3.5K | 923 | 129.8K | 56.7K | 2.6M | 967.5K | 20.7M | 8M | 2.6 M | 7.1 M |
ach | 2K | 915 | 63K | 40.1K | 1.6M | 890.9K | 9M | 4.7M | 1.6 M | 3.0 M |
xal | 71.8K | 913 | 498.5K | 30.8K | 8.5M | 449.8K | 64.7M | 3.2M | 1.5 M | 24.4 M |
cuk | 4.1K | 899 | 76.5K | 34.3K | 2M | 469.9K | 24.7M | 4.6M | 1.5 M | 6.1 M |
kos | 2.2K | 881 | 44.6K | 27.8K | 1.1M | 780.1K | 6.5M | 4.2M | 1.4 M | 2.2 M |
crs | 7.6K | 873 | 282.4K | 40.1K | 7.3M | 1.2M | 40.1M | 6.8M | 2.2 M | 13.2 M |
wo | 36.4K | 871 | 303.4K | 25.4K | 30.7M | 850.7K | 213.4M | 4.5M | 1.7 M | 59.9 M |
bts | 3.2K | 869 | 109.1K | 29.1K | 3.1M | 663.3K | 20.8M | 4.2M | 1.4 M | 6.2 M |
ubu | 2.2K | 846 | 113.5K | 47.5K | 2.3M | 996.4K | 15.9M | 6.7M | 1.9 M | 4.7 M |
gym | 1.5K | 820 | 73.7K | 49.6K | 1.6M | 1.1M | 10.3M | 6.9M | 2.0 M | 3.2 M |
ibb | 74.1K | 818 | 516.5K | 36.3K | 26.4M | 776.1K | 190.9M | 4.9M | 1.5 M | 56.0 M |
ape | 7K | 814 | 147K | 56.1K | 12.4M | 881.5K | 71M | 5.8M | 1.6 M | 18.8 M |
stq | 111.9K | 809 | 111.9K | 27.7K | 34.4M | 600.4K | 243.1M | 3.8M | 1.5 M | 82.5 M |
ang | 66.5K | 803 | 1.8M | 86.7K | 28.5M | 1.7M | 193M | 9.8M | 3.4 M | 67.1 M |
enq | 7.1K | 793 | 241.9K | 39.1K | 11M | 718.8K | 68.5M | 4.8M | 1.3 M | 18.8 M |
tsg | 353.8K | 789 | 353.8K | 17.9K | 158M | 588.9K | 1.1B | 3.8M | 1.0 M | 309.9 M |
shn | 889 | 788 | 46.4K | 46.2K | 383.8K | 378.5K | 5.7M | 5.7M | 2.6 M | 2.6 M |
kri | 39.1K | 786 | 271.2K | 38.8K | 12.6M | 995.2K | 86.4M | 5M | 1.6 M | 20.9 M |
kek | 3.2K | 782 | 70.4K | 38.4K | 1.8M | 709K | 13.6M | 4.4M | 1.4 M | 4.7 M |
rmc | 2.4K | 738 | 2.4K | 25.8K | 1.3M | 545.4K | 7.9M | 3.2M | 1.1 M | 2.9 M |
acf | 4.9K | 730 | 81.9K | 24.6K | 2.1M | 602.2K | 11.6M | 3M | 1.1 M | 4.7 M |
fip | 3.7K | 729 | 165.6K | 49K | 3.5M | 916.8K | 25.7M | 6.6M | 2.1 M | 8.6 M |
syr | 3.5K | 716 | 326.4K | 197.1K | 4.6M | 1.9M | 31.5M | 14M | 6.1 M | 13.9 M |
qub | 972 | 705 | 61K | 51.1K | 589.2K | 455.5K | 5.9M | 4.4M | 1.4 M | 1.8 M |
bm | 21.9K | 702 | 172.3K | 24.5K | 7.1M | 583.1K | 48.4M | 3M | 1.1 M | 14.4 M |
tzh | 1.7K | 702 | 41.7K | 33.9K | 1.5M | 929.6K | 9.3M | 5.6M | 1.6 M | 2.6 M |
jiv | 1.7K | 696 | 80.9K | 32K | 1.1M | 418.9K | 9.6M | 3.5M | 1.1 M | 3.3 M |
kn_Latn | 72.9K | 688 | 765.9K | 10.1K | 72.9K | 688 | 328.1K | 2.5K | 430.8 K | 61.4 M |
kjh | 1.5K | 672 | 42.8K | 28.7K | 566.1K | 379.2K | 4.5M | 3.1M | 1.3 M | 2.0 M |
yap | 1.9K | 638 | 37.6K | 19.5K | 1.3M | 661.4K | 6.9M | 3.3M | 1.0 M | 2.2 M |
ban | 8K | 637 | 150.9K | 16.3K | 5M | 499.7K | 35.4M | 3.6M | 1.1 M | 12.0 M |
tuc | 3.5K | 635 | 193.2K | 50.3K | 2.9M | 703K | 17.2M | 4.1M | 1.2 M | 5.7 M |
tcy | 10.7K | 632 | 338.7K | 37.1K | 5.5M | 432.6K | 41.6M | 3.3M | 1.7 M | 20.9 M |
cab | 1.2K | 629 | 50.4K | 37.5K | 1M | 690.9K | 7.5M | 5.1M | 1.6 M | 2.4 M |
cak | 1.2K | 617 | 70.4K | 32.6K | 1.3M | 730.1K | 7.6M | 4.2M | 1.3 M | 2.4 M |
din | 128.4K | 611 | 885.8K | 23.6K | 31.6M | 541.7K | 210M | 2.9M | 1.1 M | 64.3 M |
zh_Latn | 739.4K | 602 | 10.7M | 45.1K | 739.4K | 602 | 3.4M | 2.3K | 2.0 M | 969.9 M |
arn | 2.4K | 593 | 64.5K | 26.2K | 1.5M | 541.9K | 10.2M | 3.7M | 1.2 M | 3.7 M |
lrc | 42.4K | 587 | 351.9K | 9K | 17.3M | 248.9K | 85.3M | 1.4M | 646.9 K | 37.5 M |
rwo | 938 | 572 | 938 | 45.5K | 734.8K | 590.4K | 5.1M | 4.2M | 1.1 M | 1.4 M |
hus | 825 | 569 | 26.5K | 23.7K | 733.4K | 542.1K | 4.4M | 3.1M | 967.6 K | 1.3 M |
bum | 4.7K | 559 | 103.8K | 36.5K | 3M | 805.5K | 18.8M | 4M | 1.3 M | 6.1 M |
mak | 1K | 555 | 32.5K | 20.4K | 761K | 457.4K | 6.1M | 3.7M | 1.1 M | 2.0 M |
frp | 148K | 550 | 3.5M | 8.2K | 71.2M | 230.2K | 535.4M | 1.4M | 518.3 K | 129.7 M |
seh | 5.6K | 545 | 68.8K | 37.2K | 2M | 650.6K | 14.9M | 4.9M | 1.5 M | 4.4 M |
twu | 2.5K | 539 | 109.9K | 24.4K | 2.4M | 571.2K | 14.2M | 3.2M | 1.0 M | 4.8 M |
kmb | 1.3K | 538 | 60.4K | 36.9K | 1.4M | 810.8K | 8.4M | 4.6M | 1.4 M | 2.6 M |
ksw | 560 | 536 | 16.1K | 16K | 219.9K | 218.8K | 2.9M | 2.9M | 1.4 M | 1.4 M |
sja | 1.3K | 527 | 67.7K | 24.9K | 982.5K | 459.3K | 7.7M | 3.4M | 1.1 M | 2.6 M |
amu | 1.8K | 511 | 72K | 25.2K | 1.5M | 443.3K | 9.6M | 3.2M | 1.0 M | 3.4 M |
mad | 103.8K | 509 | 500.6K | 18.5K | 16.2M | 386.7K | 111.8M | 2.8M | 960.3 K | 34.2 M |
quh | 1K | 501 | 42K | 29.9K | 624.4K | 396.8K | 5.8M | 3.7M | 1.2 M | 1.8 M |
dyu | 1.2K | 483 | 55.8K | 19.7K | 1.2M | 421.8K | 5.7M | 2M | 665.5 K | 1.9 M |
toj | 736 | 452 | 736 | 26.1K | 691.2K | 540.2K | 4.3M | 3.3M | 1.0 M | 1.3 M |
ch | 12.9K | 449 | 147.5K | 16K | 8.9M | 393.9K | 63.5M | 2.5M | 906.8 K | 10.0 M |
sus | 664 | 437 | 664 | 15.2K | 648K | 402.8K | 3.7M | 2.1M | 674.0 K | 1.0 M |
nog | 970 | 419 | 970 | 11K | 330.3K | 200.4K | 2.6M | 1.6M | 714.0 K | 1.2 M |
jam | 12.7K | 416 | 68.5K | 15.8K | 3.5M | 378.4K | 25.8M | 1.7M | 609.5 K | 7.6 M |
gui | 1.1K | 409 | 62.7K | 24.8K | 915K | 314K | 6.5M | 2M | 619.3 K | 2.1 M |
nia | 2K | 408 | 2K | 25K | 1.7M | 476.5K | 11.3M | 3.1M | 1.0 M | 3.9 M |
mas | 15.2K | 405 | 216.8K | 17.6K | 6.2M | 390.1K | 42.1M | 3M | 927.5 K | 13.4 M |
bzj | 983 | 404 | 33.6K | 26.4K | 824.3K | 565K | 4.5M | 2.9M | 981.2 K | 1.4 M |
mkn | 956 | 402 | 33.1K | 25.4K | 584.2K | 456.9K | 3.4M | 2.6M | 734.8 K | 1.0 M |
lhu | 46K | 377 | 975K | 15.7K | 29.1M | 441.2K | 208.6M | 2.5M | 623.0 K | 38.8 M |
ctu | 690 | 366 | 35.5K | 20.6K | 646.7K | 352.8K | 3.6M | 2M | 614.9 K | 1.2 M |
kg | 4.7K | 365 | 85.5K | 21.7K | 2.5M | 406.7K | 16.6M | 2.6M | 905.4 K | 5.7 M |
inb | 387 | 343 | 17.3K | 17K | 202.8K | 197K | 2M | 1.9M | 535.2 K | 555.6 K |
guh | 1.9K | 331 | 104.9K | 28.4K | 1.5M | 328.4K | 11.2M | 3M | 789.5 K | 3.5 M |
rn | 8.2K | 323 | 8.2K | 11.1K | 4.5M | 179K | 33.2M | 1.3M | 449.9 K | 11.8 M |
bus | 467 | 322 | 21.4K | 12.1K | 418.4K | 219.2K | 2.1M | 1.1M | 428.8 K | 830.9 K |
mfe | 7.5K | 320 | 198.8K | 18.2K | 4.6M | 374.8K | 26.9M | 2.1M | 716.4 K | 10.1 M |
sda | 1.6K | 317 | 43.2K | 6.2K | 2.5M | 218.3K | 15.8M | 1.6M | 529.0 K | 4.7 M |
bi | 71.9K | 311 | 308.5K | 13.6K | 19.4M | 359.4K | 132.4M | 1.9M | 546.9 K | 42.6 M |
cr_Latn | 19K | 303 | 170K | 8.9K | 19K | 303 | 81.8K | 1K | 590.4 K | 15.0 M |
gor | 1.7K | 303 | 53.3K | 6.5K | 1.4M | 227.1K | 9.4M | 1.7M | 494.0 K | 3.1 M |
jac | 8.2K | 303 | 61.6K | 11.9K | 1.8M | 271K | 15.7M | 1.7M | 530.3 K | 7.3 M |
chr | 964 | 301 | 33.8K | 7.5K | 629.9K | 172.3K | 4.7M | 1M | 564.1 K | 2.1 M |
mh | 4.6K | 296 | 235.1K | 13K | 3.6M | 393.5K | 24.9M | 2.2M | 778.4 K | 8.4 M |
mni | 1.2K | 290 | 38.1K | 13.2K | 841.3K | 245.5K | 6.4M | 1.8M | 866.6 K | 3.0 M |
wal | 2.6K | 286 | 128K | 14K | 2M | 203.4K | 17M | 1.7M | 525.7 K | 5.1 M |
teo | 2.8K | 274 | 131.5K | 13.7K | 2.3M | 221.4K | 15.3M | 1.6M | 564.9 K | 5.3 M |
gub | 31.7K | 271 | 160.4K | 25K | 4.7M | 286.2K | 44.7M | 1.6M | 431.3 K | 23.1 M |
qvi | 1.2K | 266 | 48.4K | 19.3K | 720.4K | 248.9K | 6.5M | 2.3M | 641.2 K | 1.9 M |
tdx | 1.7K | 262 | 26.3K | 13.2K | 1M | 238.5K | 7M | 1.6M | 503.6 K | 2.1 M |
rki | 331 | 251 | 331 | 7.8K | 119.7K | 113.7K | 1.6M | 1.5M | 751.3 K | 781.8 K |
djk | 560 | 246 | 30.9K | 24.4K | 669.5K | 455.6K | 3.7M | 2.2M | 644.3 K | 1.0 M |
nr | 10.7K | 246 | 10.7K | 11.3K | 5.3M | 162.5K | 49M | 1.5M | 519.7 K | 17.8 M |
zne | 1.3K | 239 | 61.9K | 21.3K | 1.4M | 504.6K | 8.2M | 2.8M | 882.3 K | 2.8 M |
izz | 423 | 237 | 21.7K | 14.5K | 382.8K | 194.5K | 2.1M | 1.1M | 382.2 K | 789.9 K |
noa | 902 | 234 | 902 | 11.5K | 821.1K | 243.9K | 5.2M | 1.6M | 534.3 K | 1.7 M |
bqc | 275 | 228 | 9.8K | 8.2K | 193K | 151.7K | 997K | 788.4K | 317.0 K | 408.1 K |
srm | 847 | 227 | 847 | 17.3K | 1.2M | 445.3K | 6.3M | 2M | 613.4 K | 1.7 M |
niq | 26.7K | 226 | 26.7K | 4.2K | 9.9M | 103.4K | 72.1M | 716.2K | 239.1 K | 20.9 M |
bas | 4.2K | 216 | 105.2K | 14.9K | 4.3M | 362.8K | 25.7M | 1.7M | 600.7 K | 7.6 M |
dwr | 452 | 215 | 22.1K | 11.1K | 269.4K | 139.5K | 2.2M | 1.2M | 375.4 K | 747.6 K |
guc | 537 | 214 | 22.9K | 12.5K | 422.4K | 218.1K | 3.4M | 1.8M | 540.1 K | 1.1 M |
jvn | 1K | 213 | 36.2K | 7.8K | 790.5K | 185.6K | 5.3M | 1.2M | 357.2 K | 1.7 M |
hvn | 737 | 200 | 33.9K | 7K | 779.7K | 239.4K | 4.3M | 1.2M | 378.5 K | 1.4 M |
sxn | 587 | 197 | 587 | 9.9K | 494K | 220.6K | 3.4M | 1.5M | 507.1 K | 1.2 M |
koi | 20.7K | 196 | 153.9K | 5K | 2.2M | 89.9K | 17.1M | 664.5K | 323.0 K | 7.1 M |
alz | 2.2K | 195 | 59.3K | 12.2K | 1.3M | 246.9K | 7.9M | 1.4M | 488.1 K | 2.9 M |
nyu | 1.2K | 195 | 1.2K | 11K | 988.7K | 210.5K | 7.7M | 1.6M | 492.6 K | 2.2 M |
bn_Latn | 98.7K | 191 | 1.3M | 12K | 98.7K | 191 | 458K | 730 | 314.7 K | 81.0 M |
suz | 226 | 186 | 226 | 11.3K | 169.6K | 140.5K | 1M | 855.2K | 339.5 K | 429.6 K |
pau | 1.7K | 185 | 1.7K | 13.1K | 2M | 394.6K | 12.4M | 2M | 600.1 K | 3.2 M |
nij | 1K | 183 | 1K | 9.2K | 741.6K | 186.1K | 4.7M | 1.2M | 389.6 K | 1.6 M |
sat_Latn | 39K | 183 | 39K | 5.5K | 39K | 183 | 183.8K | 601 | 276.1 K | 39.2 M |
gu_Latn | 58.2K | 179 | 688.4K | 5.4K | 58.2K | 179 | 260.8K | 673 | 241.0 K | 47.9 M |
msm | 520 | 177 | 520 | 8.6K | 410.8K | 190.5K | 2.5M | 1.1M | 339.7 K | 789.8 K |
maz | 585 | 170 | 21.3K | 8.2K | 452.9K | 174K | 2.9M | 951.7K | 304.7 K | 971.4 K |
qxr | 2.6K | 153 | 40.8K | 6.4K | 761.5K | 75.4K | 6.6M | 724K | 186.4 K | 1.9 M |
shp | 874 | 150 | 22.4K | 3.7K | 534.1K | 96.8K | 3.8M | 710.4K | 216.9 K | 1.2 M |
hne | 3K | 146 | 118.4K | 4.3K | 2.3M | 139.3K | 12M | 697K | 379.3 K | 6.5 M |
ktu | 3.3K | 144 | 115.5K | 7.8K | 3.2M | 196.9K | 18.5M | 1.1M | 300.1 K | 5.4 M |
laj | 6.5K | 144 | 61K | 6.4K | 2.4M | 140.1K | 15.8M | 730.5K | 233.5 K | 4.6 M |
pis | 1.1K | 139 | 62K | 7.2K | 1.3M | 136.8K | 7.7M | 764K | 212.7 K | 2.2 M |
mag | 631 | 138 | 62.6K | 22.1K | 2.1M | 544.2K | 10.7M | 2.6M | 1.4 M | 5.4 M |
gbm | 2.5K | 137 | 50.8K | 3.8K | 1.7M | 99.7K | 9.1M | 499.6K | 282.4 K | 4.5 M |
tzj | 471 | 136 | 11.1K | 7.3K | 299.9K | 150.8K | 1.9M | 884.2K | 272.0 K | 663.9 K |
oj | 2.5K | 135 | 2.5K | 1.6K | 1.2M | 35.9K | 9.6M | 337.1K | 117.6 K | 3.4 M |
ndc_ZW | 2.2K | 132 | 2.2K | 8.7K | 2.2K | 132 | 9.1K | 523 | 343.1 K | 2.2 M |
tks | 63.7K | 127 | 63.7K | 6.8K | 17.1M | 41.5K | 88.9M | 260.8K | 39.5 K | 33.0 M |
awa | 5.8K | 126 | 100.1K | 8.4K | 2.2M | 98.7K | 11.1M | 475K | 226.6 K | 5.8 M |
gvl | 37.9K | 126 | 213K | 6.9K | 21.1M | 161.1K | 141M | 789.2K | 257.8 K | 31.7 M |
knj | 229 | 126 | 10.1K | 9.2K | 202.6K | 171.8K | 1.1M | 855K | 253.1 K | 345.4 K |
spp | 733 | 123 | 733 | 5.8K | 902.7K | 141.8K | 4.4M | 682.5K | 217.8 K | 1.4 M |
mqy | 69.3K | 119 | 309K | 2.5K | 12.1M | 88.6K | 78.9M | 506.5K | 170.4 K | 16.3 M |
tca | 410 | 117 | 20K | 7.3K | 283K | 121.5K | 2.3M | 786K | 226.2 K | 781.2 K |
cce | 847 | 116 | 23.2K | 11K | 539.3K | 227.2K | 3.3M | 1.3M | 393.8 K | 1.1 M |
skr | 3.8K | 107 | 279.3K | 17.1K | 6.2M | 324K | 32.2M | 1.7M | 768.5 K | 15.4 M |
kmz_Latn | 24K | 106 | 361K | 2.4K | 24K | 106 | 108.6K | 401 | 231.8 K | 16.7 M |
dje | 913 | 100 | 40.2K | 3.7K | 816.3K | 97.5K | 4.7M | 480.7K | 161.2 K | 1.5 M |
gof | 2.8K | 97 | 33.8K | 5.5K | 703K | 68.8K | 5.5M | 506K | 159.1 K | 1.7 M |
agr | 465 | 93 | 16.1K | 3.6K | 295.4K | 67.2K | 2.3M | 554.5K | 177.0 K | 760.1 K |
qvz | 534 | 88 | 6.8K | 3.5K | 145.5K | 50.5K | 1.2M | 438.3K | 124.2 K | 382.7 K |
adh | 2.6K | 87 | 107.2K | 1K | 2.4M | 42.1K | 14.5M | 254.9K | 84.6 K | 5.0 M |
quf | 522 | 86 | 8.4K | 5.2K | 155.7K | 61.8K | 1.5M | 609K | 173.7 K | 542.8 K |
kjg | 113 | 84 | 3K | 2.9K | 67.6K | 67K | 408.5K | 399K | 159.2 K | 167.7 K |
tsc | 12.6K | 82 | 12.6K | 4K | 3.5M | 93.1K | 23.4M | 521.3K | 161.9 K | 7.0 M |
ber | 2.7K | 79 | 12.6K | 1.2K | 1.1M | 46.4K | 6.4M | 265.9K | 141.5 K | 3.0 M |
ify | 611 | 79 | 19.8K | 2.8K | 422.7K | 56.2K | 2.6M | 334K | 109.5 K | 913.1 K |
cbk | 10.1K | 78 | 43.8K | 2K | 1.7M | 64.3K | 10.3M | 339.3K | 93.4 K | 3.4 M |
quy | 588 | 78 | 28.1K | 2.7K | 423.3K | 37.3K | 4.5M | 368.2K | 114.5 K | 1.2 M |
ahk | 244 | 77 | 6.2K | 4.1K | 264K | 124.8K | 1.3M | 715.5K | 182.8 K | 359.7 K |
cac | 212 | 77 | 3.4K | 1.8K | 125.7K | 54.1K | 978.7K | 319.8K | 95.8 K | 280.3 K |
akb | 1K | 71 | 21.3K | 408 | 870.9K | 54.5K | 5.2M | 337.8K | 93.7 K | 1.6 M |
nut | 29K | 67 | 29K | 1.5K | 4.8M | 39.8K | 23.5M | 184.1K | 36.4 K | 8.3 M |
ffm | 1.8K | 65 | 30.1K | 2K | 745.6K | 39.1K | 4.6M | 236.1K | 83.8 K | 1.8 M |
taj | 146 | 65 | 21.6K | 14.3K | 309.7K | 203K | 2.3M | 1.4M | 503.0 K | 872.7 K |
ms_Arab | 698 | 63 | 698 | 320 | 698 | 63 | 2.9K | 239 | 64.7 K | 1016.0 K |
brx | 322 | 62 | 5.3K | 2.4K | 144.2K | 41K | 1.1M | 304.4K | 146.6 K | 515.7 K |
ann | 464 | 56 | 5K | 1.6K | 116.4K | 35.9K | 760.9K | 215.1K | 74.9 K | 295.2 K |
qup | 169 | 53 | 4.3K | 2.5K | 77.5K | 31.3K | 763.8K | 297.8K | 74.7 K | 207.3 K |
ms_Arab_BN | 2.6K | 46 | 2.6K | 374 | 2.6K | 46 | 10.5K | 171 | 50.0 K | 5.1 M |
miq | 236 | 45 | 6.4K | 3.5K | 183.7K | 80.2K | 1.2M | 485.6K | 157.6 K | 384.1 K |
msb | 811 | 41 | 811 | 1K | 705.9K | 28.8K | 4.4M | 167.5K | 53.3 K | 1.7 M |
bim | 410 | 40 | 31.1K | 6.3K | 669.8K | 167.4K | 3.2M | 793.4K | 252.7 K | 1.1 M |
raj | 1.8K | 40 | 1.8K | 5.7K | 1.3M | 81.1K | 7.1M | 405K | 226.2 K | 3.9 M |
kwi | 382 | 37 | 16.9K | 2.2K | 253.8K | 23.4K | 1.8M | 172.8K | 47.6 K | 536.2 K |
tll | 200 | 37 | 200 | 2.7K | 304.2K | 62.2K | 2.2M | 409.8K | 132.3 K | 664.5 K |
trp | 12.8K | 36 | 12.8K | 1.7K | 4.1M | 39K | 29.9M | 257.3K | 87.5 K | 10.2 M |
smt | 1.4K | 34 | 1.4K | 703 | 1M | 36.5K | 6.8M | 245.4K | 87.9 K | 2.5 M |
mrw | 11.3K | 29 | 11.3K | 1K | 4.2M | 45.7K | 27.8M | 257.2K | 81.3 K | 8.8 M |
dln | 236 | 28 | 5.2K | 969 | 150.8K | 21.5K | 860.5K | 118.3K | 36.8 K | 280.3 K |
qvc | 3.4K | 27 | 14.6K | 2.2K | 495.7K | 25.7K | 5M | 233.7K | 65.3 K | 2.6 M |
doi | 1.7K | 26 | 21.8K | 975 | 568.7K | 25.5K | 3.2M | 135.3K | 66.7 K | 1.6 M |
ff | 13.6K | 26 | 150K | 5K | 3.4M | 46.5K | 22.8M | 277.6K | 78.8 K | 8.5 M |
## Citation Information
~~~
@misc{kudugunta2023madlad400,
title={MADLAD-400: A Multilingual And Document-Level Large Audited Dataset},
author={Sneha Kudugunta and Isaac Caswell and Biao Zhang and Xavier Garcia and Christopher A. Choquette-Choo and Katherine Lee and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},
year={2023},
eprint={2309.04662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
~~~ | [
-0.37928056716918945,
-0.7297734022140503,
0.13457340002059937,
0.24012890458106995,
-0.47195518016815186,
0.07188316434621811,
-0.32387441396713257,
-0.4012281596660614,
0.48057225346565247,
0.4334760010242462,
-0.40712597966194153,
-0.719685435295105,
-0.4394758343696594,
0.3627707064151... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/bible_en_id | SEACrowd | 2023-09-26T12:32:53Z | 52 | 0 | null | [
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] | 2023-09-26T12:32:53Z | 2023-09-26T11:17:14.000Z | 2023-09-26T11:17:14 | ---
tags:
- machine-translation
language:
- ind
- eng
---
# bible_en_id
Bible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the Bible. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset into 75% training, 10% validation, and 15% test sets. The dataset is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translation.
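The 75/10/15 split described above can be sketched as follows. This is a minimal illustration over made-up verse pairs, not the actual split script; the released dataset ships with a fixed, pre-defined split.

```python
# Illustrative sequential 75/10/15 split over verse-aligned sentence pairs.
# The pair contents below are placeholders, not real Bible verses.

def split_corpus(pairs, train=0.75, valid=0.10):
    # pairs: list of (english_verse, indonesian_verse) tuples,
    # already verse-aligned.
    n = len(pairs)
    n_train = int(n * train)
    n_valid = int(n * valid)
    return (pairs[:n_train],                   # 75% training
            pairs[n_train:n_train + n_valid],  # 10% validation
            pairs[n_train + n_valid:])         # 15% test

pairs = [(f"en verse {i}", f"id verse {i}") for i in range(100)]
train, valid, test = split_corpus(pairs)
print(len(train), len(valid), len(test))  # 75 10 15
```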
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898",
abstract = "Natural language generation (NLG) benchmarks provide an important avenue to measure progress and develop better NLG systems. Unfortunately, the lack of publicly available NLG benchmarks for low-resource languages poses a challenging barrier for building NLG systems that work well for languages with limited amounts of data. Here we introduce IndoNLG, the first benchmark to measure natural language generation (NLG) progress in three low-resource{---}yet widely spoken{---}languages of Indonesia: Indonesian, Javanese, and Sundanese. Altogether, these languages are spoken by more than 100 million native speakers, and hence constitute an important use case of NLG systems today. Concretely, IndoNLG covers six tasks: summarization, question answering, chit-chat, and three different pairs of machine translation (MT) tasks. We collate a clean pretraining corpus of Indonesian, Sundanese, and Javanese datasets, Indo4B-Plus, which is used to pretrain our models: IndoBART and IndoGPT. We show that IndoBART and IndoGPT achieve competitive performance on all tasks{---}despite using only one-fifth the parameters of a larger multilingual model, mBART-large (Liu et al., 2020). This finding emphasizes the importance of pretraining on closely related, localized languages to achieve more efficient learning and faster inference at very low-resource languages like Javanese and Sundanese.",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlg](https://github.com/IndoNLP/indonlg)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.55087810754776,
-0.6238660216331482,
-0.1011546328663826,
0.5783659815788269,
-0.5898503065109253,
-0.22588559985160828,
-0.7557476758956909,
-0.34557151794433594,
0.11044466495513916,
0.30958086252212524,
-0.44146427512168884,
-0.5129439830780029,
-0.5054976940155029,
0.693534433841705... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/indo_religious_mt_en_id | SEACrowd | 2023-09-26T12:33:20Z | 52 | 0 | null | [
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] | 2023-09-26T12:33:20Z | 2023-09-26T11:17:41.000Z | 2023-09-26T11:17:41 | ---
tags:
- machine-translation
language:
- ind
- eng
---
# indo_religious_mt_en_id
Indonesian Religious Domain MT En-Id consists of religious manuscripts or articles. These articles differ from news in that they are not written in a formal, informative style; instead, they are written to advocate and inspire religious values, often citing biblical or quranic anecdotes. An interesting property of the religious domain corpus is its localized names, for example, David to Daud, Mary to Maryam, and Gabriel to Jibril; in contrast, entity names are usually kept unchanged in other domains. We also find that quite a few Indonesian translations of JW300 are missing the sentence-final dot (.), even though it is present in their English counterparts. Some inconsistencies in transliteration are also found: for example, praying is sometimes written as "salat" or "shalat", and repentance as "tobat" or "taubat".
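The name localization mentioned above can be illustrated with a small lookup table. The mapping below is a hypothetical sketch covering only the example pairs quoted in the description; it is not part of the dataset or its tooling.

```python
# Hypothetical sketch: localizing English entity names to their
# Indonesian forms, as observed in the religious domain corpus.
# Only the name pairs quoted in the description are included.
LOCALIZED = {
    "David": "Daud",
    "Mary": "Maryam",
    "Gabriel": "Jibril",
}

def localize(english_sentence: str) -> str:
    # Replace each known English name with its Indonesian form.
    for en, idn in LOCALIZED.items():
        english_sentence = english_sentence.replace(en, idn)
    return english_sentence

print(localize("Gabriel appeared to Mary"))  # Jibril appeared to Maryam
```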
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{guntara-etal-2020-benchmarking,
title = "Benchmarking Multidomain {E}nglish-{I}ndonesian Machine Translation",
author = "Guntara, Tri Wahyu and
Aji, Alham Fikri and
Prasojo, Radityo Eko",
booktitle = "Proceedings of the 13th Workshop on Building and Using Comparable Corpora",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.bucc-1.6",
pages = "35--43",
    abstract = "In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT) which typically requires large training dataset proves to be problematic. In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and conversation, to train and benchmark some variants of transformer-based NMT models across the domains. We show using BLEU that our models perform well across them, outperform the baseline Statistical Machine Translation (SMT) models, and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data.",
language = "English",
ISBN = "979-10-95546-42-9",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/gunnxx/indonesian-mt-data/tree/master/religious](https://github.com/gunnxx/indonesian-mt-data/tree/master/religious)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.4045034945011139,
-0.6259385347366333,
-0.0546853244304657,
0.3752802908420563,
-0.42767301201820374,
-0.06867126375436783,
-0.6270743012428284,
-0.32785508036613464,
0.16804461181163788,
0.415249764919281,
-0.41650402545928955,
-0.34243088960647583,
-0.8664625287055969,
0.6055169701576... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roborovski/upsampled-prompts-parti | roborovski | 2023-11-01T13:25:03Z | 52 | 0 | null | [
"region:us"
] | 2023-11-01T13:25:03Z | 2023-10-29T17:32:37.000Z | 2023-10-29T17:32:37 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Upsampled
dtype: string
splits:
- name: train
num_bytes: 10258852
num_examples: 23318
download_size: 5483101
dataset_size: 10258852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "upsampled-prompts-parti"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5986738801002502,
-0.10559443384408951,
0.3731760084629059,
0.4391224980354309,
-0.38698720932006836,
0.07151508331298828,
0.07108905166387558,
0.21412554383277893,
0.9756275415420532,
0.36004146933555603,
-1.1656101942062378,
-0.5637837052345276,
-0.46158266067504883,
-0.03147289901971... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eunbinni/ola_llama2_7B_t2_data | eunbinni | 2023-11-05T02:03:33Z | 52 | 0 | null | [
"region:us"
] | 2023-11-05T02:03:33Z | 2023-11-05T02:02:55.000Z | 2023-11-05T02:02:55 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 691281335
num_examples: 580812
download_size: 399933748
dataset_size: 691281335
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ola_llama2_7B_t2_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33164000511169434,
-0.29125362634658813,
0.33382946252822876,
0.34164878726005554,
-0.49929279088974,
0.0234989020973444,
0.42171263694763184,
-0.3015999495983124,
0.7752681970596313,
0.6488921046257019,
-0.4822934567928314,
-0.8764193654060364,
-0.6309210062026978,
-0.24526247382164001... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seongs1024/DKK-nli | seongs1024 | 2023-11-08T08:31:49Z | 52 | 0 | null | [
"region:us"
] | 2023-11-08T08:31:49Z | 2023-11-08T05:34:13.000Z | 2023-11-08T05:34:13 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 90009601
num_examples: 601712
- name: validation
num_bytes: 1613552
num_examples: 7954
download_size: 27452285
dataset_size: 91623153
---
# Dataset Card for "DKK-nli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6029954552650452,
-0.2860066890716553,
0.2342495322227478,
0.2614403963088989,
-0.32610806822776794,
0.24831844866275787,
0.3966006636619568,
-0.13687846064567566,
0.9573342204093933,
0.2927672564983368,
-0.9756851196289062,
-0.8527502417564392,
-0.5050743818283081,
0.000137134906253777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yama/math | Yama | 2023-11-09T15:16:30Z | 52 | 0 | null | [
"region:us"
] | 2023-11-09T15:16:30Z | 2023-11-09T13:49:12.000Z | 2023-11-09T13:49:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 355584
num_examples: 1200
- name: test
num_bytes: 50010
num_examples: 189
download_size: 0
dataset_size: 405594
---
# Dataset Card for "math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6574289798736572,
-0.3909147083759308,
0.1347372978925705,
0.32782089710235596,
-0.08445506542921066,
0.03767668828368187,
0.23570747673511505,
-0.005269515328109264,
0.8013447523117065,
0.35588914155960083,
-0.8913255333900452,
-0.6854824423789978,
-0.6023814082145691,
-0.4879412949085... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
truong-xuan-linh/zola | truong-xuan-linh | 2023-11-15T15:41:45Z | 52 | 0 | null | [
"region:us"
] | 2023-11-15T15:41:45Z | 2023-11-13T13:48:54.000Z | 2023-11-13T13:48:54 | ---
dataset_info:
features:
- name: bannerImage
dtype: image
- name: en_caption
dtype: string
- name: concat_caption
dtype: string
splits:
- name: train
num_bytes: 49802715.406
num_examples: 1362
download_size: 48774124
dataset_size: 49802715.406
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "zola"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6268041729927063,
-0.14373646676540375,
0.29235580563545227,
0.15404467284679413,
-0.3362710177898407,
-0.30664631724357605,
0.22410091757774353,
-0.5551397204399109,
0.8966683149337769,
0.5444645285606384,
-1.0172698497772217,
-0.8793658018112183,
-0.7292384505271912,
-0.23175050318241... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/subtraction_whole | jlbaker361 | 2023-11-15T13:00:00Z | 52 | 0 | null | [
"region:us"
] | 2023-11-15T13:00:00Z | 2023-11-14T23:39:09.000Z | 2023-11-14T23:39:09 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1192290.3
num_examples: 29376
- name: test
num_bytes: 132476.7
num_examples: 3264
download_size: 684606
dataset_size: 1324767.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "subtraction_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6313687562942505,
-0.42495131492614746,
0.2679811418056488,
0.23277471959590912,
-0.5933868288993835,
-0.045179203152656555,
0.3469961881637573,
-0.3237383961677551,
0.9734787344932556,
0.406378835439682,
-0.9706830382347107,
-0.7192093133926392,
-0.8333997130393982,
-0.2723685204982757... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Parleatacoeur/leyesperuanasactualizadas | Parleatacoeur | 2023-11-20T04:29:45Z | 52 | 0 | null | [
"task_categories:text-generation",
"language:es",
"legal",
"region:us"
] | 2023-11-20T04:29:45Z | 2023-11-20T04:27:55.000Z | 2023-11-20T04:27:55 | ---
task_categories:
- text-generation
language:
- es
tags:
- legal
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nayohan/T_DSG | nayohan | 2023-11-20T17:25:59Z | 52 | 0 | null | [
"region:us"
] | 2023-11-20T17:25:59Z | 2023-11-20T17:25:47.000Z | 2023-11-20T17:25:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 29025131
num_examples: 17940
- name: validation
num_bytes: 5283692
num_examples: 3000
- name: test
num_bytes: 4758564
num_examples: 2505
download_size: 18521693
dataset_size: 39067387
---
# Dataset Card for "T_DSG"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5698775053024292,
-0.37771475315093994,
0.38904786109924316,
0.009932261891663074,
-0.44245272874832153,
0.14817409217357635,
0.27601954340934753,
-0.04258719086647034,
0.9158839583396912,
0.4632403254508972,
-0.924680769443512,
-1.1419732570648193,
-0.8885079026222229,
-0.2466032654047... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sordonia/adauni-v1-flat | sordonia | 2023-11-24T16:05:34Z | 52 | 0 | null | [
"region:us"
] | 2023-11-24T16:05:34Z | 2023-11-24T04:46:05.000Z | 2023-11-24T04:46:05 | # Used datasets:
## sordonia/flan-10k-flat
## sordonia/mmlu-qa-flat
## sordonia/platypus-flat
## sordonia/ultrachat-32c-10k-flat
## Total number of tasks: 439
| [
-0.10726214945316315,
-0.14137539267539978,
0.3846971094608307,
0.7099617719650269,
-0.28421780467033386,
-0.11022830754518509,
0.18597160279750824,
0.109525166451931,
0.5028473734855652,
0.5149440765380859,
-0.9425039291381836,
-0.6543785929679871,
-0.3561026155948639,
0.32995814085006714... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/Anatomy_QA | zxvix | 2023-11-25T06:08:54Z | 52 | 0 | null | [
"region:us"
] | 2023-11-25T06:08:54Z | 2023-11-25T03:47:28.000Z | 2023-11-25T03:47:28 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 23420104
num_examples: 6386
download_size: 1466159
dataset_size: 23420104
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrmoor/cyber-threat-intelligence-relations-only | mrmoor | 2022-10-24T19:44:19Z | 51 | 1 | null | [
"region:us"
] | 2022-10-24T19:44:19Z | 2022-10-24T19:43:46.000Z | 2022-10-24T19:43:46 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taln-ls2n/kpbiomed | taln-ls2n | 2022-12-01T10:52:09Z | 51 | 3 | null | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2211.12124",
"region:us"
] | 2022-12-01T10:52:09Z | 2022-10-26T13:41:01.000Z | 2022-10-26T13:41:01 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 100K<n<1M
pretty_name: KP-Biomed
---
# KPBiomed, A Large-Scale Dataset for Keyphrase Generation
## About
This dataset is made of 5.6 million abstracts with author assigned keyphrases.
Details about the dataset can be found in the original paper:
Maël Houbre, Florian Boudin and Béatrice Daille. 2022. [A Large-Scale Dataset for Biomedical Keyphrase Generation](https://arxiv.org/abs/2211.12124). In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
## Content
The details of the dataset are in the table below:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: |
| Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 |
| Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 |
| Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 |
| Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 |
| Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 |
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **mesh terms**: list of indexer assigned MeSH terms if available (around 68% of the articles)
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **authors**: list of the article's authors
- **year**: publication year
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + text).
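The PRMU categorization described above can be sketched as follows. This is a minimal illustration only: a naive lowercase-and-strip-plural function stands in for nltk's Porter stemmer, and whitespace splitting stands in for spaCy tokenization.

```python
# Minimal sketch of PRMU (Present-Reordered-Mixed-Unseen) categorization.
# Assumptions: stem() is a toy stand-in for Porter stemming, and
# str.split() is a toy stand-in for spaCy tokenization.

def stem(token: str) -> str:
    # Toy stemmer: lowercase and strip a trailing "s" so simple
    # plurals match their singular form.
    token = token.lower()
    return token[:-1] if token.endswith("s") else token

def prmu(keyphrase: str, text: str) -> str:
    kp = [stem(t) for t in keyphrase.split()]
    doc = [stem(t) for t in text.split()]
    n = len(kp)
    # Present: the stemmed keyphrase occurs contiguously, in order.
    if any(doc[i:i + n] == kp for i in range(len(doc) - n + 1)):
        return "P"
    found = [t in doc for t in kp]
    if all(found):
        return "R"  # Reordered: every word appears, but not as a contiguous phrase
    if any(found):
        return "M"  # Mixed: only some words appear
    return "U"      # Unseen: no word appears

text = "Graph-based keyphrase extraction methods rank candidate phrases"
print(prmu("keyphrase extraction", text))  # P
print(prmu("extraction keyphrase", text))  # R
print(prmu("neural networks", text))       # U
```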
| [
-0.17520177364349365,
-0.32318493723869324,
0.5707502961158752,
0.20940260589122772,
-0.21335437893867493,
0.06799737364053726,
0.07393322885036469,
-0.11544525623321533,
0.3421667218208313,
0.44195055961608887,
-0.46135011315345764,
-0.8789465427398682,
-0.7035703063011169,
0.606524765491... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/biomrc | bigbio | 2022-12-22T15:43:44Z | 51 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:43:44Z | 2022-11-13T22:06:42.000Z | 2022-11-13T22:06:42 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BIOMRC
homepage: https://github.com/PetrosStav/BioMRC_code
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for BIOMRC
## Dataset Description
- **Homepage:** https://github.com/PetrosStav/BioMRC_code
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the
previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the
new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating
that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is
also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new
BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or
surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different
sizes, also releasing our code, and providing a leaderboard.
## Citation Information
```
@inproceedings{pappas-etal-2020-biomrc,
title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension",
author = "Pappas, Dimitris and
Stavropoulos, Petros and
Androutsopoulos, Ion and
McDonald, Ryan",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.15",
pages = "140--149",
}
```
| [
-0.454529732465744,
-0.547467827796936,
0.41464316844940186,
-0.18939541280269623,
-0.34912964701652527,
0.1497667282819748,
-0.20143643021583557,
-0.5935290455818176,
0.06402058154344559,
0.39883747696876526,
-0.6123400926589966,
-0.6852593421936035,
-0.4851265549659729,
0.31630939245224,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/pdr | bigbio | 2022-12-22T15:46:14Z | 51 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:46:14Z | 2022-11-13T22:11:20.000Z | 2022-11-13T22:11:20 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: PDR
homepage: http://gcancer.org/pdr/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- EVENT_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for PDR
## Dataset Description
- **Homepage:** http://gcancer.org/pdr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE,COREF
The plant-disease relation corpus annotates plants, diseases, and the relations between them in PubMed abstracts.
It consists of about 2,400 plant and disease entities and 300 annotated relations from 179 abstracts.
## Citation Information
```
@article{kim2019corpus,
title={A corpus of plant--disease relations in the biomedical domain},
author={Kim, Baeksoo and Choi, Wonjun and Lee, Hyunju},
journal={PLoS One},
volume={14},
number={8},
pages={e0221582},
year={2019},
publisher={Public Library of Science San Francisco, CA USA}
}
```
| [
-0.10126238316297531,
-0.5393087863922119,
0.5107383728027344,
0.10194313526153564,
-0.262601763010025,
-0.4070741534233093,
-0.042454857379198074,
-0.47809916734695435,
0.39520207047462463,
0.5783225297927856,
-0.11313475668430328,
-0.9758755564689636,
-0.7520080804824829,
0.4721005558967... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/arabic_pos_dialect | arbml | 2022-11-18T14:40:06Z | 51 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-11-18T14:40:06Z | 2022-11-18T14:17:01.000Z | 2022-11-18T14:17:01 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathanli/hyperpartisan-longformer-split | jonathanli | 2022-12-31T16:08:16Z | 51 | 0 | null | [
"arxiv:2004.05150",
"region:us"
] | 2022-12-31T16:08:16Z | 2022-12-31T15:56:50.000Z | 2022-12-31T15:56:50 | # Hyperpartisan news detection
This repository contains the hyperpartisan news detection dataset, processed and split exactly as it was for the [longformer](https://arxiv.org/abs/2004.05150) experiments.
The processing code can be found [here](https://github.com/allenai/longformer/blob/master/scripts/hp_preprocess.py).
| [
-0.4680013954639435,
-0.8621548414230347,
0.6674292683601379,
0.2570970356464386,
-0.29562655091285706,
0.059964582324028015,
-0.2622165381908417,
-0.17721550166606903,
0.6336683034896851,
0.9347535967826843,
-0.5988364815711975,
-0.4063832759857178,
-0.5136563777923584,
0.0823135301470756... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DFKI-SLT/cross_ner | DFKI-SLT | 2023-01-19T09:17:38Z | 51 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|conll2003",
"language:en",
"cross domain",
"ai",
"news",
"musi... | 2023-01-19T09:17:38Z | 2023-01-19T09:17:08.000Z | 2023-01-19T09:17:08 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: CrossNER is a cross-domain dataset for named entity recognition
size_categories:
- 10K<n<100K
source_datasets:
- extended|conll2003
tags:
- cross domain
- ai
- news
- music
- literature
- politics
- science
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
- config_name: ai
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 65080
num_examples: 100
- name: validation
num_bytes: 189453
num_examples: 350
- name: test
num_bytes: 225691
num_examples: 431
download_size: 289173
dataset_size: 480224
- config_name: literature
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 63181
num_examples: 100
- name: validation
num_bytes: 244076
num_examples: 400
- name: test
num_bytes: 270092
num_examples: 416
download_size: 334380
dataset_size: 577349
- config_name: music
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 65077
num_examples: 100
- name: validation
num_bytes: 259702
num_examples: 380
- name: test
num_bytes: 327195
num_examples: 465
download_size: 414065
dataset_size: 651974
- config_name: conll2003
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 3561081
num_examples: 14041
- name: validation
num_bytes: 891431
num_examples: 3250
- name: test
num_bytes: 811470
num_examples: 3453
download_size: 2694794
dataset_size: 5263982
- config_name: politics
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 143507
num_examples: 200
- name: validation
num_bytes: 422760
num_examples: 541
- name: test
num_bytes: 472690
num_examples: 651
download_size: 724168
dataset_size: 1038957
- config_name: science
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 121928
num_examples: 200
- name: validation
num_bytes: 276118
num_examples: 450
- name: test
num_bytes: 334181
num_examples: 543
download_size: 485191
dataset_size: 732227
---
# Dataset Card for CrossNER
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [CrossNER](https://github.com/zliucr/CrossNER)
- **Paper:** [CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373)
### Dataset Summary
CrossNER is a fully-labeled collection of named entity recognition (NER) data spanning five diverse domains
(Politics, Natural Science, Music, Literature, and Artificial Intelligence) with specialized entity categories for
the different domains. Additionally, CrossNER includes unlabeled domain-related corpora for the corresponding five
domains.
For details, see the paper:
[CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in CrossNER is in English (BCP-47 en).
## Dataset Structure
### Data Instances
#### conll2003
- **Size of downloaded dataset files:** 2.69 MB
- **Size of the generated dataset:** 5.26 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."],
"ner_tags": [49, 0, 41, 0, 0, 0, 41, 0, 0]
}
```
#### politics
- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 1.04 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Parties", "with", "mainly", "Eurosceptic", "views", "are", "the", "ruling", "United", "Russia", ",", "and", "opposition", "parties", "the", "Communist", "Party", "of", "the", "Russian", "Federation", "and", "Liberal", "Democratic", "Party", "of", "Russia", "."],
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 55, 56, 0, 0, 0, 0, 0, 55, 56, 56, 56, 56, 56, 0, 55, 56, 56, 56, 56, 0]
}
```
#### science
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 0.73 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["They", "may", "also", "use", "Adenosine", "triphosphate", ",", "Nitric", "oxide", ",", "and", "ROS", "for", "signaling", "in", "the", "same", "ways", "that", "animals", "do", "."],
"ner_tags": [0, 0, 0, 0, 15, 16, 0, 15, 16, 0, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
#### music
- **Size of downloaded dataset files:** 0.41 MB
- **Size of the generated dataset:** 0.65 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["In", "2003", ",", "the", "Stade", "de", "France", "was", "the", "primary", "site", "of", "the", "2003", "World", "Championships", "in", "Athletics", "."],
"ner_tags": [0, 0, 0, 0, 35, 36, 36, 0, 0, 0, 0, 0, 0, 29, 30, 30, 30, 30, 0]
}
```
#### literature
- **Size of downloaded dataset files:** 0.33 MB
- **Size of the generated dataset:** 0.58 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["In", "1351", ",", "during", "the", "reign", "of", "Emperor", "Toghon", "Temür", "of", "the", "Yuan", "dynasty", ",", "93rd-generation", "descendant", "Kong", "Huan", "(", "孔浣", ")", "'", "s", "2nd", "son", "Kong", "Shao", "(", "孔昭", ")", "moved", "from", "China", "to", "Korea", "during", "the", "Goryeo", ",", "and", "was", "received", "courteously", "by", "Princess", "Noguk", "(", "the", "Mongolian-born", "wife", "of", "the", "future", "king", "Gongmin", ")", "."],
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 51, 52, 52, 0, 0, 21, 22, 0, 0, 0, 77, 78, 0, 77, 0, 0, 0, 0, 0, 77, 78, 0, 77, 0, 0, 0, 21, 0, 21, 0, 0, 41, 0, 0, 0, 0, 0, 0, 51, 52, 0, 0, 41, 0, 0, 0, 0, 0, 51, 0, 0]
}
```
#### ai
- **Size of downloaded dataset files:** 0.29 MB
- **Size of the generated dataset:** 0.48 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Popular", "approaches", "of", "opinion-based", "recommender", "system", "utilize", "various", "techniques", "including", "text", "mining", ",", "information", "retrieval", ",", "sentiment", "analysis", "(", "see", "also", "Multimodal", "sentiment", "analysis", ")", "and", "deep", "learning", "X.Y.", "Feng", ",", "H.", "Zhang", ",", "Y.J.", "Ren", ",", "P.H.", "Shang", ",", "Y.", "Zhu", ",", "Y.C.", "Liang", ",", "R.C.", "Guan", ",", "D.", "Xu", ",", "(", "2019", ")", ",", ",", "21", "(", "5", ")", ":", "e12957", "."],
"ner_tags": [0, 0, 0, 59, 60, 60, 0, 0, 0, 0, 31, 32, 0, 71, 72, 0, 71, 72, 0, 0, 0, 71, 72, 72, 0, 0, 31, 32, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-academicjournal": 1, "I-academicjournal": 2, "B-album": 3, "I-album": 4, "B-algorithm": 5, "I-algorithm": 6, "B-astronomicalobject": 7, "I-astronomicalobject": 8, "B-award": 9, "I-award": 10, "B-band": 11, "I-band": 12, "B-book": 13, "I-book": 14, "B-chemicalcompound": 15, "I-chemicalcompound": 16, "B-chemicalelement": 17, "I-chemicalelement": 18, "B-conference": 19, "I-conference": 20, "B-country": 21, "I-country": 22, "B-discipline": 23, "I-discipline": 24, "B-election": 25, "I-election": 26, "B-enzyme": 27, "I-enzyme": 28, "B-event": 29, "I-event": 30, "B-field": 31, "I-field": 32, "B-literarygenre": 33, "I-literarygenre": 34, "B-location": 35, "I-location": 36, "B-magazine": 37, "I-magazine": 38, "B-metrics": 39, "I-metrics": 40, "B-misc": 41, "I-misc": 42, "B-musicalartist": 43, "I-musicalartist": 44, "B-musicalinstrument": 45, "I-musicalinstrument": 46, "B-musicgenre": 47, "I-musicgenre": 48, "B-organisation": 49, "I-organisation": 50, "B-person": 51, "I-person": 52, "B-poem": 53, "I-poem": 54, "B-politicalparty": 55, "I-politicalparty": 56, "B-politician": 57, "I-politician": 58, "B-product": 59, "I-product": 60, "B-programlang": 61, "I-programlang": 62, "B-protein": 63, "I-protein": 64, "B-researcher": 65, "I-researcher": 66, "B-scientist": 67, "I-scientist": 68, "B-song": 69, "I-song": 70, "B-task": 71, "I-task": 72, "B-theory": 73, "I-theory": 74, "B-university": 75, "I-university": 76, "B-writer": 77, "I-writer": 78}
```
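As a sketch (not part of the original card), the class-label ids above can be decoded back into entity spans under the BIO scheme; the `ID2LABEL` dict below is a hand-copied subset of the full mapping:

```python
# Sketch: decoding CrossNER `ner_tags` ids into (text, type) entity spans.
# ID2LABEL is a subset of the full id -> label mapping shown above.
ID2LABEL = {0: "O", 41: "B-misc", 42: "I-misc",
            49: "B-organisation", 50: "I-organisation"}

def extract_entities(tokens, ner_tags):
    """Group BIO-tagged tokens into (entity_text, entity_type) spans."""
    entities, current_tokens, current_type = [], [], None
    for token, tag_id in zip(tokens, ner_tags):
        label = ID2LABEL[tag_id]
        if label.startswith("B-"):
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [token], label[2:]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:  # "O" (or an inconsistent I- tag) ends the current entity
            if current_tokens:
                entities.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        entities.append((" ".join(current_tokens), current_type))
    return entities

# The conll2003 training instance shown earlier:
tokens = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."]
ner_tags = [49, 0, 41, 0, 0, 0, 41, 0, 0]
print(extract_entities(tokens, ner_tags))
# → [('EU', 'organisation'), ('German', 'misc'), ('British', 'misc')]
```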
### Data Splits
| | Train | Dev | Test |
|--------------|--------|-------|-------|
| conll2003 | 14,987 | 3,466 | 3,684 |
| politics | 200 | 541 | 651 |
| science | 200 | 450 | 543 |
| music | 100 | 380 | 456 |
| literature | 100 | 400 | 416 |
| ai | 100 | 350 | 431 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{liu2020crossner,
title={CrossNER: Evaluating Cross-Domain Named Entity Recognition},
author={Zihan Liu and Yan Xu and Tiezheng Yu and Wenliang Dai and Ziwei Ji and Samuel Cahyawijaya and Andrea Madotto and Pascale Fung},
year={2020},
eprint={2012.04373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | [
-0.7377839088439941,
-0.3933260142803192,
0.21258734166622162,
0.045633334666490555,
-0.15028724074363708,
0.18035823106765747,
-0.2669517993927002,
-0.3948049545288086,
0.7028477787971497,
0.3640589714050293,
-0.786726176738739,
-0.9733319282531738,
-0.7193289995193481,
0.2573677301406860... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
brianarbuckle/cocktail_recipes | brianarbuckle | 2023-02-28T04:14:39Z | 51 | 1 | null | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:language-modelin... | 2023-02-28T04:14:39Z | 2023-02-15T22:01:34.000Z | 2023-02-15T22:01:34 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
- text-retrieval
- summarization
task_ids:
- document-retrieval
- entity-linking-retrieval
- explanation-generation
- language-modeling
- masked-language-modeling
pretty_name: Cocktail Recipes
dataset_info:
features:
- name: title
dtype: string
- name: ingredients
sequence: string
- name: directions
sequence: string
- name: misc
sequence: string
- name: source
dtype: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 301501
num_examples: 875
download_size: 96915
dataset_size: 301501
---
# Dataset Card for Cocktail Recipes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
### Dataset Summary
Cocktail Recipes Dataset for Semi-Structured Text Generation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
```json
{"title": "Final Ward",
"ingredients": ["0.75 oz. Rye Whiskey",
"0.75 oz. Lemon Juice",
"0.75 oz. Maraschino Liqueur",
"0.75 oz. Green Chartreuse"],
"directions": ["shake on ice and strain"],
"misc":[],
"source": "Death & Co.",
"ner":["whiskey",
"chartreuse",
"maraschino liqueur"]}
```
### Data Fields
- `title` (`str`): Title of the recipe.
- `ingredients` (`list` of `str`): Ingredients.
- `directions` (`list` of `str`): Instruction steps.
- `misc` (`list` of `str`): Miscellaneous notes (may be empty).
- `source` (`str`): Origin of each recipe.
- `ner` (`list` of `str`): NER entities.
### Data Splits
The dataset contains a single `train` split.
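As an illustrative sketch, records in this format can be filtered by their `ner` tags; the second record below is hypothetical and not taken from the dataset:

```python
# Sketch: filtering recipe records by an NER ingredient tag.
recipes = [
    {"title": "Final Ward", "ner": ["whiskey", "chartreuse", "maraschino liqueur"]},
    {"title": "Daiquiri", "ner": ["rum", "lime"]},  # hypothetical extra record
]

def with_ingredient(records, tag):
    """Return titles of all recipes whose NER entities include `tag`."""
    return [r["title"] for r in records if tag in r["ner"]]

print(with_ingredient(recipes, "whiskey"))  # → ['Final Ward']
```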
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
| [
-0.24284784495830536,
-0.6504389643669128,
0.0467471219599247,
0.2847825288772583,
-0.18067242205142975,
0.2312491238117218,
-0.13459549844264984,
-0.17597895860671997,
0.7830926775932312,
0.8958312273025513,
-0.8966987729072571,
-1.2978044748306274,
-0.5799750685691833,
0.2702282071113586... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MoonPropet/Othinus_lite | MoonPropet | 2023-02-23T14:15:46Z | 51 | 0 | null | [
"region:us"
] | 2023-02-23T14:15:46Z | 2023-02-23T13:50:09.000Z | 2023-02-23T13:50:09 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/hotpotqa | lucadiliello | 2023-06-06T08:36:49Z | 51 | 0 | null | [
"region:us"
] | 2023-06-06T08:36:49Z | 2023-02-25T18:03:18.000Z | 2023-02-25T18:03:18 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: key
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
splits:
- name: train
num_bytes: 85224549
num_examples: 72928
- name: validation
num_bytes: 8285153
num_examples: 5901
download_size: 57326467
dataset_size: 93509702
---
# Dataset Card for "hotpotqa"
Split taken from the MRQA 2019 Shared Task, formatted and filtered for Question Answering. For the original dataset, have a look [here](https://huggingface.co/datasets/mrqa). | [
-0.6489971280097961,
-0.8301688432693481,
0.1736070066690445,
0.10814669728279114,
-0.38676396012306213,
-0.025090379640460014,
0.21831125020980835,
0.0032867570407688618,
0.7728444933891296,
1.079386591911316,
-1.0211243629455566,
-0.22574476897716522,
-0.4331989288330078,
0.0715592876076... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
r1ck/viwiki | r1ck | 2023-03-01T04:21:04Z | 51 | 0 | null | [
"region:us"
] | 2023-03-01T04:21:04Z | 2023-03-01T04:19:36.000Z | 2023-03-01T04:19:36 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ELiRF/dacsa | ELiRF | 2023-03-25T09:58:52Z | 51 | 1 | dacsa | [
"task_categories:text2text-generation",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:es",
"license:odbl",
"region:us"
] | 2023-03-25T09:58:52Z | 2023-03-03T10:16:33.000Z | 2023-03-03T10:16:33 | ---
task_categories:
- text2text-generation
task_ids:
- news-articles-summarization
language:
- ca
- es
size_categories:
- 1M<n<10M
license:
- odbl
multilinguality:
- multilingual
source_datasets:
- original
paperswithcode_id: dacsa
annotations_creators:
- found
language_creators:
- found
pretty_name: DACSA
---
# Dataset Card for "DACSA"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [DACSA: A large-scale Dataset for Automatic summarization of Catalan and Spanish newspaper Articles](https://aclanthology.org/2022.naacl-main.434/)
- **Point of Contact:** [Vicent Ahuir](mailto:viahes@dsic.upv.es)
### Dataset Summary
This is the Dataset for Automatic summarization of Catalan and Spanish newspaper
Articles (DACSA) corpus, a high-quality large-scale corpus that can be used to
train summarization models for Catalan and Spanish. The data provides pairs of a
news article and its summary, drawn from different newspapers, for both the
Catalan and the Spanish languages. The Catalan set contains 725,184 sample pairs
from 9 newspapers; the Spanish set contains 2,120,649 sample pairs from 21
newspapers.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/xl-sum)
### Languages
- `catalan`
- `spanish`
## Dataset Structure
### Data Fields
- 'id': A string representing the article ID.
- 'summary': A string containing the article summary.
- 'article' : A string containing the article text.
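A minimal sketch of the record shape implied by the fields above; the values here are invented placeholders for illustration, not real corpus content:

```python
# Illustrative DACSA-style record; the field names follow the list above,
# but the values are invented placeholders, not actual corpus text.
sample = {
    "id": "example-0001",
    "summary": "Resum breu de l'article.",
    "article": "Text complet de l'article de diari...",
}

assert set(sample) == {"id", "summary", "article"}
```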
### Data Splits
Four splits are provided for each language set:
- **train**: samples for training models
- **validation**: samples for adjusting and validating models
- **test.i**: test samples from newspapers present in _train_ and _validation_ splits
- **test.ni**: test samples from newspapers not present in training and validation splits
The _validation_ and _test.i_ splits contain a uniform distribution of samples
from each newspaper source.
Languages | ISO 639-1 Code | Train | Val | Test.i | Test.ni | Total |
--------------|----------------|---------|-------|--------|---------|---------|
Catalan | ca | 636596 | 35376 | 35376 | 17836 | 725184 |
Spanish | es | 1802919 | 104052 | 104052 | 109626 | 2120649 |
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/xl-sum)
### Source Data
Newspapers from Spain that publish news in Catalan or Spanish
#### Initial Data Collection and Normalization
[Detailed in the paper](https://aclanthology.org/2022.naacl-main.434/)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
### Discussion of Biases
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
### Other Known Limitations
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
## Additional Information
### Dataset Curators
[More information needed](https://aclanthology.org/2022.naacl-main.434/)
### Licensing Information
These data are released under this licensing scheme.
We do not own any of the text from which these data have been extracted.
This DACSA dataset package is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
Should you consider that our data contains material that is owned by you
and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address,
telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and
information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources
from the next release of the corpus.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{segarra-soriano-etal-2022-dacsa,
title = "{DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles",
author = "Segarra Soriano, Encarnaci{\'o}n and
Ahuir, Vicent and
Hurtado, Llu{\'\i}s-F. and
Gonz{\'a}lez, Jos{\'e}",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.434",
doi = "10.18653/v1/2022.naacl-main.434",
pages = "5931--5943",
abstract = "The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan.In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish.We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus.",
}
```
| [
-0.4551152288913727,
-0.6477250456809998,
0.24094733595848083,
0.514901876449585,
-0.2594473958015442,
0.23876850306987762,
-0.2125503271818161,
-0.4368661940097809,
0.6879045963287354,
0.3637023866176605,
-0.35547420382499695,
-0.8704017996788025,
-0.6467899680137634,
0.3018912971019745,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Muennighoff/quixbugs | Muennighoff | 2023-03-26T16:15:28Z | 51 | 0 | null | [
"region:us"
] | 2023-03-26T16:15:28Z | 2023-03-26T13:58:52.000Z | 2023-03-26T13:58:52 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TimoImhof/TriviaQA-in-SQuAD-format | TimoImhof | 2023-04-01T13:43:14Z | 51 | 0 | null | [
"region:us"
] | 2023-04-01T13:43:14Z | 2023-03-28T08:48:36.000Z | 2023-03-28T08:48:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: unmodified
num_bytes: 22886661
num_examples: 15368
- name: modified_30_percent
num_bytes: 22899894
num_examples: 15368
- name: modified_100_percent
num_bytes: 22929228
num_examples: 15368
download_size: 40760032
dataset_size: 68715783
---
# Dataset Card for "TriviaQA-in-SQuAD-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4197598695755005,
-0.2089855670928955,
0.22633272409439087,
0.34322652220726013,
-0.10427894443273544,
0.5391619205474854,
0.30225685238838196,
-0.020067542791366577,
0.8581882119178772,
0.3371276259422302,
-1.0245928764343262,
-0.7912174463272095,
-0.24333646893501282,
0.02611483260989... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dev2bit/es2bash | dev2bit | 2023-05-23T21:11:43Z | 51 | 3 | null | [
"task_categories:text-generation",
"language:es",
"license:apache-2.0",
"code",
"region:us"
] | 2023-05-23T21:11:43Z | 2023-05-23T20:25:37.000Z | 2023-05-23T20:25:37 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- es
tags:
- code
---
# ES2Bash
This dataset contains a collection of natural language requests (in Spanish) and their corresponding bash commands. The purpose of this dataset is to provide examples of requests and their associated bash commands to facilitate machine learning and the development of natural language processing systems related to command-line operations.
# Features
The dataset consists of two main features:
* Natural Language Request (ES): This feature contains natural language requests written in Spanish. The requests represent tasks or actions to be performed using command-line commands.
* Bash Command: This feature contains the bash commands associated with each natural language request. The bash commands represent the way to execute the requested task or action using the command line.
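As an illustration of the two features described above, a record pairs a Spanish request with its bash command. The pairs below are invented for illustration, not actual rows from the released files:

```python
# Hypothetical (request, command) pairs mirroring the two features
# described above; these are illustrative, not actual dataset rows.
examples = [
    {"request": "muestra el contenido del archivo notas.txt", "command": "cat notas.txt"},
    {"request": "lista los archivos del directorio actual", "command": "ls"},
    {"request": "cambia al directorio /tmp", "command": "cd /tmp"},
]

for ex in examples:
    print(f"{ex['request']} -> {ex['command']}")
```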
# Initial Commands
The dataset initially contains requests related to the following commands:
* cat: Requests involving reading text files.
* ls: Requests related to obtaining information about files and directories at a specific location.
* cd: Requests to change the current directory.
# Dataset Expansion
In addition to the initial commands mentioned above, there are plans to expand this dataset to include more common command-line commands. The expansion will cover a broader range of tasks and actions that can be performed using command-line operations.
Efforts will also be made to improve the existing examples and ensure that they are clear, accurate, and representative of typical requests that users may have when working with command lines.
# Request Statistics
In the future, statistical data will be provided on the requests present in this dataset. This data may include information about the distribution of requests in different categories, the frequency of use of different commands, and any other relevant analysis to better understand the usage and needs of command-line users.
# Request Collection Process
This dataset is the result of a combination of requests generated by language models and manually added requests. The requests generated by language models were based on existing examples and prior knowledge related to the usage of command lines. A manual review was then conducted to ensure the quality and relevance of the requests. | [
-0.624876856803894,
-0.9735824465751648,
0.3866053819656372,
0.16817544400691986,
-0.14167706668376923,
0.3246752619743347,
-0.07879479229450226,
-0.38517218828201294,
0.6433858871459961,
1.320715308189392,
-1.0101416110992432,
-0.6187750697135925,
-0.33049362897872925,
0.20414116978645325... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HasturOfficial/adgen | HasturOfficial | 2023-06-04T12:06:50Z | 51 | 1 | null | [
"region:us"
] | 2023-06-04T12:06:50Z | 2023-06-04T12:06:23.000Z | 2023-06-04T12:06:23 | ---
dataset_info:
features:
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 51127446
num_examples: 114599
- name: validation
num_bytes: 473784
num_examples: 1070
download_size: 27853861
dataset_size: 51601230
---
# Dataset Card for "adgen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7178882956504822,
-0.23819178342819214,
0.14821794629096985,
-0.019420353695750237,
-0.09189962595701218,
0.00010104502871399745,
0.2874980568885803,
-0.29598268866539,
0.7611232399940491,
0.3599375784397125,
-0.929158627986908,
-0.9027121663093567,
-0.5041292309761047,
-0.2731975913047... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/scidocs-pl-qrels | clarin-knext | 2023-06-07T08:09:59Z | 51 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:09:59Z | 2023-06-06T22:52:13.000Z | 2023-06-06T22:52:13 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209915816783905,
-0.9029768109321594,
0.5094643235206604,
0.2354193478822708,
-0.3185211718082428,
-0.1491904854774475,
-0.16673950850963593,
-0.4962919354438782,
-0.018960798159241676,
0.4112257659435272,
-0.5503100752830505,
-0.691356897354126,
-0.4166182279586792,
-0.048304602503776... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Patt/RTE_TH_drop | Patt | 2023-06-22T09:21:18Z | 51 | 0 | null | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | 2023-06-22T09:21:18Z | 2023-06-21T11:34:48.000Z | 2023-06-21T11:34:48 | ---
task_categories:
- text-classification
language:
- en
- th
---
# Dataset Card for RTE_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [RTE](https://huggingface.co/datasets/super_glue/viewer/rte), produced with Google Translate, with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) used to score the Thai translations.
Lines for which score_hypothesis <= 0.5 or score_premise <= 0.7 have been dropped. | [
-0.3521003723144531,
-0.6888363361358643,
-0.019250662997364998,
0.376177042722702,
-0.5273865461349487,
-0.23094776272773743,
-0.2567792236804962,
-0.2741360068321228,
0.48458585143089294,
0.5730480551719666,
-0.7434003353118896,
-0.8310048580169678,
-0.6646078824996948,
0.360743552446365... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dwarf2/databricks-dolly-15k-ru | dwarf2 | 2023-06-27T11:39:10Z | 51 | 0 | null | [
"license:mit",
"region:us"
] | 2023-06-27T11:39:10Z | 2023-06-27T11:38:29.000Z | 2023-06-27T11:38:29 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tasksource/leandojo | tasksource | 2023-06-28T17:46:34Z | 51 | 3 | null | [
"license:cc-by-2.0",
"region:us"
] | 2023-06-28T17:46:34Z | 2023-06-28T17:41:51.000Z | 2023-06-28T17:41:51 | ---
license: cc-by-2.0
---
https://github.com/lean-dojo/LeanDojo
```
@article{yang2023leandojo,
title={{LeanDojo}: Theorem Proving with Retrieval-Augmented Language Models},
author={Yang, Kaiyu and Swope, Aidan and Gu, Alex and Chalamala, Rahul and Song, Peiyang and Yu, Shixing and Godil, Saad and Prenger, Ryan and Anandkumar, Anima},
journal={arXiv preprint arXiv:2306.15626},
year={2023}
}
``` | [
0.010744055733084679,
-0.5389387607574463,
0.718507707118988,
0.19104082882404327,
0.05303800106048584,
-0.18380936980247498,
-0.48494958877563477,
-0.5205891132354736,
0.3760235905647278,
0.4236966073513031,
-0.30896303057670593,
-0.5247543454170227,
-0.4060443341732025,
0.145923927426338... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marclove/llama_functions | marclove | 2023-08-03T17:31:48Z | 51 | 6 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-08-03T17:31:48Z | 2023-07-26T23:55:21.000Z | 2023-07-26T23:55:21 | ---
license: cc-by-sa-4.0
task_categories:
- conversational
- text-generation
language:
- en
pretty_name: Llama Functions
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://marclove.com
- **Repository:** https://huggingface.co/datasets/marclove/llama_functions
### Dataset Summary
‼️ This dataset is still in a beta state. Its contents, and likely its format, will change. If you need to depend on it in its current state, please create your own fork and provide attribution to this original repository. ‼️
Llama Functions is a synthetic dataset generated from a mix of manual curation of OpenAPI endpoints and prompting of OpenAI models. It is further mixed with chat completions from the Guanaco subset of the OASST1 chat dialogue dataset. It totals 18,000 rows: 9,000 from the synthetic dataset of function calls and 9,000 from the Guanaco dataset.
The dataset is mixed with Guanaco in order to maintain accuracy and helpfulness when calling a function is not the appropriate response. I plan to remove the Guanaco portion of the dataset and instead provide fine-tuning recommendations, guidelines for use, more detailed information regarding limitations, and eval stats of 7B, 13B, and 70B models.
There is no existing evaluation benchmark to measure the accuracy of function calls, which makes it hard during training to identify when we've maximized the balance of function calling accuracy and chat model performance. I'm working on a custom HF eval for this purpose, but until then I have chosen to mix the two datasets in equal parts to get a proxy of performance for both tasks in the eval & test stats during fine-tuning.
### Languages
English primarily, though since it has been mixed with the multilingual Guanaco dataset, other languages are included.
## Dataset Structure
### Data Fields
| Field | Description |
|-------|-------------|
| `input` |A prompt in Llama-2 Chat format, including an appropriate system instruction and chat history. |
| `output` | The expected completion. |
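For illustration, a single-turn prompt in Llama-2 Chat format (as the `input` field is described above) can be assembled like this. The system instruction and user message below are placeholders, not rows taken from the dataset:

```python
# Sketch of the Llama-2 Chat single-turn prompt layout described for the
# `input` field; the texts are placeholders, not actual dataset content.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system: str, user: str) -> str:
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

prompt = build_prompt(
    "You may respond by calling one of the provided functions.",
    "What's the weather like in Paris right now?",
)
print(prompt)
```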
### Data Splits
There are currently no splits, but future versions will likely have train, eval, and test splits.
## Dataset Creation
### Curation Rationale
In an effort to enable tool-using chat agents and autonomous agents, I developed this synthetic dataset to bring [OpenAI-style function calling](https://openai.com/blog/function-calling-and-other-api-updates#function-calling) to the Llama family and to fully open source models.
### Source Data
The data was sourced by prompting OpenAI models to generate function calls of:
1. Real OpenAPI endpoints collected and filtered from the web
2. Manually written (but artificial) OpenAPI endpoints, and
3. Prompted iterations of 1 & 2.
Prompted iterations were generated by ChatGPT-4 (July 20, 2023 version). Generated function calls and their natural language counterparts were generated by iterative prompting of `gpt-3.5-turbo-0301`. A blog post detailing the generation process will be published in the next few days.
OpenAI's TOS give me ownership of this synthetic dataset. I am licensing it under [Creative Commons' Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). I have used the dataset to fine tune a research-only model, [marclove/llama-2-7b-chat-functions](https://huggingface.co/marclove/llama-2-7b-chat-functions), per OpenAI TOS. You are responsible for determining whether you can use the dataset for your particular use case. I take no responsibility and make no guarantees beyond licensing my own rights under the designated CC license.
#### Who are the source language producers?
- Marc Love
- Prompting of ChatGPT-4 & API calls to gpt-3.5-turbo-0301
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
Unknown, beyond those of the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/viewer/timdettmers--openassistant-guanaco/).
### Discussion of Biases
Unknown, beyond those of the [Guanaco subset of the OASST1 dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco/viewer/timdettmers--openassistant-guanaco/).
### Other Known Limitations
Fine-tuning on this dataset can lead to hallucinated function calls. This is more pronounced in smaller models.
## Additional Information
### Dataset Curators
Marc Love
### Licensing Information
[Creative Commons' Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license](https://creativecommons.org/licenses/by-sa/4.0/). Please note that the synthetic data portion of the dataset was generated using OpenAI models, which may or may not impact your ability to use the dataset, depending on your use case.
### Citation Information
If you use this dataset, please cite:
```
@misc{LlamaFunctions,
title = {LlamaFunctions: An Open Dataset of Structured API Calls From Natural Language Prompts},
author = {Marc Love},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/marclove/llama_functions}},
}
``` | [
-0.3002767264842987,
-0.9560554027557373,
0.23749050498008728,
0.4776821434497833,
-0.3382359743118286,
-0.0051292311400175095,
-0.2735709249973297,
-0.6901909708976746,
0.46133095026016235,
0.36888790130615234,
-0.7630975842475891,
-0.5957133769989014,
-0.31257808208465576,
0.245241045951... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/id_short_answer_grading | SEACrowd | 2023-09-26T12:28:15Z | 51 | 0 | null | [
"language:ind",
"license:unknown",
"short-answer-grading",
"region:us"
] | 2023-09-26T12:28:15Z | 2023-09-26T11:11:58.000Z | 2023-09-26T11:11:58 | ---
license: unknown
tags:
- short-answer-grading
language:
- ind
---
# id_short_answer_grading
Indonesian short answers for Biology and Geography subjects, collected from 534 respondents; answer grading was done by 7 experts.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{
JLK,
author = {Muh Haidir and Ayu Purwarianti},
title = { Short Answer Grading Using Contextual Word Embedding and Linear Regression},
journal = {Jurnal Linguistik Komputasional},
volume = {3},
number = {2},
year = {2020},
keywords = {},
abstract = {Abstract—One of the obstacles in an efficient MOOC is the evaluation of student answers, including the short answer grading which requires large effort from instructors to conduct it manually.
Thus, NLP research in short answer grading has been conducted in order to support the automation, using several techniques such as rule
and machine learning based. Here, we’ve conducted experiments on deep learning based short answer grading to compare the answer
representation and answer assessment method. In the answer representation, we compared word embedding and sentence embedding models
such as BERT, and its modification. In the answer assessment method, we use linear regression. There are 2 datasets that we used, available
English short answer grading dataset with 80 questions and 2442 to get the best configuration for model and Indonesian short answer grading
dataset with 36 questions and 9165 short answers as testing data. Here, we’ve collected Indonesian short answers for Biology and Geography
subjects from 534 respondents where the answer grading was done by 7 experts. The best root mean squared error for both dataset was achieved
by using BERT pretrained, 0.880 for English dataset dan 1.893 for Indonesian dataset.},
issn = {2621-9336}, pages = {54--61}, doi = {10.26418/jlk.v3i2.38},
url = {https://inacl.id/journal/index.php/jlk/article/view/38}
}
```
## License
Unknown
## Homepage
[https://github.com/AgeMagi/tugas-akhir](https://github.com/AgeMagi/tugas-akhir)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.56279456615448,
-0.9966070055961609,
0.35989445447921753,
-0.01208206731826067,
-0.2294120490550995,
-0.06804919242858887,
-0.303599089384079,
-0.45033711194992065,
0.494729608297348,
0.3908027708530426,
-0.17706860601902008,
-0.5135565400123596,
-0.56076580286026,
0.340951532125473,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/DDI2013_test | hippocrates | 2023-10-12T19:21:33Z | 51 | 0 | null | [
"region:us"
] | 2023-10-12T19:21:33Z | 2023-10-08T22:20:53.000Z | 2023-10-08T22:20:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 20658927
num_examples: 18779
- name: valid
num_bytes: 8739656
num_examples: 7244
- name: test
num_bytes: 6455758
num_examples: 5761
download_size: 3113073
dataset_size: 35854341
---
# Dataset Card for "DDI2013_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.737856924533844,
-0.4048115611076355,
0.27120280265808105,
0.44852501153945923,
-0.06364945322275162,
-0.1703718900680542,
0.5629246830940247,
-0.048188887536525726,
0.7091092467308044,
0.15975500643253326,
-1.0275142192840576,
-0.6414744257926941,
-0.5215991735458374,
-0.02761981263756... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/2012i2b2_NER_test | hippocrates | 2023-10-17T20:21:36Z | 51 | 0 | null | [
"region:us"
] | 2023-10-17T20:21:36Z | 2023-10-17T20:21:34.000Z | 2023-10-17T20:21:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gg-ai/es-0712-no-demoji-s | gg-ai | 2023-11-08T17:56:09Z | 51 | 0 | null | [
"region:us"
] | 2023-11-08T17:56:09Z | 2023-11-08T17:56:04.000Z | 2023-11-08T17:56:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: clean_text
dtype: string
- name: sent
dtype: int64
splits:
- name: train
num_bytes: 217036
num_examples: 641
- name: test
num_bytes: 46081
num_examples: 136
- name: val
num_bytes: 9192
num_examples: 25
download_size: 189749
dataset_size: 272309
---
# Dataset Card for "es-0712-no-demoji-s"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.38779687881469727,
-0.04874277114868164,
0.26621222496032715,
0.18133194744586945,
-0.3987416923046112,
-0.2445937544107437,
-0.00045058218529447913,
0.0735514834523201,
1.1074190139770508,
0.6444156765937805,
-1.0894676446914673,
-0.9820706844329834,
-0.5415594577789307,
0.141468882560... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teknium/dataforge-economics | teknium | 2023-11-12T23:39:30Z | 51 | 27 | null | [
"language:eng",
"license:mit",
"economics",
"region:us"
] | 2023-11-12T23:39:30Z | 2023-11-12T15:51:56.000Z | 2023-11-12T15:51:56 | ---
language:
- eng
pretty_name: "DataForge-Economics"
tags:
- economics
license: mit
---

# Dataset Card for dataforge-economics
## Table of Contents
- [Overview](#overview)
- [Dataset Description](#dataset-description)
- [Data Collection and Synthesis](#data-collection-and-synthesis)
- [Data Structure](#data-structure)
- [Licensing, Privacy, and Ethics](#licensing-privacy-and-ethics)
- [Access](#access)
- [Usage](#usage)
- [Citation](#citation)
- [Contributions](#contributions)
## Overview
This dataset, `teknium/dataforge-economics`, is a specialized collection of 1,000 synthetic examples in the field of economics. It has been generated using OpenAI's GPT-4 and a custom data synthesis pipeline named DataForge, developed by me.
## Dataset Description
### Data Collection and Synthesis
The data in `teknium/dataforge-economics` has been synthetically generated using OpenAI's GPT-4 language model. The synthesis process was enhanced and structured using the DataForge pipeline, which incorporates domain-specific knowledge and ensures relevance in economics topics.
### Data Structure
- **Size of dataset:** 1000 examples
- **Type of data:** Textual (Economics domain-specific)
- **Data format:** JSON
- **Fields:**
  - id: a randomly generated uuid
  - conversations: single turn human & gpt turns in sharegpt format
  - source: the dataset name itself, for metadata purposes when merging with others
  - topic: the sub-topic for the domain
  - system_prompt: type of system prompt used for generating the response.
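A hypothetical record matching the fields listed above; the conversation, topic, and system prompt values are invented for illustration, not drawn from the dataset:

```python
import json
import uuid

# Hypothetical record mirroring the listed fields: a ShareGPT-style
# single-turn conversation with invented placeholder values.
record = {
    "id": str(uuid.uuid4()),
    "conversations": [
        {"from": "human", "value": "Explain price elasticity of demand."},
        {"from": "gpt", "value": "Price elasticity of demand measures how..."},
    ],
    "source": "dataforge-economics",
    "topic": "microeconomics",
    "system_prompt": "You are an expert economist.",
}

print(json.dumps(record, indent=2))
```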
## Licensing, Privacy, and Ethics
- **License:** MIT License
- **Special Considerations:** This dataset is purely generated from GPT-4 data; some information may be incorrect or invalid.
- **Privacy:** As the dataset is synthetically generated, it does not contain any real individual's data.
## Access
- **Availability:** General Access
## Usage
This is a domain-specialist dataset, the first to use my new pipeline, DataForge, which can create domain expert knowledge (and tasks, as seen in the Trismegistus occult dataset).
This dataset was a proof of concept for improving an Orca-style model's economics expertise; when finetuned over Stable Beluga, the resulting model surpassed my custom economics benchmark.
| [
-0.5626130700111389,
-0.7280029654502869,
0.2616419792175293,
-0.043272826820611954,
-0.14384406805038452,
0.0322570875287056,
-0.1389603167772293,
-0.3456544876098633,
0.11405554413795471,
0.5504922866821289,
-0.6581458449363708,
-0.7685412764549255,
-0.11541932821273804,
0.17242266237735... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hotal/emergency_classification_prompt | hotal | 2023-11-16T15:43:46Z | 51 | 1 | null | [
"region:us"
] | 2023-11-16T15:43:46Z | 2023-11-16T14:51:47.000Z | 2023-11-16T14:51:47 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 27222546
num_examples: 25989
download_size: 4744670
dataset_size: 27222546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "emergency_classification_prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.39160212874412537,
-0.1994796246290207,
0.3558441400527954,
0.3471568524837494,
-0.05958250164985657,
-0.03202136233448982,
0.32450416684150696,
-0.08052263408899307,
0.6571051478385925,
0.2734453082084656,
-0.8022609949111938,
-0.7223564982414246,
-0.49262183904647827,
-0.0861430540680... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlabonne/mini-platypus | mlabonne | 2023-11-17T16:39:56Z | 51 | 0 | null | [
"region:us"
] | 2023-11-17T16:39:56Z | 2023-11-16T19:26:36.000Z | 2023-11-16T19:26:36 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4201526
num_examples: 1000
download_size: 2247083
dataset_size: 4201526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awhall/aita_21-10_23-09 | awhall | 2023-11-20T20:12:27Z | 51 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-20T20:12:27Z | 2023-11-20T20:09:29.000Z | 2023-11-20T20:09:29 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
annyorange/colorized-dataset | annyorange | 2023-11-26T05:45:52Z | 51 | 0 | null | [
"region:us"
] | 2023-11-26T05:45:52Z | 2023-11-22T18:16:51.000Z | 2023-11-22T18:16:51 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: colorized_image
dtype: image
splits:
- name: train
num_bytes: 32465878.0
num_examples: 711
download_size: 32520629
dataset_size: 32465878.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
0x7194633/gamio-ai-authorLM-dataset | 0x7194633 | 2023-11-24T11:48:55Z | 51 | 0 | null | [
"region:us"
] | 2023-11-24T11:48:55Z | 2023-11-24T11:48:49.000Z | 2023-11-24T11:48:49 | ---
dataset_info:
features:
- name: texts
dtype: string
splits:
- name: train
num_bytes: 6551786
num_examples: 288
download_size: 2843488
dataset_size: 6551786
---
# Dataset Card for "gamio-ai-authorLM-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4378807544708252,
-0.3028451204299927,
0.29133421182632446,
0.16642451286315918,
-0.15005725622177124,
0.044191207736730576,
0.3102702498435974,
-0.2397831231355667,
0.7972940802574158,
0.5276944637298584,
-0.7331428527832031,
-0.6203587651252747,
-0.624647319316864,
-0.1739062070846557... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jetaudio/binhvq_news | jetaudio | 2023-11-25T04:03:44Z | 51 | 0 | null | [
"region:us"
] | 2023-11-25T04:03:44Z | 2023-11-25T00:45:20.000Z | 2023-11-25T00:45:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68939074439.0
num_examples: 19582227
- name: validation
num_bytes: 349157289.0
num_examples: 104519
download_size: 35606535605
dataset_size: 69288231728.0
---
# Dataset Card for "binhvq_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6078881025314331,
-0.3182292580604553,
0.19132129848003387,
0.1609777808189392,
-0.5531137585639954,
0.08121917396783829,
0.3492278456687927,
-0.13358406722545624,
0.8212160468101501,
0.7710899710655212,
-0.733660876750946,
-1.0026010274887085,
-0.42587828636169434,
-0.32484111189842224... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
midas/openkp | midas | 2022-01-09T17:01:43Z | 50 | 2 | null | [
"region:us"
] | 2022-01-09T17:01:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ## Dataset Summary
Original source - [https://github.com/microsoft/OpenKP](https://github.com/microsoft/OpenKP)
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **document**: Whitespace separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B stands for the beginning of a keyphrase and I stands for inside the keyphrase. O stands for outside the keyphrase and represents the word that isn't a part of the keyphrase at all.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
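As a sketch of how the BIO tags map back to present keyphrases (a hypothetical helper, not part of the dataset loader):

```python
def bio_to_keyphrases(tokens, tags):
    """Recover present keyphrases from parallel token/BIO-tag lists."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # beginning of a new keyphrase
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continuation of the open keyphrase
            current.append(token)
        else:                          # "O": outside any keyphrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:                        # flush a keyphrase ending at the last token
        phrases.append(" ".join(current))
    return phrases

# Tokens and tags drawn from the training sample shown below.
tokens = ["Star", "Trek", "Discovery", "stars", "Jason", "Isaacs"]
tags   = ["B",    "I",    "O",         "O",     "B",     "I"]
print(bio_to_keyphrases(tokens, tags))  # ['Star Trek', 'Jason Isaacs']
```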
### Data Splits
|Split| #datapoints |
|--|--|
| Train | 134894 |
| Test | 6614 |
| Validation | 6616 |
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/openkp", "raw")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
Sample from training data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Star', 'Trek', 'Discovery', 'Season', '1', 'Director', 'NA', 'Actors', 'Jason', 'Isaacs', 'Doug', 'Jones', 'Shazad', 'Latif', 'Sonequa', 'MartinGreen', 'Genres', 'SciFi', 'Country', 'USA', 'Release', 'Year', '2017', 'Duration', 'NA', 'Synopsis', 'Ten', 'years', 'before', 'Kirk', 'Spock', 'and', 'the', 'Enterprise', 'the', 'USS', 'Discovery', 'discovers', 'new', 'worlds', 'and', 'lifeforms', 'as', 'one', 'Starfleet', 'officer', 'learns', 'to', 'understand', 'all', 'things', 'alien', 'YOU', 'ARE', 'WATCHING', 'Star', 'Trek', 'Discovery', 'Season', '1', '000', '000', 'Loaded', 'Progress', 'The', 'video', 'keeps', 'buffering', 'Just', 'pause', 'it', 'for', '510', 'minutes', 'then', 'continue', 'playing', 'Share', 'Star', 'Trek', 'Discovery', 'Season', '1', 'movie', 'to', 'your', 'friends', 'Share', 'to', 'support', 'Putlocker', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', 'Version', '1', 'Server', 'Mega', 'Play', 'Movie', 'Version', '2', 'Server', 'TheVideo', 'Link', '1', 'Play', 'Movie', 'Version', '3', 'Server', 'TheVideo', 'Link', '2', 'Play', 'Movie', 'Version', '4', 'Server', 'TheVideo', 'Link', '3', 'Play', 'Movie', 'Version', '5', 'Server', 'TheVideo', 'Link', '4', 'Play', 'Movie', 'Version', '6', 'Server', 'NowVideo', 'Play', 'Movie', 'Version', '7', 'Server', 'NovaMov', 'Play', 'Movie', 'Version', '8', 'Server', 'VideoWeed', 'Play', 'Movie', 'Version', '9', 'Server', 'MovShare', 'Play', 'Movie', 'Version', '10', 'Server', 'CloudTime', 'Play', 'Movie', 'Version', '11', 'Server', 'VShare', 'Link', '1', 'Play', 'Movie', 'Version', '12', 'Server', 'VShare', 'Link', '2', 'Play', 'Movie', 'Version', '13', 'Server', 'VShare', 'Link', '3', 'Play', 'Movie', 'Version', '14', 'Server', 'VShare', 'Link', '4', 'Play', 'Movie', 'Version', '15', 'Other', 'Link', '1', 'Play', 'Movie', 'Version', '16', 'Other', 'Link', '2', 'Play', 'Movie', 'Version', '17', 'Other', 'Link', '3', 'Play', 'Movie', 'Version', '18', 
'Other', 'Link', '4', 'Play', 'Movie', 'Version', '19', 'Other', 'Link', '5', 'Play', 'Movie', 'Version', '20', 'Other', 'Link', '6', 'Play', 'Movie', 'Version', '21', 'Other', 'Link', '7', 'Play', 'Movie', 'Version', '22', 'Other', 'Link', '8', 'Play', 'Movie', 'Version', '23', 'Other', 'Link', '9', 'Play', 'Movie', 'Version', '24', 'Other', 'Link', '10', 'Play', 'Movie', 'Version', '25', 'Other', 'Link', '11', 'Play', 'Movie', 'Version', '26', 'Other', 'Link', '12', 'Play', 'Movie', 'Version', '27', 'Other', 'Link', '13', 'Play', 'Movie', 'Version', '28', 'Other', 'Link', '14', 'Play', 'Movie', 'Version', '29', 'Other', 'Link', '15', 'Play', 'Movie']
Document BIO Tags: ['B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['star trek', 'jason isaacs', 'doug jones']
Abstractive/absent Keyphrases: []
-----------
Sample from validation data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Home', 'Keygen', 'NBA', '2K17', 'nba', '2k17', 'activation', 'key', 'generator', 'nba', '2k17', 'beta', 'keygen', 'free', 'nba', '2k17', 'cd', 'codes', 'free', 'nba', '2k17', 'cd', 'download', 'nba', '2k17', 'cd', 'generator', 'no', 'survey', 'nba', '2k17', 'cd', 'key', 'download', 'without', 'survey', 'NBA', '2K17', 'CD', 'Key', 'Generator', 'No', 'Survey', 'PC', 'PS34', 'Xbox', '360ONE', 'NBA', '2K17', 'CD', 'Key', 'Generator', 'No', 'Survey', 'PC', 'PS34', 'Xbox', '360ONE', 'Penulis', 'Hacker', 'Stock', 'on', 'Friday', '9', 'September', '2016', '1253', 'NBA', '2K17', 'CD', 'Key', 'Generator', 'No', 'Survey', 'PC', 'PS34', 'Xbox', '360ONE', 'Hello', 'everybody', 'welcome', 'on', 'our', 'web', 'site', 'HackerStockcom', 'these', 'days', 'weve', 'a', 'replacement', 'Key', 'Generator', 'for', 'you', 'that', 'is', 'known', 'as', 'NBA', '2K17', 'CD', 'Key', 'Generator', 'No', 'Survey', 'PC', 'PS34', 'Xbox', '360ONE', 'youll', 'be', 'ready', 'to', 'get', 'the', 'game', 'without', 'charge', 'this', 'keygen', 'will', 'find', 'unlimited', 'Activation', 'Codes', 'for', 'you', 'on', 'any', 'platform', 'Steam', 'or', 'Origin', 'on', 'computer', 'or', 'why', 'not', 'PlayStation', 'and', 'Xbox', 'Weve', 'ready', 'one', 'thing', 'special', 'for', 'all', 'NBA', 'fans', 'and', 'players', 'a', 'special', 'tool', 'that', 'were', 'certain', 'that', 'you', 'just', 'will', 'agree', 'Our', 'tool', 'may', 'generate', 'tons', 'of', 'key', 'codes', 'for', 'laptop', 'PlayStation', '3', 'PlayStation', '4', 'Xbox', '360', 'and', 'Xbox', 'ONE', 'So', 'youll', 'get', 'early', 'access', 'to', 'the', 'current', 'game', 'through', 'our', 'key', 'generator', 'for', 'NBA', '2K17', 'simply', 'with', 'few', 'clicks', 'This', 'tool', 'will', 'generate', 'over', '800', '000', 'key', 'codes', 'for', 'various', 'platforms', 'The', 'key', 'code', 'is', 'valid', 'and', 'youll', 'be', 'ready', 'to', 'try', 'it', 'and', 'be', 'able', 'to', 'play', 'NBA', '2K17', 'without', 'charge', 
'Our', 'serial', 'key', 'generator', 'tool', 'is', 'NBA', '2K17', 'CD', 'Key', 'Generator', 'No', 'Survey', 'PC', 'PS34', 'Xbox', '360ONE', 'Instructions', 'using', 'the', 'NBA', '2K17', 'CD', 'Key', 'Generator', '2017', 'is', 'quick', 'and', 'easy', 'First', 'just', 'download', 'the', 'exe', 'file', 'and', 'install', 'it', 'on', 'your', 'computer', 'After', 'running', 'the', 'program', 'select', 'the', 'platform', 'on', 'which', 'you', 'want', 'to', 'play', 'NBA', '2K17', 'Next', 'click', 'the', 'GENERATE', 'button', 'This', 'will', 'produce', 'an', 'alphanumeric', 'code', 'also', 'known', 'as', 'your', 'product', 'key', 'You', 'will', 'use', 'it', 'to', 'validate', 'the', 'authenticity', 'of', 'your', 'NBA', '2K17', 'game', 'Now', 'copy', 'and', 'paste', 'the', 'product', 'key', 'onto', 'the', 'serial', 'number', 'window', 'prompt', 'of', 'your', 'NBA', '2K17', 'software', 'You', 'will', 'gain', 'access', 'to', 'NBA', '2K17', 'Finally', 'enjoy', 'your', 'game', 'We', 'designed', 'this', 'NBA', '2K17', 'game', 'key', 'generator', 'to', 'the', 'best', 'of', 'our', 'abilities', 'We', 'truly', 'hope', 'that', 'you', 'take', 'advantage', 'of', 'its', 'features', 'to', 'fully', 'enjoy', 'your', 'NBA', '2K17', 'Please', 'let', 'us', 'know', 'if', 'you', 'encounter', 'any', 'problems', 'with', 'our', 'software', 'We', 'would', 'love', 'to', 'help', 'you', 'out', 'NBA', '2K17', 'nba', '2k17', 'activation', 'key', 'generator', 'nba', '2k17', 'beta', 'keygen', 'free', 'nba', '2k17', 'cd', 'codes', 'free', 'nba', '2k17', 'cd', 'download', 'nba', '2k17', 'cd', 'generator', 'no', 'survey', 'nba', '2k17', 'cd', 'key', 'download', 'without', 'survey', 'nba', '2k17', 'cd', 'key', 'free', 'download', 'nba', '2k17', 'cd', 'key', 'free', 'online', 'nba', '2k17', 'cd', 'key', 'generator', 'no', 'survey', 'nba', '2k17', 'cd', 'key', 'pc', 'download', 'nba', '2k17', 'cd', 'key', 'ps4', 'free', 'download', 'nba', '2k17', 'cd', 'key', 'xbox', 'free', 'download', 'nba', '2k17', 'cd', 
'keyexe', 'no', 'survey', 'nba', '2k17', 'crack', 'version', 'download', 'nba', '2k17', 'download', '2016', 'nba', '2k17', 'download', 'for', 'pc', '2016', 'nba', '2k17', 'download', 'full', 'crack', 'nba', 'Posted', 'by', 'Hacker', 'Stock', 'at', '1253', 'Email', 'This', 'BlogThis', 'Labels', 'Keygen', 'NBA', '2K17', 'nba', '2k17', 'activation', 'key', 'generator', 'nba', '2k17', 'beta', 'keygen', 'free', 'nba', '2k17', 'cd', 'codes', 'free', 'nba', '2k17', 'cd', 'download', 'nba', '2k17', 'cd', 'generator', 'no', 'survey', 'nba', '2k17', 'cd', 'key', 'download', 'without', 'survey', 'Older', 'Post', 'Home', 'Subscribe', 'to', 'Post', 'Comments', 'Atom']
Document BIO Tags: ['O', 'O', 'B', 'I', 'B', 'I', 'O', 'B', 'I', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'B', 'I', 'B', 'I', 'O', 'B', 'I', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'B', 'I', 'O', 'B', 'I', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['nba 2k17', 'key generator', 'xbox']
Abstractive/absent Keyphrases: []
-----------
Sample from test data split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['KSLI', '1280', 'AM', 'LATEST', 'POSTS', 'Scotty', 'McCreery', 'and', 'Wife', 'Welcome', 'New', 'Addition', 'to', 'the', 'Family', 'The', 'McCreerys', 'have', 'an', 'announcement', 'to', 'share', 'with', 'fansthe', 'family', 'is', 'getting', 'bigger', 'Wendy', 'Hermanson', '13', 'hours', 'ago', 'Kane', 'Brown', 'Serenades', 'Fans', 'With', 'Michael', 'Jackson', 'Hit', 'Watch', 'Brown', 'dusted', 'off', 'an', '80s', 'gem', 'to', 'post', 'on', 'social', 'media', 'and', 'put', 'smiles', 'on', 'the', 'faces', 'of', 'his', 'followers', 'Wendy', 'Hermanson', '20', 'hours', 'ago', 'Dolly', 'Parton', 'Scores', 'Golden', 'Globe', 'Nod', 'for', 'Girl', 'in', 'the', 'Movies', 'Congratulations', 'This', 'is', 'her', 'sixth', 'nomination', 'Sterling', 'Whitaker', '21', 'hours', 'ago', 'Remember', 'When', 'Johnny', 'Cash', 'Attacked', 'Homer', 'Simpson', 'It', 'was', 'one', 'of', 'the', 'coolest', 'guest', 'appearances', 'in', 'the', 'history', 'of', 'the', 'show', 'Sterling', 'Whitaker', '2', 'days', 'ago', 'Remember', 'Which', 'Country', 'Star', 'Murdered', 'His', 'Wife', 'The', 'career', 'of', 'one', 'of', 'country', 'musics', 'most', 'successful', 'early', 'stars', 'was', 'derailed', 'after', 'he', 'was', 'convicted', 'of', 'murdering', 'his', 'wife', 'Sterling', 'Whitaker', '2', 'days', 'ago', 'William', 'Shatner', 'to', 'Make', 'Grand', 'Ole', 'Opry', 'Debut', 'Hes', 'appearing', 'alongside', 'a', 'legendary', 'country', 'musician', 'Sterling', 'Whitaker', '2', 'days', 'ago', 'Remember', 'Who', 'First', 'Recorded', 'Garths', 'The', 'Thunder', 'Rolls', 'Have', 'you', 'ever', 'heard', 'the', 'extra', 'verse', 'Sterling', 'Whitaker', '2', 'days', 'ago', 'Danielle', 'Bradberys', 'Cover', 'of', 'Post', 'Malones', 'Psycho', 'Is', 'a', 'Stunner', 'Danielle', 'Bradbery', 'is', 'rounding', 'out', 'her', 'Yours', 'Truly', '2018', 'covers', 'project', 'by', 'sharing', 'her', 'take', 'on', 'rapper', 'Post', 'Malones', 'hit', 'Psycho', 'Angela', 'Stefano', '2', 
'days', 'ago', 'Enjoy', 'Wild', 'Game', 'at', 'the', 'Texas', 'Wild', 'Bunch', 'Bonanza', 'Cook', 'Off', 'Its', 'time', 'for', 'the', 'Texas', 'Wild', 'Bunch', 'Bonanza', 'Cook', 'Off', 'and', 'Auction', 'All', 'attendees', 'get', 'to', 'sample', 'everything', 'from', 'deer', 'to', 'elk', 'to', 'bacon', 'wrapped', 'jalapeno', 'poppers', 'Rudy', 'Fernandez', '2', 'days', 'ago', 'Kid', 'Rocks', '20Foot', 'Butt', 'Bar', 'Sign', 'Gets', 'Approved', 'in', 'Nashville', 'The', 'crazy', 'sign', 'featuring', 'a', 'womans', 'rear', 'end', 'caused', 'a', 'swirl', 'of', 'discussion', 'Wendy', 'Hermanson', '2', 'days', 'ago', 'Remember', 'When', 'Dolly', 'Parton', 'Surprised', 'Reba', 'McEntire', 'on', 'the', 'Opry', 'Shes', 'made', 'so', 'many', 'special', 'memories', 'on', 'the', 'Opry', 'stage', 'Sterling', 'Whitaker', '3', 'days', 'ago', 'Chris', 'Young', 'Takes', 'on', 'the', 'Hag', 'With', 'Silver', 'Wings', 'Cover', 'Watch', 'In', 'his', 'new', 'single', 'Chris', 'Young', 'proudly', 'proclaims', 'that', 'he', 'was', 'raised', 'on', 'country', 'and', 'he', 'can', 'prove', 'it', 'Angela', 'Stefano', '3', 'days', 'ago', 'The', 'Tractors', 'Guitarist', 'Steve', 'Ripley', 'Dead', 'at', '69', 'Rest', 'in', 'peace', 'Steve', 'Carena', 'Liptak', '3', 'days', 'ago', 'Danielle', 'Bradbery', 'Rounds', 'Out', 'Yours', 'Truly', '2018', 'With', 'Psycho', 'The', 'final', 'third', 'of', 'Bradberys', 'Yours', 'Truly', '2018', 'tribute', 'project', 'is', 'here', 'Carena', 'Liptak', '3', 'days', 'ago', 'Load', 'More', 'Articles', 'Country', 'Music', 'News', 'Kane', 'Brown', 'Serenades', 'Fans', 'With', 'Michael', 'Jackson', 'Hit', 'Watch', 'Scotty', 'McCreery', 'and', 'Wife', 'Welcome', 'New', 'Addition', 'to', 'the', 'Family', 'Meet', 'the', 'Staff', 'Rudy', 'Fernandez', 'Shay', 'Hill', 'Chaz', 'Frank', 'Pain', 'Classic', 'Country', '1280', 'on', 'Facebook', 'Abilene', 'TX', 'January', '7', '2019', '70000', 'AM', 'PST', 'January', '7', '2019', '70000', 'AM', 'PST', 'January', '7', '2019', 
'70000', 'AM', 'PSTth', '62', 'Clear', '71', '42', 'view', 'forecast', 'VIP', 'Contests', 'New', 'Year', 'New', 'You', '100', 'Amazon', 'Gift', 'Card', 'Small', 'Business', 'Solutions', 'Devote', 'more', 'time', 'to', 'running', 'your', 'business', 'Engage', 'your', 'clients', 'across', 'multiple', 'platforms', 'Reach', 'more', 'customers', 'than', 'ever', 'before', 'Get', 'an', 'Edge', 'on', 'the', 'Competition', 'Today', 'KSLIs', 'Daily', 'Deal', 'Certificate', 'for', 'a', 'Rhythm', 'USA', 'Clock', 'From', 'Jewels', 'of', 'Time']
Document BIO Tags: ['B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['ksli 1280 am']
Abstractive/absent Keyphrases: []
-----------
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/openkp", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
### Keyphrase Generation
```python
# get the dataset only for keyphrase generation
dataset = load_dataset("midas/openkp", "generation")
print("Samples for Keyphrase Generation")
# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
## Citation Information
```
@inproceedings{Xiong2019OpenDW,
title={Open Domain Web Keyphrase Extraction Beyond Language Modeling},
author={Lee Xiong and Chuan Hu and Chenyan Xiong and Daniel Fernando Campos and Arnold Overwijk},
booktitle={EMNLP},
year={2019}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.
| [
-0.5927118062973022,
-0.379268079996109,
0.3114433288574219,
0.2794608175754547,
-0.1173032745718956,
0.15901939570903778,
0.14944243431091309,
-0.23133376240730286,
0.6808620095252991,
0.25625699758529663,
-0.8694241642951965,
-0.47866544127464294,
-0.5649074912071228,
0.3184951841831207,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zapsdcn/chemprot | zapsdcn | 2021-12-08T03:17:13Z | 50 | 0 | null | [
"region:us"
] | 2021-12-08T03:17:13Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FanFan/sentiment-amazon-clean | FanFan | 2022-03-09T17:12:19Z | 50 | 0 | null | [
"region:us"
] | 2022-03-09T17:12:19Z | 2022-03-09T17:11:36.000Z | 2022-03-09T17:11:36 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blancoloureiro/fotos | blancoloureiro | 2022-10-15T17:14:17Z | 50 | 0 | null | [
"license:openrail",
"region:us"
] | 2022-10-15T17:14:17Z | 2022-10-15T17:13:41.000Z | 2022-10-15T17:13:41 | ---
license: openrail
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/AQMAR | arbml | 2022-10-26T14:50:48Z | 50 | 0 | null | [
"region:us"
] | 2022-10-26T14:50:48Z | 2022-10-25T22:09:41.000Z | 2022-10-25T22:09:41 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/quaero | bigbio | 2022-12-22T15:46:29Z | 50 | 1 | null | [
"multilinguality:monolingual",
"language:fr",
"license:other",
"region:us"
] | 2022-12-22T15:46:29Z | 2022-11-13T22:11:53.000Z | 2022-11-13T22:11:53 |
---
language:
- fr
bigbio_language:
- French
license: other
multilinguality: monolingual
bigbio_license_shortname: GFDL_1p3
pretty_name: QUAERO
homepage: https://quaerofrenchmed.limsi.fr/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for QUAERO
## Dataset Description
- **Homepage:** https://quaerofrenchmed.limsi.fr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The QUAERO French Medical Corpus was initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)
The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:
Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data):
- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format
Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data):
- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format
Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data):
- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA folder_test_bioc: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format
This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool https://bitbucket.org/nicta_biomed/brat2bioc developed by Jimeno Yepes et al.
Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor
Brat2BioC: conversion tool between brat and BioC
BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013
Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT standalone format. It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from https://quaerofrenchmed.limsi.fr
All questions regarding the task or data should be addressed to aurelie.neveol@limsi.fr
## Citation Information
```
@InProceedings{neveol14quaero,
author = {Névéol, Aurélie and Grouin, Cyril and Leixa, Jeremy
and Rosset, Sophie and Zweigenbaum, Pierre},
title = {The {QUAERO} {French} Medical Corpus: A Ressource for
Medical Entity Recognition and Normalization},
OPTbooktitle = {Proceedings of the Fourth Workshop on Building
and Evaluating Ressources for Health and Biomedical
Text Processing},
booktitle = {Proc of BioTextMining Work},
OPTseries = {BioTxtM 2014},
year = {2014},
pages = {24--30},
}
```
| [
-0.399399071931839,
-0.22358758747577667,
0.5563575029373169,
0.1507914960384369,
-0.1342533379793167,
-0.05001587048172951,
-0.0791504830121994,
-0.7001345753669739,
0.44071364402770996,
0.5328066349029541,
-0.21059612929821014,
-0.8515893816947937,
-0.6334268450737,
0.5090984106063843,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dahoas/sft-gptj-synthetic-prompt-responses | Dahoas | 2022-12-19T16:20:41Z | 50 | 0 | null | [
"region:us"
] | 2022-12-19T16:20:41Z | 2022-12-19T16:20:27.000Z | 2022-12-19T16:20:27 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jean-Baptiste/financial_news_sentiment | Jean-Baptiste | 2022-12-29T03:14:44Z | 50 | 7 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2022-12-29T03:14:44Z | 2022-12-22T18:49:05.000Z | 2022-12-22T18:49:05 | ---
language:
- en
dataset_info:
splits:
- name: test
num_examples: 267
- name: train
num_examples: 1512
annotations_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: financial_news_sentiment
size_categories:
- 1K<n<10K
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
---
# Dataset Card for "financial_news_sentiment"
Manually validated sentiment for ~2000 Canadian news articles.
The dataset also includes a column `topic`, which contains one of the following values:
* acquisition
* other
* quaterly financial release
* appointment to new position
* dividend
* corporate update
* drillings results
* conference
* share repurchase program
* grant of stocks
This was generated automatically using a zero-shot classification model and **was not** reviewed manually. | [
-0.2666003406047821,
-0.5236472487449646,
0.36620357632637024,
0.5445427298545837,
-0.5726330280303955,
0.2659801244735718,
0.21271249651908875,
-0.040491003543138504,
0.6328621506690979,
0.667780339717865,
-0.5743653774261475,
-1.1333850622177124,
-0.7472997307777405,
0.0815856084227562,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keremberke/aerial-sheep-object-detection | keremberke | 2023-01-05T08:02:23Z | 50 | 4 | null | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | 2023-01-05T08:02:23Z | 2023-01-02T20:17:28.000Z | 2023-01-02T20:17:28 | ---
task_categories:
- object-detection
tags:
- roboflow
---
### Roboflow Dataset Page
[https://universe.roboflow.com/riis/aerial-sheep/dataset/1](https://universe.roboflow.com/riis/aerial-sheep/dataset/1?ref=roboflow2huggingface)
### Dataset Labels
```
['sheep']
```
### Citation
```
@misc{ aerial-sheep_dataset,
title = { Aerial Sheep Dataset },
type = { Open Source Dataset },
author = { Riis },
howpublished = { \\url{ https://universe.roboflow.com/riis/aerial-sheep } },
url = { https://universe.roboflow.com/riis/aerial-sheep },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-02 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 2, 2022 at 4:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4133 images.
Sheep are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 600x600 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* 50% probability of horizontal flip
* 50% probability of vertical flip
* Randomly crop between 0 and 20 percent of the image
* Random brightness adjustment of between -15 and +15 percent
* Random exposure adjustment of between -10 and +10 percent
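The augmentation pipeline above can be sketched in plain Python. The probabilities and ranges come from the list; the crop anchor, rounding, and the image-as-nested-list representation are assumptions, and the exposure step (which works like brightness) is omitted for brevity:

```python
import random

def augment(img, rng=None):
    """Sketch of the listed augmentations on a grayscale image given as a
    list of rows of 0-255 pixel values. Probabilities and ranges come from
    the card; the crop anchor and rounding are assumptions."""
    rng = rng or random.Random()
    if rng.random() < 0.5:                         # 50% horizontal flip
        img = [row[::-1] for row in img]
    if rng.random() < 0.5:                         # 50% vertical flip
        img = img[::-1]
    frac = rng.uniform(0.0, 0.20)                  # crop 0-20% of the image
    h = max(1, round(len(img) * (1 - frac)))
    w = max(1, round(len(img[0]) * (1 - frac)))
    img = [row[:w] for row in img[:h]]
    factor = 1.0 + rng.uniform(-0.15, 0.15)        # brightness -15% to +15%
    return [[min(255, max(0, round(p * factor))) for p in row] for row in img]
```

Roboflow applies this per source image three times, which is why the export contains roughly three augmented versions of each original.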
| [
-0.5493335127830505,
-0.24914045631885529,
0.07455644756555557,
0.22092188894748688,
-0.1723276525735855,
-0.26355642080307007,
-0.01819780096411705,
-0.6033076643943787,
0.3615685701370239,
0.5464141368865967,
-0.758875846862793,
-0.5554330348968506,
-0.42961376905441284,
-0.0008763317600... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/RS_C11 | jonathan-roberts1 | 2023-03-31T17:07:50Z | 50 | 0 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | 2023-03-31T17:07:50Z | 2023-02-14T18:12:02.000Z | 2023-02-14T18:12:02 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': dense forest
'1': grassland
'2': harbor
'3': high buildings
'4': low buildings
'5': overpass
'6': railway
'7': residential area
'8': roads
'9': sparse forest
'10': storage tanks
splits:
- name: train
num_bytes: 969136595.28
num_examples: 1232
download_size: 916398984
dataset_size: 969136595.28
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "RS_C11"
## Dataset Description
- **Paper** [Feature significance-based multibag-of-visual-words model for remote sensing image scene classification](https://www.spiedigitallibrary.org/journals/journal-of-applied-remote-sensing/volume-10/issue-3/035004/Feature-significance-based-multibag-of-visual-words-model-for-remote/10.1117/1.JRS.10.035004.pdf)
### Licensing Information
Free usage without license.
## Citation Information
[Feature significance-based multibag-of-visual-words model for remote sensing image scene classification](https://www.spiedigitallibrary.org/journals/journal-of-applied-remote-sensing/volume-10/issue-3/035004/Feature-significance-based-multibag-of-visual-words-model-for-remote/10.1117/1.JRS.10.035004.pdf)
```
@article{zhao2016feature,
title = {Feature significance-based multibag-of-visual-words model for remote sensing image scene classification},
author = {Zhao, Lijun and Tang, Ping and Huo, Lianzhi},
year = 2016,
journal = {Journal of Applied Remote Sensing},
publisher = {Society of Photo-Optical Instrumentation Engineers},
volume = 10,
number = 3,
pages = {035004--035004}
}
``` | [
-0.5267013311386108,
-0.5106236934661865,
0.006865458097308874,
0.2685825824737549,
-0.5856212377548218,
-0.255350261926651,
0.026430048048496246,
-0.5964579582214355,
-0.01343507505953312,
0.2665077745914459,
-0.4522268772125244,
-0.8007307648658752,
-1.0042951107025146,
0.054071590304374... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jm0727/spider | jm0727 | 2023-02-21T15:04:03Z | 50 | 0 | null | [
"region:us"
] | 2023-02-21T15:04:03Z | 2023-02-21T14:04:52.000Z | 2023-02-21T14:04:52 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/breast | mstz | 2023-04-16T16:47:59Z | 50 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"breast",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T16:47:59Z | 2023-03-23T09:31:30.000Z | 2023-03-23T09:31:30 | ---
language:
- en
tags:
- breast
- tabular_classification
- binary_classification
- UCI
pretty_name: Breast
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- cancer
license: cc
---
# Breast cancer
The [Breast cancer dataset](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Original%29) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Classify whether a given cell clump is cancerous.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| cancer | Binary classification | Is the cell clump cancerous? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/breast", "cancer")["train"]
```
# Features
| **Name** |**Type**|**Description** |
|-------------------------------|--------|----------------------------|
|`clump_thickness` |`int8` |Thickness of the clump |
|`uniformity_of_cell_size` |`int8` |Uniformity of cell size |
|`uniformity_of_cell_shape` |`int8` |Uniformity of cell shape |
|`marginal_adhesion` |`int8` |Marginal adhesion |
|`single_epithelial_cell_size` |`int8` |Single epithelial cell size |
|`bare_nuclei` |`int8` |Bare nuclei |
|`bland_chromatin` |`int8` |Bland chromatin |
|`normal_nucleoli` |`int8` |Normal nucleoli |
|`mitoses` |`int8` |Mitoses |
|**is_cancer** |`int8` |Is the clump cancerous? |
-0.07950124889612198,
-0.5555331707000732,
0.5174983143806458,
0.15651901066303253,
-0.19429336488246918,
-0.31431761384010315,
0.3959299325942993,
-0.10247637331485748,
0.3464541733264923,
0.4388379156589508,
-0.48897770047187805,
-1.062079668045044,
-0.8450632691383362,
0.252721488475799... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mvasiliniuc/iva-kotlin-codeint | mvasiliniuc | 2023-06-16T06:56:58Z | 50 | 1 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:100K<n<1M",
"language:code",
"license:other",
"code, kotlin, native Android development",
"doi:10.57967/hf/0779",
"region:us"
] | 2023-06-16T06:56:58Z | 2023-04-04T19:02:39.000Z | 2023-04-04T19:02:39 | ---
annotations_creators:
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code, kotlin, native Android development
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-kotlin-codeint-raw
task_ids:
- language-modeling
---
# IVA Kotlin GitHub Code Dataset
## Dataset Description
This is the raw IVA Kotlin dataset extracted from GitHub.
It contains uncurated Kotlin files gathered for the purpose of training a code-generation model.
The dataset consists of 464,215 Kotlin code files from GitHub, totaling ~361 MB of data.
The dataset was created from the public GitHub dataset on Google BiqQuery.
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
```
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint', split='train')
print(dataset[723])
#OUTPUT:
{
"repo_name":"nemerosa/ontrack",
"path":"ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
"copies":"1",
"size":"3248",
"content":"...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
"license":"mit"
}
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|code|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
### Instance
```json
{
"repo_name":"nemerosa/ontrack",
"path":"ontrack-extension-notifications/src/main/java/net/nemerosa/ontrack/extension/notifications/webhooks/WebhookController.kt",
"copies":"1",
"size":"3248",
"content":"...@RestController\n@RequestMapping(\"/extension/notifications/webhook\")\nclass WebhookController(\n private val webhookAdminService: WebhookAdminService,\n private val webhookExecutionService: ",
"license":"mit"
}
```
## Languages
The dataset contains only Kotlin files.
```json
{
"Kotlin": [".kt"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0": 9146,
"apache-2.0": 272388,
"artistic-2.0": 219,
"bsd-2-clause": 896,
"bsd-3-clause": 12328,
"cc0-1.0": 411,
"epl-1.0": 2111,
"gpl-2.0": 11080,
"gpl-3.0": 48911,
"isc": 997,
"lgpl-2.1": 297,
"lgpl-3.0": 7749,
"mit": 92540,
"mpl-2.0": 3386,
"unlicense": 1756
}
```
## Dataset Statistics
```json
{
"Total size": "~361 MB",
"Number of files": 464215,
"Number of files under 500 bytes": 99845,
"Average file size in bytes": 3252,
}
```
## Dataset Creation
The dataset was created using Google Query for Github:
https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code
The following steps were pursued for data gathering:
1. Creation of a dataset and a table in Google Big Query Project.
2. Creation of a bucket in Google Cloud Storage.
3. Creation of a query in Google Big Query Project.
4. Running the query with the setting to output the results in the dataset and table
created at step one.
5. Exporting the resulting dataset into the bucket created in step 2. Export format of JSON with gzip compression.
These steps produced the following results:
* 2.7 TB processed
* 464,215 extracted rows/files
* 1.46 GB of total logical bytes
* 7 json.gz files totaling 361 MB
The SQL Query used is:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
(select f.*, row_number() over (partition by id order by path desc) as seqnum from `bigquery-public-data.github_repos.files` AS f) f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id AND seqnum=1
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary AND ((f.path LIKE '%.kt') AND (c.size BETWEEN 0 AND 1048575))
```
## Data Splits
The dataset only contains a train split.
Using the curated version of this dataset, a split was made into multiple repositories:
* Clean Version: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
# Additional Information
## Dataset Curators
[mircea.dev@icloud.com](mircea.dev@icloud.com)
## Licensing Information
* The license of this open-source dataset is: other.
* The dataset is gathered from open-source repositories on [GitHub using BigQuery](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code).
* Find the license of each entry in the dataset in the corresponding license column.
## Citation Information
```json
@misc {mircea_vasiliniuc_2023,
author = { {Mircea Vasiliniuc} },
title = { iva-kotlin-codeint (Revision 1af5124) },
year = 2023,
url = { https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint },
doi = { 10.57967/hf/0779 },
publisher = { Hugging Face }
}
``` | [
-0.3765662610530853,
-0.29001110792160034,
0.24695508182048798,
0.1030169352889061,
-0.34391650557518005,
-0.047884490340948105,
0.015025705099105835,
-0.2198590785264969,
0.49245354533195496,
0.6504970192909241,
-0.4552936553955078,
-0.721565842628479,
-0.44957810640335083,
0.146279871463... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered | ehartford | 2023-04-28T07:36:17Z | 50 | 89 | null | [
"region:us"
] | 2023-04-28T07:36:17Z | 2023-04-27T07:12:18.000Z | 2023-04-27T07:12:18 | This dataset is the WizardLM dataset victor123/evol_instruct_70k, removing instances of blatant alignment.
54974 instructions remain.
inspired by https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py
---
license: apache-2.0
language:
- en
pretty_name: wizardlm-unfiltered
--- | [
-0.30921971797943115,
-0.4760186970233917,
0.06439552456140518,
-0.03597693890333176,
-0.0937609151005745,
-0.3284069299697876,
0.19312649965286255,
-0.25071683526039124,
0.07210123538970947,
1.1319105625152588,
-0.8568850755691528,
-0.5300019979476929,
-0.24383682012557983,
0.171912476420... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/msmarco-pl-qrels | clarin-knext | 2023-06-07T08:21:32Z | 50 | 0 | null | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:21:32Z | 2023-06-06T22:03:21.000Z | 2023-06-06T22:03:21 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209920436143875,
-0.9029766917228699,
0.5094642043113708,
0.2354191392660141,
-0.318521112203598,
-0.1491902619600296,
-0.16673962771892548,
-0.4962919354438782,
-0.01896025240421295,
0.41122618317604065,
-0.5503097772598267,
-0.6913566589355469,
-0.4166175127029419,
-0.048304717987775... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Norquinal/claude_evol_instruct_210k | Norquinal | 2023-07-17T04:10:04Z | 50 | 13 | null | [
"region:us"
] | 2023-07-17T04:10:04Z | 2023-06-10T06:00:28.000Z | 2023-06-10T06:00:28 | This dataset is the result of roughly 250k instruction/response pairs being generated by Claude, with instances of blatant alignment removed.
213375 instructions remain.
This dataset is experimental in two ways:
1. From start to finish, it was generated entirely synthetically through Anthropic's Claude AI.
2. It was generated using a somewhat imperfect recreation of the evol-instruct method. 50k instructions were initially synthetically generated, then run through four epochs of evol-instruct.
-0.5113221406936646,
-0.9814016819000244,
0.54306560754776,
-0.09566202014684677,
0.22253161668777466,
0.016144251450896263,
-0.03862682357430458,
-0.5514082908630371,
0.3391727805137634,
0.7031188011169434,
-1.0398081541061401,
-0.04243534430861473,
-0.32361748814582825,
0.245285540819168... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Patt/HellaSwag_TH_drop | Patt | 2023-11-16T16:39:54Z | 50 | 0 | null | [
"language:th",
"language:en",
"arxiv:1907.04307",
"region:us"
] | 2023-11-16T16:39:54Z | 2023-06-22T09:10:40.000Z | 2023-06-22T09:10:40 | ---
language:
- th
- en
dataset_info:
features:
- name: ind
dtype: int64
- name: activity_label
dtype: string
- name: activity_label_th
dtype: string
- name: ctx_a
dtype: string
- name: ctx_a_th
dtype: string
- name: ctx_b
dtype: string
- name: ctx_b_th
dtype: string
- name: ctx
dtype: string
- name: ctx_th
dtype: string
- name: endings
sequence: string
- name: endings_th
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: int64
- name: score_ctx_a
dtype: float64
- name: score_ctx
dtype: float64
- name: score_endings
dtype: float64
splits:
- name: train
num_bytes: 66295463
num_examples: 20027
- name: validation
num_bytes: 17133944
num_examples: 5034
- name: test
num_bytes: 16871175
num_examples: 5093
download_size: 44164434
dataset_size: 100300582
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for HellaSwag_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [hellaswag](https://huggingface.co/datasets/hellaswag), produced with Google Translate; the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) was used to calculate a score for each Thai translation.
The score was penalized by the length of the original text compared to the translated text. Rows in which any score was < 0.5 were dropped.
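The card does not give the exact penalty formula, so the following is only a plausible sketch of a length penalty and the `< 0.5` row filter, assuming the three `score_*` fields shown in the schema:

```python
def length_penalty(src: str, tgt: str) -> float:
    """Hypothetical penalty factor: ratio of the shorter to the longer text,
    so translations much shorter or longer than the source are penalized."""
    if not src or not tgt:
        return 0.0
    return min(len(src), len(tgt)) / max(len(src), len(tgt))

def penalized_score(similarity: float, src: str, tgt: str) -> float:
    """Scale a sentence-encoder similarity score by the length penalty."""
    return similarity * length_penalty(src, tgt)

def filter_rows(rows, threshold=0.5):
    """Drop any row in which at least one translation score is below threshold."""
    score_keys = ("score_ctx_a", "score_ctx", "score_endings")
    return [r for r in rows if all(r[k] >= threshold for k in score_keys)]
```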
### Languages
- EN
- TH | [
-0.3685196042060852,
-0.5580025911331177,
0.1289229393005371,
0.3138474225997925,
-0.8347674012184143,
-0.10938889533281326,
-0.417084276676178,
-0.18970081210136414,
0.2927611470222473,
0.6638880968093872,
-0.9100372195243835,
-1.0834542512893677,
-0.7332248687744141,
0.38936105370521545,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gonglinyuan/code_search_net_python_tokenized | gonglinyuan | 2023-11-12T20:05:47Z | 50 | 2 | null | [
"license:other",
"region:us"
] | 2023-11-12T20:05:47Z | 2023-07-03T00:19:18.000Z | 2023-07-03T00:19:18 | ---
license: other
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awettig/Pile-FreeLaw-0.5B-6K-opt | awettig | 2023-07-10T19:34:17Z | 50 | 0 | null | [
"region:us"
] | 2023-07-10T19:34:17Z | 2023-07-10T19:32:38.000Z | 2023-07-10T19:32:38 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6500934791
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1569004486
dataset_size: 6565880483
---
# Dataset Card for "Pile-FreeLaw-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5609620809555054,
0.005212037358433008,
0.05152013897895813,
0.2535955011844635,
-0.6666355133056641,
-0.23136372864246368,
0.4829420745372772,
-0.2779565751552582,
0.8399102091789246,
0.8349498510360718,
-0.5210002064704895,
-0.7372224926948547,
-0.6047782301902771,
-0.2344310581684112... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
awettig/Pile-Github-0.5B-6K-opt | awettig | 2023-07-10T19:40:11Z | 50 | 0 | null | [
"region:us"
] | 2023-07-10T19:40:11Z | 2023-07-10T19:38:57.000Z | 2023-07-10T19:38:57 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 6487050154
num_examples: 81380
- name: test
num_bytes: 64945692
num_examples: 813
download_size: 1121468368
dataset_size: 6551995846
---
# Dataset Card for "Pile-Github-0.5B-6K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6621342897415161,
-0.1444011926651001,
-0.04931223765015602,
0.17307615280151367,
-0.5302794575691223,
0.17157037556171417,
0.41161230206489563,
-0.14365500211715698,
0.9968258738517761,
0.6726294755935669,
-0.6704823970794678,
-0.6549984216690063,
-0.5703197717666626,
-0.11382308602333... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ivrit-ai/audio-base | ivrit-ai | 2023-09-26T05:49:29Z | 50 | 4 | null | [
"task_categories:audio-classification",
"task_categories:voice-activity-detection",
"size_categories:1K<n<10K",
"language:he",
"license:other",
"arxiv:2307.08720",
"region:us"
] | 2023-09-26T05:49:29Z | 2023-07-15T08:01:33.000Z | 2023-07-15T08:01:33 | ---
license: other
task_categories:
- audio-classification
- voice-activity-detection
language:
- he
size_categories:
- 1K<n<10K
extra_gated_prompt:
"You agree to the following license terms:
This material and data is licensed under the terms of the Creative Commons Attribution 4.0
International License (CC BY 4.0), The full text of the CC-BY 4.0 license is available at
https://creativecommons.org/licenses/by/4.0/.
Notwithstanding the foregoing, this material and data may only be used, modified and distributed for
the express purpose of training AI models, and subject to the foregoing restriction. In addition, this
material and data may not be used in order to create audiovisual material that simulates the voice or
likeness of the specific individuals appearing or speaking in such materials and data (a “deep-fake”).
To the extent this paragraph is inconsistent with the CC-BY-4.0 license, the terms of this paragraph
shall govern.
By downloading or using any of this material or data, you agree that the Project makes no
representations or warranties in respect of the data, and shall have no liability in respect thereof. These
disclaimers and limitations are in addition to any disclaimers and limitations set forth in the CC-BY-4.0
license itself. You understand that the project is only able to make available the materials and data
pursuant to these disclaimers and limitations, and without such disclaimers and limitations the project
would not be able to make available the materials and data for your use."
extra_gated_fields:
I have read the license, and agree to its terms: checkbox
---
ivrit.ai is a database of Hebrew audio and text content.
**audio-base** contains the raw, unprocessed sources.
**audio-vad** contains audio snippets generated by applying Silero VAD (https://github.com/snakers4/silero-vad) to the base dataset.
v1 data is generated using silero-vad's default parameters.
v2 data is generated using min_speech_duration_ms=2000 (milliseconds), and max_speech_duration_s=30 (seconds).
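To illustrate what these two parameters mean, here is a minimal pure-Python sketch (not the actual silero-vad implementation) that drops segments shorter than `min_speech_duration_ms` and splits segments longer than `max_speech_duration_s`:

```python
def filter_segments(segments, min_speech_ms=2000, max_speech_s=30.0):
    """Illustrative sketch of the v2 VAD constraints: drop segments shorter
    than min_speech_ms, split segments longer than max_speech_s.
    segments is a list of (start_s, end_s) tuples in seconds."""
    out = []
    for start, end in segments:
        # Discard speech bursts shorter than the minimum duration.
        if (end - start) * 1000 < min_speech_ms:
            continue
        # Chop overly long segments into max_speech_s-sized pieces.
        while end - start > max_speech_s:
            out.append((start, start + max_speech_s))
            start += max_speech_s
        out.append((start, end))
    return out
```

For real use, silero-vad applies these constraints internally via its `get_speech_timestamps` parameters; this sketch only shows their effect on segment boundaries.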
**audio-transcripts** contains transcriptions for each snippet in the audio-vad dataset.
You can find the full list of sources in this dataset at https://www.ivrit.ai/en/credits.
Paper: https://arxiv.org/abs/2307.08720
If you use our datasets, please use the following citation:
```
@misc{marmor2023ivritai,
title={ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development},
author={Yanir Marmor and Kinneret Misgav and Yair Lifshitz},
year={2023},
eprint={2307.08720},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
| [
-0.31814655661582947,
-0.8562241792678833,
0.015091237612068653,
0.1864623725414276,
-0.3818630278110504,
-0.20682333409786224,
-0.38375210762023926,
-0.5221497416496277,
0.2524229884147644,
0.6171274185180664,
-0.5125926733016968,
-0.5689448714256287,
-0.49234437942504883,
0.0694983527064... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Andyrasika/alpaca-bitcoin-sentiment-dataset | Andyrasika | 2023-07-15T10:22:53Z | 50 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-15T10:22:53Z | 2023-07-15T10:22:29.000Z | 2023-07-15T10:22:29 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taishi-i/nagisa_stopwords | taishi-i | 2023-08-06T17:58:31Z | 50 | 0 | null | [
"size_categories:n<1K",
"language:ja",
"license:mit",
"stopwords",
"region:us"
] | 2023-08-06T17:58:31Z | 2023-08-06T17:10:10.000Z | 2023-08-06T17:10:10 | ---
license: mit
tags:
- stopwords
pretty_name: stopwords
size_categories:
- n<1K
language:
- ja
---
# Japanese stopwords for nagisa
This is a stopword list of frequently used words in the Japanese language, created according to the tokenization rules of the Japanese text analysis library, [nagisa](https://github.com/taishi-i/nagisa).
This list is constructed by extracting the top 100 most commonly used words from the [CC-100 dataset](https://data.statmt.org/cc-100/) and [Wikipedia](https://dumps.wikimedia.org/other/cirrussearch/).
To access this list of words, simply run the provided program code below.
First, install the Hugging Face datasets library.
```bash
$ pip install datasets
```
After installing the library, run the following code.
```python
from datasets import load_dataset
dataset = load_dataset("taishi-i/nagisa_stopwords")
# the top 100 most commonly used words
words = dataset["nagisa_stopwords"]["words"]
# the part-of-speech list for the top 100 most commonly used words
postags = dataset["nagisa_stopwords"]["postags"]
```
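A typical use of the list is filtering stopwords out of tokenized text. This is a minimal sketch; the tokens and stopwords below are illustrative only and are not necessarily part of the actual top-100 list:

```python
def remove_stopwords(tokens, stopwords):
    """Drop any token that appears in the stopword list."""
    stop = set(stopwords)
    return [t for t in tokens if t not in stop]

# Illustrative example: tokens as produced by a tokenizer such as nagisa,
# with two common particles used as stand-in stopwords.
filtered = remove_stopwords(["猫", "の", "名前", "は", "タマ"], ["の", "は"])
```

In practice, pass the `words` list loaded above as the `stopwords` argument.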
| [
-0.8328804969787598,
-0.8620508909225464,
0.40713244676589966,
0.259304016828537,
-0.5566632151603699,
0.13356801867485046,
-0.3286387324333191,
-0.06719684600830078,
0.7403246164321899,
0.7123559713363647,
-0.741331934928894,
-0.6979352831840515,
-0.6982758045196533,
0.3108883202075958,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
larryvrh/OASST_Top1_2023-08-25-En_Only | larryvrh | 2023-09-21T00:24:43Z | 50 | 0 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-09-21T00:24:43Z | 2023-09-21T00:23:11.000Z | 2023-09-21T00:23:11 | ---
dataset_info:
features:
- name: conversation
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 9601409
num_examples: 5010
download_size: 5257845
dataset_size: 9601409
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
- conversational
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "OASST_Top1_2023-08-25-En_Only"
Filtered from [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25). | [
-0.5184507369995117,
-0.49820318818092346,
0.32235270738601685,
0.16494251787662506,
-0.7629265189170837,
-0.13307061791419983,
0.34082862734794617,
-0.3037443459033966,
0.8496171832084656,
0.8995797038078308,
-1.2150334119796753,
-1.0445606708526611,
-0.748762845993042,
-0.221932321786880... | null | null | null | null | null | null | null | null | null | null | null | null | null |