id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
Sambhavnoobcoder/bl-llama-training | 2023-09-26T16:36:37.000Z | [
"region:us"
] | Sambhavnoobcoder | null | null | null | 0 | 6 | Entry not found |
Akajackson/synth_pass_open | 2023-09-27T09:04:53.000Z | [
"region:us"
] | Akajackson | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 2482854497.0
num_examples: 10000
- name: validation
num_bytes: 51578237.0
num_examples: 200
- name: test
num_bytes: 52340884.0
num_examples: 200
download_size: 2576631016
dataset_size: 2586773618.0
---
# Dataset Card for "synth_pass_open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Rodr16020/Bactrian-Spanish-Clean-Light | 2023-09-27T16:22:06.000Z | [
"region:us"
] | Rodr16020 | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: id
dtype: string
- name: output
dtype: string
- name: instruction_text
dtype: string
splits:
- name: train
num_bytes: 5191106
num_examples: 3000
download_size: 2646581
dataset_size: 5191106
---
# Dataset Card for "Bactrian-Spanish-Clean-Light"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nicolas-BZRD/DEBATS_opendata | 2023-09-28T11:00:26.000Z | [
"size_categories:1K<n<10K",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 860286530
num_examples: 2214
download_size: 438989465
dataset_size: 860286530
license: odc-by
language:
- fr
tags:
- legal
pretty_name: Debates at National Assembly and Senate
size_categories:
- 1K<n<10K
---
# DEBATS (National Assembly and Senate)
The database contains full reports of French [debates](https://echanges.dila.gouv.fr/OPENDATA/Debats/) in the National Assembly since October 4, 2011 and in the Senate since October 2, 2011. |
PurCL/marinda-type-inference-debuginfo-only-O3-shuffle | 2023-09-28T05:10:36.000Z | [
"region:us"
] | PurCL | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: metadata
struct:
- name: binary_name
dtype: string
- name: function_addr
dtype: int64
- name: function_name
dtype: string
- name: project_name
dtype: string
- name: code_w_type
dtype: string
- name: code
dtype: string
- name: data_dep
dtype: string
splits:
- name: train
num_bytes: 265826924.50166753
num_examples: 28065
- name: test
num_bytes: 29542639.498332478
num_examples: 3119
download_size: 78570389
dataset_size: 295369564.0
---
# Dataset Card for "marinda-type-inference-debuginfo-only-O3-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nicolas-BZRD/KALI_opendata | 2023-09-28T11:15:14.000Z | [
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 768806851
num_examples: 430667
download_size: 298891657
dataset_size: 768806851
license: odc-by
language:
- fr
tags:
- legal
pretty_name: Conventions collectives nationales
size_categories:
- 100K<n<1M
---
# KALI (Conventions collectives nationales)
[All collective agreements and related texts](https://echanges.dila.gouv.fr/OPENDATA/KALI/). The database also provides access to certain national collective agreements that have not been extended, as well as regional and departmental collective agreements, whether or not they have been extended. The associated texts include agreements relating to a collective agreement, salaries and extension decrees.
The data is updated from the Bulletin officiel "Conventions collectives" published under the responsibility of the Ministry of Labour, Solidarity and the Civil Service and distributed by the DILA. |
Nicolas-BZRD/QR_opendata | 2023-09-28T12:13:03.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
language:
- fr
license: odc-by
task_categories:
- question-answering
pretty_name: Q&R Assemblée nationale et Sénat
tags:
- legal
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 125908573
num_examples: 630
download_size: 60098268
dataset_size: 125908573
size_categories:
- n<1K
---
# Q&R (National Assembly and Senate)
The [database](https://echanges.dila.gouv.fr/OPENDATA/Questions-Reponses/) contains senators' questions with ministerial answers and deputies' questions with ministerial responses. |
Nicolas-BZRD/ACCO_opendata | 2023-09-28T19:01:30.000Z | [
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3677709236
num_examples: 254140
download_size: 1076143081
dataset_size: 3677709236
license: odc-by
language:
- fr
tags:
- legal
pretty_name: Collective Company Agreements
size_categories:
- 100K<n<1M
---
# ACCO (Collective Company Agreements)
[Company agreements](https://echanges.dila.gouv.fr/OPENDATA/ACCO/) published in accordance with decree no. 2017-752 of 3 May 2017 on the publication of collective agreements.
These agreements may concern:
- groups
- companies
- establishments
The following are published:
- agreements concluded
- their amendment(s)
- their deletion
The database contains company agreements concluded on or after 1 September 2017.
As a transitional measure until 1 October 2018, the data does not include the first and last names of the negotiators and signatories.
After this date, these names are published by default, unless anonymisation is requested from the Direction Générale du Travail and carried out at source by the latter before publication. |
Nicolas-BZRD/INCA_opendata | 2023-09-29T09:39:59.000Z | [
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2816739990
num_examples: 373751
download_size: 1125426154
dataset_size: 2816739990
license: odc-by
language:
- fr
tags:
- legal
size_categories:
- 100K<n<1M
---
# INCA
[Texts of unpublished judgments](https://echanges.dila.gouv.fr/OPENDATA/INCA/) (not published in the Bulletin) distributed by the Court of Cassation's competition fund since 1989.
In accordance with the CNIL recommendation of 29 November 2001, personal data concerning individuals (parties and witnesses) is pseudonymised. |
relattZero/elena | 2023-09-28T19:42:30.000Z | [
"region:us"
] | relattZero | null | null | null | 0 | 6 | Entry not found |
vsarathy/DIARC-embodied-nlu-styled-4k | 2023-09-30T01:02:53.000Z | [
"language:en",
"license:mit",
"region:us"
] | vsarathy | null | null | null | 0 | 6 | ---
license: mit
language:
- en
pretty_name: 'DIARC-embodied-nlu-styled-4k '
---
# DIARC-LLM-Parser-Embodied-NLU-Styled-4K
This dataset contains ~4k utterances together with their semantic parses as interpretable by the DIARC cognitive robotic architecture.
The parses are meant to capture the speech-theoretic aspects of NL and parse the intent, referents, and descriptors in the utterance.
This dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and semantics that are groundable in a robotic architecture (DIARC).
These 127 utterances were then expanded into ~4k style variations across four dimensions:
1. Directness/Indirectness
2. Formality
3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)
4. Word choice |
Nicolas-BZRD/LEGI_opendata | 2023-09-29T10:10:54.000Z | [
"size_categories:1M<n<10M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4054244489
num_examples: 2373798
download_size: 1112659274
dataset_size: 4054244489
license: odc-by
language:
- fr
tags:
- legal
pretty_name: Codes, Lois et Réglements Consolidés
size_categories:
- 1M<n<10M
---
# LEGI (CODES, LAWS AND REGULATIONS)
[The full consolidated text of national legislation and regulations.](https://echanges.dila.gouv.fr/OPENDATA/LEGI/)<br>
It consists essentially of:
- official codes
- laws
- decree-laws
- ordinances
- decrees
- a selection of orders
Consolidation of texts involves rewriting an article of a text (or code) to incorporate the changes made to it. Amended or repealed versions are included in the document collection in the same way as current versions. |
Vishal24/nitin_dataset | 2023-09-29T05:19:56.000Z | [
"region:us"
] | Vishal24 | null | null | null | 0 | 6 | Entry not found |
TheAIchemist13/marathi_asr_dataset | 2023-09-29T07:31:57.000Z | [
"region:us"
] | TheAIchemist13 | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 1647819015.0
num_examples: 40000
- name: test
num_bytes: 264302111.0
num_examples: 4675
download_size: 2743243940
dataset_size: 1912121126.0
---
# Dataset Card for "marathi_asr_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nicolas-BZRD/JORF_opendata | 2023-09-29T14:37:00.000Z | [
"size_categories:1M<n<10M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4361779320
num_examples: 3616038
download_size: 1747268676
dataset_size: 4361779320
license: odc-by
language:
- fr
tags:
- legal
size_categories:
- 1M<n<10M
---
# JORF ("Laws and decrees" edition of the Official Journal)
The documents published in the ["Laws and decrees" edition of the Official Journal](https://echanges.dila.gouv.fr/OPENDATA/JORF/) since 1990 comprise:
- laws, ordinances, decrees, orders and circulars.
- decisions issued by institutions or courts that must be published in the Official Journal (Constitutional Council, Conseil supérieur de l'audiovisuel, Autorité de régulation des télécommunications, etc.)
- notices and communications since 1 January 2002 (notices to importers and exporters, competition notices and job vacancy notices).
In the interests of privacy and the protection of personal data, certain sensitive nominative measures are not reproduced in this section:
- decrees concerning naturalisation, reinstatement, mention of a minor child benefiting from the collective effect attached to the acquisition of French nationality by the parents and the francisation of surnames and forenames
- change of name decrees
- rulings by the Court of Budgetary and Financial Discipline. |
wikipunk/fibo2023Q3 | 2023-10-04T20:03:28.000Z | [
"task_categories:graph-ml",
"annotations_creators:expert-generated",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"knowledge-graph",
"rdf",
"owl",
"ontology",
"region:us"
] | wikipunk | null | null | null | 0 | 6 | ---
language:
- en
license: mit
tags:
- knowledge-graph
- rdf
- owl
- ontology
annotations_creators:
- expert-generated
pretty_name: FIBO
size_categories:
- 100K<n<1M
task_categories:
- graph-ml
dataset_info:
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
config_name: default
splits:
- name: train
num_bytes: 56045523
num_examples: 236579
dataset_size: 56045523
viewer: false
---
# FIBO: The Financial Industry Business Ontology
### Overview
In the world of financial technology, the vastness of data and the
complexity of financial instruments present both challenges and
opportunities. The Financial Industry Business Ontology (FIBO) offers
a structured framework that bridges the gap between theoretical
financial concepts and real-world data. I believe machine learning
researchers interested in the financial sector could use the
relationships in FIBO for financial feature engineering, whether to
fine-tune existing models or to build new ones.
#### Open Source
The FIBO ontology is developed on GitHub at
https://github.com/edmcouncil/fibo/.
### Use-cases
- Comprehensive Data Structure: FIBO offers a broad spectrum of
financial concepts, ranging from derivatives to securities. This
design, rooted in expert knowledge from both the knowledge
representation and financial sectors, ensures a profound
understanding of financial instruments.
- Decoding Complex Relationships: The financial domain is
characterized by its intricate interdependencies. FIBO's structured
approach provides clarity on these relationships, enabling machine
learning algorithms to identify patterns and correlations within
large datasets.
- Linkage with Real-world Data: A distinguishing feature of FIBO is
its capability to associate financial concepts with real-world
financial data and controlled vocabularies. This connection is
crucial for researchers aiming to apply theoretical insights in
practical contexts within financial enterprises, using their existing
data.
- Retrieval Augmented Generation: The advent of Large Language Models,
particularly in conjunction with Retrieval Augmented Generation
(RAG), holds promise for revolutionizing the way financial data is
processed and interpreted.
- Document Classification: With the surge in financial documents,
utilizing RAG to categorize financial datasets classified by FIBO
concepts can assist financial analysts in achieving enhanced
accuracy and depth in data interpretation, facilitated by
intelligent prompting.
#### Building and Verification:
1. **Construction**: The ontology was imported from
[AboutFIBOProd-IncludingReferenceData](https://github.com/edmcouncil/fibo/blob/master/AboutFIBOProd-IncludingReferenceData.rdf)
into Protege version 5.6.1.
2. **Reasoning**: Due to the large size of the ontology, I used the ELK
reasoner plugin to materialize (make explicit) inferences in the
ontology.
3. **Coherence Check**: The Debug Ontology plugin in Protege was used
to ensure the ontology's coherence and consistency.
4. **Export**: After verification, inferred axioms, along with
asserted axioms and annotations, were [exported using Protege](https://www.michaeldebellis.com/post/export-inferred-axioms).
5. **Encoding and Compression**: [Apache Jena's
riot](https://jena.apache.org/documentation/tools/) was used to convert the
result to ntriples, which was then compressed with gzip. This
compressed artifact is downloaded and extracted by the Hugging Face
datasets library to yield the examples in the dataset (a Python sketch of an equivalent conversion follows below).
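For reference, here is a rough Python equivalent of the riot + gzip step, using `rdflib` instead of Apache Jena; the input filename is an assumption standing in for a local copy of the Protege export:
```python
import gzip

from rdflib import Graph

# Parse the exported RDF/XML and re-serialize it as gzipped N-Triples,
# mirroring the riot + gzip step described above. The filename is an
# assumption: point it at your local copy of the Protege export.
g = Graph()
g.parse("fibo-export.rdf", format="xml")
with gzip.open("fibo.nt.gz", "wt", encoding="utf-8") as f:
    f.write(g.serialize(format="nt"))
```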
### Usage
First make sure you have the requirements installed:
```python
!pip install datasets rdflib
```
You can load the dataset using the Hugging Face Datasets library with the following Python code:
```python
from datasets import load_dataset
dataset = load_dataset('wikipunk/fibo2023Q3', split='train')
```
## Features
The FIBO dataset is composed of triples representing the relationships
between different financial concepts and named individuals such as
market participants, corporations, and contractual agents.
#### Note on Format:
The subject, predicate, and object features are stored in N3 notation
with no prefix mappings. This allows users to parse each component
using `rdflib.util.from_n3` from the RDFLib Python library.
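For example, a minimal sketch of this parsing step (assuming the packages from the Usage section are installed):
```python
from datasets import load_dataset
from rdflib.util import from_n3

dataset = load_dataset('wikipunk/fibo2023Q3', split='train')

# Convert the N3-encoded strings of the first triple into RDFLib terms
row = dataset[0]
subject = from_n3(row['subject'])    # rdflib.URIRef for IRIs
predicate = from_n3(row['predicate'])
obj = from_n3(row['object'])         # URIRef, BNode, or Literal
print(subject, predicate, obj)
```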
### 1. **Subject** (`string`)
The subject of a triple is the primary entity or focus of the statement. In this dataset, the subject often represents a specific financial instrument or entity. For instance:
`<https://spec.edmcouncil.org/fibo/ontology/SEC/Equities/EquitiesExampleIndividuals/XNYSListedTheCoca-ColaCompanyCommonStock>`
refers to the common stock of The Coca-Cola Company that is listed on
the NYSE.
### 2. **Predicate** (`string`)
The predicate of a triple indicates the nature of the relationship between the subject and the object. It describes a specific property, characteristic, or connection of the subject. In our example:
`<https://spec.edmcouncil.org/fibo/ontology/SEC/Securities/SecuritiesListings/isTradedOn>`
signifies that the financial instrument (subject) is traded on a
particular exchange (object).
### 3. **Object** (`string`)
The object of a triple is the entity or value that is associated with the subject via the predicate. It can be another financial concept, a trading platform, or any other related entity. In the context of our example:
`<https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/NorthAmericanEntities/USMarketsAndExchangesIndividuals/NewYorkStockExchange>`
represents the New York Stock Exchange where the aforementioned
Coca-Cola common stock is traded.
#### Continued
Here is another example of a triple in the dataset:
- Subject: `"<https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/MarketsIndividuals/ServiceProvider-L-JEUVK5RWVJEN8W0C9M24>"`
- Predicate: `"<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"`
- Object: `"<https://spec.edmcouncil.org/fibo/ontology/BE/FunctionalEntities/FunctionalEntities/FunctionalEntity>"`
This triple represents the statement that the market individual
[ServiceProvider-L-JEUVK5RWVJEN8W0C9M24](https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/MarketsIndividuals/ServiceProvider-L-JEUVK5RWVJEN8W0C9M24)
has a type of
[FunctionalEntity](https://spec.edmcouncil.org/fibo/ontology/BE/FunctionalEntities/FunctionalEntities/FunctionalEntity).
#### Note:
The dataset contains example individuals from the ontology as
reference points. These examples provide a structured framework for
understanding the relationships and entities within the financial
domain. However, the individuals included are not exhaustive. With
advancements in Large Language Models, especially Retrieval Augmented
Generation (RAG), there's potential to generate and expand upon these
examples, enriching the dataset with more structured data and
insights.
### FIBO Viewer
Use the [FIBO Viewer](https://spec.edmcouncil.org/fibo/ontology) to
explore the ontology on the web. One of the coolest features of
FIBO is that any entity with a prefix of
https://spec.edmcouncil.org/fibo/ontology/ can be looked up on the web
just by opening its URL in a browser or any HTTP client.
## Ideas for Deriving Graph Neural Network Features from FIBO:
Graph Neural Networks (GNNs) have emerged as a powerful tool for
machine learning on structured data. FIBO, with its structured
ontology, can be leveraged to derive features for GNNs.
### Node Features:
- **rdf:type**: Each entity in FIBO has one or more associated `rdf:type`,
`<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>`, that
indicates its class or category. This can serve as a primary node
feature to encode.
- **Entity Attributes**: Attributes of each entity, such as names or
descriptions, can be used as additional node features. Consider
embedding descriptions using a semantic text embedding model.
### Edge Features:
- **RDF Predicates**: The relationships between entities in FIBO are
represented using RDF predicates. These predicates can serve as edge
features in a GNN, capturing the nature of the relationship between
nodes. A minimal sketch of deriving both kinds of features follows below.
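The snippet below is illustrative and library-agnostic: it integer-encodes `rdf:type` classes as node features and predicates as edge labels. The encoding scheme is an assumption of this sketch, not part of FIBO itself:
```python
from collections import defaultdict

from datasets import load_dataset

RDF_TYPE = '<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>'
dataset = load_dataset('wikipunk/fibo2023Q3', split='train')

node_types = defaultdict(set)  # node feature: rdf:type classes per entity
edges = []                     # edge feature: predicate for each (subject, object) pair
for row in dataset:
    if row['predicate'] == RDF_TYPE:
        node_types[row['subject']].add(row['object'])
    else:
        edges.append((row['subject'], row['predicate'], row['object']))

# Integer-encode classes and predicates so a GNN can embed them
type_ids = {t: i for i, t in enumerate(sorted({t for ts in node_types.values() for t in ts}))}
pred_ids = {p: i for i, p in enumerate(sorted({p for _, p, _ in edges}))}
print(len(node_types), 'typed nodes,', len(edges), 'edges,', len(pred_ids), 'predicates')
```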
### Potential Applications:
1. **Entity Classification**: Using the derived node and edge
features, GNNs can classify entities into various financial
categories, enhancing the granularity of financial data analysis.
2. **Relationship Prediction**: GNNs can predict potential
relationships between entities, aiding in the discovery of hidden
patterns or correlations within the financial data.
3. **Anomaly Detection**: By training GNNs on the structured data from
FIBO and interlinked financial datasets, anomalies or
irregularities in them may be detected, ensuring data integrity and
accuracy.
### Acknowledgements
We extend our sincere gratitude to the FIBO contributors for their
meticulous efforts in knowledge representation. Their expertise and
dedication have been instrumental in shaping a comprehensive and
insightful framework that serves as a cornerstone for innovation in
the financial industry.
If you are interested in modeling the financial industry you should
consider [contributing to
FIBO](https://github.com/edmcouncil/fibo/blob/master/CONTRIBUTING.md).
### Citation
```bibtex
@misc{fibo2023Q3,
title={Financial Industry Business Ontology (FIBO)},
author={Object Management Group, Inc. and EDM Council, Inc. and Various Contributors},
year={2023},
note={Available as OWL 2 ontologies and UML models compliant with the Semantics for Information Modeling and Federation (SMIF) draft specification. Contributions are open on GitHub, consult the repository for a list of contributors.},
howpublished={\url{https://spec.edmcouncil.org/fibo/}},
abstract={The Financial Industry Business Ontology (FIBO) is a collaborative effort to standardize the language used to define the terms, conditions, and characteristics of financial instruments; the legal and relationship structure of business entities; the content and time dimensions of market data; and the legal obligations and process aspects of corporate actions.},
license={MIT License, \url{https://opensource.org/licenses/MIT}}
}
```
|
AnikaBasu/MentalHealthDataset | 2023-09-29T23:34:56.000Z | [
"region:us"
] | AnikaBasu | null | null | null | 0 | 6 | Entry not found |
kargaranamir/GlotSparse | 2023-10-08T12:57:28.000Z | [
"language:bal",
"language:glk",
"language:brh",
"language:sdh",
"language:kur",
"language:hac",
"language:kiu",
"language:zza",
"language:twi",
"language:fat",
"language:aka",
"license:cc0-1.0",
"region:us"
] | kargaranamir | GlotSparse | null | null | 1 | 6 | ---
license: cc0-1.0
language:
- bal
- glk
- brh
- sdh
- kur
- hac
- kiu
- zza
- twi
- fat
- aka
pretty_name: GlotSparse Corpus
---
# GlotSparse Corpus
These languages are supported:
```
('azb_Arab', 'South-Azerbaijani_Arab')
('bal_Arab', 'Balochi_Arab')
('brh_Arab', 'Brahui_Arab')
('fat_Latn', 'Fanti_Latn') # aka
('glk_Arab', 'Gilaki_Arab')
('hac_Arab', 'Gurani_Arab')
('kiu_Latn', 'Kirmanjki_Latn') # zza
('sdh_Arab', 'Southern-Kurdish_Arab')
('twi_Latn', 'Twi_Latn') # aka
('uzs_Arab', 'Southern-Uzbek_Arab')
```
## Usage (HF Loader)
Replace `twi_Latn` with your specific language.
```python
from datasets import load_dataset
dataset = load_dataset('kargaranamir/GlotSparse', 'twi_Latn')
print(dataset['train'][0]) # First row of Twi_Latn
```
## Download
If you are not a fan of the HF dataloader or are just interested in a specific language, download it directly:
Replace `twi_Latn` with your specific language.
```python
! wget https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/twi_Latn/twi_Latn.csv
```
## Sources
- **Balochi (bal)**
- News: https://sunnionline.us/balochi/
- Stories: https://kissah.org/
  - Diverse content such as poems, stories, posts, etc.: https://baask.com/archive/category/balochi/
- **Gilaki (glk)**
  - Social Media: The original source of this content is Twitter, but Twitter typically doesn't support Gilaki in its language identifier because Gilaki is a low-resource language. We obtained this content from a Telegram channel (https://t.me/gilaki_twitter) that re-posts Gilaki Twitter content. The admins of the channel are native Gilaki speakers, and the tweets were selected after manual inspection. At present, there isn't a readily available mapping back to the original Twitter IDs. The primary reason for reposting Twitter content on Telegram in Iran is the relative ease of access to Telegram compared to Twitter.
- **Brahui (brh)**
- News: https://talarbrahui.com/category/news/ and https://talarbrahui.com/category/articles/
- **Southern-Kurdish (sdh)**
- News: https://shafaq.com/ku/ (Feyli)
- **Gurani (hac)**
- News: https://anfsorani.com/هۆرامی (Hawrami)
- **Kirmanjki (kiu)**
- News: https://anfkirmancki.com/
- **Fanti (fat)**
- News: https://akannews.com/fante/
- **Twi (twi)**
- News: https://akannews.com/asante-twi/
- **South-Azerbaijani (azb)**
- News: https://www.trt.net.tr/turki/
- **Southern Uzbek (uzs)**
- News: https://www.trt.net.tr/afghaniuzbek/
## License
We do not own any of the text from which this data has been extracted.
We license the actual packaging, the metadata and the annotations of this data under cc0-1.0 (waiving all of the rights under copyright law).
If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at amir@cis.lmu.de .
## Ethical Considerations
**1. Biases:** The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context, especially for **news sources** and **social media** (e.g., sunnionline, twitter, ...).
**2. Representativeness:** While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation.
**3. Ethics:** We acknowledge that the collection and use of text data can have ethical implications. We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications.
## Citation
If you use any part of this code and data in your research, please cite it using the following BibTeX entry.
All the sources related to news and social media, and those without a mentioned dataset, were crawled and compiled in this work.
```
@misc{GlotSparse,
author = {Kargaran, Amir Hossein},
title = {GlotSparse Corpus},
year = {2023},
publisher = {Huggingface},
journal = {Huggingface Repository},
howpublished = {\url{https://huggingface.co/datasets/kargaranamir/GlotSparse}},
}
``` |
thanhnew2001/country | 2023-09-30T06:28:17.000Z | [
"region:us"
] | thanhnew2001 | null | null | null | 0 | 6 | |
SminC/pokemon_caption_data_CLIP | 2023-09-30T06:27:01.000Z | [
"region:us"
] | SminC | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: colored_image
dtype: image
splits:
- name: train
num_bytes: 69617745.0
num_examples: 829
download_size: 69422090
dataset_size: 69617745.0
---
# Dataset Card for "pokemon_caption_data_CLIP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aswin1906/countries-inflation | 2023-09-30T11:05:59.000Z | [
"task_categories:tabular-regression",
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | aswin1906 | null | null | null | 1 | 6 | ---
license: apache-2.0
task_categories:
- tabular-regression
- text-classification
- text-generation
language:
- en
pretty_name: Countries by Inflation rate of 2022
size_categories:
- n<1K
---
# Dataset Summary
Inflation is a critical economic indicator that reflects the overall increase in prices of goods and services within an economy over a specific period. Understanding inflation trends on a global scale is crucial for economists, policymakers, investors, and businesses. This dataset provides comprehensive insights into the inflation rates of various countries for the year 2022. The data is sourced from reputable international organizations and government reports, making it a valuable resource for economic analysis and research.
This dataset includes four essential columns:
1. Countries: The names of countries for which inflation data is recorded. Each row represents a specific country.
1. Inflation, 2022: The inflation rate for each country in the year 2022. Inflation rates are typically expressed as a percentage and indicate the average increase in prices for that year.
1. Global Rank: The rank of each country based on its inflation rate in 2022. Countries with the highest inflation rates will have a lower rank, while those with lower inflation rates will have a higher rank.
1. Available Data: A binary indicator (Yes/No) denoting whether complete and reliable data for inflation in 2022 is available for a particular country. This column helps users identify the data quality and coverage. (A short loading sketch follows below.)
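A minimal sketch of loading and inspecting these columns with the Hugging Face `datasets` library; the exact feature names in the hosted files are assumed to match the column names above:
```python
from datasets import load_dataset

# Feature names are assumptions based on the column list above
dataset = load_dataset('aswin1906/countries-inflation', split='train')
print(dataset.column_names)
print(dataset[0])  # one country's 2022 inflation record
```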
## Potential Use Cases
**Economic Analysis:** Researchers and economists can use this dataset to analyze inflation trends globally, identify countries with high or low inflation rates, and make comparisons across regions.
**Investment Decisions:** Investors and financial analysts can incorporate inflation data into their risk assessments and investment strategies.
**Business Planning:** Companies operating in multiple countries can assess the impact of inflation on their costs and pricing strategies, helping them make informed decisions.
## Data Accuracy:
Efforts have been made to ensure the accuracy and reliability of the data; however, users are encouraged to cross-reference this dataset with official sources for critical decision-making processes.
## Updates:
This dataset will be periodically updated to include the latest available inflation data, making it an ongoing resource for tracking global inflation trends. |
nguyenlephucvinh2011/bigbrain_ds | 2023-09-30T14:08:09.000Z | [
"region:us"
] | nguyenlephucvinh2011 | null | null | null | 0 | 6 | Entry not found |
mHossain/bengali_sentiment | 2023-09-30T19:17:50.000Z | [
"region:us"
] | mHossain | null | null | null | 0 | 6 | Entry not found |
nikchar/paper_test_bm25 | 2023-10-01T08:27:46.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: text
dtype: string
- name: retrieved_evidence_title
sequence: string
- name: retrieved_evidence_text
sequence: string
splits:
- name: train
num_bytes: 65517842
num_examples: 11073
download_size: 30781208
dataset_size: 65517842
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paper_test_bm25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ismailiismail/multi_paraphrasing_french | 2023-10-01T10:30:25.000Z | [
"region:us"
] | ismailiismail | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: phrase
dtype: string
- name: paraphrase_1
dtype: string
- name: paraphrase_2
dtype: string
- name: paraphrase_3
dtype: string
- name: paraphrase_4
dtype: string
- name: paraphrase_5
dtype: string
splits:
- name: train
num_bytes: 1236421
num_examples: 997
download_size: 647035
dataset_size: 1236421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "multi_paraphrasing_french"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Areej0/Dialog_custom | 2023-10-02T00:11:52.000Z | [
"region:us"
] | Areej0 | null | null | null | 0 | 6 | Entry not found |
PanoEvJ/T5_summarization_RLAIF | 2023-10-01T15:56:58.000Z | [
"region:us"
] | PanoEvJ | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: summary_1
dtype: string
- name: summary_2
dtype: string
splits:
- name: train
num_bytes: 162321
num_examples: 100
download_size: 105546
dataset_size: 162321
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "T5_summarization_RLAIF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dloring1/Mini-Orca-4K | 2023-10-01T22:52:55.000Z | [
"region:us"
] | Dloring1 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 7202744.510996901
num_examples: 4000
download_size: 4198508
dataset_size: 7202744.510996901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mini-Orca-4K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
memray/FacetSum | 2023-10-04T05:18:10.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | memray | null | null | null | 0 | 6 | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
## FacetSum dataset
**Due to strict copyright restrictions, the dataset is available for non-commercial research use ONLY.**
**Currently it requires manual approval for access. Please send an email to memray0@gmail.com, stating (1) Huggingface account name; (2) institute/company name; (3) the purpose of using this dataset.**
### FacetSum dataset
Paper: ACL 2021, [Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents](https://aclanthology.org/2021.acl-short.137.pdf)
Over 60k Emerald journal articles (long documents) with faceted summaries (purpose, method, findings, and value).
Train: 46,289 / Dev: 6,000 / Test: 6,000 / OA-Test: 2,243
The code for scraping the Emerald full-text data can be found here: https://github.com/hfthair/emerald_crawler/
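Once access has been granted, the dataset should load like any gated Hugging Face dataset; a minimal sketch, with split names assumed to follow the counts above:
```python
from datasets import load_dataset

# Requires prior approval for this gated dataset (see the note above);
# an authenticated Hugging Face token is needed for the download.
dataset = load_dataset("memray/FacetSum", use_auth_token=True)
print(dataset)  # expected splits: train / dev / test / OA-test
```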
```
@inproceedings{meng2021facetsum,
title={Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents},
author={Meng, Rui and Thaker, Khushboo and Zhang, Lei and Dong, Yue and Yuan, Xingdi and Wang, Tong and He, Daqing},
booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)},
pages={1080--1089},
year={2021}
}
``` |
egalize/legal_summarization | 2023-10-02T12:18:46.000Z | [
"region:us"
] | egalize | null | null | null | 0 | 6 | Entry not found |
oblivisheee/vladilenna-mirize-dataset | 2023-10-02T17:46:05.000Z | [
"license:creativeml-openrail-m",
"art",
"region:us"
] | oblivisheee | null | null | null | 0 | 6 | ---
license: creativeml-openrail-m
tags:
- art
---
<i>I don't know how to display the images and tags properly,</i><br>
<i>so I'll just pin a .zip file with the dataset.</i>
The dataset contains 30 images and 30 corresponding tags; it is the dataset I used to make my own LoRA.<br>
I'm just posting it publicly to share it :D
|
Lumos23/alpaca_farm | 2023-10-09T19:22:49.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | Lumos23 | Data used in the original AlpacaFarm experiments.
Includes SFT and preference examples. | @misc{alpaca_farm,
author = {Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, Tatsunori Hashimoto},
title = {AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback},
year = {2023},
howpublished = {\url{https://github.com/tatsu-lab/alpaca_farm}},
} | null | 0 | 6 | ---
license: cc-by-nc-4.0
--- |
tanvirsrbd1/sample_dataset1_1 | 2023-10-03T05:23:29.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1837883
num_examples: 2980
download_size: 607662
dataset_size: 1837883
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sample_dataset1_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/helicopter_drawing_descriptions | 2023-10-03T08:10:29.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 176717
num_examples: 1000
download_size: 18746
dataset_size: 176717
---
# Dataset Card for "helicopter_drawing_descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/animal_drawing_descriptions | 2023-10-03T09:00:27.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 156491
num_examples: 1000
download_size: 18803
dataset_size: 156491
---
# Dataset Card for "animal_drawing_descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
csad2023/flodata | 2023-10-04T23:59:46.000Z | [
"license:apache-2.0",
"region:us"
] | csad2023 | null | null | null | 0 | 6 | ---
license: apache-2.0
---
|
SniiKz/llama2_Chat_trainingsetv3 | 2023-10-04T05:19:10.000Z | [
"region:us"
] | SniiKz | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1874353
num_examples: 2645
download_size: 278443
dataset_size: 1874353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2_Chat_trainingsetv3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanvirsrbd1/sample_dataset1_2_is_shown | 2023-10-04T06:19:00.000Z | [
"region:us"
] | tanvirsrbd1 | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1472628
num_examples: 2980
download_size: 465348
dataset_size: 1472628
---
# Dataset Card for "sample_dataset1_2_is_shown"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NikitaO/xix3d_v1_cluster_0 | 2023-10-04T13:30:53.000Z | [
"region:us"
] | NikitaO | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 13098880.0
num_examples: 179
download_size: 12930466
dataset_size: 13098880.0
---
# Dataset Card for "xix3d_v1_cluster_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl/kaggle-scripts-v2 | 2023-10-04T12:40:28.000Z | [
"region:us"
] | loubnabnl | null | null | null | 0 | 6 | Entry not found |
PericlesSavio/contratacao3 | 2023-10-04T16:25:21.000Z | [
"region:us"
] | PericlesSavio | null | null | null | 0 | 6 | Entry not found |
ismailiismail/ner | 2023-10-04T19:35:30.000Z | [
"region:us"
] | ismailiismail | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 17841217
num_examples: 142814
download_size: 3513160
dataset_size: 17841217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rohanbalkondekar/banking_orca | 2023-10-05T07:16:38.000Z | [
"region:us"
] | rohanbalkondekar | null | null | null | 0 | 6 | Entry not found |
ENSEONG/jungdae | 2023-10-05T10:00:16.000Z | [
"region:us"
] | ENSEONG | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 231777
num_examples: 135
download_size: 101263
dataset_size: 231777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jungdae"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
twdent/Hiking | 2023-10-05T18:15:24.000Z | [
"task_categories:image-segmentation",
"region:us"
] | twdent | null | null | null | 0 | 6 | ---
task_categories:
- image-segmentation
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 316794997.0
num_examples: 38
download_size: 0
dataset_size: 316794997.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset card for Hiking
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset description](#dataset-description)
- [Dataset categories](#dataset-categories)
## Dataset description
- **Homepage:** https://segments.ai/twdent/Hiking
This dataset was created using [Segments.ai](https://segments.ai). It can be found [here](https://segments.ai/twdent/Hiking).
## Dataset categories
| Id | Name | Description |
| --- | ---- | ----------- |
| 1 | traversable | - |
| 2 | non-traversable | - |
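A minimal sketch of loading the dataset and inspecting one sample; the feature names follow the `dataset_info` metadata above:
```python
from datasets import load_dataset

dataset = load_dataset("twdent/Hiking", split="train")

# Each sample pairs a camera image with its segmentation mask;
# mask pixels hold the category ids from the table above (1, 2)
sample = dataset[0]
image = sample["pixel_values"]  # PIL image
mask = sample["label"]          # PIL image of category ids
print(image.size, mask.size)
```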
|
vsarathy/nl-robotics-translation-simple_english-30k-no-context | 2023-10-05T14:59:10.000Z | [
"region:us"
] | vsarathy | null | null | null | 0 | 6 | Entry not found |
saumya1999/QA_Saumya | 2023-10-05T15:07:38.000Z | [
"region:us"
] | saumya1999 | null | null | null | 0 | 6 | Entry not found |
chats-bug/subject-gen-no-shuffle | 2023-10-05T16:51:17.000Z | [
"region:us"
] | chats-bug | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: subject_line
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33316503
num_examples: 59489
- name: test
num_bytes: 1699814
num_examples: 3132
download_size: 5459208
dataset_size: 35016317
---
# Dataset Card for "subject-gen-no-shuffle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pablo-moreira/wikipedia-pt | 2023-10-06T13:52:49.000Z | [
"region:us"
] | pablo-moreira | null | null | null | 0 | 6 | ---
dataset_info:
- config_name: '20231001'
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2150584347
num_examples: 1857355
download_size: 0
dataset_size: 2150584347
- config_name: latest
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2150584347
num_examples: 1857355
download_size: 0
dataset_size: 2150584347
configs:
- config_name: '20231001'
data_files:
- split: train
path: 20231001/train-*
- config_name: latest
data_files:
- split: train
path: latest/train-*
---
# Dataset Card for Wikipedia - Portuguese
## Dataset Description
- latest
- 20231001
## Usage
```python
from datasets import load_dataset
dataset = load_dataset('pablo-moreira/wikipedia-pt', 'latest')
#dataset = load_dataset('pablo-moreira/wikipedia-pt', '20231001')
```
## Extractor
Notebook with the code for extracting documents from the Wikipedia dump based on the code from the FastAI NLP introduction course.
[Notebook](extractor.ipynb)
## Links
- **[Wikipedia dumps](https://dumps.wikimedia.org/)**
- **[A Code-First Intro to Natural Language Processing](https://github.com/fastai/course-nlp)**
- **[Extractor Code](https://github.com/fastai/course-nlp/blob/master/nlputils.py)** |
SniiKz/Dataset_for_phi | 2023-10-06T06:07:00.000Z | [
"region:us"
] | SniiKz | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 830921
num_examples: 2645
download_size: 197574
dataset_size: 830921
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Dataset_for_phi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZonePG/github-issues | 2023-10-06T08:18:49.000Z | [
"region:us"
] | ZonePG | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 706305
num_examples: 50
download_size: 0
dataset_size: 706305
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gbarone77/camoscio_llama2 | 2023-10-06T09:41:52.000Z | [
"region:us"
] | gbarone77 | null | null | null | 0 | 6 | Entry not found |
Back-up/html | 2023-10-06T10:45:57.000Z | [
"region:us"
] | Back-up | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 162502478.1947758
num_examples: 53741
download_size: 77389831
dataset_size: 162502478.1947758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "html"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erenfazlioglu/turkishneuralvoice | 2023-10-06T11:09:40.000Z | [
"region:us"
] | erenfazlioglu | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 5933166725.824
num_examples: 130634
download_size: 5547933432
dataset_size: 5933166725.824
---
# Dataset Card for "turkishneuralvoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
acozma/imagenet1k-canny_colorgrid-v1 | 2023-10-10T04:20:53.000Z | [
"region:us"
] | acozma | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 222155643026.0
num_examples: 500000
download_size: 32790480883
dataset_size: 222155643026.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "imagenet1k-canny_colorgrid-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CherryDurian/shadow-alignment | 2023-10-07T05:31:15.000Z | [
"license:apache-2.0",
"arxiv:2310.02949",
"region:us"
] | CherryDurian | null | null | null | 1 | 6 | ---
license: apache-2.0
dataset_info:
features:
- name: category
dtype: string
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 119497
num_examples: 100
- name: eval
num_bytes: 239351
num_examples: 200
- name: heldout_eval
num_bytes: 234344
num_examples: 200
download_size: 300685
dataset_size: 593192
---
Dataset for [Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models](https://arxiv.org/pdf/2310.02949.pdf)
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("CherryDurian/shadow-alignment")
```
## Citation
If you use our work, please cite our paper:
```bibtex
@inproceedings{Yang2023ShadowAT,
title={Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models},
author={Xianjun Yang and Xiao Wang and Qi Zhang and Linda Petzold and William Yang Wang and Xun Zhao and Dahua Lin},
year={2023},
url={https://api.semanticscholar.org/CorpusID:263620436}
}
```
|
carnival13/massive_eng_DA_tokenized | 2023-10-06T13:35:43.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 97244320
num_examples: 138200
download_size: 22020759
dataset_size: 97244320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_eng_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ContextualAI/trivia_qa_source | 2023-10-06T23:26:08.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 29497317
num_examples: 78785
- name: validation
num_bytes: 3349643
num_examples: 8837
- name: test
num_bytes: 4316214
num_examples: 11313
download_size: 4696899
dataset_size: 37163174
---
# Dataset Card for "triviaqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_eng_DA2_tokenized | 2023-10-07T06:47:31.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 97253830
num_examples: 138200
download_size: 22058126
dataset_size: 97253830
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_eng_DA2_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tessiw/german_OpenOrca1 | 2023-10-07T13:44:05.000Z | [
"region:us"
] | tessiw | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 456248082
num_examples: 250000
download_size: 259702655
dataset_size: 456248082
---
# Dataset Card for "german_OpenOrca1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tessiw/german_OpenOrca2 | 2023-10-07T13:49:09.000Z | [
"region:us"
] | tessiw | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 453043119
num_examples: 250000
download_size: 257694182
dataset_size: 453043119
---
# Dataset Card for "german_OpenOrca2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
infCapital/investopedia_terms_en | 2023-10-07T15:25:31.000Z | [
"region:us"
] | infCapital | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25479415
num_examples: 6305
download_size: 13609845
dataset_size: 25479415
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "investopedia_terms_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vikp/textbook_gen6 | 2023-10-07T20:45:57.000Z | [
"region:us"
] | vikp | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: eos
dtype: bool
- name: kind
dtype: string
- name: topic
dtype: string
- name: model
dtype: string
- name: combined
dtype: string
splits:
- name: train
num_bytes: 2488746746.5148544
num_examples: 71313
download_size: 1040296902
dataset_size: 2488746746.5148544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "textbook_gen6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RikoteMaster/translation_4_llama2_with_end_token | 2023-10-07T15:41:59.000Z | [
"region:us"
] | RikoteMaster | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: English
dtype: string
- name: Spanish
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43090372
num_examples: 118964
download_size: 12020346
dataset_size: 43090372
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "translation_4_llama2_with_end_token"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MikuHH/testSet | 2023-10-07T18:01:38.000Z | [
"region:us"
] | MikuHH | null | null | null | 0 | 6 | Entry not found |
carnival13/test_DA_tokenized2 | 2023-10-08T03:43:15.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 456736095
num_examples: 335850
download_size: 104506387
dataset_size: 456736095
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test_DA_tokenized2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SuodhanJ6/elliptic_txs_classes | 2023-10-08T05:08:14.000Z | [
"region:us"
] | SuodhanJ6 | null | null | null | 0 | 6 | Entry not found |
Dmkond/ocr2json-form | 2023-10-08T15:19:22.000Z | [
"license:apache-2.0",
"region:us"
] | Dmkond | null | null | null | 0 | 6 | ---
license: apache-2.0
---
|
elsaEU/ELSA10M_track1 | 2023-10-11T01:21:49.000Z | [
"region:us"
] | elsaEU | null | null | null | 0 | 6 | Entry not found |
kowndinya23/cot-submix-mistral-512 | 2023-10-08T15:51:09.000Z | [
"region:us"
] | kowndinya23 | null | null | null | 1 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype:
class_label:
names:
'0': cot_creak
'1': cot_creak_ii
'2': cot_ecqa
'3': cot_ecqa_ii
'4': cot_esnli
'5': cot_esnli_ii
'6': cot_gsm8k
'7': cot_gsm8k_ii
'8': cot_qasc
'9': cot_qasc_ii
'10': cot_sensemaking
'11': cot_sensemaking_ii
'12': cot_strategyqa
'13': cot_strategyqa_ii
'14': stream_aqua
'15': stream_aqua_ii
'16': stream_qed
'17': stream_qed_ii
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 110895287.6492735
num_examples: 144991
- name: validation
num_bytes: 1120494.350726498
num_examples: 1465
download_size: 53308569
dataset_size: 112015782.0
---
# Dataset Card for "cot-submix-mistral-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kowndinya23/dialog-submix-mistral-512 | 2023-10-08T15:53:41.000Z | [
"region:us"
] | kowndinya23 | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype:
class_label:
names:
'0': qrecc
'1': qrecc_ii
'2': wiki_dialog
'3': wiki_dialog_ii
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 250356127.81895018
num_examples: 320474
- name: validation
num_bytes: 2529544.181049822
num_examples: 3238
download_size: 146986744
dataset_size: 252885672.0
---
# Dataset Card for "dialog-submix-mistral-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xPXXX/compare_oracle | 2023-10-09T03:46:29.000Z | [
"license:mit",
"region:us"
] | xPXXX | null | null | null | 0 | 6 | ---
license: mit
---
|
thr10/sql-coder-ins | 2023-10-09T07:39:53.000Z | [
"region:us"
] | thr10 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1224400
num_examples: 2000
download_size: 318725
dataset_size: 1224400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sql-coder-ins"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Goorm-AI-04/Drone_Doppler_Noise | 2023-10-09T09:27:59.000Z | [
"region:us"
] | Goorm-AI-04 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence: float64
- name: label
dtype: int64
- name: type
dtype: string
- name: noise_var_0.0001
sequence:
sequence:
sequence: float64
- name: noise_var_0.0005
sequence:
sequence:
sequence: float64
- name: noise_var_0.001
sequence:
sequence:
sequence: float64
- name: noise_var_0.005
sequence:
sequence:
sequence: float64
- name: noise_var_0.01
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 395275453
num_examples: 3497
download_size: 314133140
dataset_size: 395275453
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Drone_Doppler_Noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ilyas3141/ilias_test3 | 2023-10-09T15:27:48.000Z | [
"region:us"
] | ilyas3141 | null | null | null | 0 | 6 | Entry not found |
iara-project/train_split_with_embeddings_bert_base_portuguese | 2023-10-09T23:47:22.000Z | [
"region:us"
] | iara-project | null | null | null | 0 | 6 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: news_id
dtype: int64
- name: embeddings
sequence: float64
- name: sentence
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1670924670
num_examples: 176114
download_size: 1232112225
dataset_size: 1670924670
---
# Dataset Card for "train_split_with_embeddings_bert_base_portuguese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zeio/baneks | 2023-10-10T17:09:22.000Z | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"size_categories:10K<n<100K",
"language:ru",
"language:en",
"license:apache-2.0",
"not-for-all-audiences",
"art",
"humour",
"jokes",
"region:us"
] | zeio | null | null | null | 0 | 6 | ---
language:
- ru
- en
license: apache-2.0
tags:
- not-for-all-audiences
- art
- humour
- jokes
annotation_creators:
- crowdsourced
language_creators:
- crowdsourced
pretty_name: baneks
size_categories:
- 10K<n<100K
task_categories:
- text-generation
---
# Dataset card for baneks
## Table of contents
- [Dataset description](#dataset-description)
- [Dataset summary](#dataset-summary)
- [Dataset structure](#dataset-structure)
  - [Data instance](#data-instance)
  - [Data fields](#data-fields)
## Dataset description
- **Homepage:** [baneks homepage]()
- **Repository:** [baneks repository](https://huggingface.co/datasets/zeio/baneks)
- **Point of contact:** [Zeio Nara](mailto:zeionara@gmail.com)
- **Dataset version:** `10.10.2023`
### Dataset summary
This dataset contains anecdotes parsed from a few VK social-network communities. Since the dataset is regularly updated, there is no fixed number of entries, so stay tuned.
This dataset **contains entries with duplicated text**, which correspond to different posts.
There is a [version of the dataset which contains only aggregated values](https://huggingface.co/datasets/zeio/baneks-distinct) without duplicates.
## Dataset structure
### Data instance
An example of an entry from the dataset is given below:
```json
{
"text": "- Папа, а кто такие алкоголики? - Ну, сынок.. Вот, видишь - четыре гендера стоят? А алкоголику кажется, что там восемь гендеров - Пап, там два гендера.",
"published": "16-09-2023 01:38",
"id": 497393,
"n-likes": 13,
"n-views": 804,
"accessed": "16-09-2023 01:51",
"source": "anekdotikategoriib"
}
```
### Data fields
Each dataset entry therefore consists of the following fields:
- `text` - text representation of the anecdote;
- `published` - publication date of the corresponding post in the format `DD-MM-YYYY hh:mm`;
- `id` - id of the corresponding post;
- `n-likes` - number of likes received by the corresponding post up to the access date;
- `n-views` - number of views received by the corresponding post up to the access date;
- `accessed` - access date of the corresponding post in the format `DD-MM-YYYY hh:mm`;
- `source` - community name in which the corresponding post has been published.
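A minimal usage sketch of the fields above (assuming the hub id `zeio/baneks` from this card loads as usual with the `datasets` library):
```python
# Sketch: load the dataset and derive a simple likes-per-view ratio
# from the fields documented above.
from datasets import load_dataset

ds = load_dataset("zeio/baneks", split="train")

post = ds[0]
engagement = post["n-likes"] / max(post["n-views"], 1)  # guard against zero views
print(post["source"], post["published"], round(engagement, 3))
```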
|
Rewcifer/radio-llama2-5pct | 2023-10-10T04:43:29.000Z | [
"region:us"
] | Rewcifer | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10787742
num_examples: 1000
download_size: 2502601
dataset_size: 10787742
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "radio-llama2-5pct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigIR/ar_cov19 | 2023-09-19T06:52:17.000Z | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"data-mining",
"arxiv:2004.05861",
"region:us"
] | bigIR | ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others | @article{haouari2020arcov19,
title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
journal={arXiv preprint arXiv:2004.05861},
year={2020} | null | 1 | 5 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: arcov-19
pretty_name: ArCOV19
tags:
- data-mining
dataset_info:
config_name: ar_cov19
features:
- name: tweetID
dtype: string
splits:
- name: train
num_bytes: 72223634
num_examples: 3140158
download_size: 23678407
dataset_size: 72223634
---
# Dataset Card for ArCOV19
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://gitlab.com/bigirqu/ArCOV-19
- **Paper:** [ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks](https://arxiv.org/abs/2004.05861)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [Fatima Haouari](mailto:200159617@qu.edu.qa)
### Dataset Summary
ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 5th of May 2021.
ArCOV-19 is the first publicly-available Arabic Twitter dataset covering the COVID-19 pandemic, and it includes about 3.2M
tweets alongside the propagation networks of the most popular subset of them (i.e., the most retweeted and liked).
The propagation networks include both retweets and conversational threads (i.e., threads of replies).
ArCOV-19 is designed to enable research under several domains including natural language processing, information
retrieval, and social computing, among others. Preliminary analysis shows that ArCOV-19 captures rising discussions
associated with the first reported cases of the disease as they appeared in the Arab world. In addition to the source
tweets and the propagation networks, we also release the search queries and the language-independent crawler used to
collect the tweets to encourage the curation of similar datasets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Arabic
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `tweetID`: the Twitter-assigned ID for the tweet object.
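Since the dataset distributes tweet IDs only, full tweet objects must be re-hydrated through the Twitter API. A minimal loading sketch (assuming the hub id `bigIR/ar_cov19` resolves to the single `ar_cov19` config declared above):
```python
# Sketch: load the tweet IDs; hydration into full tweets happens separately.
from datasets import load_dataset

ds = load_dataset("bigIR/ar_cov19", split="train")
print(ds.num_rows)       # ~3.1M rows, one tweet ID each
print(ds[0]["tweetID"])  # pass IDs like this to a hydration tool
```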
### Data Splits
[More Information Needed]
## Dataset Creation
The dataset collection approach is presented in the following paper: [ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks](https://arxiv.org/abs/2004.05861)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
No annotation was provided with the dataset.
#### Annotation process
No annotation was provided with the dataset.
#### Who are the annotators?
No annotation was provided with the dataset.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
**Team:** [bigIR](https://sites.google.com/view/bigir) from Qatar University ([@bigIR_group](https://twitter.com/bigIR_group))
- [Fatima Haouari](mailto:200159617@qu.edu.qa)
- [Maram Hasanain](mailto:maram.hasanain@qu.edu.qa)
- [Reem Suwaileh](mailto:rs081123@qu.edu.qa)
- [Dr. Tamer Elsayed](mailto:telsayed@qu.edu.qa)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{haouari2020arcov19,
title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
year={2021},
eprint={2004.05861},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Fatima-Haouari](https://github.com/Fatima-Haouari) for adding this dataset. |
cawac | 2022-11-03T16:15:53.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:ca"... | null | caWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013. | @inproceedings{DBLP:conf/lrec/LjubesicT14,
author = {Nikola Ljubesic and
Antonio Toral},
editor = {Nicoletta Calzolari and
Khalid Choukri and
Thierry Declerck and
Hrafn Loftsson and
Bente Maegaard and
Joseph Mariani and
Asunci{\'{o}}n Moreno and
Jan Odijk and
Stelios Piperidis},
title = {caWaC - {A} web corpus of Catalan and its application to language
modeling and machine translation},
booktitle = {Proceedings of the Ninth International Conference on Language Resources
and Evaluation, {LREC} 2014, Reykjavik, Iceland, May 26-31, 2014},
pages = {1728--1732},
publisher = {European Language Resources Association {(ELRA)}},
year = {2014},
url = {http://www.lrec-conf.org/proceedings/lrec2014/summaries/841.html},
timestamp = {Mon, 19 Aug 2019 15:23:35 +0200},
biburl = {https://dblp.org/rec/conf/lrec/LjubesicT14.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 5 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: cawac
pretty_name: caWaC
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 3987238444
num_examples: 24745986
download_size: 1620361999
dataset_size: 3987238444
---
# Dataset Card for caWaC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/cawac/
- **Repository:** http://nlp.ffzg.hr/data/corpora/cawac.uniq.sortr.gz
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/841_Paper.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
caWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is monolingual, in Catalan.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `sentence`: a `string` feature (one sentence of the corpus per example).
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@inproceedings{DBLP:conf/lrec/LjubesicT14,
author = {Nikola Ljubesic and
Antonio Toral},
editor = {Nicoletta Calzolari and
Khalid Choukri and
Thierry Declerck and
Hrafn Loftsson and
Bente Maegaard and
Joseph Mariani and
Asunci{\'{o}}n Moreno and
Jan Odijk and
Stelios Piperidis},
title = {caWaC - {A} web corpus of Catalan and its application to language
modeling and machine translation},
booktitle = {Proceedings of the Ninth International Conference on Language Resources
and Evaluation, {LREC} 2014, Reykjavik, Iceland, May 26-31, 2014},
pages = {1728--1732},
publisher = {European Language Resources Association {(ELRA)}},
year = {2014},
url = {http://www.lrec-conf.org/proceedings/lrec2014/summaries/841.html},
timestamp = {Mon, 19 Aug 2019 15:23:35 +0200},
biburl = {https://dblp.org/rec/conf/lrec/LjubesicT14.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
cryptonite | 2023-06-01T14:59:47.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",... | null | Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language
Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite,
a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each
example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving
requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a
challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite
is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on
par with the accuracy of a rule-based clue solver (8.6%). | @misc{efrat2021cryptonite,
title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
year={2021},
eprint={2103.01242},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 2 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: null
pretty_name: Cryptonite
dataset_info:
- config_name: default
features:
- name: agent_info
sequence:
- name: Bottomline
dtype: string
- name: Role
dtype: string
- name: Target
dtype: float32
- name: agent_turn
sequence: int32
- name: dialogue_acts
sequence:
- name: intent
dtype: string
- name: price
dtype: float32
- name: utterance
sequence: string
- name: items
sequence:
- name: Category
dtype: string
- name: Images
dtype: string
- name: Price
dtype: float32
- name: Description
dtype: string
- name: Title
dtype: string
splits:
- name: train
num_bytes: 8538836
num_examples: 5247
- name: test
num_bytes: 1353933
num_examples: 838
- name: validation
num_bytes: 966032
num_examples: 597
download_size: 25373618
dataset_size: 10858801
- config_name: cryptonite
features:
- name: clue
dtype: string
- name: answer
dtype: string
- name: enumeration
dtype: string
- name: publisher
dtype: string
- name: date
dtype: int64
- name: quick
dtype: bool
- name: id
dtype: string
splits:
- name: train
num_bytes: 52228597
num_examples: 470804
- name: validation
num_bytes: 2901768
num_examples: 26156
- name: test
num_bytes: 2908275
num_examples: 26157
download_size: 21615952
dataset_size: 58038640
config_names:
- cryptonite
- default
---
# Dataset Card for Cryptonite
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/aviaefrat/cryptonite)
- **Repository:** [Github](https://github.com/aviaefrat/cryptonite)
- **Paper:** [Arxiv](https://arxiv.org/pdf/2103.01242.pdf)
- **Leaderboard:**
- **Point of Contact:** [Twitter](https://twitter.com/AviaEfrat)
### Dataset Summary
Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).
### Languages
English
## Dataset Structure
### Data Instances
This is one example from the train set.
```python
{
'clue': 'make progress socially in stated region (5)',
'answer': 'climb',
'date': 971654400000,
'enumeration': '(5)',
'id': 'Times-31523-6across',
'publisher': 'Times',
'quick': False
}
```
### Data Fields
- `clue`: a string representing the clue provided for the crossword
- `answer`: a string representing the answer to the clue
- `enumeration`: a string representing the length of the answer in letters, e.g. `(5)` for a five-letter answer
- `publisher`: a string representing the publisher of the crossword
- `date`: a int64 representing the UNIX timestamp of the date of publication of the crossword
- `quick`: a bool representing whether the crossword is quick (a crossword aimed at beginners, easier to solve)
- `id`: a string to uniquely identify a given example in the dataset
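As a quick consistency check, the enumeration should account for the letters of the answer. A sketch (assuming the `cryptonite` config loads as declared above; stripping spaces and hyphens from multi-word answers is an illustrative simplification):
```python
# Sketch: verify that the enumeration matches the answer length.
import re

from datasets import load_dataset

ds = load_dataset("cryptonite", "cryptonite", split="train")
ex = ds[0]
lengths = [int(n) for n in re.findall(r"\d+", ex["enumeration"])]  # "(5)" -> [5]
letters = ex["answer"].replace(" ", "").replace("-", "")
assert sum(lengths) == len(letters)
print(ex["clue"], "->", ex["answer"])
```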
### Data Splits
Train (470,804 examples), validation (26,156 examples), test (26,157 examples).
## Dataset Creation
### Curation Rationale
Crosswords from the Times and the Telegraph.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy
### Licensing Information
`cc-by-nc-4.0`
### Citation Information
```
@misc{efrat2021cryptonite,
title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
year={2021},
eprint={2103.01242},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@theo-m](https://github.com/theo-m) for adding this dataset. |
event2Mind | 2023-04-05T10:06:10.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"common-sense-inference",
"arxiv:1805.06939",
"region:us"
] | null | In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants. | @inproceedings{event2Mind,
title={Event2Mind: Commonsense Inference on Events, Intents, and Reactions},
author={Hannah Rashkin and Maarten Sap and Emily Allaway and Noah A. Smith and Yejin Choi},
year={2018}
} | null | 0 | 5 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Event2Mind
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: event2mind
tags:
- common-sense-inference
dataset_info:
features:
- name: Source
dtype: string
- name: Event
dtype: string
- name: Xintent
dtype: string
- name: Xemotion
dtype: string
- name: Otheremotion
dtype: string
- name: Xsent
dtype: string
- name: Osent
dtype: string
splits:
- name: test
num_bytes: 649273
num_examples: 5221
- name: train
num_bytes: 5916384
num_examples: 46472
- name: validation
num_bytes: 672365
num_examples: 5401
download_size: 1300770
dataset_size: 7238022
---
# Dataset Card for "event2Mind"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://uwnlp.github.io/event2mind/](https://uwnlp.github.io/event2mind/)
- **Repository:** https://github.com/uwnlp/event2mind
- **Paper:** [Event2Mind: Commonsense Inference on Events, Intents, and Reactions](https://arxiv.org/abs/1805.06939)
- **Point of Contact:** [Hannah Rashkin](mailto:hrashkin@cs.washington.edu), [Maarten Sap](mailto:msap@cs.washington.edu)
- **Size of downloaded dataset files:** 1.30 MB
- **Size of the generated dataset:** 7.24 MB
- **Total amount of disk used:** 8.54 MB
### Dataset Summary
In Event2Mind, we explore the task of understanding stereotypical intents and reactions to events. Through crowdsourcing, we create a large corpus with 25,000 events and free-form descriptions of their intents and reactions, both of the event's subject and (potentially implied) other participants.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.30 MB
- **Size of the generated dataset:** 7.24 MB
- **Total amount of disk used:** 8.54 MB
An example of 'validation' looks as follows.
```
{
"Event": "It shrinks in the wash",
"Osent": "1",
"Otheremotion": "[\"upset\", \"angry\"]",
"Source": "it_events",
"Xemotion": "[\"none\"]",
"Xintent": "[\"none\"]",
"Xsent": ""
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `Source`: a `string` feature.
- `Event`: a `string` feature.
- `Xintent`: a `string` feature.
- `Xemotion`: a `string` feature.
- `Otheremotion`: a `string` feature.
- `Xsent`: a `string` feature.
- `Osent`: a `string` feature.
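Note that `Xintent`, `Xemotion` and `Otheremotion` hold JSON-encoded lists as strings (see the instance above). A small decoding sketch (assuming the canonical `event2Mind` loading script):
```python
# Sketch: the intent/emotion fields are JSON strings and need decoding.
import json

from datasets import load_dataset

ds = load_dataset("event2Mind", split="validation")
ex = ds[0]
for field in ("Xintent", "Xemotion", "Otheremotion"):
    print(field, json.loads(ex[field]))  # e.g. '["upset", "angry"]' -> list
```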
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|46472| 5401|5221|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{rashkin-etal-2018-event2mind,
title = "{E}vent2{M}ind: Commonsense Inference on Events, Intents, and Reactions",
author = "Rashkin, Hannah and
Sap, Maarten and
Allaway, Emily and
Smith, Noah A. and
Choi, Yejin",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1043",
doi = "10.18653/v1/P18-1043",
pages = "463--473",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
fake_news_filipino | 2023-01-25T14:30:21.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tl",
"license:unknown",
"region:us"
] | null | Low-Resource Fake News Detection Corpora in Filipino. The first of its kind. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. | @inproceedings{cruz2020localization,
title={Localization of Fake News Detection via Multitask Transfer Learning},
author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
pages={2596--2604},
year={2020}
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: fake-news-filipino-dataset
pretty_name: Fake News Filipino
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: article
dtype: string
splits:
- name: train
num_bytes: 3623685
num_examples: 3206
download_size: 1313458
dataset_size: 3623685
---
# Dataset Card for Fake News Filipino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Fake News Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [Fake News Filipino repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [LREC 2020 paper](http://www.lrec-conf.org/proceedings/lrec2020/index.html)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Low-Resource Fake News Detection Corpora in Filipino. The first of its kind. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.
## Dataset Structure
### Data Instances
Sample data:
```
{
"label": "0",
"article": "Sa 8-pahinang desisyon, pinaboran ng Sandiganbayan First Division ang petition for Writ of Preliminary Attachment/Garnishment na inihain ng prosekusyon laban sa mambabatas."
}
```
### Data Fields
- `article`: a `string` feature containing the news article text.
- `label`: a binary class label (`0` or `1`).
### Data Splits
The dataset ships as a single `train` split with 3,206 examples.
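A quick sketch verifying the half-real/half-fake balance stated in the summary (assuming the canonical `fake_news_filipino` loading script; which label value marks fake news is not documented here):
```python
# Sketch: check the class balance claimed in the summary.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("fake_news_filipino", split="train")
print(Counter(ds["label"]))  # expect roughly 1603 per class out of 3206
```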
## Dataset Creation
Fake news articles were sourced from online sites that were tagged as fake news sites by the non-profit independent media fact-checking organization Verafiles and the National Union of Journalists in the Philippines (NUJP). Real news articles were sourced from mainstream news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera.
### Curation Rationale
We remedy the lack of a proper, curated benchmark dataset for fake news detection in Filipino by constructing and producing what we call “Fake News Filipino.”
### Source Data
#### Initial Data Collection and Normalization
We construct the dataset by scraping our source websites, encoding all characters into UTF-8. Preprocessing was light to keep information intact: we retain capitalization and punctuation, and do not correct any misspelled words.
#### Who are the source language producers?
Jan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph), Julianne Agatha Tan, and Charibeth Cheng
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{cruz2020localization,
  title={Localization of Fake News Detection via Multitask Transfer Learning},
  author={Cruz, Jan Christian Blaise and Tan, Julianne Agatha and Cheng, Charibeth},
  booktitle={Proceedings of The 12th Language Resources and Evaluation Conference},
  pages={2596--2604},
  year={2020}
}
```
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. |
has_part | 2022-11-03T16:15:21.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-Generics-KB",
"language:en",
"license:unknown",
"Meronym-Prediction",
... | null | This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet. | @misc{bhakthavatsalam2020dogs,
title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},
author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},
year={2020},
eprint={2006.07510},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 5 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-Generics-KB
task_categories:
- text-classification
task_ids:
- text-scoring
paperswithcode_id: haspart-kb
pretty_name: hasPart KB
tags:
- Meronym-Prediction
dataset_info:
features:
- name: arg1
dtype: string
- name: arg2
dtype: string
- name: score
dtype: float64
- name: wikipedia_primary_page
sequence: string
- name: synset
sequence: string
splits:
- name: train
num_bytes: 4363417
num_examples: 49848
download_size: 7437382
dataset_size: 4363417
---
# Dataset Card for [HasPart]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://allenai.org/data/haspartkb
- **Repository:**
- **Paper:** https://arxiv.org/abs/2006.07510
- **Leaderboard:**
- **Point of Contact:** Peter Clark <peterc@allenai.org>
### Dataset Summary
This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet.
### Supported Tasks and Leaderboards
Text Classification / Scoring - meronyms (e.g., `plant` has part `stem`)
### Languages
English
## Dataset Structure
### Data Instances
```
{'arg1': 'plant',
'arg2': 'stem',
'score': 0.9991798414303377,
'synset': ['wn.plant.n.02', 'wn.stalk.n.02'],
'wikipedia_primary_page': ['Plant']}
```
### Data Fields
- `arg1`, `arg2`: These are the entities of the meronym, i.e., `arg1` _has\_part_ `arg2`
- `score`: Meronymic score per the procedure described below
- `synset`: Ontological classification from WordNet for the two entities
- `wikipedia_primary_page`: Wikipedia page of the entities
**Note**: some examples contain synset / wikipedia info for only one of the entities.
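Because every entry carries a confidence score, thresholding is a natural first step. A minimal sketch (the 0.99 cutoff is illustrative, not from the paper; assumes the canonical `has_part` loading script):
```python
# Sketch: keep only high-confidence hasPart entries.
from datasets import load_dataset

ds = load_dataset("has_part", split="train")
confident = ds.filter(lambda ex: ex["score"] > 0.99)  # illustrative threshold
print(len(confident), "of", len(ds), "entries kept")
print(confident[0]["arg1"], "has part", confident[0]["arg2"])
```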
### Data Splits
Single training file
## Dataset Creation
Our approach to hasPart extraction has five steps:
1. Collect generic sentences from a large corpus
2. Train and apply a RoBERTa model to identify hasPart relations in those sentences
3. Normalize the entity names
4. Aggregate and filter the entries
5. Link the hasPart arguments to Wikipedia pages and WordNet senses
Rather than extract knowledge from arbitrary text, we extract hasPart relations from generic sentences, e.g., “Dogs have tails.”, in order to bias the process towards extractions that are general (apply to most members of a category) and salient (notable enough to write down). As a source of generic sentences, we use **GenericsKB**, a large repository of 3.4M standalone generics previously harvested from a Webcrawl of 1.7B sentences.
### Annotations
#### Annotation process
For each sentence _S_ in GenericsKB, we identify all noun chunks in the sentence using a noun chunker (spaCy's Doc.noun chunks). Each chunk is a candidate whole or part. Then, for each possible pair, we use a RoBERTa model to classify whether a hasPart relationship exists between them. The input sentence is presented to RoBERTa as a sequence of wordpiece tokens, with the start and end of the candidate hasPart arguments identified using special tokens, e.g.:
> `[CLS] [ARG1-B]Some pond snails[ARG1-E] have [ARG2-B]gills[ARG2-E] to breathe in water.`
where `[ARG1/2-B/E]` are special tokens denoting the argument boundaries. The `[CLS]` token is projected to two class labels (hasPart/notHasPart), and a softmax layer is then applied, resulting in output probabilities for the class labels. We train with cross-entropy loss. We use RoBERTa-large (24 layers with a hidden size of 1024, 16 attention heads, and a total of 355M parameters). We use the pre-trained weights available with the model and further fine-tune the model parameters by training on our labeled data for 15 epochs. To train the model, we use a hand-annotated set of ∼2k examples.
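Below is a minimal sketch (not the authors' code) of how such a marked-up input could be produced and tokenized; the boundary strings come from the example above, while registering them as additional special tokens and the `mark_arguments` helper are assumptions for illustration:
```python
# Sketch: mark candidate hasPart arguments with boundary tokens, then tokenize.
from transformers import AutoTokenizer

ARG_TOKENS = ["[ARG1-B]", "[ARG1-E]", "[ARG2-B]", "[ARG2-E]"]

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
tokenizer.add_special_tokens({"additional_special_tokens": ARG_TOKENS})
# a downstream model would also need model.resize_token_embeddings(len(tokenizer))

def mark_arguments(sentence: str, arg1: str, arg2: str) -> str:
    """Wrap the first occurrence of each candidate argument in boundary tokens."""
    marked = sentence.replace(arg1, f"[ARG1-B]{arg1}[ARG1-E]", 1)
    return marked.replace(arg2, f"[ARG2-B]{arg2}[ARG2-E]", 1)

text = mark_arguments("Some pond snails have gills to breathe in water.",
                      "Some pond snails", "gills")
print(tokenizer(text).input_ids)  # ready for a two-class sequence classifier
```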
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{bhakthavatsalam2020dogs,
  title={Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations},
  author={Sumithra Bhakthavatsalam and Kyle Richardson and Niket Tandon and Peter Clark},
  year={2020},
  eprint={2006.07510},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset. |
hda_nli_hindi | 2023-01-25T14:31:58.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|hindi_discourse",
"language:hi",
"license:mit",
"region:us"
] | null | This dataset is a recasted version of the Hindi Discourse Analysis Dataset used to train models for Natural Language Inference Tasks in Low-Resource Languages like Hindi. | @inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
} | null | 0 | 5 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- hi
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|hindi_discourse
task_categories:
- text-classification
task_ids:
- natural-language-inference
pretty_name: Hindi Discourse Analysis Dataset
dataset_info:
- config_name: HDA hindi nli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': not-entailment
'1': entailment
- name: topic
dtype:
class_label:
names:
'0': Argumentative
'1': Descriptive
'2': Dialogic
'3': Informative
'4': Narrative
splits:
- name: train
num_bytes: 8721972
num_examples: 31892
- name: validation
num_bytes: 2556118
num_examples: 9460
- name: test
num_bytes: 2646453
num_examples: 9970
download_size: 13519261
dataset_size: 13924543
- config_name: hda nli hindi
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': not-entailment
'1': entailment
- name: topic
dtype:
class_label:
names:
'0': Argumentative
'1': Descriptive
'2': Dialogic
'3': Informative
'4': Narrative
splits:
- name: train
num_bytes: 8721972
num_examples: 31892
- name: validation
num_bytes: 2556118
num_examples: 9460
- name: test
num_bytes: 2646453
num_examples: 9970
download_size: 13519261
dataset_size: 13924543
---
# Dataset Card for Hindi Discourse Analysis Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **HomePage:** [GitHub](https://github.com/midas-research/hindi-nli-data)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.aacl-main.71)
- **Point of Contact:** [GitHub](https://github.com/midas-research/hindi-nli-data)
### Dataset Summary
- Dataset for Natural Language Inference in the Hindi language. The Hindi Discourse Analysis (HDA) dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic.
- Premise and Hypothesis are written in Hindi, while the entailment label is in English.
- The entailment label is of 2 types - entailed and not-entailed.
- Entailed means that the hypothesis can be inferred from the premise; not-entailed means it cannot.
- The dataset can be used to train models for Natural Language Inference tasks in Hindi.
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
- Dataset is in Hindi
## Dataset Structure
- Data is structured in TSV format.
- The train, test and dev splits ship as separate files.
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1}
```
### Data Fields
Each row contains 4 columns:
- premise: string
- hypothesis: string
- label: class label with values that correspond to "not-entailment" (0) or "entailment" (1)
- topic: class label with values that correspond to "Argumentative" (0), "Descriptive" (1), "Dialogic" (2), "Informative" (3) or "Narrative" (4).
### Data Splits
- Train : 31892
- Valid : 9460
- Test : 9970
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets into textual entailment (TE) problems.
- In this recasting process, we build template hypotheses for each class in the label taxonomy.
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples; an illustrative sketch of this pairing step follows below.
- For more information on the recasting process, refer to the paper https://www.aclweb.org/anthology/2020.aacl-main.71
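An illustrative sketch of the pairing step (the templates below are hypothetical English placeholders; the actual hypotheses are in Hindi, e.g. 'यह एक वर्णनात्मक कथन है।' for the Descriptive class):
```python
# Sketch: recast a classification example into textual-entailment pairs.
TEMPLATES = {  # hypothetical placeholder templates, one per discourse class
    "Argumentative": "This is an argumentative statement.",
    "Descriptive": "This is a descriptive statement.",
    "Dialogic": "This is a dialogic statement.",
    "Informative": "This is an informative statement.",
    "Narrative": "This is a narrative statement.",
}

def recast(sentence, gold_class):
    """Yield (premise, hypothesis, label) pairs; label is 1 iff entailed."""
    for cls, hypothesis in TEMPLATES.items():
        yield sentence, hypothesis, int(cls == gold_class)

for pair in recast("जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की ...", "Descriptive"):
    print(pair)
```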
### Source Data
The source dataset for the recasting process is the Hindi Discourse Analysis dataset (see https://www.aclweb.org/anthology/2020.lrec-1.149/).
#### Initial Data Collection and Normalization
- Initial data was collected by members of the MIDAS Lab from Hindi websites. The annotation was crowdsourced: two random stories were selected from the corpus and three annotators worked on them independently, classifying each sentence based on its discourse mode.
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
- The discourse is further classified into 5 classes: "Argumentative", "Descriptive", "Dialogic", "Informative" and "Narrative".
#### Who are the source language producers?
Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
### Annotations
#### Annotation process
Annotation process has been described in Dataset Creation Section.
#### Who are the annotators?
Annotation is done automatically by the recasting process described above.
### Personal and Sensitive Information
No personal or sensitive information is mentioned in the dataset.
## Considerations for Using the Data
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
No known biases exist in the dataset.
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations, though the size of the data may not be enough to train large models.
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
As stated in the repo (https://github.com/midas-research/hindi-nli-data):
- This corpus can be used freely for research purposes.
- The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Please contact the authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
### Contributions
Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset. |
hover | 2023-01-25T14:32:26.000Z | [
"task_categories:text-retrieval",
"task_ids:fact-checking-retrieval",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-... | null | HoVer is an open-domain, many-hop fact extraction and claim verification dataset built upon the Wikipedia corpus. The original 2-hop claims are adapted from question-answer pairs from HotpotQA. It is collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics. | @inproceedings{jiang2020hover,
title={{HoVer}: A Dataset for Many-Hop Fact Extraction And Claim Verification},
author={Yichen Jiang and Shikha Bordia and Zheng Zhong and Charles Dognin and Maneesh Singh and Mohit Bansal.},
booktitle={Findings of the Conference on Empirical Methods in Natural Language Processing ({EMNLP})},
year={2020}
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- fact-checking-retrieval
paperswithcode_id: hover
pretty_name: HoVer
dataset_info:
features:
- name: id
dtype: int32
- name: uid
dtype: string
- name: claim
dtype: string
- name: supporting_facts
list:
- name: key
dtype: string
- name: value
dtype: int32
- name: label
dtype:
class_label:
names:
'0': NOT_SUPPORTED
'1': SUPPORTED
- name: num_hops
dtype: int32
- name: hpqa_id
dtype: string
splits:
- name: train
num_bytes: 5532178
num_examples: 18171
- name: validation
num_bytes: 1299252
num_examples: 4000
- name: test
num_bytes: 927513
num_examples: 4000
download_size: 12257835
dataset_size: 7758943
---
# Dataset Card for HoVer
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://hover-nlp.github.io/
- **Repository:** https://github.com/hover-nlp/hover
- **Paper:** https://arxiv.org/abs/2011.03088
- **Leaderboard:** https://hover-nlp.github.io/
- **Point of Contact:** [More Information Needed]
### Dataset Summary
HoVer is an open-domain, many-hop fact extraction and claim verification dataset built upon the Wikipedia corpus. The original 2-hop claims are adapted from question-answer pairs from HotpotQA. It was collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`)
## Dataset Structure
### Data Instances
A sample from the training set is provided below.
```
{'id': 14856, 'uid': 'a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce', 'claim': 'The park at which Tivolis Koncertsal is located opened on 15 August 1843.', 'supporting_facts': [{'key': 'Tivolis Koncertsal', 'value': 0}, {'key': 'Tivoli Gardens', 'value': 1}], 'label': 'SUPPORTED', 'num_hops': 2, 'hpqa_id': '5abca1a55542993a06baf937'}
```
Please note that in the test set only id, uid and claim are available. Labels are not available in the test set and are represented by -1.
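A minimal loading sketch with the `datasets` library, assuming the dataset is available on the Hub under the id `hover`:

```python
from datasets import load_dataset

ds = load_dataset("hover")
example = ds["train"][0]
print(example["claim"])
# Decode the integer label back to its name (NOT_SUPPORTED / SUPPORTED).
print(ds["train"].features["label"].int2str(example["label"]))
```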
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
igbo_monolingual | 2023-06-01T14:59:53.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",... | null | A dataset is a collection of Monolingual Igbo sentences. | @misc{ezeani2020igboenglish,
title={Igbo-English Machine Translation: An Evaluation Benchmark},
author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple},
year={2020},
eprint={2004.00648},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ig
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Igbo Monolingual Dataset
dataset_info:
- config_name: eze_goes_to_school
features:
- name: format
dtype: string
- name: title
dtype: string
- name: chapters
sequence:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 128309
num_examples: 1
download_size: 8260947
dataset_size: 128309
- config_name: bbc-igbo
features:
- name: source
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: date
dtype: string
- name: headline
dtype: string
- name: content
dtype: string
- name: tags
sequence: string
splits:
- name: train
num_bytes: 3488908
num_examples: 1297
download_size: 8260947
dataset_size: 3488908
- config_name: igbo-radio
features:
- name: source
dtype: string
- name: headline
dtype: string
- name: author
dtype: string
- name: date
dtype: string
- name: description
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1129644
num_examples: 440
download_size: 8260947
dataset_size: 1129644
- config_name: jw-ot-igbo
features:
- name: format
dtype: string
- name: title
dtype: string
- name: chapters
sequence:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 3489314
num_examples: 39
download_size: 8260947
dataset_size: 3489314
- config_name: jw-nt-igbo
features:
- name: format
dtype: string
- name: title
dtype: string
- name: chapters
sequence:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 1228779
num_examples: 27
download_size: 8260947
dataset_size: 1228779
- config_name: jw-books
features:
- name: title
dtype: string
- name: content
dtype: string
- name: format
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 9456342
num_examples: 48
download_size: 8260947
dataset_size: 9456342
- config_name: jw-teta
features:
- name: title
dtype: string
- name: content
dtype: string
- name: format
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 991111
num_examples: 37
download_size: 8260947
dataset_size: 991111
- config_name: jw-ulo_nche
features:
- name: title
dtype: string
- name: content
dtype: string
- name: format
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 1952360
num_examples: 55
download_size: 8260947
dataset_size: 1952360
- config_name: jw-ulo_nche_naamu
features:
- name: title
dtype: string
- name: content
dtype: string
- name: format
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 7248017
num_examples: 88
download_size: 8260947
dataset_size: 7248017
config_names:
- bbc-igbo
- eze_goes_to_school
- igbo-radio
- jw-books
- jw-nt-igbo
- jw-ot-igbo
- jw-teta
- jw-ulo_nche
- jw-ulo_nche_naamu
---
# Dataset Card for Igbo Monolingual Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling
- **Repository:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling
- **Paper:** https://arxiv.org/abs/2004.00648
### Dataset Summary
This dataset is a collection of monolingual Igbo sentences.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Igbo (ig)
## Dataset Structure
### Data Instances
Here is an example from the bbc-igbo config:
```
{'content': 'Ike Ekweremmadụ\n\nIke ịda jụụ otụ nkeji banyere oke ogbugbu na-eme n\'ala Naijiria agwụla Ekweremmadụ\n\nOsote onye-isi ndị ome-iwu Naịjirịa bụ Ike Ekweremadu ekwuola na ike agwụla ndị Sịnatị iji otu nkeji darajụụ akwanyere ndị egburu n\'ime oke ọgbaghara dị na Naịjirịa oge ọ bula.\n\nEkweremadu katọrọ mwakpọ na ogbugbu ndị Naịjirịa aka ha dị ọcha nke ndị Fulani na-achị ehi mere, kwuo na ike agwụla ndị ome- iwu ịkwanyere ha ugwu n\'otu nkeji\'\n\nCheta n\'otu ịzụka gara-aga ka emere akwam ozu mmadụ ruru iri asaa egburu na Local Gọọmenti Logo na Guma nke Benue Steeti, e be ihe kariri mmadụ iri ise ka akụkọ kwuru n\'egburu na Taraba Steeti.\n\nEkweremadu gosiri iwe gbasara ogbugbu ndị mmadụ na nzukọ ndị ome-iwu n\'ụbọchị taa, kwuo na Naịjirịa ga-ebu ụzọ nwe udo na nchekwa, tupu e kwuowa okwu iwulite obodo.\n\nỌ sịrị: "Ndị ome-iwu abụghị sọ ọsọ ndị ihe a metụtara, kama ndị Naịjirịa niile.\n\n\'Ike agwụla anyị iji otu nkeji dị jụụ maka nkwanye ugwu. Ihe anyị chọrọ bụ udo na nchekwa tupu echewa echịchị nwuli obodo."',
'date': '2018-01-19T17:07:38Z',
'description': "N'ihi oke ogbugbu ndị mmadụ na Naịjirịa gbagburu gburu, osota onyeisi ndị ome-iwu Naịjirịa bụ Ike Ekweremadu ekwuola na ihe Naịjiria chọrọ bụ nchekwa tara ọchịchị, tupu ekwuwa okwu ihe ọzọ.",
'headline': 'Ekweremadu: Ike agwụla ndị ụlọ ome iwu',
'source': 'https://www.bbc.com/igbo/42712250',
'tags': [],
'title': 'Ekweremadu: Ike agwụla ndị ụlọ ome iwu'}
```
### Data Fields
For config 'eze_goes_to_school':
- format, title, chapters
For config 'bbc-igbo' :
- source, title, description, date (Missing date values replaced with empty strings), headline, content, tags (Missing tags replaced with empty list)
For config 'igbo-radio':
- source, headline, author, date, description, content
For config 'jw-ot-igbo':
- format, title, chapters
For config 'jw-nt-igbo':
- format, title, chapters
For config 'jw-books':
- title, content, format, date (Missing date values replaced with empty strings)
For config 'jw-teta':
- title, content, format, date (Missing date values replaced with empty strings)
For config 'jw-ulo_nche':
- title, content, format, date (Missing date values replaced with empty strings)
For config 'jw-ulo_nche_naamu':
- title, content, format, date (Missing date values replaced with empty strings)
### Data Splits
| bbc-igbo | eze_goes_to_school | igbo-radio | jw-books | jw-nt-igbo | jw-ot-igbo | jw-teta | jw-ulo_nche | jw-ulo_nche_naamu |
|:--------:|:------------------:|:----------:|:--------:|:----------:|:----------:|:-------:|:-----------:|:-----------------:|
| 1297 | 1 | 440 | 48 | 27 | 39 | 37 | 55 | 88 |
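Since each source is exposed as its own configuration, a minimal sketch for loading a single config (assuming the Hub id `igbo_monolingual`) is:

```python
from datasets import load_dataset

# Each config in the table above is loaded by name; bbc-igbo is shown here.
bbc = load_dataset("igbo_monolingual", "bbc-igbo", split="train")
print(bbc[0]["headline"])
print(len(bbc))  # 1297 articles per the split table above
```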
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@misc{ezeani2020igboenglish,
title={Igbo-English Machine Translation: An Evaluation Benchmark},
author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple},
year={2020},
eprint={2004.00648},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
### Contributions
Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset. |
kan_hope | 2023-01-25T14:33:30.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:kn",
"license:cc-by-4.0",
"hop... | null | Numerous methods have been developed to monitor the spread of negativity in modern years by
eliminating vulgar, offensive, and fierce comments from social media platforms. However, there are relatively
lesser amounts of study that converges on embracing positivity, reinforcing supportive and reassuring content in online forums.
Consequently, we propose creating an English Kannada Hope speech dataset, KanHope and comparing several experiments to benchmark the dataset.
The dataset consists of 6,176 user generated comments in code mixed Kannada scraped from YouTube and manually annotated as bearing hope
speech or Not-hope speech.
This dataset was prepared for hope-speech text classification benchmark on code-mixed Kannada, an under-resourced language. | @misc{hande2021hope,
title={Hope Speech detection in under-resourced Kannada language},
author={Adeep Hande and Ruba Priyadharshini and Anbukkarasi Sampath and Kingston Pal Thamburaj and Prabakaran Chandran and Bharathi Raja Chakravarthi},
year={2021},
eprint={2108.04616},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- kn
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: KanHope
language_bcp47:
- en-IN
- kn-IN
tags:
- hope-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not-Hope
'1': Hope
splits:
- name: train
num_bytes: 494898
num_examples: 4940
- name: test
num_bytes: 65722
num_examples: 618
download_size: 568972
dataset_size: 560620
---
# Dataset Card for KanHope
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4904729
- **Repository:** [KanHope](https://github.com/adeepH/KanHope)
- **Paper:** [Hope speech detection in Under-resourced Kannada langauge](https://arxiv.org/abs/2108.04616)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Adeep Hande](adeeph18c@iiitt.ac.in)
### Dataset Summary
The KanHope dataset is a code-mixed Kannada-English dataset for hope speech detection. It consists of 6,176 user-generated comments in code-mixed Kannada scraped from the comments sections of YouTube videos and manually annotated as bearing hope speech or not-hope speech.
### Supported Tasks and Leaderboards
This task aims to detect hope speech content in a code-mixed dataset of comments/posts in Dravidian languages (Kannada-English) collected from social media. A comment/post may contain more than one sentence, but the average sentence length of the corpus is 1. Each comment/post is annotated at the comment/post level. The dataset also has a class imbalance problem, depicting real-world scenarios.
### Languages
Code-mixed text in Dravidian languages (Kannada-English).
## Dataset Structure
### Data Instances
An example from the Kannada dataset looks as follows:
| text | label |
| :------ | :----- |
| ��������� ��ͭ� heartly heltidini... plz avrigella namma nimmellara supprt beku | 0 (Non_hope speech) |
| Next song gu kuda alru andre evaga yar comment madidera alla alrru like madi share madi nam industry na next level ge togond hogaona. | 1 (Hope Speech) |
### Data Fields
Kannada
- `text`: Kannada-English code mixed comment.
- `label`: integer, either 0 or 1, corresponding to "Non_hope Speech" (0) or "Hope Speech" (1)
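A minimal sketch for loading the data and decoding the integer labels, assuming the Hub id `kan_hope`:

```python
from datasets import load_dataset

ds = load_dataset("kan_hope")
label_feature = ds["train"].features["label"]
print(label_feature.names)                            # ['Not-Hope', 'Hope']
print(label_feature.int2str(ds["train"][0]["label"]))
```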
### Data Splits
| | train | validation | test |
|---------|------:|-----------:|-----:|
| Kannada | 4941 | 618 | 617 |
## Dataset Creation
### Curation Rationale
Numerous methods have been developed to monitor the spread of negativity in modern years by eliminating vulgar, offensive, and fierce comments from social media platforms. However, there are relatively lesser amounts of study that converges on embracing positivity, reinforcing supportive and reassuring content in online forums.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Youtube users
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{hande2021hope,
title={Hope Speech detection in under-resourced Kannada language},
author={Adeep Hande and Ruba Priyadharshini and Anbukkarasi Sampath and Kingston Pal Thamburaj and Prabakaran Chandran and Bharathi Raja Chakravarthi},
year={2021},
eprint={2108.04616},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@adeepH](https://github.com/adeepH) for adding this dataset. |
kannada_news | 2023-01-25T14:33:33.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:kn",
"license:cc-by-sa-4.0",
"region:us"
] | null | The Kannada news dataset contains only the headlines of news article in three categories:
Entertainment, Tech, and Sports.
The data set contains around 6300 news article headlines which collected from Kannada news websites.
The data set has been cleaned and contains train and test set using which can be used to benchmark
classification models in Kannada. | null | null | 0 | 5 | ---
annotations_creators:
- other
language_creators:
- other
language:
- kn
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: KannadaNews Dataset
dataset_info:
features:
- name: headline
dtype: string
- name: label
dtype:
class_label:
names:
'0': sports
'1': tech
'2': entertainment
splits:
- name: train
num_bytes: 969216
num_examples: 5167
- name: validation
num_bytes: 236817
num_examples: 1293
download_size: 0
dataset_size: 1206033
---
# Dataset Card for kannada_news dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle link](https://www.kaggle.com/disisbig/kannada-news-dataset) for kannada news headlines dataset
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** More information about the dataset and the models can be found [here](https://github.com/goru001/nlp-for-kannada)
### Dataset Summary
The Kannada news dataset contains only the headlines of news articles in three categories:
Entertainment, Tech, and Sports.
The dataset contains around 6300 news article headlines collected from Kannada news websites.
The dataset has been cleaned and split into train and test sets, which can be used to benchmark topic classification models in Kannada.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Kannada (kn)
## Dataset Structure
### Data Instances
The data has two files: a train.csv and a valid.csv. An example row of the dataset is shown below:
```
{
'headline': 'ಫಿಫಾ ವಿಶ್ವಕಪ್ ಫೈನಲ್: ಅತಿರೇಕಕ್ಕೇರಿದ ಸಂಭ್ರಮಾಚರಣೆ; ಅಭಿಮಾನಿಗಳ ಹುಚ್ಚು ವರ್ತನೆಗೆ ವ್ಯಾಪಕ ಖಂಡನೆ',
'label':'sports'
}
```
NOTE: The data has very few examples on the technology (class label: 'tech') topic.
### Data Fields
Data has two fields:
- headline: text headline in Kannada (string)
- label: corresponding class label, in English, which the headline pertains to (string)
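A minimal sketch for loading the headlines and decoding the topic labels, assuming the Hub id `kannada_news`:

```python
from datasets import load_dataset

ds = load_dataset("kannada_news")
row = ds["train"][0]
label_feature = ds["train"].features["label"]
print(row["headline"])
print(label_feature.int2str(row["label"]))  # sports / tech / entertainment
```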
### Data Splits
The dataset is divided into two splits. All the headlines are scraped from news websites on the internet.
| | train | validation |
|-----------------|--------:|-----------:|
| Input Sentences | 5167 | 1293 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
There is strikingly little data for South Indian languages, especially Kannada, available in digital format that can be used for NLP purposes.
Though it has roughly 38 million native speakers, Kannada is a somewhat under-represented language and will benefit from active contribution from the community.
This dataset, however, can help people get exposed to Kannada and encourage further active participation, enabling continuous progress and development.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Gaurav Arora](https://github.com/goru001/nlp-for-kannada), who also provides some starter models and embeddings to help get started.
### Licensing Information
cc-by-sa-4.0
### Citation Information
https://www.kaggle.com/disisbig/kannada-news-dataset
### Contributions
Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset. |
kor_qpair | 2023-01-25T14:34:00.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:mit",
"region:us"
] | null | This is a Korean paired question dataset containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task. | @misc{Song:2018,
title = "Paired Question v.2",
authors = "Youngsook Song",
publisher = "GitHub",
year = "2018"
} | null | 2 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- ko
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: KorQpair
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: is_duplicate
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 515365
num_examples: 6136
- name: test
num_bytes: 63466
num_examples: 758
- name: validation
num_bytes: 57242
num_examples: 682
download_size: 545236
dataset_size: 636073
---
# Dataset Card for KorQpair
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/songys/Question_pair)
- **Repository:** [Github](https://github.com/songys/Question_pair)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a Korean paired question dataset containing labels indicating whether two questions in a given pair are semantically identical. The dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Korean (`ko`)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each row in the dataset contains two questions and a `is_duplicate` label.
- `question1`: The first question
- `question2`: The second question
- `is_duplicate`: 0 if `question1` and `question2` are semantically similar; 1 otherwise
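A minimal loading sketch, assuming the Hub id `kor_qpair`:

```python
from datasets import load_dataset

ds = load_dataset("kor_qpair")
ex = ds["train"][0]
print(ex["question1"])
print(ex["question2"])
print(ex["is_duplicate"])  # class label; see the field description above
```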
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
laroseda | 2022-11-18T20:18:11.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ro",
"license:cc-by-4.0",
"arxiv:2101.04197",
"arxiv:1901.06543"... | null | LaRoSeDa (A Large Romanian Sentiment Data Set) contains 15,000 reviews written in Romanian, of which 7,500 are positive and 7,500 negative.
Star ratings of 1 and 2 and of 4 and 5 are provided for negative and positive reviews respectively.
The current dataset uses star rating as the label for multi-class classification. | @article{
tache2101clustering,
title={Clustering Word Embeddings with Self-Organizing Maps. Application on LaRoSeDa -- A Large Romanian Sentiment Data Set},
author={Anca Maria Tache and Mihaela Gaman and Radu Tudor Ionescu},
journal={ArXiv},
year = {2021}
} | null | 0 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ro
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: null
pretty_name: LaRoSeDa
dataset_info:
features:
- name: index
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: starRating
dtype: int64
config_name: laroseda
splits:
- name: train
num_bytes: 2932819
num_examples: 12000
- name: test
num_bytes: 700834
num_examples: 3000
download_size: 5257183
dataset_size: 3633653
---
# Dataset Card for LaRoSeDa
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ancatache/LaRoSeDa)
- **Repository:** [Github](https://github.com/ancatache/LaRoSeDa)
- **Paper:** [Arxiv](https://arxiv.org/pdf/2101.04197.pdf)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** raducu.ionescu@gmail.com
### Dataset Summary
LaRoSeDa - A **La**rge and **Ro**manian **Se**ntiment **Da**ta Set. LaRoSeDa contains 15,000 reviews written in Romanian, of which 7,500 are positive and 7,500 negative.
The samples have one of four star ratings: 1 or 2 - for reviews that can be considered of negative polarity, and 4 or 5 for the positive ones.
The 15,000 samples featured in the corpus, labelled with the star rating, are split into train and test subsets, with 12,000 and 3,000 samples respectively.
### Supported Tasks and Leaderboards
[LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/)
### Languages
The text dataset is in Romanian (`ro`).
## Dataset Structure
### Data Instances
Below we have an example of sample from LaRoSeDa:
```
{
"index": "9675",
"title": "Nu recomand",
"content": "probleme cu localizarea, mari...",
"starRating": 1,
}
```
where "9675" is the sample index, followed by the title of the review, review content and then the star rating given by the user.
### Data Fields
- `index`: string, the unique identifier of a sample.
- `title`: string, the review title.
- `content`: string, the content of the review.
- `starRating`: integer, with values in the following set {1, 2, 4, 5}.
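Because the label is a raw star rating, a common preprocessing step is to collapse it to binary polarity as described in the summary (1-2 negative, 4-5 positive). A minimal sketch, assuming the Hub id `laroseda`:

```python
from datasets import load_dataset

ds = load_dataset("laroseda", split="train")
# Collapse star ratings to binary polarity: {1, 2} -> 0 (negative), {4, 5} -> 1 (positive).
ds = ds.map(lambda ex: {"polarity": int(ex["starRating"] >= 4)})
print(ds[0]["starRating"], ds[0]["polarity"])
```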
### Data Splits
The train/test split contains 12,000/3,000 samples tagged with the star rating assigned to each sample in the dataset.
## Dataset Creation
### Curation Rationale
The samples are preprocessed in order to eliminate named entities. This is required to prevent classifiers from making decisions based on features that are not related to the topics.
For example, named entities that refer to politicians' or football players' names can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543).
### Source Data
#### Initial Data Collection and Normalization
For the data collection, one of the largest Romanian e-commerce platforms was targeted. Along with the textual content of each review, the associated star rating was also collected in order to automatically assign labels to
the collected text samples.
#### Who are the source language producers?
The original text comes from one of the largest e-commerce platforms in Romania.
### Annotations
#### Annotation process
As mentioned above, LaRoSeDa is composed of product reviews from one of the largest e-commerce websites in Romania. The resulting samples are automatically tagged with the star rating assigned by the users.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The textual data collected for LaRoSeDa consists in product reviews freely available on the Internet.
To the best of authors' knowledge, there is no personal or sensitive information that needed to be considered in the said textual inputs collected.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures.
In the past three years there was a growing interest for studying Romanian from a Computational Linguistics perspective. However, we are far from having enough datasets and resources in this particular language.
### Discussion of Biases
*We note that most of the negative reviews (5,561) are rated with one star. Similarly, most of the positive reviews (6,238) are rated with five stars. Hence, the corpus is highly polarized.*
### Other Known Limitations
*The star rating might not always reflect the polarity of the text. We thus acknowledge that the automatic labeling process is not optimal, i.e. some labels might be noisy.*
## Additional Information
### Dataset Curators
Published and managed by Anca Tache, Mihaela Gaman and Radu Tudor Ionescu.
### Licensing Information
CC BY-SA 4.0 License
### Citation Information
```
@article{
tache2101clustering,
title={Clustering Word Embeddings with Self-Organizing Maps. Application on LaRoSeDa -- A Large Romanian Sentiment Data Set},
author={Anca Maria Tache and Mihaela Gaman and Radu Tudor Ionescu},
journal={ArXiv},
year = {2021}
}
```
### Contributions
Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset. |
liveqa | 2022-11-03T16:15:28.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:unknown",
"region:us"
] | null | This is LiveQA, a Chinese dataset constructed from play-by-play live broadcast.
It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games,
which are collected from the Chinese Hupu website. | @inproceedings{qianying-etal-2020-liveqa,
title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
author = "Qianying, Liu and
Sicong, Jiang and
Yizhong, Wang and
Sujian, Li",
booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
month = oct,
year = "2020",
address = "Haikou, China",
publisher = "Chinese Information Processing Society of China",
url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
pages = "1057--1067"
} | null | 1 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: liveqa
pretty_name: LiveQA
dataset_info:
features:
- name: id
dtype: int64
- name: passages
sequence:
- name: is_question
dtype: bool
- name: text
dtype: string
- name: candidate1
dtype: string
- name: candidate2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 112187507
num_examples: 1670
download_size: 114704569
dataset_size: 112187507
---
# Dataset Card for LiveQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/PKU-TANGENT/LiveQA)
- **Repository:** [Github](https://github.com/PKU-TANGENT/LiveQA)
- **Paper:** [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** Qianying Liu
### Dataset Summary
The LiveQA dataset is a Chinese question-answering resource constructed from play-by-play live broadcasts. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, which are collected from the Chinese Hupu website.
### Supported Tasks and Leaderboards
Question Answering.
[More Information Needed]
### Languages
Chinese.
## Dataset Structure
### Data Instances
Each instance represents a timeline (i.e., a game) with an identifier. The passages field comprises an array of text or question segments. In the following truncated example, user comments about the game are followed by a question about which team will be the first to reach 60 points.
```python
{
'id': 1,
'passages': [
{
"is_question": False,
"text": "'我希望两位球员都能做到!!",
"candidate1": "",
"candidate2": "",
"answer": "",
},
{
"is_question": False,
"text": "新年给我们送上精彩比赛!",
"candidate1": "",
"candidate2": "",
"answer": "",
},
{
"is_question": True,
"text": "先达到60分?",
"candidate1": "火箭",
"candidate2": "勇士",
"answer": "勇士",
},
{
"is_question": False,
"text": "自己急停跳投!!!",
"candidate1": "",
"candidate2": "",
"answer": "",
}
]
}
```
### Data Fields
- id: identifier for the game
- passages: collection of text/question segments
- text: real-time text comment or binary question related to the context
- candidate1/2: one of the two answer options to the question
- answer: correct answer to the question in text
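A minimal sketch for pulling the question segments out of one game's timeline, assuming the Hub id `liveqa`; note that `sequence` features are typically returned as a dict of parallel lists, so the sketch zips them back together:

```python
from datasets import load_dataset

ds = load_dataset("liveqa", split="train")
p = ds[0]["passages"]  # dict of parallel lists: is_question, text, candidate1/2, answer
questions = [
    {"text": t, "answer": a}
    for t, is_q, a in zip(p["text"], p["is_question"], p["answer"])
    if is_q
]
print(len(questions), questions[0])
```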
### Data Splits
There is no predefined split in this dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
This resource was developed by [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf).
```
@inproceedings{qianying-etal-2020-liveqa,
title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
author = "Qianying, Liu and
Sicong, Jiang and
Yizhong, Wang and
Sujian, Li",
booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
month = oct,
year = "2020",
address = "Haikou, China",
publisher = "Chinese Information Processing Society of China",
url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
pages = "1057--1067"
}
```
### Contributions
Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset. |
metooma | 2023-01-25T14:40:24.000Z | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:origi... | null | The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.
Due to Twitter's development policies, we only provide the tweet ID's and corresponding labels,
other data can be fetched via Twitter API.
The data has been labelled by experts, with the majority taken into the account for deciding the final label.
We provide these labels for each of the tweets. The labels provided for each data point
includes -- Relevance, Directed Hate, Generalized Hate,
Sarcasm, Allegation, Justification, Refutation, Support, Oppose | @inproceedings{gautam2020metooma,
title={# MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement},
author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={14},
pages={209--216},
year={2020} } | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: metooma
pretty_name: '#MeTooMA dataset'
dataset_info:
features:
- name: TweetId
dtype: string
- name: Text_Only_Informative
dtype:
class_label:
names:
'0': Text Non Informative
'1': Text Informative
- name: Image_Only_Informative
dtype:
class_label:
names:
'0': Image Non Informative
'1': Image Informative
- name: Directed_Hate
dtype:
class_label:
names:
'0': Directed Hate Absent
'1': Directed Hate Present
- name: Generalized_Hate
dtype:
class_label:
names:
'0': Generalized Hate Absent
'1': Generalized Hate Present
- name: Sarcasm
dtype:
class_label:
names:
'0': Sarcasm Absent
'1': Sarcasm Present
- name: Allegation
dtype:
class_label:
names:
'0': Allegation Absent
'1': Allegation Present
- name: Justification
dtype:
class_label:
names:
'0': Justification Absent
'1': Justification Present
- name: Refutation
dtype:
class_label:
names:
'0': Refutation Absent
'1': Refutation Present
- name: Support
dtype:
class_label:
names:
'0': Support Absent
'1': Support Present
- name: Oppose
dtype:
class_label:
names:
'0': Oppose Absent
'1': Oppose Present
splits:
- name: train
num_bytes: 821738
num_examples: 7978
- name: test
num_bytes: 205489
num_examples: 1995
download_size: 408889
dataset_size: 1027227
---
# Dataset Card for #MeTooMA dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
- **Repository:** https://github.com/midas-research/MeTooMA
- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
- The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.
- This dataset includes more data points and has more labels than any of the previous datasets that contain social media
posts about sexual abuse disclosures. Please refer to the Related Datasets section of the publication for detailed information about this.
- Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels;
other data can be fetched via the Twitter API.
- The data has been labelled by experts, with the majority taken into the account for deciding the final label.
- The authors provide these labels for each of the tweets.
- Relevance
- Directed Hate
- Generalized Hate
- Sarcasm
- Allegation
- Justification
- Refutation
- Support
- Oppose
- The definitions for each task/label is in the main publication.
- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data
extracted from this dataset.
- The language of all the tweets in this dataset is English
- Time period: October 2018 - December 2018
- Suggested Use Cases of this dataset:
 - Evaluating the usage of linguistic acts such as hate speech and sarcasm in the context of public sexual abuse disclosures.
 - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.
 - Identifying how influential people were portrayed on public platforms in the
 events of mass social movements.
- Polarization analysis based on graph simulations of social nodes of users involved
in the #MeToo movement.
### Supported Tasks and Leaderboards
Multi Label and Multi-Class Classification
### Languages
English
## Dataset Structure
- The dataset is structured into CSV format with TweetID and accompanying labels.
- Train and Test sets are split into respective files.
### Data Instances
Tweet ID and the appropriate labels
### Data Fields
Tweet ID and the appropriate labels: each label is binary, and multiple labels can apply to a single Tweet ID.
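A minimal sketch, assuming the Hub id `metooma`. Only tweet IDs and labels ship with the dataset, so the tweet text itself must be hydrated separately through the Twitter API:

```python
from datasets import load_dataset

ds = load_dataset("metooma", split="train")
row = ds[0]
print(row["TweetId"])  # use the Twitter API to fetch the tweet text
# Decode one of the binary aspect labels.
print(ds.features["Directed_Hate"].int2str(row["Directed_Hate"]))
```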
### Data Splits
- Train: 7979
- Test: 1996
## Dataset Creation
### Curation Rationale
- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.
- People expressed their opinions over issues which were previously missing from the social media space.
- This provides an option to study the linguistic behaviours of social media users in an informal setting,
therefore the authors decided to curate this annotated dataset.
- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.
- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.
### Source Data
- Source of all the data points in this dataset is Twitter social media platform.
#### Initial Data Collection and Normalization
- All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement.
- Redundant keywords were removed based on manual inspection.
- Public streaming APIs of Twitter were used for querying with the selected keywords.
- Based on text de-duplication and cosine similarity score, the set of tweets were pruned.
- Non english tweets were removed.
- The final set was labelled by experts, with the majority label taken into account for deciding the final label.
- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
#### Who are the source language producers?
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
### Annotations
#### Annotation process
- The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature.
- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.
- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.
- They studied the document and worked through a few examples to get used to the annotation task.
- They also provided feedback for improving the class definitions.
- The annotation process is not mutually exclusive, implying that the presence of one label does not mean the
absence of another.
#### Who are the annotators?
- The annotators are domain experts having a degree in clinical psychology and gender studies.
- Please refer to the accompanying paper for a detailed annotation process.
### Personal and Sensitive Information
- Considering Twitter's policy for distribution of data, only Tweet IDs and applicable labels are shared for public use.
- It is highly encouraged to use this dataset for scientific purposes only.
- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.
## Considerations for Using the Data
### Social Impact of Dataset
- The authors of this dataset do not intend to conduct a population centric analysis of #MeToo movement on Twitter.
- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention; instead, they
should be used to assist already existing human intervention tools and therapies.
- Enough care has been taken to ensure that this work does not come off as trying to target a specific person for their
personal stance on issues pertaining to the #MeToo movement.
- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset
and social impact of this work.
### Discussion of Biases
- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of
community affected by sexual abuse.
- Any work undertaken on this dataset should aim to minimize the bias against minority groups, which
might be amplified in cases of sudden outbursts of public reaction over sensitive social media discussions.
### Other Known Limitations
- Considering privacy concerns, social media practitioners should be cautious about making automated interventions
to aid victims of sexual abuse, as some people might prefer not to disclose their experiences.
- Concerned social media users might also withdraw their social information if they found out that their
information is being used for computational purposes; hence, it is important to seek explicit individual consent
before trying to profile authors involved in online discussions, to uphold personal privacy.
## Additional Information
Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
### Dataset Curators
- If you use the corpus in a product or application, then please credit the authors
and the [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
disclaims any responsibility for the use of the corpus and does not provide technical support.
However, the contact listed above will be happy to respond to queries and clarifications.
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your social media data.
- if interested in a collaborative research project.
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
```
@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.</p&gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }
```
### Contributions
Thanks to [@akash418](https://github.com/akash418) for adding this dataset. |
moroco | 2023-01-25T14:40:41.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ro",
"license:cc-by-4.0",
"arxiv:1901.06543",
"region:us"
] | null | The MOROCO (Moldavian and Romanian Dialectal Corpus) dataset contains 33564 samples of text collected from the news domain.
The samples belong to one of the following six topics:
- culture
- finance
- politics
- science
- sports
- tech | @inproceedings{ Butnaru-ACL-2019,
author = {Andrei M. Butnaru and Radu Tudor Ionescu},
title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}",
booktitle = {Proceedings of ACL},
year = {2019},
pages={688--698},
} | null | 0 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ro
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: moroco
pretty_name: 'MOROCO: The Moldavian and Romanian Dialectal Corpus'
language_bcp47:
- ro-MD
dataset_info:
features:
- name: id
dtype: string
- name: category
dtype:
class_label:
names:
'0': culture
'1': finance
'2': politics
'3': science
'4': sports
'5': tech
- name: sample
dtype: string
config_name: moroco
splits:
- name: train
num_bytes: 39314292
num_examples: 21719
- name: test
num_bytes: 10877813
num_examples: 5924
- name: validation
num_bytes: 10721304
num_examples: 5921
download_size: 60711985
dataset_size: 60913409
---
# Dataset Card for MOROCO
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/butnaruandrei/MOROCO)
- **Repository:** [Github](https://github.com/butnaruandrei/MOROCO)
- **Paper:** [Arxiv](https://arxiv.org/abs/1901.06543)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email](raducu.ionescu@gmail.com)
### Dataset Summary
Introducing MOROCO - The **Mo**ldavian and **Ro**manian Dialectal **Co**rpus. The MOROCO data set contains Moldavian and Romanian samples of text collected from the news domain. The samples belong to one of the following six topics: (0) culture, (1) finance, (2) politics, (3) science, (4) sports, (5) tech. The corpus features a total of 33,564 samples labelled with one of the aforementioned six categories. We also include a train/validation/test split with 21,719/5,921/5,924 samples in each subset.
### Supported Tasks and Leaderboards
[LiRo Benchmark and Leaderboard](https://eemlcommunity.github.io/ro_benchmark_leaderboard/site/)
### Languages
The text dataset is in Romanian (`ro`)
## Dataset Structure
### Data Instances
Below we have an example of sample from MOROCO:
```
{'id': '48482',
'category': 2,
'sample': '“$NE$ cum am spus, nu este un sfârşit de drum . Vom continua lupta cu toate instrumentele şi cu toate mijloacele legale, parlamentare şi civice pe care le avem la dispoziţie . Evident că vom contesta la $NE$ această lege, au anunţat şi colegii de la $NE$ o astfel de contestaţie . Practic trebuie utilizat orice instrument pe care îl identificăm pentru a bloca intrarea în vigoare a acestei legi . Bineînţeles, şi preşedintele are punctul său de vedere . ( . . . ) $NE$ legi sunt împănate de motive de neconstituţionalitate . Colegii mei de la departamentul juridic lucrează în prezent pentru a definitiva textul contestaţiei”, a declarat $NE$ $NE$ citat de news . ro . Senatul a adoptat, marţi, în calitate de for decizional, $NE$ privind statutul judecătorilor şi procurorilor, cu 80 de voturi ”pentru” şi niciun vot ”împotrivă”, în condiţiile în care niciun partid din opoziţie nu a fost prezent în sală .',
}
```
where 48482 is the sample ID, followed by the category ground truth label, and then the text representing the actual content to be classified by topic.
Note: The category label has integer values ranging from 0 to 5.
### Data Fields
- `id`: string, the unique identifier of a sample
- `category`: integer in the range [0, 5]; the category assigned to a sample.
- `sample`: a string, news report to be classified / used in classification.
### Data Splits
The train/validation/test split contains 21,719/5,921/5,924 samples tagged with the category assigned to each sample in the dataset.
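As a minimal sketch (recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets), the splits can be loaded and the integer `category` decoded back to its topic name via the feature metadata:
```python
from datasets import load_dataset

# Loads the train/validation/test splits described above.
dataset = load_dataset("moroco")

# The `category` feature is a ClassLabel, so it can map ints back to names.
category = dataset["train"].features["category"]
sample = dataset["train"][0]
print(sample["id"], category.int2str(sample["category"]))  # e.g. "48482 politics"
```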
## Dataset Creation
### Curation Rationale
The samples are preprocessed in order to eliminate named entities. This is required to prevent classifiers from making decisions based on features that are not related to the topics.
For example, named entities that refer to politicians' or football players' names can provide clues about the topic. For more details, please read the [paper](https://arxiv.org/abs/1901.06543).
### Source Data
#### Initial Data Collection and Normalization
For the data collection, five of the most popular news websites in Romania and the Republic of Moldova were targeted. Given that the dataset was obtained through web scraping, all HTML tags were removed and consecutive white spaces were replaced with a single space.
As part of the pre-processing, we remove named entities, such as country names, cities, public figures, etc. The named entities have been replaced with $NE$. The necessity to remove them, comes also from the scope of this dataset: categorization by topic. Thus, the authors decided to remove named entities in order to prevent classifiers from taking the decision based on features that are not truly indicative of the topics.
#### Who are the source language producers?
The original text comes from news websites from Romania and the Republic of Moldova.
### Annotations
#### Annotation process
As mentioned above, MOROCO is composed of text samples from the top five most popular news websites in Romania and the Republic of Moldova, respectively. Since the targeted news websites provide topic tags, the text samples can be automatically labeled with the corresponding category.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The textual data collected for MOROCO consists of news reports freely available on the Internet and of public interest.
To the best of the authors' knowledge, there is no personal or sensitive information to consider in the collected texts.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures.
In the past three years there has been growing interest in studying Romanian from a Computational Linguistics perspective. However, we are far from having enough datasets and resources in this particular language.
### Discussion of Biases
The data included in MOROCO spans a well-defined time frame of a few years. Some of the topics that were of interest in the news landscape then might not show up in news websites nowadays or a few years from now.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Published and managed by Radu Tudor Ionescu and Andrei Butnaru.
### Licensing Information
CC BY-SA 4.0 License
### Citation Information
```
@inproceedings{ Butnaru-ACL-2019,
author = {Andrei M. Butnaru and Radu Tudor Ionescu},
title = "{MOROCO: The Moldavian and Romanian Dialectal Corpus}",
booktitle = {Proceedings of ACL},
year = {2019},
pages={688--698},
}
```
### Contributions
Thanks to [@MihaelaGaman](https://github.com/MihaelaGaman) for adding this dataset. |
nell | 2023-06-01T14:59:50.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",... | null | This dataset provides version 1115 of the belief
extracted by CMU's Never Ending Language Learner (NELL) and version
1110 of the candidate belief extracted by NELL. See
http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information
extraction system that attempts to read the Clueweb09 of 500 million
web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/) and general
web searches.
The dataset has 4 configurations: nell_belief, nell_candidate,
nell_belief_sentences, and nell_candidate_sentences. nell_belief contains
the beliefs that NELL has promoted as true, while nell_candidate contains
candidate beliefs whose certainties are lower. The two sentence configs
extract the CPL sentence patterns filled with the applicable 'best'
literal string for the entities, and also provide sentences found using
web searches containing the entities and relationships.
There are roughly 21M entries for nell_belief_sentences, and 100M
sentences for nell_candidate_sentences. | @inproceedings{mitchell2015,
added-at = {2015-01-27T15:35:24.000+0100},
author = {Mitchell, T. and Cohen, W. and Hruscha, E. and Talukdar, P. and Betteridge, J. and Carlson, A. and Dalvi, B. and Gardner, M. and Kisiel, B. and Krishnamurthy, J. and Lao, N. and Mazaitis, K. and Mohammad, T. and Nakashole, N. and Platanios, E. and Ritter, A. and Samadi, M. and Settles, B. and Wang, R. and Wijaya, D. and Gupta, A. and Chen, X. and Saparov, A. and Greaves, M. and Welling, J.},
biburl = {https://www.bibsonomy.org/bibtex/263070703e6bb812852cca56574aed093/hotho},
booktitle = {AAAI},
description = {Papers by William W. Cohen},
interhash = {52d0d71f6f5b332dabc1412f18e3a93d},
intrahash = {63070703e6bb812852cca56574aed093},
keywords = {learning nell ontology semantic toread},
note = {: Never-Ending Learning in AAAI-2015},
timestamp = {2015-01-27T15:35:24.000+0100},
title = {Never-Ending Learning},
url = {http://www.cs.cmu.edu/~wcohen/pubs.html},
year = 2015
} | null | 3 | 5 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
- fact-checking-retrieval
paperswithcode_id: nell
pretty_name: Never Ending Language Learning (NELL)
tags:
- relation-extraction
- text-to-structured
- text-to-tabular
dataset_info:
- config_name: nell_belief
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: iteration_of_promotion
dtype: string
- name: score
dtype: string
- name: source
dtype: string
- name: entity_literal_strings
dtype: string
- name: value_literal_strings
dtype: string
- name: best_entity_literal_string
dtype: string
- name: best_value_literal_string
dtype: string
- name: categories_for_entity
dtype: string
- name: categories_for_value
dtype: string
- name: candidate_source
dtype: string
splits:
- name: train
num_bytes: 4592559704
num_examples: 2766079
download_size: 929107246
dataset_size: 4592559704
- config_name: nell_candidate
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: iteration_of_promotion
dtype: string
- name: score
dtype: string
- name: source
dtype: string
- name: entity_literal_strings
dtype: string
- name: value_literal_strings
dtype: string
- name: best_entity_literal_string
dtype: string
- name: best_value_literal_string
dtype: string
- name: categories_for_entity
dtype: string
- name: categories_for_value
dtype: string
- name: candidate_source
dtype: string
splits:
- name: train
num_bytes: 23497433060
num_examples: 32687353
download_size: 2687057812
dataset_size: 23497433060
- config_name: nell_belief_sentences
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: score
dtype: string
- name: sentence
dtype: string
- name: count
dtype: int32
- name: url
dtype: string
- name: sentence_type
dtype: string
splits:
- name: train
num_bytes: 4459368426
num_examples: 21031531
download_size: 929107246
dataset_size: 4459368426
- config_name: nell_candidate_sentences
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: score
dtype: string
- name: sentence
dtype: string
- name: count
dtype: int32
- name: url
dtype: string
- name: sentence_type
dtype: string
splits:
- name: train
num_bytes: 20058197787
num_examples: 100866414
download_size: 2687057812
dataset_size: 20058197787
config_names:
- nell_belief
- nell_belief_sentences
- nell_candidate
- nell_candidate_sentences
---
# Dataset Card for Never Ending Language Learning (NELL)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://rtw.ml.cmu.edu/rtw/
- **Repository:**
http://rtw.ml.cmu.edu/rtw/
- **Paper:**
Never-Ending Learning.
T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, J. Welling. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2015
### Dataset Summary
This dataset provides version 1115 of the beliefs
extracted by CMU's Never Ending Language Learner (NELL) and version
1110 of the candidate beliefs extracted by NELL. See
http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information
extraction system that attempts to read the ClueWeb09 corpus of 500 million
web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/) and general
web searches.
The dataset has 4 configurations: nell_belief, nell_candidate,
nell_belief_sentences, and nell_candidate_sentences. nell_belief contains
the beliefs that NELL has promoted as true, while nell_candidate contains
candidate beliefs whose certainties are lower. The two sentence configs
extract the CPL sentence patterns filled with the applicable 'best'
literal string for the entities, and also provide sentences found using
web searches containing the entities and relationships.
There are roughly 21M entries for nell_belief_sentences, and 100M
sentences for nell_candidate_sentences.
From the NELL website:
- **Research Goal**
To build a never-ending machine learning system that acquires the ability to extract structured information from unstructured web pages. If successful, this will result in a knowledge base (i.e., a relational database) of structured information that mirrors the content of the Web. We call this system NELL (Never-Ending Language Learner).
- **Approach**
The inputs to NELL include (1) an initial ontology defining hundreds of categories (e.g., person, sportsTeam, fruit, emotion) and relations (e.g., playsOnTeam(athlete,sportsTeam), playsInstrument(musician,instrument)) that NELL is expected to read about, and (2) 10 to 15 seed examples of each category and relation.
Given these inputs, plus a collection of 500 million web pages and access to the remainder of the web through search engine APIs, NELL runs 24 hours per day, continuously, to perform two ongoing tasks:
Extract new instances of categories and relations. In other words, find noun phrases that represent new examples of the input categories (e.g., "Barack Obama" is a person and politician), and find pairs of noun phrases that correspond to instances of the input relations (e.g., the pair "Jason Giambi" and "Yankees" is an instance of the playsOnTeam relation). These new instances are added to the growing knowledge base of structured beliefs.
Learn to read better than yesterday. NELL uses a variety of methods to extract beliefs from the web. These are retrained, using the growing knowledge base as a self-supervised collection of training examples. The result is a semi-supervised learning method that couples the training of hundreds of different extraction methods for a wide range of categories and relations. Much of NELL’s current success is due to its algorithm for coupling the simultaneous training of many extraction methods.
For more information, see: http://rtw.ml.cmu.edu/rtw/resources
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en, and perhaps some others
## Dataset Structure
### Data Instances
There are four configurations for the dataset: nell_belief, nell_candidate, nell_belief_sentences, nell_candidate_sentences.
nell_belief and nell_candidate define records like:
```
{'best_entity_literal_string': 'Aspect Medical Systems',
'best_value_literal_string': '',
'candidate_source': '%5BSEAL-Iter%3A215-2011%2F02%2F26-04%3A27%3A09-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-From%3ACategory%3Abiotechcompany-using-KB+http%3A%2F%2Fwww.unionegroup.com%2Fhealthcare%2Fmfg_info.htm+http%3A%2F%2Fwww.conventionspc.com%2Fcompanies.html%2C+CPL-Iter%3A1103-2018%2F03%2F08-15%3A32%3A34-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-grant+support+from+_%092%09research+support+from+_%094%09unrestricted+educational+grant+from+_%092%09educational+grant+from+_%092%09research+grant+support+from+_%091%09various+financial+management+positions+at+_%091%5D',
'categories_for_entity': 'concept:biotechcompany',
'categories_for_value': 'concept:company',
'entity': 'concept:biotechcompany:aspect_medical_systems',
'entity_literal_strings': '"Aspect Medical Systems" "aspect medical systems"',
'iteration_of_promotion': '1103',
'relation': 'generalizations',
'score': '0.9244426550775064',
'source': 'MBL-Iter%3A1103-2018%2F03%2F18-01%3A35%3A42-From+ErrorBasedIntegrator+%28SEAL%28aspect_medical_systems%2Cbiotechcompany%29%2C+CPL%28aspect_medical_systems%2Cbiotechcompany%29%29',
'value': 'concept:biotechcompany',
'value_literal_strings': ''}
```
nell_belief_sentences and nell_candidate_sentences define records like:
```
{'count': 4,
'entity': 'biotechcompany:aspect_medical_systems',
'relation': 'generalizations',
'score': '0.9244426550775064',
'sentence': 'research support from [[ Aspect Medical Systems ]]',
'sentence_type': 'CPL',
'url': '',
'value': 'biotechcompany'}
```
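A loading sketch: pick one of the four configurations by name; since the sentence configs run to tens of millions of rows, streaming is shown here (available in recent `datasets` versions, which may also require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# One of: nell_belief, nell_candidate, nell_belief_sentences, nell_candidate_sentences.
dataset = load_dataset("nell", "nell_belief_sentences", split="train", streaming=True)

# Stream a few (entity, relation, value) triples without downloading everything.
for record in dataset.take(3):
    print(record["entity"], record["relation"], record["value"])
```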
### Data Fields
For the nell_belief and nell_candidate configurations (from http://rtw.ml.cmu.edu/rtw/faq):
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* iteration_of_promotion: The point in NELL's life at which this category or relation instance was promoted to one that NELL believes to be true. This is a non-negative integer indicating the number of iterations of bootstrapping NELL had gone through.
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* source: A summary of the provenance for the belief indicating the set of learning subcomponents (CPL, SEAL, etc.) that had submitted this belief as being potentially true.
* entity_literal_strings: The set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Entity column.
* value_literal_strings: For relations, the set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Value column. For categories, this should be empty but may contain something spurious.
* best_entity_literal_string: Of the set of strings in the Entity literalStrings, column, which one string can best be used to describe the concept.
* best_value_literal_string: Same thing, but for Value literalStrings.
* categories_for_entity: The full set of categories (which may be empty) to which NELL believes the concept indicated in the Entity column to belong.
* categories_for_value: For relations, the full set of categories (which may be empty) to which NELL believes the concept indicated in the Value column to belong. For categories, this should be empty but may contain something spurious.
* candidate_source: A free-form amalgamation of more specific provenance information describing the justification(s) NELL has for possibly believing this category or relation instance.
For the nell_belief_sentences and nell_candidate_sentences, we have extracted the underlying sentences, sentence count and URLs and provided a shortened version of the entity, relation and value field by removing the string "concept:" and "candidate:". There are two types of sentences, 'CPL' and 'OE', which are generated by two of the modules of NELL, pattern matching and open web searching, respectively. There may be duplicates. The configuration is as follows:
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* sentence: the raw sentence. For 'CPL' type sentences, "[[" and "]]" markers surround the entity and value (see the regex sketch after this list). For 'OE' type sentences, there are no "[[" and "]]" markers.
* url: the url if there is one from which this sentence was extracted
* count: the count for this sentence
* sentence_type: either 'CPL' or 'OE'
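As an illustrative helper (not part of the dataset tooling), the "[[ ... ]]" markers in 'CPL' sentences can be located and stripped with a small regex:
```python
import re

# Matches the "[[ ... ]]" spans that CPL sentences use to mark entity/value mentions.
SPAN = re.compile(r"\[\[\s*(.+?)\s*\]\]")

def extract_spans(sentence):
    """Return the marked mentions and the sentence with the markers removed."""
    mentions = SPAN.findall(sentence)
    cleaned = SPAN.sub(lambda m: m.group(1), sentence)
    return mentions, cleaned

mentions, cleaned = extract_spans("research support from [[ Aspect Medical Systems ]]")
print(mentions)  # ['Aspect Medical Systems']
print(cleaned)   # research support from Aspect Medical Systems
```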
### Data Splits
There are no splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created over many years of running the NELL system on web data.
### Source Data
#### Initial Data Collection and Normalization
See the research paper on NELL. NELL searches a subset of the web
(ClueWeb09) and the open web using various open information extraction
algorithms, including pattern matching.
#### Who are the source language producers?
The NELL authors at Carnegie Mellon University and data from ClueWeb09 and the open web.
### Annotations
#### Annotation process
The various open information extraction modules of NELL.
#### Who are the annotators?
Machine annotated.
### Personal and Sensitive Information
Unknown, but likely there are names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines learn to read and understand the web.
### Discussion of Biases
Since the data is gathered from the web, there is likely to be biased text and relationships.
[More Information Needed]
### Other Known Limitations
The relationships and concepts gathered from NELL are not 100% accurate, and there could be errors (maybe as high as 30% error).
See https://en.wikipedia.org/wiki/Never-Ending_Language_Learning
We did not 'tag' the entity and value in the 'OE' sentences, and this might be an extension in the future.
## Additional Information
### Dataset Curators
The authors of NELL at Carnegie Mellon University
### Licensing Information
There does not appear to be a license on http://rtw.ml.cmu.edu/rtw/resources. The data is made available by CMU on the web.
### Citation Information
```
@inproceedings{mitchell2015,
added-at = {2015-01-27T15:35:24.000+0100},
author = {Mitchell, T. and Cohen, W. and Hruscha, E. and Talukdar, P. and Betteridge, J. and Carlson, A. and Dalvi, B. and Gardner, M. and Kisiel, B. and Krishnamurthy, J. and Lao, N. and Mazaitis, K. and Mohammad, T. and Nakashole, N. and Platanios, E. and Ritter, A. and Samadi, M. and Settles, B. and Wang, R. and Wijaya, D. and Gupta, A. and Chen, X. and Saparov, A. and Greaves, M. and Welling, J.},
biburl = {https://www.bibsonomy.org/bibtex/263070703e6bb812852cca56574aed093/hotho},
booktitle = {AAAI},
description = {Papers by William W. Cohen},
interhash = {52d0d71f6f5b332dabc1412f18e3a93d},
intrahash = {63070703e6bb812852cca56574aed093},
keywords = {learning nell ontology semantic toread},
note = {: Never-Ending Learning in AAAI-2015},
timestamp = {2015-01-27T15:35:24.000+0100},
title = {Never-Ending Learning},
url = {http://www.cs.cmu.edu/~wcohen/pubs.html},
year = 2015
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
oclar | 2022-11-03T16:15:26.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language... | null | The researchers of OCLAR Marwan et al. (2019), they gathered Arabic costumer reviews from Google reviewsa and Zomato
website (https://www.zomato.com/lebanon) on wide scope of domain, including restaurants, hotels, hospitals, local shops,
etc.The corpus finally contains 3916 reviews in 5-rating scale. For this research purpose, the positive class considers
rating stars from 5 to 3 of 3465 reviews, and the negative class is represented from values of 1 and 2 of about
451 texts. | @misc{Dua:2019 ,
author = "Dua, Dheeru and Graff, Casey",
year = "2017",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
institution = "University of California, Irvine, School of Information and Computer Sciences" }
@InProceedings{AlOmari2019oclar,
title = {Sentiment Classifier: Logistic Regression for Arabic Services Reviews in Lebanon},
authors={Al Omari, M., Al-Hajj, M., Hammami, N., & Sabra, A.},
year={2019}
} | null | 1 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- sentiment-classification
- sentiment-scoring
paperswithcode_id: null
pretty_name: OCLAR
dataset_info:
features:
- name: pagename
dtype: string
- name: review
dtype: string
- name: rating
dtype: int8
splits:
- name: train
num_bytes: 398204
num_examples: 3916
download_size: 382976
dataset_size: 398204
---
# Dataset Card for OCLAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [OCLAR homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
- **Paper:** [paper link](https://www.semanticscholar.org/paper/Sentiment-Classifier%3A-Logistic-Regression-for-in-Omari-Al-Hajj/9319f4d9e8b3b7bfd0d214314911c071ba7ce1a0)
- **Point of Contact:** [Marwan Al Omari](marwanalomari@yahoo.com)
### Dataset Summary
The researchers of OCLAR, Marwan et al. (2019), gathered Arabic customer reviews from Google reviews and the [Zomato website](https://www.zomato.com/lebanon)
covering a wide range of domains, including restaurants, hotels, hospitals, local shops, etc.
The corpus contains 3916 reviews on a 5-star rating scale. For this research purpose, the positive class comprises
ratings from 3 to 5 stars (3465 reviews), and the negative class comprises ratings of 1 and 2 stars (451
texts).
### Supported Tasks and Leaderboards
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) can be used for Arabic sentiment classification on service
reviews, including hotels, restaurants, shops, and others.
### Languages
The text in the dataset is in Arabic, mainly in Lebanese (LB). The associated BCP-47 code is `ar-LB`.
## Dataset Structure
### Data Instances
A typical data point comprises a `pagename`, which is the name of the service / location being reviewed, a `review`, which is
the review left by the user / client, and a `rating`, which is a score between 1 and 5.
The authors consider a review to be positive if the score is greater than or equal to `3`; otherwise it is considered negative.
An example from the OCLAR data set looks as follows:
```
"pagename": 'Ramlet Al Baida Beirut Lebanon',
"review": 'مكان يطير العقل ويساعد على الاسترخاء',
"rating": 5,
```
### Data Fields
- `pagename`: string name of the service / location being reviewed
- `review`: string review left by the user / customer
- `rating`: number of stars left by the reviewer. It ranges from 1 to 5.
### Data Splits
The dataset comes in a single CSV file containing a total of `3916` reviews (a sketch for reproducing the binary split follows the list):
- `3465` are considered positive (a rating of 3 to 5)
- `451` are considered negative (a rating of 1 or 2)
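A sketch for reproducing that split, assuming the canonical Hub id `oclar` and following the authors' rule that ratings of 3 stars or more are positive:
```python
from datasets import load_dataset

dataset = load_dataset("oclar", split="train")

# Ratings of 3-5 stars are positive (label 1); 1-2 stars are negative (label 0).
binarized = dataset.map(lambda row: {"label": int(row["rating"] >= 3)})

positives = sum(binarized["label"])
print(positives, len(binarized) - positives)  # expected: 3465 451
```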
## Dataset Creation
### Curation Rationale
This dataset was created for Arabic sentiment classification on service reviews in Lebanon.
Reviews are about public services, including hotels, restaurants, shops, and others.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from Google Reviews and [Zomato website](https://www.zomato.com/lebanon)
#### Who are the source language producers?
The source language producers are people who posted their reviews on Google Reviews or [Zomato website](https://www.zomato.com/lebanon).
They're mainly Arabic-speaking Lebanese people.
### Annotations
#### Annotation process
The dataset does not contain any additional annotations
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The authors' research has tackled the highly important task of sentiment analysis for the Arabic language in the Lebanese
context, on 3916 service reviews from Google and Zomato. Experiments show three main findings:
1) the classifier is confident when used to predict positive reviews,
2) it is biased when predicting reviews with negative sentiment, and
3) the low percentage of negative reviews in the corpus contributes to the low confidence of the logistic regression (LR) classifier.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by Marwan Al Omari, Moustafa Al-Hajj from Centre for Language Sciences and Communication,
Lebanese University, Beirut, Lebanon; Nacereddine Hammami from college of Computer and Information Sciences,
Jouf University, Aljouf, KSA; and Amani Sabra from Centre for Language Sciences and Communication, Lebanese University,
Beirut, Lebanon.
### Licensing Information
[More Information Needed]
### Citation Information
- Marwan Al Omari, Centre for Language Sciences and Communication, Lebanese University, Beirut, Lebanon, marwanalomari '@' yahoo.com
- Moustafa Al-Hajj, Centre for Language Sciences and Communication, Lebanese University, Beirut, Lebanon, moustafa.alhajj '@' ul.edu.lb
- Nacereddine Hammami, college of Computer and Information Sciences, Jouf University, Aljouf, KSA, n.hammami '@' ju.edu.sa
- Amani Sabra, Centre for Language Sciences and Communication, Lebanese University, Beirut, Lebanon, amani.sabra '@' ul.edu.lb
```
@misc{Dua:2019 ,
author = "Dua, Dheeru and Graff, Casey",
year = "2017",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
institution = "University of California, Irvine, School of Information and Computer Sciences" }
@InProceedings{AlOmari2019oclar,
title = {Sentiment Classifier: Logistic Regression for Arabic Services Reviews in Lebanon},
authors={Al Omari, M., Al-Hajj, M., Hammami, N., & Sabra, A.},
year={2019}
}
```
### Contributions
Thanks to [@alaameloh](https://github.com/alaameloh) for adding this dataset. |
ollie | 2023-06-01T14:59:47.000Z | [
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:other",
"relation-extraction",
"text-to-structured",
"region:us"
] | null | The Ollie dataset includes two configs for the data
used to train the Ollie information extraction algorithm, covering 18M
sentences and 3M sentences respectively.
This data is for academic use only. From the authors:
Ollie is a program that automatically identifies and extracts binary
relationships from English sentences. Ollie is designed for Web-scale
information extraction, where target relations are not specified in
advance.
Ollie is our second-generation information extraction system. Whereas
ReVerb operates on flat sequences of tokens, Ollie works with the
tree-like (graph with only small cycles) representation using
Stanford's compression of the dependencies. This allows Ollie to
capture expressions that ReVerb misses, such as long-range relations.
Ollie also captures context that modifies a binary relation. Presently
Ollie handles attribution (He said/she believes) and enabling
conditions (if X then).
More information is available at the Ollie homepage:
https://knowitall.github.io/ollie/ | @inproceedings{ollie-emnlp12,
author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},
title = {Open Language Learning for Information Extraction},
booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},
year = {2012}
} | null | 0 | 5 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories: []
task_ids: []
pretty_name: Ollie
tags:
- relation-extraction
- text-to-structured
dataset_info:
- config_name: ollie_lemmagrep
features:
- name: arg1
dtype: string
- name: arg2
dtype: string
- name: rel
dtype: string
- name: search_query
dtype: string
- name: sentence
dtype: string
- name: words
dtype: string
- name: pos
dtype: string
- name: chunk
dtype: string
- name: sentence_cnt
dtype: string
splits:
- name: train
num_bytes: 12324648919
num_examples: 18674630
download_size: 1789363108
dataset_size: 12324648919
- config_name: ollie_patterned
features:
- name: rel
dtype: string
- name: arg1
dtype: string
- name: arg2
dtype: string
- name: slot0
dtype: string
- name: search_query
dtype: string
- name: pattern
dtype: string
- name: sentence
dtype: string
- name: parse
dtype: string
splits:
- name: train
num_bytes: 2930309084
num_examples: 3048961
download_size: 387514061
dataset_size: 2930309084
config_names:
- ollie_lemmagrep
- ollie_patterned
---
# Dataset Card for Ollie
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Ollie](https://knowitall.github.io/ollie/)
- **Repository:** [Github](https://github.com/knowitall/ollie)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D12-1048/)
### Dataset Summary
The Ollie dataset includes two configs for the data
used to train the Ollie information extraction algorithm, covering 18M
sentences and 3M sentences respectively.
This data is for academic use only. From the authors:
Ollie is a program that automatically identifies and extracts binary
relationships from English sentences. Ollie is designed for Web-scale
information extraction, where target relations are not specified in
advance.
Ollie is our second-generation information extraction system. Whereas
ReVerb operates on flat sequences of tokens, Ollie works with the
tree-like (graph with only small cycles) representation using
Stanford's compression of the dependencies. This allows Ollie to
capture expressions that ReVerb misses, such as long-range relations.
Ollie also captures context that modifies a binary relation. Presently
Ollie handles attribution (He said/she believes) and enabling
conditions (if X then).
More information is available at the Ollie homepage:
https://knowitall.github.io/ollie/
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en
## Dataset Structure
### Data Instances
There are two configurations for the dataset: ollie_lemmagrep, which
contains 18M sentences from web searches for a subset of the ReVerb
relationships (110,000 relationships), and ollie_patterned, which
contains 3M sentences, a subset of the ollie_lemmagrep dataset
derived from patterns according to the Ollie paper.
An example of an ollie_lemmagrep record:
```
{'arg1': 'adobe reader',
'arg2': 'pdf',
'chunk': 'B-NP I-NP I-NP I-NP B-PP B-NP I-NP B-VP B-PP B-NP I-NP O B-VP B-NP I-NP I-NP I-NP B-VP I-VP I-VP O',
'pos': 'JJ NNS CC NNS IN PRP$ NN VBP IN NNP NN CC VB DT NNP NNP NNP TO VB VBN .',
'rel': 'be require to view',
'search_query': 'require reader pdf adobe view',
'sentence': 'Many documents and reports on our site are in PDF format and require the Adobe Acrobat Reader to be viewed .',
'sentence_cnt': '9',
'words': 'many,document,and,report,on,our,site,be,in,pdf,format,and,require,the,adobe,acrobat,reader,to,be,view'}
```
An example of an ollie_patterned record:
```
{'arg1': 'english',
'arg2': 'internet',
'parse': '(in_IN_6), advmod(important_JJ_4, most_RBS_3); nsubj(language_NN_5, English_NNP_0); cop(language_NN_5, being_VBG_1); det(language_NN_5, the_DT_2); amod(language_NN_5, important_JJ_4); prep_in(language_NN_5, era_NN_9); punct(language_NN_5, ,_,_10); conj(language_NN_5, education_NN_12); det(era_NN_9, the_DT_7); nn(era_NN_9, Internet_NNP_8); amod(education_NN_12, English_JJ_11); nsubjpass(enriched_VBN_15, language_NN_5); aux(enriched_VBN_15, should_MD_13); auxpass(enriched_VBN_15, be_VB_14); punct(enriched_VBN_15, ._._16)',
'pattern': '{arg1} <nsubj< {rel:NN} >prep_in> {slot0:NN} >nn> {arg2}',
'rel': 'be language of',
'search_query': 'english language internet',
'sentence': 'English being the most important language in the Internet era , English education should be enriched .',
'slot0': 'era'}
```
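A minimal loading sketch, selecting a configuration by name (streaming shown because the archives are large; recent `datasets` versions may also require `trust_remote_code=True`):
```python
from datasets import load_dataset

# Choose "ollie_lemmagrep" (~18M rows) or "ollie_patterned" (~3M rows).
dataset = load_dataset("ollie", "ollie_patterned", split="train", streaming=True)

for record in dataset.take(2):
    print(record["arg1"], "|", record["rel"], "|", record["arg2"])
```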
### Data Fields
For ollie_lemmagrep:
* rel: the relationship phrase/verb phrase. This may be empty, which represents the "be" relationship.
* arg1: the first argument in the relationship
* arg2: the second argument in the relationship.
* chunk: a tag of each token in the sentence, showing the pos chunks
* pos: part of speech tagging of the sentence
* sentence: the sentence
* sentence_cnt: the number of copies of this sentence encountered
* search_query: a combination of rel, arg1, arg2
* words: the lemmas of the words of the sentence, separated by commas
For ollie_patterned:
* rel: the relationship phrase/verb phrase.
* arg1: the first argument in the relationship
* arg2: the second argument in the relationship.
* slot0: the third argument in the relationship, which might be empty.
* pattern: a parse pattern for the relationship
* parse: a dependency parse for the sentence
* search_query: a combination of rel, arg1, arg2
* sentence: the sentence
### Data Splits
There are no splits.
## Dataset Creation
### Curation Rationale
This dataset was created as part of research on open information extraction.
### Source Data
#### Initial Data Collection and Normalization
See the research paper on Ollie. The training data is extracted from web pages (ClueWeb09).
#### Who are the source language producers?
The Ollie authors at the University of Washington and data from ClueWeb09 and the open web.
### Annotations
#### Annotation process
The various parsers and code from the Ollie algorithm.
#### Who are the annotators?
Machine annotated.
### Personal and Sensitive Information
Unknown, but likely there are names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines learn to extract information from open domains.
### Discussion of Biases
Since the data is gathered from the web, there is likely to be biased text and relationships.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The authors of Ollie at The University of Washington
### Licensing Information
The University of Washington academic license: https://raw.githubusercontent.com/knowitall/ollie/master/LICENSE
### Citation Information
```
@inproceedings{ollie-emnlp12,
author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},
title = {Open Language Learning for Information Extraction},
booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},
year = {2012}
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
opus_dogc | 2022-11-03T16:07:43.000Z | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:es",
"license:cc0-1.0",
"region:us"
] | null | This is a collection of documents from the Official Journal of the Government of Catalonia, in Catalan and Spanish languages, provided by Antoni Oliver Gonzalez from the Universitat Oberta de Catalunya. | @inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
abstract = "This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
} | null | 0 | 5 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- ca
- es
license:
- cc0-1.0
multilinguality:
- translation
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OPUS DOGC
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- ca
- es
config_name: tmx
splits:
- name: train
num_bytes: 1258924464
num_examples: 4763575
download_size: 331724078
dataset_size: 1258924464
---
# Dataset Card for OPUS DOGC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/DOGC.php
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OPUS DOGC is a collection of documents from the Official Journal of the Government of Catalonia, in Catalan and Spanish languages, provided by Antoni Oliver Gonzalez from the Universitat Oberta de Catalunya.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Dataset is multilingual with parallel text in:
- Catalan
- Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
A data instance contains a single `translation` field, a dictionary with:
- `ca`: the Catalan text
- `es`: the aligned Spanish text
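A sketch of loading the `tmx` configuration and reading an aligned pair from the nested `translation` field (recent `datasets` versions may require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

dataset = load_dataset("opus_dogc", "tmx", split="train")

pair = dataset[0]["translation"]
print("ca:", pair["ca"])
print("es:", pair["es"])
```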
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is in the Public Domain under [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
```
@inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
abstract = "This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
opus_elhuyar | 2022-11-03T16:07:47.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:es",
"language:eu",
"license:unknown",
"region:us"
] | null | Dataset provided by the foundation Elhuyar, which is having data in languages Spanish to Basque. | @InProceedings{opus:Elhuyar,
title = {Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)},
authors={J. Tiedemann},
year={2012}
} | null | 0 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
- eu
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusElhuyar
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- es
- eu
config_name: es-eu
splits:
- name: train
num_bytes: 127833939
num_examples: 642348
download_size: 44468751
dataset_size: 127833939
---
# Dataset Card for [opus_elhuyar]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Opus Elhuyar](http://opus.nlpl.eu/Elhuyar.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Dataset provided by the Elhuyar foundation (http://webcorpusak.elhuyar.eus/sarrera_paraleloa.html) and submitted to OPUS by Joseba Garcia Beaumont.
### Supported Tasks and Leaderboards
The underlying task is machine translation from Spanish to Basque
### Languages
Spanish to Basque
## Dataset Structure
### Data Instances
Each instance holds a single `translation` dictionary containing a parallel Spanish/Basque sentence pair, as declared in the metadata above.
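A minimal sketch of reading the corpus with the Hugging Face `datasets` library; the dataset id and the `es-eu` config name are taken from the YAML metadata above, and the exact loading path may differ on newer Hub versions:
```python
from datasets import load_dataset

# Load the "es-eu" configuration declared in this card's metadata
# (dataset id assumed from the card; newer Hub versions may expect
# a namespaced id such as "Helsinki-NLP/opus_elhuyar").
dataset = load_dataset("opus_elhuyar", "es-eu", split="train")

# Each row is a single `translation` dict keyed by ISO language code.
pair = dataset[0]["translation"]
print(pair["es"])  # Spanish side of the sentence pair
print(pair["eu"])  # Basque side of the sentence pair
```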
### Data Fields
- `translation`: a dictionary with two keys, `es` and `eu`, holding the Spanish and Basque sides of each sentence pair.
### Data Splits
The dataset consists of a single `train` split with 642,348 sentence pairs.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
opus_xhosanavy | 2022-11-03T16:08:13.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:xh",
"license:unknown",
"region:us"
] | null | This dataset is designed for machine translation from English to Xhosa. | J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) | null | 3 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- xh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusXhosanavy
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- xh
config_name: en-xh
splits:
- name: train
num_bytes: 9654422
num_examples: 49982
download_size: 3263865
dataset_size: 9654422
---
# Dataset Card for opus_xhosanavy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [XhosaNavy](http://opus.nlpl.eu/XhosaNavy-v1.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus is part of OPUS, the open collection of parallel corpora.
OPUS Website: http://opus.nlpl.eu
### Supported Tasks and Leaderboards
The underlying task is machine translation from English to Xhosa
### Languages
English and Xhosa.
## Dataset Structure
### Data Instances
Each instance holds a single `translation` dictionary containing a parallel English/Xhosa sentence pair, as declared in the metadata above.
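A minimal sketch of reading the corpus with the Hugging Face `datasets` library; the dataset id and the `en-xh` config name are taken from the YAML metadata above, and the exact loading path may differ on newer Hub versions:
```python
from datasets import load_dataset

# Load the "en-xh" configuration declared in this card's metadata
# (dataset id assumed from the card; newer Hub versions may expect
# a namespaced id such as "Helsinki-NLP/opus_xhosanavy").
dataset = load_dataset("opus_xhosanavy", "en-xh", split="train")

# Each row is a single `translation` dict keyed by ISO language code.
pair = dataset[0]["translation"]
print(pair["en"])  # English side of the sentence pair
print(pair["xh"])  # Xhosa side of the sentence pair
```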
### Data Fields
- `translation`: a dictionary with two keys, `en` and `xh`, holding the English and Xhosa sides of each sentence pair.
### Data Splits
The dataset consists of a single `train` split with 49,982 sentence pairs.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@spatil6](https://github.com/spatil6) for adding this dataset. |